--- abstract: | We compute rationally the topological (complex) K-theory of the classifying space $BG$ of a discrete group provided that $G$ has a cocompact $G$-$CW$-model for its classifying space for proper $G$-actions. For instance word-hyperbolic groups and cocompact discrete subgroups of connected Lie groups satisfy this assumption. The answer is given in terms of the group cohomology of $G$ and of the centralizers of finite cyclic subgroups of prime power order. We also analyze the multiplicative structure. Key words: topological $K$-theory, classifying spaces of groups.\ Mathematics Subject Classification 2000: 55N15. author: - | Wolfgang Lück[^1]\ Fachbereich Mathematik\ Universität Münster\ Einsteinstr. 62\ 48149 Münster\ Germany bibliography: - 'dbdef.bib' - 'dbpub.bib' - 'dbpre.bib' - 'dbtkcsrextra.bib' title: 'Rational Computations of the Topological [$K$]{}-Theory of Classifying Spaces of Discrete Groups' --- Introduction and Statements of Results {#sec: Introduction and Statements of Results} ====================================== The main result of this paper is: \[the: main theorem\] Let $G$ be a discrete group. Denote by $K^*(BG)$ the topological (complex) K-theory of its classifying space $BG$. Suppose that there is a cocompact $G$-$CW$-model for the classifying space $\underline{E}G$ for proper $G$-actions. 
Then there is a ${{\mathbb Q}}$-isomorphism $$\begin{gathered} \overline{{\operatorname{ch}}}^n_G \colon K^n(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \xrightarrow{\cong} \\ \left(\prod_{i \in {{\mathbb Z}}} H^{2i+n}(BG;{{\mathbb Q}})\right) \times \prod_{p \text{ prime}} ~ \prod_{(g) \in {\operatorname{con}}_p(G)} \left(\prod_{i \in {{\mathbb Z}}} H^{2i+n}(BC_G\langle g \rangle;{{\mathbb Q}}\widehat{_p})\right),\end{gathered}$$ where ${\operatorname{con}}_p(G)$ is the set of conjugacy classes (g) of elements $g \in G$ of order $p^d$ for some integer $d \ge 1$ and $C_G\langle g \rangle$ is the centralizer of the cyclic subgroup $\langle g \rangle$ generated by $g$. The *classifying space $\underline{E}G$ for proper $G$-actions* is a proper $G$-$CW$-complex such that the $H$-fixed point set is contractible for every finite subgroup $H \subseteq G$. It has the universal property that for every proper $G$-$CW$-complex $X$ there is up to $G$-homotopy precisely one $G$-map $f \colon X \to \underline{E}G$. Recall that a $G$-$CW$-complex is proper if and only if all its isotropy groups are finite, and is finite if and only if it is cocompact. The assumption in Theorem \[the: main theorem\] that there is a cocompact $G$-$CW$-model for the classifying space $\underline{E}G$ for proper $G$-actions is satisfied for instance if $G$ is word-hyperbolic in the sense of Gromov, if $G$ is a cocompact subgroup of a Lie group with finitely many path components, if $G$ is a finitely generated one-relator group, if $G$ is an arithmetic group, a mapping class group of a compact surface or the group of outer automorphisms of a finitely generated free group. For more information about $\underline{E}G$ we refer for instance to [@Baum-Connes-Higson(1994)] and [@Lueck(2004a)]. We will prove Theorem \[the: main theorem\] in Section \[sec: Proof of the Main Result\]. 
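To illustrate Theorem \[the: main theorem\] in the simplest case, let $G = {{\mathbb Z}}/2$, so that $\underline{E}G$ can be chosen to be a point. There is precisely one class $(g) \in {\operatorname{con}}_2(G)$, namely the one of the non-trivial element, and ${\operatorname{con}}_p(G)$ is empty for odd $p$. Since $C_G\langle g \rangle = G$ and $H^k(B{{\mathbb Z}}/2;{{\mathbb Q}})$ vanishes for $k > 0$, the theorem yields $$K^0(B{{\mathbb Z}}/2) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \cong ~ {{\mathbb Q}}\times {{\mathbb Q}}\widehat{_2} \quad \text{and} \quad K^1(B{{\mathbb Z}}/2) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \cong ~ 0.$$ This is consistent with the integral computation $K^0(B{{\mathbb Z}}/2) \cong {{\mathbb Z}}\times {{\mathbb Z}}\widehat{_2}$ and $K^1(B{{\mathbb Z}}/2) = 0$ appearing in Theorem \[the: computation of K\_\*(BG) and K\^\*(BG) for finite G\] below.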
We will also investigate the multiplicative structure on $K^n(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ in Section \[sec: Multiplicative Structures\]. If one is willing to complexify, one can show: \[the: Multiplicative structure\] Let $G$ be a discrete group. Suppose that there is a cocompact $G$-$CW$-model for the classifying space $\underline{E}G$ for proper $G$-actions. Then there is a ${{\mathbb C}}$-isomorphism $$\begin{gathered} \overline{{\operatorname{ch}}}^n_{G,{{\mathbb C}}} \colon K^n(BG) \otimes_{{{\mathbb Z}}} {{\mathbb C}}~ \xrightarrow{\cong} \\ \left(\prod_{i \in {{\mathbb Z}}} H^{2i+n}(BG;{{\mathbb C}})\right) \times \prod_{p \text{ prime}} ~ \prod_{(g) \in {\operatorname{con}}_p(G)} \left(\prod_{i \in {{\mathbb Z}}} H^{2i+n}(BC_G\langle g \rangle;{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}})\right),\end{gathered}$$ which is compatible with the standard multiplicative structure on $K^*(BG)$ and the one on the target given by $$\left(a, u_{p,(g)}\right) \cdot \left(b, v_{p,(g)}\right) ~ = ~ \left(a\cdot b, (a \cdot v_{p,(g)} + b \cdot u_{p,(g)} + u_{p,(g)} \cdot v_{p,(g)})\right)$$ for $$\begin{aligned} (g) & \in & {\operatorname{con}}_p(G); \\ a,b & \in & \prod_{i \in {{\mathbb Z}}} H^{2i+*}(BG;{{\mathbb C}}); \\ u_{p,(g)}, v_{p,(g)} & \in & \prod_{i \in {{\mathbb Z}}} H^{2i+*}(BC_G\langle g \rangle;{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}}),\end{aligned}$$ and the structures of a graded commutative ring on $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(BG;{{\mathbb C}})$ and $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(BC_G\langle g \rangle;{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}})$ coming from the cup-product and the obvious $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(BG;{{\mathbb C}})$-module structure on $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(BC_G\langle g \rangle;{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}})$ coming from the canonical maps $BC_G\langle g \rangle \to BG$ and 
${{\mathbb C}}\to {{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}}$. In Section \[sec: Weakening the Finiteness Conditions\] we will prove Theorem \[the: main theorem\] and Theorem \[the: Multiplicative structure\] under weaker finiteness assumptions than stated above. If $G$ is finite, we get the following integral computation of $K^*(BG)$. Throughout the paper $R(G)$ will be the complex representation ring and ${{\mathbb I}}_G$ its augmentation ideal, i.e. the kernel of the ring homomorphism $R(G) \to {{\mathbb Z}}$ sending $[V]$ to $\dim_{{{\mathbb C}}}(V)$. If $G_p \subseteq G$ is a $p$-Sylow subgroup, restriction defines a map ${{\mathbb I}}(G) \to {{\mathbb I}}(G_p)$. Let ${{\mathbb I}}_p(G)$ be the quotient of ${{\mathbb I}}(G)$ by the kernel of this map. This is independent of the choice of the $p$-Sylow subgroup since any two $p$-Sylow subgroups of $G$ are conjugate. There is an obvious isomorphism ${{\mathbb I}}_p(G) \xrightarrow{\cong} {\operatorname{im}}({{\mathbb I}}(G) \to {{\mathbb I}}(G_p))$. We will prove the following result in Section \[sec: The K-Theory of the Classifying Space of a Finite Group\]. [**($K$-theory of $BG$ for finite groups $G$).**]{} \[the: computation of K\_\*(BG) and K\^\*(BG) for finite G\] Let $G$ be a finite group. For a prime $p$ denote by $r(p) = |{\operatorname{con}}_p(G)|$ the number of conjugacy classes $(g)$ of elements $g \in G$ whose order $|g|$ is $p^d$ for some integer $d \ge 1$. 
Then there are isomorphisms of abelian groups $$\begin{aligned} K^0(BG) & \cong & {{\mathbb Z}}\times \prod_{p\text{ prime}} {{\mathbb I}}_p(G) \otimes_{{{\mathbb Z}}} {{\mathbb Z}}\widehat{_p} ~ \cong ~ {{\mathbb Z}}\times \prod_{p\text{ prime}} ({{\mathbb Z}}\widehat{_p})^{r(p)}; \\ K^1(BG) & \cong & 0.\end{aligned}$$ The isomorphism $K^0(BG) \xrightarrow{\cong} {{\mathbb Z}}\times \prod_{p\text{ prime}} {{\mathbb I}}_p(G)\otimes_{{{\mathbb Z}}} {{\mathbb Z}}\widehat{_p}$ is compatible with the standard ring structure on the source and the ring structure on the target given by $$\left(m,u_p \otimes a_p\right) \cdot \left(n,v_p \otimes b_p\right) ~ = ~ \left(mn, mv_p \otimes b_p + nu_p \otimes a_p + (u_pv_p) \otimes (a_pb_p)\right)$$ for $m,n \in {{\mathbb Z}}$, $u_p,v_p \in {{\mathbb I}}_p(G)$ and $a_p, b_p \in {{\mathbb Z}}\widehat{_p}$ and the obvious multiplication in ${{\mathbb Z}}$, ${{\mathbb I}}_p(G)$ and ${{\mathbb Z}}\widehat{_p}$. The additive version of Theorem \[the: computation of K\_\*(BG) and K\^\*(BG) for finite G\] has already been explained in [@Jackowski-Oliver(1996) page 125]. Inspecting [@Jackowski(1978) Theorem 2.2] one can also derive the ring structure. In [@Kuhn(1987)] the $K$-theory of $BG$ with coefficients in the field ${\ensuremath{\mathbf{F}}}_p$ of $p$ elements has been determined including the multiplicative structure. The proof of Theorem \[the: computation of K\_\*(BG) and K\^\*(BG) for finite G\] we will present here is based on the ideas of that paper. We will in fact need to prove a stronger statement about the pro-group $\{{{\mathbb I}}_G/({{\mathbb I}}_G)^{n+1}\}$ in Theorem \[the: computation of AI\_G/(AI\_G)\^[n+1]{}\]. A version of Theorem \[the: main theorem\] for topological $K$-theory with coefficients in the $p$-adic integers has been proved by Adem [@Adem(1992)], [@Adem(1993b)] using the Atiyah-Segal completion theorem for the finite group $G/G'$ provided that $G$ contains a torsionfree subgroup $G'$ of finite index. 
Our methods allow us to drop this condition, to deal with $K^*(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ directly and to study systematically the multiplicative structure for $K^*(BG) \otimes_{{{\mathbb Z}}} {{\mathbb C}}$. They are based on the equivariant cohomological Chern character of [@Lueck(2004i)]. For integral computations of the $K$-theory and $K$-homology of classifying spaces of groups we refer to [@Joachim-Lueck(2005)]. The paper is organized as follows:

\[sec: Borel Cohomology and Rationalization\]. Borel Cohomology and Rationalization

\[sec: Some Preliminaries about Pro-Modules\]. Some Preliminaries about Pro-Modules

\[sec: The K-Theory of the Classifying Space of a Finite Group\]. The K-Theory of the Classifying Space of a Finite Group

\[sec: Proof of the Main Result\]. Proof of the Main Result

\[sec: Multiplicative Structures\]. Multiplicative Structures

\[sec: Weakening the Finiteness Conditions\]. Weakening the Finiteness Conditions

\[sec: Examples and Further Remarks\]. Examples and Further Remarks

References

The author wants to thank the Max Planck Institute for Mathematics in Bonn for its hospitality during his stay from April 2005 until July 2005 when this paper was written. Borel Cohomology and Rationalization {#sec: Borel Cohomology and Rationalization} ==================================== Denote by ${{\EuR}{GROUPOIDS}}$ the category of small groupoids. 
Let $\Omega\text{-}{{\EuR}{SPECTRA}}$ be the category of $\Omega$-spectra, where a morphism ${\ensuremath{\mathbf{f}}}\colon {\ensuremath{\mathbf{E}}}\to {\ensuremath{\mathbf{F}}}$ is a sequence of maps $f_n \colon E_n \to F_n$ compatible with the structure maps; throughout we work in the category of compactly generated spaces (see for instance [@Davis-Lueck(1998) Section 1]). A contravariant ${{\EuR}{GROUPOIDS}}$-$\Omega$-spectrum is a contravariant functor ${\ensuremath{\mathbf{E}}}\colon {{\EuR}{GROUPOIDS}}\to \Omega\text{-}{{\EuR}{SPECTRA}}$. Let ${\ensuremath{\mathbf{E}}}$ be a (non-equivariant) $\Omega$-spectrum. We can associate to it a contravariant ${{\EuR}{GROUPOIDS}}$-$\Omega$-spectrum $$\begin{aligned} {\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}} \colon {{\EuR}{GROUPOIDS}}\to \Omega\text{-}{{\EuR}{SPECTRA}}; \quad {{\mathcal G}}& \mapsto & {\operatorname{map}}(B{{\mathcal G}};{\ensuremath{\mathbf{E}}}), \label{bfE_{Bor}}\end{aligned}$$ where $B{{\mathcal G}}$ is the classifying space associated to ${{\mathcal G}}$ and ${\operatorname{map}}(B{{\mathcal G}};{\ensuremath{\mathbf{E}}})$ is the obvious mapping space spectrum (see for instance [@Davis-Lueck(1998) page 208 and Definition 3.10 on page 224]). In the sequel we use the notion of an equivariant cohomology theory ${{\mathcal H}}^*_?$ with values in $R$-modules of [@Lueck(2004i) Section 1]. It assigns to each (discrete) group $G$ a $G$-cohomology theory ${{\mathcal H}}^*_G$ with values in the category of $R$-modules on the category of pairs of $G$-$CW$-complexes, where $*$ runs through ${{\mathbb Z}}$. Let $H^*_?(-;{\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}})$ be the equivariant cohomology theory associated to ${\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}$, with values in ${{\mathbb Z}}$-modules and satisfying the disjoint union axiom, which has been constructed in [@Lueck(2004i) Example 1.8]. 
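Note that for a $CW$-complex $X$ the mapping space spectrum satisfies $$\pi_{-n}\left({\operatorname{map}}(X;{\ensuremath{\mathbf{E}}})\right) ~ \cong ~ H^n(X;{\ensuremath{\mathbf{E}}}),$$ where $H^*(-;{\ensuremath{\mathbf{E}}})$ denotes the (non-equivariant) cohomology theory associated to ${\ensuremath{\mathbf{E}}}$. In particular, if ${{\mathcal G}}$ is the groupoid with one object whose automorphism group is $G$, then $B{{\mathcal G}}= BG$ and the spectrum ${\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}({{\mathcal G}}) = {\operatorname{map}}(BG;{\ensuremath{\mathbf{E}}})$ has $\pi_{-n} \cong H^n(BG;{\ensuremath{\mathbf{E}}})$. This observation will be used below for $X = BH$ with $H \subseteq G$ finite.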
For a given discrete group $G$ and a $G$-$CW$-pair $(X,A)$ and $n \in {{\mathbb Z}}$ we get a natural identification $$\begin{aligned} H^n_G(X,A;{\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}) & = & H^n(EG \times_G (X,A);{\ensuremath{\mathbf{E}}}), \label{H^n_G(X,A;bfE_{Bor}) = H^n(EG times_G (X,A);bfE)}\end{aligned}$$ where $H^*(-;{\ensuremath{\mathbf{E}}})$ is the (non-equivariant) cohomology theory associated to ${\ensuremath{\mathbf{E}}}$. It is induced by the following composite of equivalences of $\Omega$-spectra $$\begin{gathered} {\operatorname{map}}_{{{\EuR}{Or}}(G)}\left({\operatorname{map}}_G(G/?,X)^G,{\operatorname{map}}\left(B{{\mathcal G}}^G(G/H),{\ensuremath{\mathbf{E}}}\right)\right) \\ ~ \to ~ {\operatorname{map}}_{{{\EuR}{Or}}(G)}\left({\operatorname{map}}_G(G/?,X)^G,{\operatorname{map}}\left(EG \times_G G/?,{\ensuremath{\mathbf{E}}}\right)\right) \\ ~ \to ~ {\operatorname{map}}\left({\operatorname{map}}_G(G/?,X)^G \otimes_{{{\EuR}{Or}}(G)} EG \times_G G/?,{\ensuremath{\mathbf{E}}}\right) ~ \to ~ {\operatorname{map}}\left(EG \times_G X,{\ensuremath{\mathbf{E}}}\right)\end{gathered}$$ using the notation of [@Lueck(2004i)]. In the literature $H^n(EG \times_G (X,A);{\ensuremath{\mathbf{E}}})$ is called *the equivariant Borel cohomology* of $(X,A)$ with respect to the (non-equivariant) cohomology theory $H^*(-;{\ensuremath{\mathbf{E}}})$. Our main example for ${\ensuremath{\mathbf{E}}}$ will be the topological $K$-theory spectrum ${\ensuremath{\mathbf{K}}}$, whose associated (non-equivariant) cohomology theory $H^*(-;{\ensuremath{\mathbf{K}}})$ is topological $K$-theory $K^*$. 
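Recall that $\pi_n({\ensuremath{\mathbf{K}}})$ is ${{\mathbb Z}}$ for even $n$ and trivial for odd $n$, and that for a finite $CW$-complex $X$ the classical Chern character yields an isomorphism $$K^n(X) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \xrightarrow{\cong} ~ \prod_{i \in {{\mathbb Z}}} H^{2i+n}(X;{{\mathbb Q}}).$$ For infinite complexes such as $BG$ this identification fails in general, since $- \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ does not commute with the infinite products involved; this failure is the source of the completed cohomology groups appearing in Theorem \[the: main theorem\].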
There is a functor $${\ensuremath{\mathbf{R}}\ensuremath{\mathbf{a}}\ensuremath{\mathbf{t}}}\colon \Omega\text{-}{{\EuR}{SPECTRA}}\to \Omega\text{-}{{\EuR}{SPECTRA}}, \quad {\ensuremath{\mathbf{E}}}\mapsto {\ensuremath{\mathbf{R}}\ensuremath{\mathbf{a}}\ensuremath{\mathbf{t}}}({\ensuremath{\mathbf{E}}}) = {\ensuremath{\mathbf{E}}}_{(0)},$$ which assigns to an $\Omega$-spectrum ${\ensuremath{\mathbf{E}}}$ its rationalization ${\ensuremath{\mathbf{E}}}_{(0)}$. The homotopy groups $\pi_k({\ensuremath{\mathbf{E}}}_{(0)})$ come with a canonical structure of a ${{\mathbb Q}}$-module. There is a natural transformation $$\begin{aligned} {\ensuremath{\mathbf{i}}}({\ensuremath{\mathbf{E}}}) \colon {\ensuremath{\mathbf{E}}}& \to & {\ensuremath{\mathbf{E}}}_{(0)} \label{bfi(bfE)}\end{aligned}$$ which induces isomorphisms $$\begin{aligned} \pi_k({\ensuremath{\mathbf{i}}}({\ensuremath{\mathbf{E}}})) \colon \pi_k({\ensuremath{\mathbf{E}}}) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \xrightarrow{\cong}& \pi_k({\ensuremath{\mathbf{E}}}_{(0)}). \label{iso pi_k(bfi(bfE))}\end{aligned}$$ Composing ${\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}$ with ${\ensuremath{\mathbf{R}}\ensuremath{\mathbf{a}}\ensuremath{\mathbf{t}}}$ yields a contravariant ${{\EuR}{Or}}(G)$-$\Omega$-spectrum denoted by $\left({\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}\right)_{(0)}$. We obtain an equivariant cohomology theory with values in ${{\mathbb Q}}$-modules by $H^*_?\left(-;\left({\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$. The map ${\ensuremath{\mathbf{i}}}$ induces a natural transformation of equivariant cohomology theories $$\begin{aligned} i^*_?(-;{\ensuremath{\mathbf{E}}}) \colon H_?^*\left(-;{\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \to & H_?^*\left(-;\left({\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}\right)_{(0)}\right). 
\label{i^*_?(-;bfE)}\end{aligned}$$ \[lem: i\^\*\_G(X;bfE) is bijective for finite X\] If $G$ is a group and $(X,A)$ is a relative finite $G$-$CW$-pair, then $$i^n_G(X,A;{\ensuremath{\mathbf{E}}}) \colon H_G^n\left(X,A;{\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\to H_G^n\left(X,A;\left({\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$$ is a ${{\mathbb Q}}$-isomorphism for all $n \in {{\mathbb Z}}$. The transformation $i^*_G(-;{\ensuremath{\mathbf{E}}})$ is a natural transformation of $G$-cohomology theories since ${{\mathbb Q}}$ is flat over ${{\mathbb Z}}$. One easily checks that it induces a bijection in the case $X = G/H$, since then there is a commutative square with the obvious isomorphisms as vertical maps and with the isomorphism induced by ${\ensuremath{\mathbf{i}}}({\operatorname{map}}(BH,{\ensuremath{\mathbf{E}}}))$ as lower horizontal arrow $\begin{CD} H_G^k\left(G/H;{\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}@>i^*_G(G/H;{\ensuremath{\mathbf{E}}})>> H_G^k\left(G/H;\left({\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)\\ @V{\cong}VV @VV{\cong}V\\ \pi_{-k}\left({\operatorname{map}}(BH,{\ensuremath{\mathbf{E}}})\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}@>>\pi_{-k}\left({\ensuremath{\mathbf{i}}}({\operatorname{map}}(BH,{\ensuremath{\mathbf{E}}}))\right)> \pi_{-k}\left(\left({\operatorname{map}}(BH,{\ensuremath{\mathbf{E}}})\right)_{(0)}\right) \end{CD}$ By induction over the number of $G$-cells using Mayer-Vietoris sequences one shows that $i^n_G(X,A;{\ensuremath{\mathbf{E}}})$ is an isomorphism for all relative finite $G$-$CW$-pairs $(X,A)$. [**(Comparison of the various rationalizations).**]{} \[rem: Comparision of the various rationalizations\] *Notice that $i^*_G(X,A;{\ensuremath{\mathbf{E}}})$ is not an isomorphism for all $G$-$CW$-pairs $(X,A)$ because the source does not satisfy the disjoint union axiom for arbitrary index sets in contrast to the target. 
The point is that $- \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ is compatible with direct sums but not with direct products.* Since $H_?^*\left(-;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$ is an equivariant cohomology theory with values in ${{\mathbb Q}}$-modules satisfying the disjoint union axiom, we can use the equivariant cohomological Chern character of [@Lueck(2004i)] to compute $H_G^*\left(\underline{E}G;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$ for all groups $G$. This is also true for the equivariant cohomology theory $H_?^*\left(-;\left({\ensuremath{\mathbf{K}}}_{(0)}\right)_{{\operatorname{Bor}}}\right)$ with values in ${{\mathbb Q}}$-modules satisfying the disjoint union axiom. (Here we have changed the order of ${\operatorname{Bor}}$ and $(0)$.) But this is a much worse approximation of $K^k(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ than $H_G^k\left(\underline{E}G;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$. Namely, by the universal property of ${\ensuremath{\mathbf{R}}\ensuremath{\mathbf{a}}\ensuremath{\mathbf{t}}}$, the map ${\ensuremath{\mathbf{i}}}$ induces a natural map of contravariant ${{\EuR}{GROUPOIDS}}$-$\Omega$-spectra $$\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)} \to \left({\ensuremath{\mathbf{K}}}_{(0)}\right)_{{\operatorname{Bor}}}$$ and thus a natural map $$H_G^k\left(X;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right) \to H_G^k\left(X;\left({\ensuremath{\mathbf{K}}}_{(0)}\right)_{{\operatorname{Bor}}}\right)$$ but this map is in general not an isomorphism. Namely, it is not bijective for $X = G/H$ for finite non-trivial $H$ and $k = 0$. 
In this case the source turns out to be $$\pi_0\left(\left({\operatorname{map}}(BH;{\ensuremath{\mathbf{K}}})\right)_{(0)}\right) ~ \cong ~ K^0(BH) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \cong ~ {{\mathbb Q}}\times \prod_{p |\; |H|} \left({{\mathbb Q}}\widehat{_p}\right)^{r(p)},$$ where $r(p)$ is the number of conjugacy classes $(h)$ of non-trivial elements $h \in H$ of $p$-power order, while the target is $K^0(BH;{{\mathbb Q}})$, which turns out to be isomorphic to ${{\mathbb Q}}$ since the rational cohomology of $BH$ agrees with that of the one-point-space. As mentioned before we want to use the equivariant cohomological Chern character of [@Lueck(2004i)] to compute $H_G^*\left(X;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$. This requires a careful analysis of the contravariant functor $${{\EuR}{FGINJ}}\to {{\mathbb Q}}\text{ -}{\operatorname{MOD}}, \quad H \mapsto H_G^k\left(G/H;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right) = K^k(BH) \otimes_{{{\mathbb Z}}} {{\mathbb Q}},$$ from the category ${{\EuR}{FGINJ}}$ of finite groups with injective group homomorphisms as morphisms to the category ${{\mathbb Q}}\text{ -}{\operatorname{MOD}}$ of ${{\mathbb Q}}$-modules. This analysis will be carried out in Section \[sec: The K-Theory of the Classifying Space of a Finite Group\] after some preliminaries in Section \[sec: Some Preliminaries about Pro-Modules\]. Some Preliminaries about Pro-Modules {#sec: Some Preliminaries about Pro-Modules} ==================================== It will be crucial to handle pro-systems and pro-isomorphisms and not to pass directly to inverse limits. In this section we fix our notation for handling pro-$R$-modules for a commutative ring $R$, where ring always means associative ring with unit. For the definitions in full generality see for instance [@Artin-Mazur(1969) Appendix] or [@Atiyah-Segal(1969)  2]. For simplicity, all pro-$R$-modules dealt with here will be indexed by the positive integers. 
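Two examples to keep in mind are the inverse system $\{{{\mathbb Z}}/p^n\}$ with the canonical projections as structure maps, whose inverse limit is the ring ${{\mathbb Z}}\widehat{_p}$ of $p$-adic integers, and, for a finite group $G$, the system $\{{{\mathbb I}}_G/({{\mathbb I}}_G)^{n+1}\}$ with the canonical projections, which will be analyzed in Theorem \[the: computation of AI\_G/(AI\_G)\^[n+1]{}\].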
We write $\{M_n,\alpha_n\}$ or briefly $\{M_n\}$ for the inverse system $$M_1 \xleftarrow{\alpha_2} M_2 \xleftarrow{\alpha_3} M_3 \xleftarrow{\alpha_4} \ldots$$ and also write $\alpha_n^m := \alpha_{m+1} \circ \cdots \circ \alpha_{n}\colon M_n \to M_m$ for $n > m$ and put $\alpha^n_n ={\operatorname{id}}_{M_n}$. For the purposes here, it will suffice (and greatly simplify the notation) to work with “strict” pro-homomorphisms $\{f_n\} \colon \{M_n,\alpha_n\} \to \{N_n,\beta_n\}$, i.e. collections of homomorphisms $f_n \colon M_n \to N_n$ for $n \ge 1$ such that $\beta_{n}\circ f_n = f_{n-1}\circ\alpha_{n}$ holds for each $ n\ge 2$. Kernels and cokernels of strict homomorphisms are defined in the obvious way. A pro-$R$-module $\{M_n,\alpha_n\}$ will be called *pro-trivial* if for each $m \ge 1$, there is some $n\ge m$ such that $\alpha^m_n = 0$. A strict homomorphism $f\colon \{M_n,\alpha_n\} \to \{N_n,\beta_n\}$ is a *pro-isomorphism* if and only if $\ker(f)$ and ${\operatorname{coker}}(f)$ are both pro-trivial, or, equivalently, for each $m\ge 1$ there is some $n\ge m$ such that ${\operatorname{im}}(\beta_n^m) \subseteq {\operatorname{im}}(f_m)$ and $\ker(f_n) \subseteq \ker(\alpha_n^m)$. A sequence of strict homomorphisms $$\{M_n,\alpha_n\} \xrightarrow{\{f_n\}} \{M_n',\alpha_n'\} \xrightarrow{\{g_n\}} \{M_n'',\alpha_n''\}$$ will be called *exact* if the sequence of $R$-modules $M_n \xrightarrow{f_n} M_n' \xrightarrow{g_n} M_n''$ is exact for each $n \ge 1$, and it is called *pro-exact* if $g_n \circ f_n = 0$ holds for $n \ge 1$ and the pro-$R$-module $\{\ker(g_n)/{\operatorname{im}}(f_n)\}$ is pro-trivial. The following results will be needed later. \[lem: pro-exactness and limits\] Let $0 \to \{M_n',\alpha_n'\} \xrightarrow{\{f_n\}} \{M_n,\alpha_n\} \xrightarrow{\{g_n\}} \{M_n'',\alpha_n''\} \to 0$ be a pro-exact sequence of pro-$R$-modules. 
Then there is a natural exact sequence $$\begin{gathered} 0 \to {{\underleftarrow{\lim}_{n \ge 1}^{}M_n'}} \xrightarrow{{{\underleftarrow{\lim}_{n \ge 1}^{}f_n}}} {{\underleftarrow{\lim}_{n \ge 1}^{}M_n}} \xrightarrow{{{\underleftarrow{\lim}_{n \ge 1}^{}g_n}}} {{\underleftarrow{\lim}_{n \ge 1}^{}M_n''}} \xrightarrow{\delta} \\ {\underleftarrow{\lim}_{n \ge 1}^{1}M_n'} \xrightarrow{{\underleftarrow{\lim}_{n \ge 1}^{1}f_n}} {\underleftarrow{\lim}_{n \ge 1}^{1}M_n} \xrightarrow{{\underleftarrow{\lim}_{n \ge 1}^{1}g_n}} {\underleftarrow{\lim}_{n \ge 1}^{1}M_n''} \to 0.\end{gathered}$$ In particular a pro-isomorphism $\{f_n\} \colon \{M_n,\alpha_n\} \to \{N_n,\beta_n\}$ induces isomorphisms $$\begin{array}{llcl} {{\underleftarrow{\lim}_{n \ge 1}^{}f_n}} \colon & {{\underleftarrow{\lim}_{n \ge 1}^{}M_n}} & \xrightarrow{\cong} & {{\underleftarrow{\lim}_{n \ge 1}^{}N_n}}; \\ {\underleftarrow{\lim}_{n \ge 1}^{1}f_n} \colon & {\underleftarrow{\lim}_{n \ge 1}^{1}M_n} & \xrightarrow{\cong} & {\underleftarrow{\lim}_{n \ge 1}^{1}N_n}. \end{array}$$ If $0 \to \{M_n',\alpha_n'\} \xrightarrow{\{f_n\}} \{M_n,\alpha_n\} \xrightarrow{g_n} \{M_n'',\alpha_n''\} \to 0$ is exact, the construction of the six-term sequence is standard (see for instance [@Switzer(1975) Proposition 7.63 on page 127]). Hence it remains to show for a pro-trivial pro-$R$-module $\{M_n,\alpha_n\}$ that ${{\underleftarrow{\lim}_{n \ge 1}^{}M_n}}$ and ${\underleftarrow{\lim}_{n \ge 1}^{1}M_n}$ vanish. This follows directly from the standard construction for these limits as the kernel and cokernel of the homomorphism $$\prod_{n \ge 1} M_n \to \prod_{n \ge 1} M_n, \quad (x_n)_{n \ge 1} ~ \mapsto (x_n - \alpha_{n+1}(x_{n+1}))_{n \ge 1}.$$ \[lem: pro-exactness and exactness\] Fix any commutative Noetherian ring $R$, and any ideal $I\subseteq R $. 
Then for any exact sequence $M' \to M \to M''$ of finitely generated $R$-modules, the sequence $$\{M'/I^nM'\} \to \{M/I^nM\} \to \{M''/I^nM''\}$$ of pro-$R$-modules is pro-exact. It suffices to prove this for a short exact sequence $0 \to M'\to M\to M'' \to 0$. Regard $M'$ as a submodule of $M$, and consider the exact sequence $$0 \to \left\{{\tfrac{(I^n M){\cap} M'}{I^n M'}}\right\} \to \{M'/I^nM'\} \to \{M/I^nM\} \to \{M''/I^nM''\} \to 0.$$ By [@Atiyah-McDonald(1969) Theorem 10.11 on page 107], the filtrations $\{(I^nM){\cap}M'\}$ and $\{I^nM'\}$ of $M'$ have “bounded difference”, i.e. there exists $k>0$ with the property that $(I^{n+k}M){\cap}M'\subseteq I^nM'$ holds for all $n \ge 1$. The first term in the above exact sequence is thus pro-trivial, and so the remaining terms define a short sequence of pro-$R$-modules which is pro-exact. The K-Theory of the Classifying Space of a Finite Group {#sec: The K-Theory of the Classifying Space of a Finite Group} ======================================================= Next we investigate the contravariant functor from the category ${{\EuR}{FGINJ}}$ of finite groups with injective group homomorphisms as morphisms to the category ${{\mathbb Z}}\text{ -}{\operatorname{MOD}}$ of ${{\mathbb Z}}$-modules $${{\EuR}{FGINJ}}\to {{\mathbb Z}}\text{ -}{\operatorname{MOD}}, \quad H \mapsto K^k(BH).$$ We need some input from representation theory. Recall that $R(G)$ denotes the complex representation ring. Let ${{\mathbb I}}_G$ be the kernel of the ring homomorphism ${\operatorname{res}}_G^{\{1\}}\colon R(G) \to R(\{1\})$, which is the same as the kernel of the augmentation ring homomorphism $R(G) \to {{\mathbb Z}}$ sending $[V]$ to $\dim_{{{\mathbb C}}}(V)$. We will frequently use the so-called *double coset formula* (see [@Serre(1977) Proposition 22 in Chapter 7 on page 58]). 
It says for two subgroups $H,K \subseteq G$ $$\begin{aligned} {\operatorname{res}}_G^K \circ {\operatorname{ind}}_H^G & = & \sum_{KgH \in K\backslash G/H} {\operatorname{ind}}_{c(g)\colon H\cap g^{-1}Kg \to K} \circ {\operatorname{res}}_{H}^{ H\cap g^{-1}Kg}, \label{double coset formula}\end{aligned}$$ where $c(g)$ is conjugation by $g$, i.e. $c(g)(h) = ghg^{-1}$, and ${\operatorname{ind}}$ and ${\operatorname{res}}$ denote induction and restriction. One consequence of it is that ${\operatorname{ind}}_H^G\colon R(H) \to R(G)$ sends ${{\mathbb I}}_H$ to ${{\mathbb I}}_G$. Obviously ${\operatorname{res}}_G^H\colon R(G) \to R(H)$ maps ${{\mathbb I}}_G$ to ${{\mathbb I}}_H$. For an abelian group $M$ let $M_{(p)}$ be the localization of $M$ at $p$. If ${{\mathbb Z}}_{(p)}$ is the subring of ${{\mathbb Q}}$ obtained from ${{\mathbb Z}}$ by inverting all prime numbers except $p$, then $M_{(p)} = M \otimes_{{{\mathbb Z}}} {{\mathbb Z}}_{(p)}$. Recall that the functor $ ? \otimes_{{{\mathbb Z}}} {{\mathbb Z}}_{(p)}$ is exact. \[lem: res circ ind and res have the same image\] Let $G$ be a finite group. Let $p$ be a prime number and denote by $G_p$ a $p$-Sylow subgroup of $G$. Then the composite $$R(G_p)_{(p)} \xrightarrow{{\operatorname{ind}}_{G_p}^G} R(G)_{(p)} \xrightarrow{{\operatorname{res}}_G^{G_p}} R(G_p)_{(p)}$$ has the same image as $${\operatorname{res}}_G^{G_p} \colon R(G)_{(p)} \to R(G_p)_{(p)}.$$ A subgroup $H \subseteq G$ is called *$p$-elementary* if it is isomorphic to $C \times P$ for a cyclic group $C$ of order prime to $p$ and a $p$-group $P$. Let $\{C_i \times P_i \mid i = 1,2, \ldots , r\}$ be a complete system of representatives of conjugacy classes of $p$-elementary subgroups of $G$. We can assume without loss of generality $P_i \subseteq G_p$. 
Define for $i = 1,2, \ldots , r$ a homomorphism of abelian groups $$\phi_i ~ := ~ \sum_{\substack{G_p \cdot g \cdot (C_i \times P_i) \in \\ G_p\backslash G/(C_i \times P_i)}} {\operatorname{ind}}_{c(g)\colon P_i \cap g^{-1}G_pg \to G_p} \circ {\operatorname{res}}_{P_i}^{P_i \cap g^{-1}G_pg} \colon R(P_i) ~ \to ~ R(G_p).$$ Since the order of $C_i$ is prime to $p$, we have $(C_i \times P_i) \cap g^{-1}G_p g = P_i \cap g^{-1}G_p g$ for $g \in G$. Hence the following diagram commutes (actually already before localization) by the double coset formula $$\begin{CD} \bigoplus_{i=1}^r R(P_i)_{(p)} @> \bigoplus_{i=1}^r {\operatorname{ind}}_{P_i}^{G_p} >> R(G_p)_{(p)} \\ @V \bigoplus_{i=1}^r {\operatorname{ind}}_{P_i}^{C_i \times P_i} VV @VV {\operatorname{ind}}_{G_p}^G V \\ \bigoplus_{i=1}^r R(C_i \times P_i)_{(p)} @> \bigoplus_{i=1}^r {\operatorname{ind}}_{C_i \times P_i}^{G} >> R(G)_{(p)} \\ @V \bigoplus_{i=1}^r {\operatorname{res}}_{C_i \times P_i}^{P_i} VV @VV {\operatorname{res}}_G^{G_p} V \\ \bigoplus_{i=1}^r R(P_i)_{(p)} @> \bigoplus_{i=1}^r \phi_i >> R(G_p)_{(p)} \end{CD}$$ The middle horizontal arrow $\bigoplus_{i=1}^r {\operatorname{ind}}_{C_i \times P_i}^{G}$ is surjective by Brauer’s Theorem [@Serre(1977) Theorem 18 in Chapter 10 on page 75]. The composite of the left lower vertical arrow and the left upper vertical arrow $\bigoplus_{i=1}^r {\operatorname{res}}_{C_i \times P_i}^{P_i} \circ {\operatorname{ind}}_{P_i}^{C_i \times P_i}$ is $\bigoplus_{i=1}^r |C_i| \cdot {\operatorname{id}}$ by the double coset formula, and hence an isomorphism, since $|C_i|$ is invertible in ${{\mathbb Z}}_{(p)}$. Now the claim follows from an easy diagram chase. \[lem: res circ ind for different primes\] Let $p$ and $q$ be different primes. 
Then the composition $$R(G_p) \xrightarrow{{\operatorname{ind}}_{G_p}^G} R(G) \xrightarrow{{\operatorname{res}}_G^{G_q}} R(G_q)$$ agrees with $|G_q\backslash G / G_p| \cdot {\operatorname{ind}}_{\{1\}}^{G_q} \circ {\operatorname{res}}_{G_p}^{\{1\}}.$ This follows from the double coset formula since $G_p \cap g^{-1}G_qg = \{1\}$ for each $g \in G$. \[lem: computing R(G) to prod\_p R(G\_p)\] Let $G$ be a finite group and let ${{\mathbb I}}_G \subseteq R(G)$ be the augmentation ideal. Then the following sequence of $R(G)$-modules is exact $$0 \to \bigcap_{m \ge 1} ({{\mathbb I}}_G)^m \xrightarrow{i} {{\mathbb I}}_G \xrightarrow{\prod_{p} {\operatorname{res}}_{G}^{G_p}} \prod_{p \in {{\mathcal P}}(G)} {\operatorname{im}}\left({\operatorname{res}}_{G}^{G_p}\colon {{\mathbb I}}_G \to {{\mathbb I}}_{G_p}\right) \to 0,$$ where $i$ is the inclusion and ${{\mathcal P}}(G)$ is the set of primes dividing $|G|$. The kernel of $\prod_{p} {\operatorname{res}}_G^{G_p} \colon R(G) \to \prod_{p} R(G_p)$ is $\bigcap_{m \ge 1} ({{\mathbb I}}_G)^m$ by [@Atiyah(1961) Proposition 6.12 on page 269]. Hence it remains to show that $$\prod_{p} {\operatorname{res}}_{G}^{G_p} \colon {{\mathbb I}}_G \to \prod_{p} {\operatorname{im}}\left({\operatorname{res}}_{G}^{G_p}\colon {{\mathbb I}}_G \to {{\mathbb I}}_{G_p}\right)$$ is surjective. It suffices to show for each prime number $q$ that its localization $$\prod_{p} {\operatorname{res}}_{G}^{G_p} \colon ({{\mathbb I}}_G)_{(q)} \to \prod_{p} {\operatorname{im}}\left({\operatorname{res}}_{G}^{G_p}\colon ({{\mathbb I}}_G)_{(q)} \to ({{\mathbb I}}_{G_p})_{(q)} \right)$$ is surjective. 
Next we construct the following commutative diagram $$\begin{CD} \bigoplus_{p \not= q} ({{\mathbb I}}_{G_p})_{(q)} @> \prod_{p \not= q} {\operatorname{res}}_G^{G_p} \circ {\operatorname{ind}}_{G_p}^G >> \prod_{p \not=q} {\operatorname{im}}\left({\operatorname{res}}_G^{G_p} \colon ({{\mathbb I}}_G)_{(q)} \to ({{\mathbb I}}_{G_p})_{(q)} \right) \\ @V \bigoplus_{p \not= q} {\operatorname{ind}}_{G_p}^{G} VV @V i VV \\ ({{\mathbb I}}_G)_{(q)} @> \prod_{p} {\operatorname{res}}_G^{G_p} >> \prod_{p} {\operatorname{im}}\left({\operatorname{res}}_G^{G_p} \colon ({{\mathbb I}}_G)_{(q)} \to ({{\mathbb I}}_{G_p})_{(q)} \right) \\ @V p_1 VV @V p_2 VV \\ {\operatorname{coker}}\left(\bigoplus_{p \not= q} {\operatorname{ind}}_{G_p}^{G}\right) @> f >> {\operatorname{im}}\left({\operatorname{res}}_G^{G_q} \colon ({{\mathbb I}}_G)_{(q)} \to ({{\mathbb I}}_{G_q})_{(q)} \right) \end{CD}$$ Here $i$ is the inclusion and $p_1$ and $p_2$ are the obvious projections. Since the composition $$\begin{gathered} \bigoplus_{p \not= q} ({{\mathbb I}}_{G_p})_{(q)} \xrightarrow{ \bigoplus_{p \not= q} {\operatorname{ind}}_{G_p}^{G}} ({{\mathbb I}}_G)_{(q)} \xrightarrow{\prod_{p} {\operatorname{res}}_G^{G_p}} \prod_{p} {\operatorname{im}}\left({\operatorname{res}}_G^{G_p} \colon ({{\mathbb I}}_G)_{(q)} \to ({{\mathbb I}}_{G_p})_{(q)} \right) \\ \xrightarrow{p_2} {\operatorname{im}}\left({\operatorname{res}}_G^{G_q} \colon ({{\mathbb I}}_G)_{(q)} \to ({{\mathbb I}}_{G_q})_{(q)} \right)\end{gathered}$$ agrees with $$\bigoplus_{p \not= q} {\operatorname{res}}_{G}^{G_q} \circ {\operatorname{ind}}_{G_p}^G \colon \bigoplus_{p \not= q} ({{\mathbb I}}_{G_p})_{(q)} \to {\operatorname{im}}\left({\operatorname{res}}_G^{G_q} \colon ({{\mathbb I}}_G)_{(q)} \to ({{\mathbb I}}_{G_q})_{(q)} \right)$$ and hence is trivial by Lemma \[lem: res circ ind for different primes\], there exists a map $$f\colon {\operatorname{coker}}\left(\bigoplus_{p \not= q} {\operatorname{ind}}_{G_p}^{G}\right) \to 
{\operatorname{im}}\left({\operatorname{res}}_G^{G_q} \colon ({{\mathbb I}}_G)_{(q)} \to ({{\mathbb I}}_{G_q})_{(q)} \right)$$ such that the diagram above commutes. Since $$p_2 \circ \prod_{p} {\operatorname{res}}_G^{G_p} = {\operatorname{res}}_G^{G_q}\colon ({{\mathbb I}}_G)_{(q)} \to {\operatorname{im}}\left({\operatorname{res}}_G^{G_q} \colon ({{\mathbb I}}_G)_{(q)} \to ({{\mathbb I}}_{G_q})_{(q)} \right)$$ is by definition surjective, $f$ is surjective. The upper horizontal arrow in the commutative diagram above is surjective by Lemma \[lem: res circ ind and res have the same image\]. Now the claim follows by an easy diagram chase. \[the: computation of AI\_G/(AI\_G)\^[n+1]{}\] Let $G$ be a finite group. Let ${{\mathcal P}}(G)$ be the set of primes dividing $|G|$. 1. \[the: computation of AI\_G/(AI\_G)\^[n+1]{}: p\^a I and i\^2\] There are positive integers $a$, $b$ and $c$ such that for each prime $p$ dividing $|G|$ $$\begin{aligned} p^a \cdot {{\mathbb I}}_{G_p} & \subseteq & {{\mathbb I}}_{G_p}^2; \\ {{\mathbb I}}_{G_p}^b & \subseteq & p \cdot {{\mathbb I}}_{G_p}; \\ {{\mathbb I}}_G \cdot {{\mathbb I}}_{G_p} & \subseteq & {{\mathbb I}}_{G_p}^2; \\ ({{\mathbb I}}_{G_p})^c & \subseteq & {{\mathbb I}}_G \cdot {{\mathbb I}}_{G_p};\end{aligned}$$ 2. \[the: computation of AI\_G/(AI\_G)\^[n+1]{}: computing I/I\^n\] For a prime $p$ dividing $|G|$ let ${\operatorname{im}}({\operatorname{res}}_G^{G_p})$ be the image of ${\operatorname{res}}_G^{G_p}\colon {{\mathbb I}}_G \to {{\mathbb I}}_{G_p}$.
We obtain a sequence of pro-isomorphisms of pro-${{\mathbb Z}}$-modules $$\begin{gathered} \{{{\mathbb I}}_G/({{\mathbb I}}_G)^{n+1}\} \xrightarrow{\cong} \prod_{p \in {{\mathcal P}}(G)} \{{\operatorname{im}}({\operatorname{res}}_G^{G_p})/({{\mathbb I}}_G)^n \cdot {\operatorname{im}}({\operatorname{res}}_G^{G_p})\} \\ \xrightarrow{\cong} \prod_{p \in {{\mathcal P}}(G)} \{{\operatorname{im}}({\operatorname{res}}_G^{G_p})/({{\mathbb I}}_{G_p})^n \cdot {\operatorname{im}}({\operatorname{res}}_G^{G_p})\} \\ \xleftarrow{\cong} \prod_{p \in {{\mathcal P}}(G)} \{{\operatorname{im}}({\operatorname{res}}_G^{G_p})/({{\mathbb I}}_{G_p})^{bn}\cdot {\operatorname{im}}({\operatorname{res}}_G^{G_p})\} \\ \xrightarrow{\cong} \prod_{p \in {{\mathcal P}}(G)} \{{\operatorname{im}}({\operatorname{res}}_G^{G_p})/p^n \cdot {\operatorname{im}}({\operatorname{res}}_G^{G_p})\} .\end{gathered}$$ 3. \[the: computation of AI\_G/(AI\_G)\^[n+1]{}: computing R/I\^n\] There is an isomorphism of pro-${{\mathbb Z}}$-modules $$\{{{\mathbb Z}}\} \oplus \{{{\mathbb I}}_G/({{\mathbb I}}_G)^{n}\} \xrightarrow{\cong} \{R(G)/({{\mathbb I}}_G)^n\},$$ where $\{{{\mathbb Z}}\}$ denotes the constant inverse system ${{\mathbb Z}}\xleftarrow{{\operatorname{id}}} {{\mathbb Z}}\xleftarrow{{\operatorname{id}}} \ldots.$ The existence of the integers $a$, $b$ and $c$ for which the inclusions appearing in the statement of Theorem \[the: computation of AI\_G/(AI\_G)\^[n+1]{}\] hold follows from results of [@Atiyah(1961) Theorem 6.1 on page 265] and [@Atiyah-Tall(1969) Proposition 1.1 in Part III on page 277].\ These inclusions of assertion (1) imply that the second, third and fourth maps appearing in the statement of Theorem \[the: computation of AI\_G/(AI\_G)\^[n+1]{}\] are indeed well-defined pro-isomorphisms of pro-${{\mathbb Z}}$-modules.
The first map $$\{{{\mathbb I}}_G/({{\mathbb I}}_G)^{n+1}\} \xrightarrow{\cong} \prod_{p \in {{\mathcal P}}(G)} \{{\operatorname{im}}({\operatorname{res}}_G^{G_p})/({{\mathbb I}}_G)^n \cdot {\operatorname{im}}({\operatorname{res}}_G^{G_p})\}$$ is a well-defined pro-isomorphism of pro-${{\mathbb Z}}$-modules by Lemma \[lem: pro-exactness and exactness\] and Lemma \[lem: computing R(G) to prod\_p R(G\_p)\] provided $\left\{\left(\bigcap_{m \ge 1} ({{\mathbb I}}_G)^m\right)/{{\mathbb I}}_G^n \cdot \left(\bigcap_{m \ge 1} ({{\mathbb I}}_G)^m\right)\right\}$ is pro-trivial. The latter statement follows from Lemma \[lem: pro-exactness and exactness\] applied to the exact sequence $$\begin{gathered} 0 \to \bigcap_{m \ge 1} ({{\mathbb I}}_G)^m \to {{\mathbb I}}_G \to {{\mathbb I}}_G/\bigcap_{m \ge 1} ({{\mathbb I}}_G)^m \to 0.\end{gathered}$$\ Consider the isomorphism of finitely generated free abelian groups $${{\mathbb Z}}\oplus {{\mathbb I}}_G \xrightarrow{\cong} R(G), \quad (m,x) \mapsto x + m \cdot [{{\mathbb C}}].$$ It becomes an isomorphism of rings if we equip the source with the multiplication $(m,x) \cdot (n,y) = (mn,my + nx + xy)$. In particular ${{\mathbb I}}_G^n \cdot ({{\mathbb Z}}\oplus {{\mathbb I}}_G) \subseteq 0 \oplus {{\mathbb I}}_G^n$ holds for $n \ge 1$. This finishes the proof of Theorem \[the: computation of AI\_G/(AI\_G)\^[n+1]{}\]. Now we can give the proof of Theorem \[the: computation of K\_\*(BG) and K\^\*(BG) for finite G\]. In the sequel we abbreviate ${\operatorname{im}}({\operatorname{res}}_G^{G_p}) = {\operatorname{im}}\left({\operatorname{res}}_G^{G_p}\colon {{\mathbb I}}_G \to {{\mathbb I}}_{G_p}\right)$. Notice that ${\operatorname{im}}({\operatorname{res}}_G^{G_p}) \subseteq R(G_p)$ is a finitely generated free ${{\mathbb Z}}$-module.
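Before carrying out the limit computation, it may help to record the classical example $G = {{\mathbb Z}}/p$, which goes back to Atiyah. This is an illustration only; here $G$ is its own Sylow $p$-subgroup, so ${\operatorname{im}}({\operatorname{res}}_G^{G_p}) = {{\mathbb I}}_G$:

```latex
% For G = Z/p one has R(G) = Z[t]/(t^p - 1) and I_G = (t - 1), a free
% Z-module of rank p-1. Writing s = t - 1, the relation (1+s)^p = 1 gives
p \cdot s \;=\; - s^p - \sum_{k=2}^{p-1} \binom{p}{k} s^k \;\in\; {\mathbb I}_G^2,
\qquad
{\mathbb I}_G^p \;\subseteq\; p \cdot {\mathbb I}_G,
% so the I_G-adic and the p-adic topologies on I_G agree (a = 1 and b = p work).
% Hence
\varprojlim_{n \ge 1} R(G)/({\mathbb I}_G)^n
  \;\cong\; {\mathbb Z} \oplus {\mathbb I}_G \otimes_{\mathbb Z} {\mathbb Z}\widehat{_p}
  \;\cong\; {\mathbb Z} \oplus \left({\mathbb Z}\widehat{_p}\right)^{p-1}.
```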
We obtain from Lemma \[lem: pro-exactness and limits\] and Theorem \[the: computation of AI\_G/(AI\_G)\^[n+1]{}\] an isomorphism $${{\underleftarrow{\lim}_{n \ge 1}^{}R(G)/({{\mathbb I}}_G)^n}} ~ \cong ~ {{\mathbb Z}}\times \prod_{p \in {{\mathcal P}}(G)} {{\underleftarrow{\lim}_{n \ge 1}^{}{\operatorname{im}}({\operatorname{res}}_G^{G_p})/p^n\cdot {\operatorname{im}}({\operatorname{res}}_G^{G_p})}}.$$ Now the Atiyah-Segal Completion Theorem [@Atiyah-Segal(1969)] yields isomorphisms $$\begin{aligned} {{\underleftarrow{\lim}_{n \ge 1}^{}R(G)/({{\mathbb I}}_G)^n}} ~ \xrightarrow{\cong} ~ {{\underleftarrow{\lim}_{n \ge 1}^{}K}}^0((BG)_n) ~ \xleftarrow{\cong} ~ K^0(BG) & & \label{AS for finite groups}\end{aligned}$$ and $K^1(BG) = 0$. This implies $$\begin{aligned} K^0(BG) & \cong & {{\mathbb Z}}\oplus \bigoplus_{p \in {{\mathcal P}}(G)} {\operatorname{im}}\left({\operatorname{res}}_G^{G_p}\colon {{\mathbb I}}_G \to {{\mathbb I}}_{G_p}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Z}}\widehat{_p}; \\ K^1(BG) & \cong & 0.\end{aligned}$$ Next we show that the rank of the finitely generated free abelian group ${\operatorname{im}}\left({\operatorname{res}}_G^{G_p}\colon {{\mathbb I}}_G \to {{\mathbb I}}_{G_p}\right) \subseteq R(G_p)$ is the number $r(p)$ of conjugacy classes $(g)$ of elements $g \in G$ whose order $|g|$ is $p^d$ for some integer $d \ge 1$. This follows from the commutative diagram $\begin{CD} {{\mathbb C}}\otimes_{{{\mathbb Z}}} R(G) @>{\operatorname{res}}_G^{G_p}>> {{\mathbb C}}\otimes_{{{\mathbb Z}}} R(G_p)\\ @V{\cong}VV @VV{\cong}V\\ {\operatorname{class}}_{{{\mathbb C}}}(G) @>>{\operatorname{res}}_G^{G_p}> {\operatorname{class}}_{{{\mathbb C}}}(G_p) \end{CD}$ where ${\operatorname{class}}_{{{\mathbb C}}}(G)$ denotes the complex vector space of class functions on $G$, i.e.
functions $G \to {{\mathbb C}}$ which are constant on conjugacy classes of elements (and analogously for $G_p$), the vertical isomorphisms are given by taking the character of a complex representation, and the lower horizontal arrow is given by restricting a function $G \to {{\mathbb C}}$ to $G_p$. Recall that ${{\mathbb I}}_p(G)$ is canonically isomorphic to ${\operatorname{im}}\left({\operatorname{res}}_G^{G_p}\colon {{\mathbb I}}_G \to {{\mathbb I}}_{G_p}\right)$. One easily checks that the isomorphisms obtained from the one appearing in Theorem \[the: computation of AI\_G/(AI\_G)\^[n+1]{}\] by applying the inverse limit and from the Atiyah-Segal isomorphism are compatible with the obvious multiplicative structures. This finishes the proof of Theorem \[the: computation of K\_\*(BG) and K\^\*(BG) for finite G\].

Proof of the Main Result {#sec: Proof of the Main Result}
========================

In this section we want to prove our main Theorem \[the: main theorem\]. We want to apply the cohomological equivariant Chern character of [@Lueck(2004i)] to the equivariant cohomology theory $H^*_?\left(-;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$. This requires analyzing the contravariant functor $$\begin{aligned} {{\EuR}{FGINJ}}& \to & {{\mathbb Q}}\text{ -}{\operatorname{MOD}}, \quad H \mapsto H_G^l\left(G/H;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right). \label{functor H_G^l(G/H;(bfK_{Bor})_{(0)})}\end{aligned}$$ From and Lemma \[lem: i\^\*\_G(X;bfE) is bijective for finite X\] we conclude that the contravariant functor is naturally equivalent to the contravariant functor $$\begin{aligned} {{\EuR}{FGINJ}}& \to & {{\mathbb Q}}\text{ -}{\operatorname{MOD}}, \quad H \mapsto K^l(BH) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}.
\label{functor K^l(BH) otimes_Z Q}\end{aligned}$$ Theorem \[the: computation of K\_\*(BG) and K\^\*(BG) for finite G\] yields that the contravariant functor is trivial for odd $l$ and is naturally equivalent to the contravariant functor $$\begin{aligned} {{\EuR}{FGINJ}}&\to &{{\mathbb Q}}\text{ -}{\operatorname{MOD}}, \quad H \mapsto {{\mathbb Q}}\times \prod_{p} {{\mathbb I}}_p(H) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p} \label{splitting of K^l(B?) otimes_{bbZ} bbQ}\end{aligned}$$ for even $l$, where the factor ${{\mathbb Q}}$ is constant in $H$ and functoriality for the other factors is given by restriction. Given a contravariant functor $F \colon {{\EuR}{FGINJ}}\to {{\mathbb Q}}\text{ -}{\operatorname{MOD}}$, define the ${{\mathbb Q}}[{\operatorname{aut}}(H)]$-module $$\begin{aligned} T_H F(H) & := & \ker\left(\prod_{K \subsetneq H} F(K \hookrightarrow H) \colon F(H) ~ \to ~ \prod_{K \subsetneq H} F(K) \right). \label{def of T_HF(H)}\end{aligned}$$ Next we compute $T_H\left(K^0(BH) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\right)$. Since $T_H$ is compatible with direct products, we obtain from a canonical ${{\mathbb Q}}[{\operatorname{aut}}(H)]$-isomorphism $$\begin{aligned} T_H \left(K^0(BH) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\right) & = & T_H({{\mathbb Q}}) \times \prod_{p} T_H\left({{\mathbb I}}_p(H) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}\right). \label{splitting of T_HK^l(BH)}\end{aligned}$$ Since ${{\mathbb Q}}$ is the constant functor, we get $$\begin{aligned} T_H({{\mathbb Q}}) ~ = ~ \left\{\begin{array}{lll} 0& & \text{ if } H \not= \{1\}; \\ {{\mathbb Q}}& & \text{ if } H = \{1\}. \end{array}\right. \label{computation of T_H(qq)}\end{aligned}$$ Fix a prime number $p$.
Since for any finite group $H$ the map given by restriction to finite cyclic subgroups $$R(H) \to \prod_{\substack{C \subseteq H\\C \text{ cyclic}}} R(C)$$ is injective, we conclude \[lem: T\_H(I\_p(H) otimes\_Z Q\_p) for H non finite cyclic p-group\] For a finite group $H$ $$T_H\left({{\mathbb I}}_p(H)\right) ~ = ~ 0,$$ unless $H$ is a non-trivial cyclic $p$-group. Let $C$ be a non-trivial finite cyclic $p$-group. Then we get $$\begin{aligned} T_C \left({{\mathbb I}}_p(C)\right) & = & \ker\left({\operatorname{res}}_C^{C'} \colon R(C) \to R(C')\right), \label{computation of T_C (bbI_p(C)) for C Z/p^k}\end{aligned}$$ where $C' \subseteq C$ is the unique cyclic subgroup of index $p$ in $C$. Recall that taking the character of a rational representation of a finite group $H$ yields an isomorphism $$\chi \colon R_{{{\mathbb Q}}}(H) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\xrightarrow{\cong} {\operatorname{class}}_{{{\mathbb Q}}}(H),$$ where $R_{{{\mathbb Q}}}(H)$ is the rational representation ring of $H$ and ${\operatorname{class}}_{{{\mathbb Q}}}(H)$ is the rational vector space of functions $f\colon H \to {{\mathbb Q}}$ for which $f(g_1) = f(g_2)$ holds if the cyclic subgroups generated by $g_1$ and $g_2$ are conjugate in $H$ (see [@Serre(1977) page 68 and Theorem 29 on page 102]). Hence there is an idempotent $\theta_C \in R_{{{\mathbb Q}}}(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ which is uniquely determined by the property that its character sends a generator of $C$ to $1$ and all other elements to $0$. Denote its image under the change of coefficients map $R_{{{\mathbb Q}}}(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\to R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ also by $\theta_C$.
Let $ \theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p} \subseteq R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}$ be the image of the idempotent endomorphism $R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p} \to R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}$ given by multiplication with $\theta_C$. \[lem: T\_C(I\_p(C) otimes\_Z Q\_p) for C a finite cyclic p-group\] For every non-trivial cyclic $p$-group $C$ the inclusion induces a ${{\mathbb Q}}[{\operatorname{aut}}(C)]$-isomorphism $$\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \xrightarrow{\cong} ~ T_C \left({{\mathbb I}}_p(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\right).$$ Since the map ${\operatorname{res}}_C^{C'} \colon R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\to R(C') \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ sends $\theta_C$ to zero, $\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ is contained in $\ker\left({\operatorname{res}}_C^{C'} \colon R(C) \to R(C')\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$. For $x \in \ker\left({\operatorname{res}}_C^{C'} \colon R(C) \to R(C')\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ one gets $\theta_C \cdot x - x = 0$ by the calculation appearing in the proof of [@Lueck(2002d) Lemma 3.4 (b)]. 
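The smallest case may help to make $\theta_C$ concrete. The following elementary character computation is an illustration only and is not used in the proofs:

```latex
% For C = Z/p the generators are exactly the p-1 non-trivial elements.
% In R_Q(C) \otimes_Z Q one has theta_C = [Q] - (1/p)[Q[C]], since
\chi_{[{\mathbb Q}]}(g) = 1 \text{ for all } g, \qquad
\chi_{[{\mathbb Q}[C]]}(g) = \begin{cases} p & g = 1, \\ 0 & g \neq 1, \end{cases}
\qquad\text{so}\qquad
\chi_{\theta_C}(g) = \begin{cases} 0 & g = 1, \\ 1 & g \neq 1. \end{cases}
% Since C' = {1}, the kernel of res_C^{C'} is the augmentation ideal, and
\theta_C \cdot R(C) \otimes_{\mathbb Z} {\mathbb Q}
  \;=\; \{\, x \in R(C) \otimes_{\mathbb Z} {\mathbb Q} \mid \chi_x(1) = 0 \,\}
  \;=\; {\mathbb I}_C \otimes_{\mathbb Z} {\mathbb Q}
  \;\cong\; {\mathbb Q}^{p-1},
% in agreement with Lemma [lem: T_C(I_p(C) otimes_Z Q_p) for C a finite cyclic p-group].
```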
\[lem: rational computation of H\^\*\_G(X;(bfK\_[Bor]{})\_[(0)]{})\] For every proper $G$-$CW$-complex $X$ and $n \in {{\mathbb Z}}$ there is an isomorphism, natural in $X$, $$\begin{gathered} \overline{{\operatorname{ch}}}_G^n \colon H^*_G\left(X;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right) \xrightarrow{\cong} \\ \prod_{i \in {{\mathbb Z}}} H^{2i+n}(G\backslash X;{{\mathbb Q}}) ~ \times ~ \prod_{p} ~ \prod_{(C)\in {{\mathcal C}}_p(G)} ~ \prod_{i \in {{\mathbb Z}}} H^{2i+n}_ {W_GC}(C_GC \backslash X^C;\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}),\end{gathered}$$ where ${{\mathcal C}}_p(G)$ is the set of conjugacy classes of non-trivial cyclic $p$-subgroups of $G$ and $W_GC = N_GC/C_GC$ is considered as a subgroup of ${\operatorname{aut}}(C)$ and thus acts on $\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}$. This follows from [@Lueck(2004i) Theorem 5.5 (c) and Example 5.6] using , Lemma \[lem: T\_H(I\_p(H) otimes\_Z Q\_p) for H non finite cyclic p-group\] and Lemma \[lem: T\_C(I\_p(C) otimes\_Z Q\_p) for C a finite cyclic p-group\]. For a generator $t \in C$ let ${{\mathbb C}}_t$ be the ${{\mathbb C}}$-representation with ${{\mathbb C}}$ as underlying complex vector space such that $t$ operates on ${{\mathbb C}}$ by multiplication with $\exp\left(\frac{2\pi i}{|C|}\right)$. Let ${\operatorname{Gen}}(C)$ be the set of generators of $C$. Notice that ${\operatorname{aut}}(C)$ acts in an obvious way on ${\operatorname{Gen}}(C)$ such that the ${\operatorname{aut}}(C)$-action is transitive and free, and acts on $R(C)$ by restriction. In the sequel $\chi_V$ denotes for a complex representation $V$ its character. \[lem: identifying Q\[Gen(C)\] and theta\_C cdot R(C) otimes\_Z Q)\] Let $C$ be a finite cyclic group. Then 1.
\[lem: identifying Q\[Gen(C)\] and theta\_C cdot R(C) otimes\_Z Q): C\] The map $$v(C) \colon \theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb C}}~ \xrightarrow{\cong} ~ \prod_{{\operatorname{Gen}}(C)} {{\mathbb C}}, \quad [V] \mapsto \left(\chi_V(t)\right)_{t \in {\operatorname{Gen}}(C)}$$ is a ${{\mathbb C}}[{\operatorname{aut}}(C)]$-isomorphism if ${\operatorname{aut}}(C)$ acts on the target by permuting the factors. The map $v(C)$ is compatible with the ring structure on the source induced by the tensor product of representations and the product ring structure on the target; 2. \[lem: identifying Q\[Gen(C)\] and theta\_C cdot R(C) otimes\_Z Q): Q\] There is an isomorphism of ${{\mathbb Q}}[{\operatorname{aut}}(C)]$-modules $$u(C) \colon {{\mathbb Q}}[{\operatorname{Gen}}(C)] ~ \xrightarrow{\cong} ~ \theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}.$$ The map $$R(C) \otimes_{{{\mathbb Z}}} {{\mathbb C}}\xrightarrow{\cong} \prod_{g \in C} {{\mathbb C}}, \quad [V] \mapsto (\chi_V(g))_{g \in C}$$ is an isomorphism of rings. One easily checks that it is compatible with the ${\operatorname{aut}}(C)$ actions. Now the assertion follows from the fact that the character of $\theta_C$ sends a generator of $C$ to $1$ and any other element of $C$ to $0$.\ Obviously ${{\mathbb Q}}[{\operatorname{Gen}}(C)]$ is ${{\mathbb Q}}[{\operatorname{aut}}(C)]$-isomorphic to the regular representation ${{\mathbb Q}}[{\operatorname{aut}}(C)]$ since ${\operatorname{Gen}}(C)$ is a transitive free ${\operatorname{aut}}(C)$-set. It remains to show that $\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ is ${{\mathbb Q}}[{\operatorname{aut}}(C)]$-isomorphic to the regular representation ${{\mathbb Q}}[{\operatorname{aut}}(C)]$. By character theory it suffices to show that $\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb C}}$ is ${{\mathbb C}}[{\operatorname{aut}}(C)]$-isomorphic to the regular representation ${{\mathbb C}}[{\operatorname{aut}}(C)]$. 
This follows from assertion (1). \[lem: refined non-multiplicative rational computation of H\^\*\_G(X;(bfK\_[Bor]{})\_[(0)]{})\] For every proper $G$-$CW$-complex $X$ and $n \in {{\mathbb Z}}$ there is an isomorphism, natural in $X$, $$\begin{gathered} \overline{\overline{{\operatorname{ch}}}}_G^n \colon H^*_G\left(X;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right) \xrightarrow{\cong} \\ \prod_{i \in {{\mathbb Z}}} H^{2i+n}(G\backslash X;{{\mathbb Q}}) \times \prod_{p} \prod_{(g) \in {\operatorname{con}}_p(G)} \prod_{i \in {{\mathbb Z}}} H^{2i+n}(C_G\langle g \rangle\backslash X^{\langle g \rangle};{{\mathbb Q}}\widehat{_p}).\end{gathered}$$ Fix a prime $p$. Let $C$ be a cyclic subgroup of $G$ of order $p^d$ for some integer $d \ge 1$. The obvious $N_GC$-action on $C$ given by conjugation induces an embedding of groups $W_GC \to {\operatorname{aut}}(C)$. The obvious action of ${\operatorname{aut}}(C)$ on ${\operatorname{Gen}}(C)$ is free and transitive. Thus we obtain an isomorphism of ${{\mathbb Q}}\widehat{_p}[W_GC]$-modules $${{\mathbb Q}}\widehat{_p}[{\operatorname{Gen}}(C)] \cong \prod_{W_GC\backslash {\operatorname{Gen}}(C)} {{\mathbb Q}}\widehat{_p}[W_GC].$$ This induces a natural isomorphism $$H^k_{W_GC}(C_GC\backslash X^C;{{\mathbb Q}}\widehat{_p}[{\operatorname{Gen}}(C)]) ~ \xrightarrow{\cong} \prod_{W_GC\backslash {\operatorname{Gen}}(C)} H^k(C_GC\backslash X^C;{{\mathbb Q}}\widehat{_p}),$$ which comes from the adjunction $(i^*,i_!)$ of the restriction functor $i^*$ and the coinduction functor $i_!$ for the ring homomorphism $i \colon {{\mathbb Q}}\widehat{_p} \to {{\mathbb Q}}\widehat{_p}[W_GC]$ and the obvious identification $i_!({{\mathbb Q}}\widehat{_p}) = {{\mathbb Q}}\widehat{_p}[W_GC]$.
There is an obvious bijection between the sets $$\coprod_{(C) \in {{\mathcal C}}_p(G)} W_GC\backslash{\operatorname{Gen}}(C) \cong {\operatorname{con}}_p(G).$$ Now the claim follows from Lemma \[lem: rational computation of H\^\*\_G(X;(bfK\_[Bor]{})\_[(0)]{})\] and Lemma \[lem: identifying Q\[Gen(C)\] and theta\_C cdot R(C) otimes\_Z Q)\]. \[the: rational computation of K\^\*(EG times\_GX)\] For every finite proper $G$-$CW$-complex $X$ and $n \in {{\mathbb Z}}$ there is a natural isomorphism $$\begin{gathered} \overline{{\operatorname{ch}}}_G^n \colon K^n(EG \times_G X) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\\ \xrightarrow{\cong} ~ \prod_{i \in {{\mathbb Z}}} H^{2i+n}(G\backslash X;{{\mathbb Q}}) \times \prod_{p} \prod_{(g) \in {\operatorname{con}}_p(G)} \prod_{i \in {{\mathbb Z}}} H^{2i+n}(C_G\langle g \rangle\backslash X^{\langle g \rangle};{{\mathbb Q}}\widehat{_p}).\end{gathered}$$ This follows from Lemma \[lem: i\^\*\_G(X;bfE) is bijective for finite X\] and Lemma \[lem: refined non-multiplicative rational computation of H\^\*\_G(X;(bfK\_[Bor]{})\_[(0)]{})\]. \[lem: comparison of G backslash Y and BG\] Let $Y \not= \emptyset$ be a proper $G$-$CW$-complex such that $\widetilde{H}_p(Y;{{\mathbb Q}})$ vanishes for all $p$. Let $f \colon Y \to \underline{E}G$ be a $G$-map.
Then $G\backslash f\colon G\backslash Y \to G\backslash \underline{E}G$ induces for all $k$ isomorphisms $$\begin{aligned} H_k(G\backslash f;{{\mathbb Q}}) \colon H_k(G\backslash Y;{{\mathbb Q}}) & \xrightarrow{\cong} & H_k(G\backslash \underline{E}G;{{\mathbb Q}}); \\ H^k(G\backslash f;{{\mathbb Q}}) \colon H^k(G\backslash \underline{E}G;{{\mathbb Q}}) & \xrightarrow{\cong} & H^k(G\backslash Y;{{\mathbb Q}}); \\ H^k(G\backslash f;{{\mathbb C}}) \colon H^k(G\backslash \underline{E}G;{{\mathbb C}}) & \xrightarrow{\cong} & H^k(G\backslash Y;{{\mathbb C}}); \\ H^k(G\backslash f;{{\mathbb Q}}\widehat{_p}) \colon H^k(G\backslash \underline{E}G;{{\mathbb Q}}\widehat{_p}) & \xrightarrow{\cong} & H^k(G\backslash Y;{{\mathbb Q}}\widehat{_p}); \\ H^k(G\backslash f;{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}}) \colon H^k(G\backslash \underline{E}G;{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}}) & \xrightarrow{\cong} & H^k(G\backslash Y;{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}}).\end{aligned}$$ The map $C_*(f) \otimes_{{{\mathbb Z}}} {\operatorname{id}}_{{{\mathbb Q}}} \colon C_*(Y) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\to C_*(\underline{E}G) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ is a ${{\mathbb Q}}G$-chain map of projective ${{\mathbb Q}}G$-chain complexes and induces an isomorphism on homology, since both $Y$ and $\underline{E}G$ have the rational homology of a point. Hence it is a ${{\mathbb Q}}G$-chain homotopy equivalence. This implies that $C_*(f) \otimes_{{{\mathbb Q}}G} M$ and $\hom_{{{\mathbb Q}}G}(C_*(f),M)$ are chain homotopy equivalences and induce isomorphisms on homology and cohomology respectively for every ${{\mathbb Q}}$-module $M$. Now we can give the proof of Theorem \[the: main theorem\].
We conclude from Lemma \[lem: comparison of G backslash Y and BG\] that for any $(g) \in {\operatorname{con}}_p(G)$ the up to $C_G\langle g \rangle$-homotopy unique $C_G\langle g \rangle$-map $f_g \colon EC_G\langle g \rangle \to \underline{E}C_G\langle g \rangle$ and the up to $G$-homotopy unique $G$-map $f \colon EG \to \underline{E}G$ induce isomorphisms $$\begin{aligned} H^k(G\backslash f;{{\mathbb Q}}) \colon H^k(G \backslash \underline{E}G;{{\mathbb Q}}) & \xrightarrow{\cong} & H^k(BG;{{\mathbb Q}}); \label{identifying cohomology of underline{B}G and BG} \\ \hspace{-5mm} H^k(C_G\langle g \rangle \backslash f_g;{{\mathbb Q}}\widehat{_p}) \colon H^k(C_G\langle g \rangle \backslash \underline{E}C_G\langle g \rangle;{{\mathbb Q}}\widehat{_p}) & \xrightarrow{\cong} & H^k(BC_G\langle g \rangle;{{\mathbb Q}}\widehat{_p}). \label{identifying cohomology of underline{B}C_G langle g rangle g and BC_Glangle g rangle}\end{aligned}$$ Now apply Theorem \[the: rational computation of K\^\*(EG times\_GX)\] to $X = \underline{E}G$ and use these two isomorphisms together with the fact that $\underline{E}G^{\langle g \rangle}$ is a model for $\underline{E}C_G\langle g \rangle$.

Multiplicative Structures {#sec: Multiplicative Structures}
=========================

In this section we want to deal with multiplicative structures and prove Theorem \[the: Multiplicative structure\]. [**(Ring structures and multiplicative structures).**]{} \[rem: ring spectra and Borel cohomology\] *Suppose that the $\Omega$-spectrum ${\ensuremath{\mathbf{E}}}$ comes with the structure of a ring spectrum $\mu \colon {\ensuremath{\mathbf{E}}}\wedge {\ensuremath{\mathbf{E}}}\to {\ensuremath{\mathbf{E}}}$. It induces a multiplicative structure on the (non-equivariant) cohomology theory $H^*(-;{\ensuremath{\mathbf{E}}})$ associated to ${\ensuremath{\mathbf{E}}}$. Thus the equivariant cohomology theory given by the equivariant Borel cohomology $H^*_?(E?
\times_?-;{\ensuremath{\mathbf{E}}})$ associated to ${\ensuremath{\mathbf{E}}}$ inherits a multiplicative structure in the sense of [@Lueck(2004i) Section 6].* If the contravariant ${{\EuR}{GROUPOIDS}}$-$\Omega$-spectrum ${\ensuremath{\mathbf{F}}}$ comes with a ring structure of contravariant ${{\EuR}{GROUPOIDS}}$-$\Omega$-spectra $\mu \colon {\ensuremath{\mathbf{F}}}\wedge {\ensuremath{\mathbf{F}}}\to {\ensuremath{\mathbf{F}}}$, then the associated equivariant cohomology theory $H^*_?(-;{\ensuremath{\mathbf{F}}})$ inherits a multiplicative structure. A ring structure on the $\Omega$-spectrum ${\ensuremath{\mathbf{E}}}$ induces a ring structure of contravariant ${{\EuR}{GROUPOIDS}}$-$\Omega$-spectra on ${\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}$. The induced multiplicative structure on $H^*_?(-;{\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}})$ and the one on $H^*_?(E? \times_?-;{\ensuremath{\mathbf{E}}})$ are compatible with the natural identification. A ring structure on the $\Omega$-spectrum ${\ensuremath{\mathbf{E}}}$ induces in a natural way a ring structure on its rationalization ${\ensuremath{\mathbf{R}}\ensuremath{\mathbf{a}}\ensuremath{\mathbf{t}}}({\ensuremath{\mathbf{E}}})$. Thus a ring structure on the contravariant ${{\EuR}{GROUPOIDS}}$-$\Omega$-spectrum ${\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}$ induces a ring structure on the contravariant ${{\EuR}{GROUPOIDS}}$-$\Omega$-spectrum $\left({\ensuremath{\mathbf{E}}}_{{\operatorname{Bor}}}\right)_{(0)}$. The natural transformation of equivariant cohomology theories appearing in is compatible with the induced multiplicative structures. In this discussion we are rather sloppy concerning the notion of a smash product. Since we are not dealing with higher structures and just want to take homotopy groups in the end, one can either use the classical approach in the sense of Adams or the more advanced new constructions such as symmetric spectra.
\[lem: multiplicative structures and Chern character\] The isomorphism appearing in Lemma \[lem: rational computation of H\^\*\_G(X;(bfK\_[Bor]{})\_[(0)]{})\] is compatible with the multiplicative structure on the source and the one on the target given by $$(a,u_{p,(C)}) \cdot (b,v_{p,(C)}) ~ = ~ (a \cdot b,a \cdot v_{p,(C)} + b \cdot u_{p,(C)} + u_{p,(C)} \cdot v_{p,(C)}),$$ for $$\begin{aligned} (C) & \in & {{\mathcal C}}_p(G); \\ a,b & \in & \prod_{i \in {{\mathbb Z}}} H^{2i+*}(G\backslash X;{{\mathbb Q}}); \\ u_{p,(C)}, v_{p,(C)} & \in & H^{*}_ {W_GC}(C_GC \backslash X^C;\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}),\end{aligned}$$ and the structures of a graded commutative ring on $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(G\backslash X;{{\mathbb Q}})$ and $\prod_{i \in {{\mathbb Z}}}H^{2i+*}_ {W_GC}(C_GC \backslash X^C;\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p})$ coming from the cup-product and the multiplicative structure on $\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}$ and the obvious $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(G\backslash X;{{\mathbb Q}})$-module structure on $\prod_{i \in {{\mathbb Z}}}H^{2i+*}_ {W_GC}(C_GC \backslash X^C;\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p})$ coming from the canonical maps $C_GC\backslash X^C\to G\backslash X$ and ${{\mathbb Q}}\to {{\mathbb Q}}\widehat{_p}$. The proof consists of a straightforward calculation which is essentially based on the following ingredients. In the sequel we use the notation of [@Lueck(2004i)]. The equivariant Chern character of [@Lueck(2004i) Theorem 6.4] is compatible with the multiplicative structures.
In Theorem \[the: computation of K\_\*(BG) and K\^\*(BG) for finite G\] we have analyzed for every finite group $H$ the multiplicative structure on $$K^0(BH) ~ \cong ~ {{\mathbb Z}}\times \prod_{p} {{\mathbb I}}_p(H) \otimes_{{{\mathbb Z}}} {{\mathbb Z}}\widehat{_p}.$$ Thus the Bredon cohomology group appearing in the target of the Chern character whose source is $H_G^*\left(X;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$ can be identified with $$\left(\prod_{i \in {{\mathbb Z}}} H^{* + 2i}(G\backslash X;{{\mathbb Q}})\right) \times \prod_p ~ \prod_{i \in {{\mathbb Z}}} ~ H^{2i +*}_{{{\mathbb Q}}\widehat{_p}{{\EuR}{Sub}}(G;{{\mathcal F}})}(X;{{\mathbb I}}_p(?) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p})$$ with respect to the multiplicative structure defined analogously to the one appearing in Theorem \[the: computation of K\_\*(BG) and K\^\*(BG) for finite G\], taking the obvious multiplicative structures on the factors and the module structures of the factor for $p$ over $\prod_{i \in {{\mathbb Z}}} H^{* + 2i}(G\backslash X;{{\mathbb Q}})$ into account. Fix a prime $p$. The ${{\mathbb Q}}\widehat{_p}[{\operatorname{aut}}(C)]$-map $R(C)\otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p} \to \theta_C \cdot R(C)\otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}$ given by multiplication with the idempotent $\theta_C$ is compatible with the multiplicative structures. Using the identification of Lemma \[lem: T\_C(I\_p(C) otimes\_Z Q\_p) for C a finite cyclic p-group\] we obtain for each cyclic $p$-group $C$ a retraction $$\rho_C \colon {{\mathbb I}}_p(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p} ~ \to ~ T_C \left({{\mathbb I}}_p(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}\right)$$ which is compatible with the multiplicative structures. Recall that $T_K\left({{\mathbb I}}_p(K) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}\right)$ is trivial unless $K$ is a non-trivial cyclic $p$-group.
Use these retractions as the maps $\rho_K$ in the definition of the isomorphism $\nu$ of ${{\mathbb Q}}\widehat{_p}{{\EuR}{Sub}}(G;{{\mathcal F}})$-modules for $M = {{\mathbb I}}_p(?) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}$ in [@Lueck(2004i) (5.1)]. Then we obtain using the identification of Lemma \[lem: T\_C(I\_p(C) otimes\_Z Q\_p) for C a finite cyclic p-group\] an isomorphism of ${{\mathbb Q}}\widehat{_p}{{\EuR}{Sub}}(G;{{\mathcal F}})$-modules $${{\mathbb I}}_p(?) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p} ~ \xrightarrow{\cong} ~ \prod_{(C) \in {{\mathcal C}}_p(G)} i(C)_!\left(\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}\right),$$ which is compatible with the obvious multiplicative structure on the source and the one on the target given by the product of the multiplicative structures on the factors $i(C)_!\left(\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}\right)$ coming from the obvious one on $\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}$. Using the adjunction $(i(C)^*,i(C)_!)$ this isomorphism induces a ${{\mathbb Q}}\widehat{_p}$-isomorphism compatible with the multiplicative structures $$H^n_{{{\mathbb Q}}\widehat{_p}{{\EuR}{Sub}}(G;{{\mathcal F}})}(X;{{\mathbb I}}_p(?)
\otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}) ~ \xrightarrow{\cong} ~ \prod_{(C) \in {{\mathcal C}}_p(G)} H^n_{W_GC}(C_GC\backslash X;\theta_C \cdot R(C) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\widehat{_p}).$$ Because the isomorphism in Lemma \[lem: identifying Q\[Gen(C)\] and theta\_C cdot R(C) otimes\_Z Q)\] is compatible with the multiplicative structures, it implies together with Lemma \[lem: multiplicative structures and Chern character\] \[lem: complex computation of H\^\*\_G(X;(bfK\_[Bor]{})\_[(0)]{})\] For every proper $G$-$CW$-complex $X$ and $n \in {{\mathbb Z}}$ there is a ${{\mathbb C}}$-isomorphism, natural in $X$, $$\begin{gathered} \overline{{\operatorname{ch}}}^n_{G,{{\mathbb C}}} \colon H^*_G(X;({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}})_{(0)})\otimes_{{{\mathbb Q}}} {{\mathbb C}}\xrightarrow{\cong} \\ \left(\prod_{i \in {{\mathbb Z}}} H^{2i+n}(G\backslash X;{{\mathbb C}})\right) \times \prod_{p} ~ \prod_{(g) \in {\operatorname{con}}_p(G)} \left(\prod_{i \in {{\mathbb Z}}} H^{2i+n}(C_G\langle g \rangle\backslash X^{\langle g \rangle};{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}})\right),\end{gathered}$$ which is compatible with the multiplicative structure on the target given by $$\left(a, u_{p,(g)}\right) \cdot \left(b, v_{p,(g)}\right) ~ = ~ \left(a\cdot b, (a \cdot v_{p,(g)} + b \cdot u_{p,(g)} + u_{p,(g)} \cdot v_{p,(g)})\right)$$ for $$\begin{aligned} (g) & \in & {\operatorname{con}}_p(G); \\ a,b & \in & \prod_{i \in {{\mathbb Z}}} H^{2i+*}(G\backslash X;{{\mathbb C}}), \\ u_{p,(g)}, v_{p,(g)} & \in & \prod_{i \in {{\mathbb Z}}} H^{2i+*}(C_G\langle g \rangle\backslash X^{\langle g \rangle};{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}}),\end{aligned}$$ and the structures of a graded commutative ring on $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(G\backslash X;{{\mathbb C}})$ and $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(C_G\langle g \rangle\backslash X^{\langle g \rangle};{{\mathbb Q}}\widehat{_p} 
\otimes_{{{\mathbb Q}}} {{\mathbb C}})$ coming from the cup-product and the obvious $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(G\backslash X;{{\mathbb C}})$-module structure on $\prod_{i \in {{\mathbb Z}}} H^{2i+*}(C_G\langle g \rangle\backslash X^{\langle g \rangle};{{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}})$ coming from the canonical map $BC_G\langle g \rangle \to BG$. Now we are ready to prove Theorem \[the: Multiplicative structure\] The isomorphism appearing in Lemma \[lem: i\^\*\_G(X;bfE) is bijective for finite X\] is compatible with the multiplicative structures. This is also true for the versions of isomorphisms and , where the coefficients ${{\mathbb Q}}$ and ${{\mathbb Q}}\widehat{_p}$ are replaced by ${{\mathbb C}}$ and ${{\mathbb Q}}\widehat{_p} \otimes_{{{\mathbb Q}}} {{\mathbb C}}$. Now put these together with the isomorphism appearing in Lemma \[lem: complex computation of H\^\*\_G(X;(bfK\_[Bor]{})\_[(0)]{})\]. [**(Difference between rationalization and complexification).**]{} \[rem: Difference between rationalization and complexification\] *First of all we want to emphasize that the isomorphism appearing in Theorem \[the: Multiplicative structure\] is *not* obtained from the isomorphism appearing in Theorem \[the: main theorem\] by applying $ - \otimes_{{{\mathbb Q}}} {{\mathbb C}}$ since the corresponding statement is already false for the two isomorphisms appearing in Lemma \[lem: identifying Q\[Gen(C)\] and theta\_C cdot R(C) otimes\_Z Q)\]. Moreover, the isomorphism appearing in Theorem \[the: main theorem\] is *not* compatible with the standard multiplicative structures on the source and the multiplicative structure on the target which is defined analogously to the one on the complexified target in Theorem \[the: Multiplicative structure\]. 
The reason is that the isomorphism appearing in Lemma \[lem: identifying Q\[Gen(C)\] and theta\_C cdot R(C) otimes\_Z Q)\]  cannot be chosen to be compatible with the obvious multiplicative structure on its target if we use on the source the multiplicative structure coming from the obvious identification ${{\mathbb Q}}\widehat{_p}[{\operatorname{Gen}}(C)] = \prod_{{\operatorname{Gen}}(C)} {{\mathbb Q}}\widehat{_p}$ and the product ring structure on $\prod_{{\operatorname{Gen}}(C)} {{\mathbb Q}}\widehat{_p}$.* One can easily check by hand that there is *no* ${{\mathbb Q}}\widehat{_3}[{\operatorname{aut}}({{\mathbb Z}}/3)]$-isomorphism compatible with the multiplicative structures $$\theta_{{{\mathbb Z}}/3} \cdot R({{\mathbb Z}}/3) \otimes {{\mathbb Q}}\widehat{_3} = {{\mathbb I}}_{{{\mathbb Z}}/3} \otimes {{\mathbb Q}}\widehat{_3} ~ \xrightarrow{\cong} ~ {{\mathbb Q}}\widehat{_3} \times {{\mathbb Q}}\widehat{_3}$$ if we equip the target with the ${\operatorname{aut}}({{\mathbb Z}}/3) \cong {{\mathbb Z}}/2$-action given by flipping the factors and the product ${{\mathbb Q}}$-algebra structure. The point is that ${{\mathbb Q}}\widehat{_3}$ does not contain a primitive third root of unity (in contrast to ${{\mathbb C}}$, see Lemma \[lem: identifying Q\[Gen(C)\] and theta\_C cdot R(C) otimes\_Z Q)\]  ). ** \[exa: including multiplicative structures\] *In general we can give a simple formula for the multiplicative structure only after complexifying as explained in Remark \[rem: Difference between rationalization and complexification\]. In the following special case this can be done already after rationalization. Suppose that for any non-trivial cyclic subgroup $C$ of prime power order $\widetilde{H}^n(BC_GC;{{\mathbb Q}}) = 0$ holds for all $n \in {{\mathbb Z}}$ and that $W_GC = {\operatorname{aut}}(C)$. The latter means that any automorphism of $C$ is given by conjugation with some element in $N_GC$. Suppose furthermore that there is a finite model for $\underline{E}G$. 
Then we obtain ${{\mathbb Q}}$-isomorphisms $$\begin{aligned} K^0(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & \prod_{i \in {{\mathbb Z}}} H^{2i}(BG;{{\mathbb Q}}) \times \prod_p ({{\mathbb Q}}\widehat{_p})^{r_p(G)}; \\ K^1(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & \prod_{i \in {{\mathbb Z}}} H^{2i+1}(BG;{{\mathbb Q}}),\end{aligned}$$ where $r_p(G)$ is the number of conjugacy classes of non-trivial cyclic subgroups of $p$-power order, which in this situation is the same as $|{\operatorname{con}}_p(G)|$. The isomorphisms above are compatible with the multiplicative structure on the target given by $$\begin{aligned} (a,u) \cdot (b,v) & = & (a \cup b, a_0 \cdot v + b_0 \cdot u + u \cdot v); \\ (a,u) \cdot c & = & a \cup c; \\ c \cdot d & = & c \cup d,\end{aligned}$$ for $a,b \in \prod_{i \in {{\mathbb Z}}} H^{2i}(BG;{{\mathbb Q}})$, $c,d \in \prod_{i \in {{\mathbb Z}}} H^{2i+1}(BG;{{\mathbb Q}})$ and $u,v \in \prod_p ({{\mathbb Q}}\widehat{_p})^{r_p(G)}$, where $a_0 \in {{\mathbb Q}}$ and $b_0 \in {{\mathbb Q}}$ are the components of $a$ and $b$ in $H^0(BG;{{\mathbb Q}}) = {{\mathbb Q}}\cdot 1$ and we equip $\prod_p ({{\mathbb Q}}\widehat{_p})^{r_p(G)}$ with the structure of a ${{\mathbb Q}}$-algebra coming from the product of the obvious ${{\mathbb Q}}$-algebra structures on the various factors ${{\mathbb Q}}\widehat{_p}$. 
This follows from Lemma \[lem: multiplicative structures and Chern character\] and the conclusion, drawn from the formula $\theta_C \cdot \theta_C = \theta_C$ and Lemma \[lem: identifying Q\[Gen(C)\] and theta\_C cdot R(C) otimes\_Z Q)\], that $\left(\theta_C \cdot R(C)\right)^{{\operatorname{aut}}(C)}$ is generated as a ${{\mathbb Q}}$-vector space by $\theta_C$ and hence is isomorphic to ${{\mathbb Q}}$ as a ${{\mathbb Q}}$-algebra.* If we furthermore assume that $\widetilde{H}_n(BG;{{\mathbb Q}}) = 0$ for all $n \in {{\mathbb Z}}$, the formula simplifies to $$\begin{aligned} K^0(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & {{\mathbb Q}}\times \prod_p ({{\mathbb Q}}\widehat{_p})^{r_p(G)}; \\ K^1(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & 0.\end{aligned}$$ The first isomorphism is compatible with the multiplicative structures if we put on the target the one given by $$(m,a) \cdot (n,b) ~ = ~ (mn,m\cdot b + n \cdot a + a \cdot b)$$ for $m,n \in {{\mathbb Q}}$, $a,b \in \prod_p ({{\mathbb Q}}\widehat{_p})^{r_p(G)}$ and we equip $\prod_p ({{\mathbb Q}}\widehat{_p})^{r_p(G)}$ with the structure of a ${{\mathbb Q}}$-algebra coming from the product of the obvious ${{\mathbb Q}}$-algebra structures on the various factors ${{\mathbb Q}}\widehat{_p}$. ** Weakening the Finiteness Conditions {#sec: Weakening the Finiteness Conditions} =================================== In this section we want to weaken the finiteness assumption occurring in Theorem \[the: main theorem\] and Theorem \[the: Multiplicative structure\]. A ${{\mathbb Z}}$-module $M$ is *almost trivial* if there is an element $r \in {{\mathbb Z}}, r \not=0$ such that $rm = 0$ holds for all $m \in M$. A ${{\mathbb Z}}$-module $M$ is *almost finitely generated* if $M/{\operatorname{tors}}(M)$ is a finitely generated ${{\mathbb Z}}$-module and ${\operatorname{tors}}(M)$ is almost trivial. A ${{\mathbb Z}}$-homomorphism is an *almost isomorphism* if its kernel and cokernel are almost trivial. 
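To illustrate these notions with examples of our own (for a fixed prime $p$):

```latex
\begin{aligned}
\bigoplus_{i=1}^{\infty} {{\mathbb Z}}/p
  \quad & \text{is almost trivial (take $r = p$) but not finitely generated;}\\
{{\mathbb Z}}^2 \oplus \bigoplus_{i=1}^{\infty} {{\mathbb Z}}/p
  \quad & \text{is almost finitely generated but not finitely generated;}\\
\bigoplus_{q \text{ prime}} {{\mathbb Z}}/q
  \quad & \text{is not almost finitely generated, since its torsion is not annihilated by any } r \not= 0.
\end{aligned}
```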
An almost isomorphism becomes an isomorphism after rationalization. The full subcategories of the category of ${{\mathbb Z}}$-modules given by almost trivial modules and by almost finitely generated modules are Serre-subcategories, i.e. are closed under subobjects, quotients, and extensions. In particular there is a Five-Lemma for almost isomorphisms. These notions and facts are introduced and proved in [@Lueck-Reich-Varisco(2003) Section 4]. The main result of this section is: \[the: Weakening the finiteness assumption\] The conclusions of Theorem \[the: main theorem\] and Theorem \[the: Multiplicative structure\] remain true if we replace the condition that there is a cocompact $G$-$CW$-model for the classifying space $\underline{E}G$ for proper $G$-actions by the following weaker set of conditions: There exists a $G$-$CW$-complex $X$ satisfying: 1. \[the: Weakening the finiteness assumption: condition 1\] The $G$-$CW$-complex $X$ is proper and finite dimensional. There is an upper bound on the orders of its isotropy groups. The set of conjugacy classes $(C)$ of finite cyclic subgroups $C \subseteq G$ of prime power order with $X^C \not= \emptyset$ is finite; 2. \[the: Weakening the finiteness assumption: condition 2\] For all $k \in {{\mathbb Z}}$ we have $H_k(X;{{\mathbb Z}}) \cong H_k({\{\bullet\}};{{\mathbb Z}})$; 3. \[the: Weakening the finiteness assumption: condition 3\] For any finite cyclic subgroup of prime power order $C \subseteq G$ and integer $k$ the ${{\mathbb Z}}$-module $H_k(X^C;{{\mathbb Z}})$ is almost finitely generated; 4. \[the: Weakening the finiteness assumption: condition 4\] For any finite cyclic subgroup of prime power order $C \subseteq G$ and integer $k$ the ${{\mathbb Z}}$-module $H_k(C_GC\backslash X^C;{{\mathbb Z}})$ is almost finitely generated. 
If $X$ satisfies conditions , and above, then the condition is satisfied if and only if for any finite cyclic subgroup of prime power order $C \subseteq G$ and integer $k$ the ${{\mathbb Z}}$-module $H_k(BC_GC;{{\mathbb Z}})$ is almost finitely generated. \[rem: discussing the finiteness assumptions\] *Notice that the conditions , , and in Theorem \[the: Weakening the finiteness assumption\] are satisfied if the set of conjugacy classes of finite subgroups of $G$ is finite, there is a finite dimensional model for $\underline{E}G$ and for any finite cyclic subgroup of prime power order $C \subseteq G$ and integer $k$ the ${{\mathbb Z}}$-module $H_k(BC_GC;{{\mathbb Z}})$ is almost finitely generated. *** \[rem: type of underline EG\] *Suppose that $G$ contains a torsionfree subgroup $H \subseteq G$ of finite index. If there is a finite dimensional model for $BH$, then there exists a finite dimensional model for $\underline{E}G$ (see [@Serre(1971)]). However, if there is a finite model for $BH$, this does not imply that $G$ has only finitely many conjugacy classes of finite subgroups or that there is a cocompact model for $\underline{E}G$ or that the centralizers $C_GC$ of finite cyclic subgroups are finitely generated [@Leary-Nucinkis(2003) Section 7]. *** The proof of Theorem \[the: Weakening the finiteness assumption\] needs some preparation. \[lem: almost iso induced by EG times\_G X to X/G\] Let $X$ be a proper $G$-$CW$-complex. Let ${\operatorname{pr}}\colon EG \times_G X \to G\backslash X$ be the projection. Fix an integer $n \in {{\mathbb Z}}$. 1. \[lem: almost iso induced by EG times\_G X to X/G: almost isomorphism\] Suppose that there exists for each $m \ge 0$ a positive integer $d(m)$ such that for any isotropy group $H$ of $X$ multiplication with $d(m)$ annihilates $\widetilde{H}_m(BH;{{\mathbb Z}})$. 
Then the induced map $$H_n({\operatorname{pr}};{{\mathbb Z}}) \colon H_n(EG \times_G X;{{\mathbb Z}}) \to H_n(G\backslash X;{{\mathbb Z}})$$ is an almost isomorphism for all $n \in {{\mathbb Z}}$; 2. \[lem: almost iso induced by EG times\_G X to X/G: rational isomorphism\] The induced map $$H_n({\operatorname{pr}};{{\mathbb Q}}) \colon H_n(EG \times_G X;{{\mathbb Q}}) \to H_n(G\backslash X;{{\mathbb Q}})$$ is a ${{\mathbb Q}}$-isomorphism. This is proved in [@Lueck-Reich-Varisco(2003) Lemma 8.1].\ The proof is analogous to the one of . The next result is a generalization of Lemma \[lem: i\^\*\_G(X;bfE) is bijective for finite X\] in the case ${\ensuremath{\mathbf{E}}}= {\ensuremath{\mathbf{K}}}$. \[lem: i\^\*\_G(X;bfK) is bijective for certain X\] Let $X$ be a finite dimensional proper $G$-$CW$-complex such that there is a bound on the orders of its isotropy groups. Let $J$ be the set of conjugacy classes $(C)$ of finite cyclic subgroups $C \subseteq G$ of prime power order with $X^C \not= \emptyset$. Suppose that $|J|$ is finite. Furthermore assume that $H_k(C_GC\backslash X^C;{{\mathbb Z}})$ is almost finitely generated for every $k \in {{\mathbb Z}}$ and every finite cyclic subgroup of prime power order $C \subseteq G$. Then the map $$i^n_G(X;{\ensuremath{\mathbf{K}}}) \colon H_G^n\left(X;{\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\to H_G^n\left(X;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$$ is bijective for all $n \in {{\mathbb Z}}$. In the sequel we use the notation of  [@Lueck(2004i)]. Let ${{\mathcal F}}(X)$ be the set of conjugacy classes of subgroups $H \subseteq G$ with $X^H \not= \emptyset$. Since $X$ is proper and has finite orbit type, ${{\mathcal F}}(X)$ is finite and $(H) \in {{\mathcal F}}(X)$ implies that $H$ is finite. 
Since $X$ is proper and finite dimensional, there is a spectral sequence converging to $H_G^{s+t}\left(X;{\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)$ whose $E_2$-term is $E_2^{s,t} = H^s_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;K^t(B?))$ and a spectral sequence converging to $H_G^{s+t}\left(X;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$ whose $E_2$-term is $E_2^{s,t} = H^s_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;K^t(B?)\otimes_{{{\mathbb Z}}} {{\mathbb Q}})$. Since ${{\mathbb Q}}$ is flat over ${{\mathbb Z}}$, it suffices to show that the canonical map $$H^s_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;K^t(B?))\otimes_{{{\mathbb Z}}} {{\mathbb Q}}\to H^s_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;K^t(B?)\otimes_{{{\mathbb Z}}} {{\mathbb Q}})$$ is bijective for all $s$ and $t$. We have already explained that the contravariant ${{\mathbb Z}}{{\EuR}{Sub}}(G;{{\mathcal F}}(X))$-module sending $H$ to $K^t(BH)$ is zero for odd $t$ and given for even $t$ by $$K^t(BH) ~ \cong ~ {{\mathbb Z}}\times \prod_{p} {{\mathbb I}}_p(H) \otimes_{{{\mathbb Z}}} {{\mathbb Z}}\widehat{_p}.$$ One easily checks using Lemma \[lem: T\_H(I\_p(H) otimes\_Z Q\_p) for H non finite cyclic p-group\] that for any finite group $H$ $$T_H\left(K^0(BH)\right) \cong \left\{ \begin{array}{lll} {{\mathbb Z}}& & H = \{1\}; \\ \ker\left({\operatorname{res}}_H^{H'} \colon R(H) \to R(H')\right) \otimes_{{{\mathbb Z}}} {{\mathbb Z}}\widehat{_p} & & H \text{ cyclic } p\text{-group}, \\ & & H' \subseteq H, [H:H'] = p; \\ 0& & \text{ otherwise}. \end{array}\right.$$ For every finite cyclic subgroup $K \subseteq G$ of order $p^r$ for some prime $p$ and integer $r \ge 1$ choose a retraction $r'(K) \colon K^0(BK) \to T_K( K^0(BK))$ of the ${{\mathbb Z}}$-homomorphism $j(K)\colon T_K( K^0(BK)) \to K^0(BK)$ given by inclusion. 
Such $r'(K)$ exists since $R(K')$ and hence the image of ${\operatorname{res}}_K^{K'} \colon R(K) \to R(K')$ is a finitely generated free ${{\mathbb Z}}$-module, which implies that $\ker\left({\operatorname{res}}_K^{K'} \colon R(K) \to R(K')\right)$ is a direct summand of the finitely generated free ${{\mathbb Z}}$-module ${{\mathbb I}}_p(K) = {{\mathbb I}}(K)$. Since $W_GK$ is finite, we can define a ${{\mathbb Z}}[W_GK]$-map $$r(K) \colon K^0(BK) \to T_K( K^0(BK)), \quad x \mapsto \sum_{g \in W_GK} g \cdot r'(K)(g^{-1} \cdot x).$$ Then $r(K) \circ j(K) = |W_GK| \cdot {\operatorname{id}}$. For $K = \{1\}$ let $r(K) \colon K^0(BK) \xrightarrow{\cong} {{\mathbb Z}}$ be the obvious isomorphism which is for trivial reasons a $W_GK$-map. Define a map of contravariant ${{\EuR}{Sub}}(G;{{\mathcal F}}(X))$-modules $$\nu \colon K^0(B?) \to \prod_{(K) \in J} i(K)_!T_K(K^0(B?))$$ by requiring that the composite of $\nu$ with the projection onto the factor belonging to $(K) \in J$ is the adjoint for the pair $(i(K)^*,i(K)_!)$ of the $W_GK$-map $r(K)$. Analogously to the proof of [@Lueck(2004i) Theorem 2.14 (b)] one shows that $\nu(H)$ is injective for all objects $H \in {{\EuR}{Sub}}(G;{{\mathcal F}}(X))$. Here we use the fact that $r(K) \circ j(K)$ is injective for all $K$ with $(K) \in J$ since $r(K) \circ j(K) = |W_GK| \cdot {\operatorname{id}}$ and $ K^0(BK)$ and hence $T_K( K^0(BK))$ is torsionfree. Then one constructs analogously to the proof of [@Lueck(2004i) Theorem 5.2] for each object $(H) \in {{\EuR}{Sub}}(G;{{\mathcal F}}(X))$ a ${{\mathbb Z}}$-homomorphism $$\mu(H) \colon \left(\prod_{(K) \in J} i(K)_!T_K(K^0(B?))\right)(H) ~ \to ~ K^0(BH)$$ and checks that $\nu(H) \circ \mu(H)$ can be written as a matrix $A(H)$ which has upper triangular form and has maps of the shape $r \cdot{\operatorname{id}}$ as diagonal entries, where each $r$ divides a certain integer $M(|H|)$ depending only on the order $|H|$. 
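The identity $r(K) \circ j(K) = |W_GK| \cdot {\operatorname{id}}$ invoked above is immediate from the averaging formula: since $T_K(K^0(BK))$ is a ${{\mathbb Z}}[W_GK]$-submodule of $K^0(BK)$ and $r'(K)$ restricts to the identity on it, every $x \in T_K(K^0(BK))$ satisfies

```latex
r(K)\bigl(j(K)(x)\bigr)
  \;=\; \sum_{g \in W_GK} g \cdot r'(K)\bigl(g^{-1} \cdot x\bigr)
  \;=\; \sum_{g \in W_GK} g \cdot g^{-1} \cdot x
  \;=\; |W_GK| \cdot x .
```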
There is an integer $N(|H|)$ depending only on the order $|H|$ such that the size of the square matrix $A(H)$ is bounded by $N(|H|)$. The existence of the numbers $M(|H|)$ and $N(|H|)$ follows from the finiteness of $J$. Hence for each object $H \in {{\EuR}{Sub}}(G;{{\mathcal F}}(X))$ the cokernel of $\nu(H)$ is annihilated by $M(|H|)^{N(|H|)}$. Since there is an upper bound on the orders of the isotropy groups of $X$, we can find an integer $L$ such that for each object $H \in {{\EuR}{Sub}}(G;{{\mathcal F}}(X))$ the cokernel of $\nu(H)$ is annihilated by $L$. The short exact sequence of ${{\mathbb Z}}{{\EuR}{Sub}}(G;{{\mathcal F}}(X))$-modules $$0 \to K^0(B?) \xrightarrow{\nu} \prod_{(K) \in J} i(K)_!T_K(K^0(B?)) \xrightarrow{{\operatorname{pr}}} {\operatorname{coker}}(\nu) \to 0$$ induces a long exact sequence $$\begin{gathered} \ldots \to H^{s-1}_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;{\operatorname{coker}}(\nu)) \to H^{s}_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;K^0(B?)) \\\to H^{s}_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;\prod_{(K) \in J} i(K)_!T_K(K^0(B?))) \to H^{s}_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;{\operatorname{coker}}(\nu)) \to \ldots\end{gathered}$$ Since multiplication with $L$ induces the zero map ${\operatorname{coker}}(\nu) \to {\operatorname{coker}}(\nu)$, multiplication with $L$ also induces the zero map on $H^{s}_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;{\operatorname{coker}}(\nu))$. Hence $H^{s}_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;{\operatorname{coker}}(\nu)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ is trivial. 
Since $J$ is finite and $- \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ is an exact functor which commutes with finite products, we obtain from the adjunction $(i(K)^*,i(K)_!)$ a natural isomorphism $$H^{s}_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;K^0(B?)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \xrightarrow{\cong} ~ \prod_{(K) \in J} H^s_{W_GK}(C_GK\backslash X^K;T_KK^0(B?)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}.$$ Similarly we get an isomorphism $$H^{s}_{{{\EuR}{Sub}}(G;{{\mathcal F}}(X))}(X;K^0(B?)\otimes_{{{\mathbb Z}}} {{\mathbb Q}}) ~ \xrightarrow{\cong} ~ \prod_{(K) \in J} H^s_{W_GK}(C_GK\backslash X^K;T_KK^0(B?) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}) .$$ Hence it remains to show for each $(K) \in J$ and $s \ge 0$ that the canonical map $$H^s_{W_GK}(C_GK\backslash X^K;T_KK^0(B?)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \to ~ H^s_{W_GK}(C_GK\backslash X^K;T_KK^0(B?) \otimes_{{{\mathbb Z}}} {{\mathbb Q}})$$ is bijective. Abbreviate $C_* = C_*(C_GK\backslash X^K)$, $L = W_GK$ and $M = T_KK^0(B?)$. Then $L$ is a finite group, $C_*$ is a ${{\mathbb Z}}L$-chain complex which is free over ${{\mathbb Z}}$ and for which there exists an integer $n \ge 1$ such that ${\operatorname{tors}}(H_s(C_*))$ is annihilated by $n$ and $H_s(C_*)/{\operatorname{tors}}(H_s(C_*))$ is a finitely generated ${{\mathbb Z}}$-module for all $s$. It remains to show for the ${{\mathbb Z}}L$-module $M$ that the canonical map $$H^s(\hom_{{{\mathbb Z}}L}(C_*,M)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \to ~ H^s(\hom_{{{\mathbb Z}}L}(C_*,M \otimes_{{{\mathbb Z}}} {{\mathbb Q}}))$$ is bijective for all $s$. Let $i \colon \{1\} \to L$ be the inclusion. Since $L$ is finite, induction $i_*$ and coinduction $i_!$ agree. Hence we get natural ${{\mathbb Z}}L$-chain maps $a_* \colon C_* \to i_*i^*C_*$ and $b_*\colon i_*i^*C_* \to C_*$ such that $b_* \circ a_*$ is multiplication with $|L|$. 
They are explicitly given by $$\begin{aligned} a_s \colon C_s \to {{\mathbb Z}}L \otimes_{{{\mathbb Z}}} C_s, & \quad & x \mapsto \sum_{l \in L} l \otimes l^{-1} \cdot x; \\ b_s \colon {{\mathbb Z}}L \otimes_{{{\mathbb Z}}} C_s \to C_s, & \quad & l \otimes y \mapsto l \cdot y.\end{aligned}$$ Hence we obtain a commutative diagram $$\begin{CD} H^s(\hom_{{{\mathbb Z}}L}(C_*,M)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}@>>> H^s(\hom_{{{\mathbb Z}}L}(C_*,M \otimes_{{{\mathbb Z}}} {{\mathbb Q}})) \\ @V H^s(b_*) \otimes_{{{\mathbb Z}}} {\operatorname{id}}_{{{\mathbb Q}}} VV @V H^s(b_*) VV \\ H^s(\hom_{{{\mathbb Z}}L}({{\mathbb Z}}L \otimes_{{{\mathbb Z}}} C_*,M)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}@>>> H^s(\hom_{{{\mathbb Z}}L}({{\mathbb Z}}L \otimes_{{{\mathbb Z}}} C_*,M\otimes_{{{\mathbb Z}}} {{\mathbb Q}})) \\ @V H^s(a_*) \otimes_{{{\mathbb Z}}} {\operatorname{id}}_{{{\mathbb Q}}} VV @V H^s(a_*) VV \\ H^s(\hom_{{{\mathbb Z}}L}(C_*,M)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}@>>> H^s(\hom_{{{\mathbb Z}}L}(C_*,M \otimes_{{{\mathbb Z}}} {{\mathbb Q}})) \end{CD}$$ where the horizontal arrows are the canonical maps and the composite of the two left vertical maps and the composite of the two right vertical maps are isomorphisms. Hence it suffices to show that the middle horizontal arrow is an isomorphism. It can be identified with the canonical map $$H^s(\hom_{{{\mathbb Z}}}( C_*,M)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \to ~ H^s(\hom_{{{\mathbb Z}}}(C_*,M \otimes_{{{\mathbb Z}}} {{\mathbb Q}})).$$ Notice for the sequel that $C_*$ is ${{\mathbb Z}}$-free. 
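From the explicit formulas one sees directly that $b_s(a_s(x)) = \sum_{l \in L} l \cdot (l^{-1} \cdot x) = |L| \cdot x$. As a sanity check (a toy example of ours, not taken from the text), the following sketch verifies this identity for $L = {{\mathbb Z}}/3$ acting on $C_s = {{\mathbb Z}}^3$ by cyclic permutation of coordinates:

```python
# Toy check of b_s ∘ a_s = |L|·id for L = Z/3 acting on Z^3 by cyclically
# shifting coordinates.  An element of ZL ⊗ C_s is stored as a dict l -> vector.

def act(l, x):
    """Action of l in Z/3: shift coordinates forward by l places."""
    return tuple(x[(i - l) % 3] for i in range(3))

def a(x):
    """a(x) = sum over l of l ⊗ (l^{-1}·x)."""
    return {l: act((-l) % 3, x) for l in range(3)}

def b(tensor):
    """b(sum over l of l ⊗ y_l) = sum over l of l·y_l."""
    total = (0, 0, 0)
    for l, y in tensor.items():
        total = tuple(t + s for t, s in zip(total, act(l, y)))
    return total

x = (5, -2, 7)
assert b(a(x)) == tuple(3 * c for c in x)  # b(a(x)) = |L|·x = 3·x
```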
By the universal coefficient theorem we get a commutative diagram with exact rows and the canonical maps as horizontal arrows $$\begin{CD} 0 & & 0 \\ @VVV @VVV \\ {\operatorname{ext}}_{{{\mathbb Z}}}(H_{s-1}(C_*),M) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}@>>> {\operatorname{ext}}_{{{\mathbb Z}}}(H_{s-1}(C_*),M \otimes_{{{\mathbb Z}}} {{\mathbb Q}}) \\ @VVV @VVV \\ H^s(\hom_{{{\mathbb Z}}}(C_*,M)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}@>>> H^s(\hom_{{{\mathbb Z}}}(C_*,M \otimes_{{{\mathbb Z}}} {{\mathbb Q}})) \\ @VVV @VVV \\ \hom_{{{\mathbb Z}}}(H_s(C_*),M) \otimes_{{{\mathbb Z}}}{{\mathbb Q}}@>>> \hom_{{{\mathbb Z}}}(H_s(C_*),M \otimes_{{{\mathbb Z}}} {{\mathbb Q}}) \\ @VVV @VVV \\ 0 & & 0 \end{CD}$$ Since $H_s(C_*)/{\operatorname{tors}}(H_s(C_*))$ is a finitely generated free abelian group and there is an integer $n$ which annihilates ${\operatorname{tors}}(H_s(C_*))$, the rational vector spaces ${\operatorname{ext}}_{{{\mathbb Z}}}(H_{s-1}(C_*),M) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ and ${\operatorname{ext}}_{{{\mathbb Z}}}(H_{s-1}(C_*),M \otimes_{{{\mathbb Z}}} {{\mathbb Q}})$ vanish and the lower horizontal arrow is bijective. Hence the middle horizontal arrow is bijective. This finishes the proof of Lemma \[lem: i\^\*\_G(X;bfK) is bijective for certain X\]. \[H\^\*,-,(K\_Bor)\_(0))-iso\] Let $X$ be a proper $G$-$CW$-complex such that for any cyclic subgroup $C \subseteq G$ of prime power order and any $k \in {{\mathbb Z}}$ we have $H_k(X^C;{{\mathbb Q}}) \cong H_k({\{\bullet\}};{{\mathbb Q}}) $. 
Then the up to $G$-homotopy unique $G$-map $f \colon X \to \underline{E}G$ induces for every $n \in {{\mathbb Z}}$ an isomorphism $$H_G^n\left(f;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right) \colon H_G^n\left(\underline{E}G;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right) \xrightarrow{\cong} H_G^n\left(X;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right).$$ Because of Lemma \[lem: refined non-multiplicative rational computation of H\^\*\_G(X;(bfK\_[Bor]{})\_[(0)]{})\] it suffices to show for every $n \in {{\mathbb Z}}$ and every cyclic subgroup $C \subseteq G$ of prime power order that the map $$H^n(C_GC\backslash f^C;M) \colon H^n(C_GC\backslash (\underline{E}G)^C;M) \to H^n(C_GC\backslash X^C;M)$$ is bijective for any ${{\mathbb Q}}$-module $M$. The Leray-Serre spectral sequence for the fibration $X^C \to EC_GC \times_{C_GC} X^C \to BC_GC$ together with the vanishing of $\widetilde{H}_*(X^C;{{\mathbb Q}})$ implies that the projection ${\operatorname{pr}}\colon EC_GC \times_{C_GC} X^C \to BC_GC$ induces for all $n \in {{\mathbb Z}}$ isomorphisms $$H_n({\operatorname{pr}};{{\mathbb Q}}) \colon H_n(EC_GC \times_{C_GC} X^C;{{\mathbb Q}}) \to H_n(BC_GC;{{\mathbb Q}}).$$ The projection ${\operatorname{pr}}' \colon EC_GC \times_{C_GC} X^C \to C_GC \backslash X^C$ induces for all $n \in {{\mathbb Z}}$ isomorphisms $$H_n({\operatorname{pr}}';{{\mathbb Q}}) \colon H_n(EC_GC \times_{C_GC} X^C ;{{\mathbb Q}}) \to H_n(C_GC\backslash X^C;{{\mathbb Q}})$$ by Lemma \[lem: almost iso induced by EG times\_G X to X/G\]  . The same is also true for $\underline{E}G$ instead of $X$. This implies that $$H_n(C_GC\backslash f^C;{{\mathbb Q}}) \colon H_n(C_GC\backslash X^C;{{\mathbb Q}}) \to H_n(C_GC\backslash (\underline{E}G)^C;{{\mathbb Q}})$$ is bijective for all $n \in {{\mathbb Z}}$. Hence $H^n(C_GC\backslash f^C;M)$ is bijective for any ${{\mathbb Q}}$-module $M$. 
\[lem: Smith theory\] Let $X$ be a proper finite dimensional $G$-$CW$-complex such that $H_k(X;{{\mathbb Z}}) \cong H_k({\{\bullet\}};{{\mathbb Z}})$ holds for any $k \in {{\mathbb Z}}$. Let $C \subseteq G$ be a cyclic group of prime power order. Suppose that $\widetilde{H}_n(X^C;{{\mathbb Z}})$ is almost finitely generated for each $n \in {{\mathbb Z}}$. Then: 1. \[lem: Smith theory: almost trivial\] The ${{\mathbb Z}}$-module $\widetilde{H}_n(X^C;{{\mathbb Z}})$ is almost trivial and the ${{\mathbb Q}}$-module $\widetilde{H}_n(X^C;{{\mathbb Q}})$ is trivial for all $n \in {{\mathbb Z}}$; 2. \[lem: Smith theory: almost isomorphism\] The map $H_n({\operatorname{pr}};{{\mathbb Z}}) \colon H_n(EC_GC \times_{C_GC} X^C;{{\mathbb Z}}) \to H_n(BC_GC;{{\mathbb Z}})$ induced by the projection ${\operatorname{pr}}\colon EC_GC \times_{C_GC} X^C \to BC_GC$ is an almost isomorphism for all $n \in {{\mathbb Z}}$. Suppose that $C$ has order $p^k$ for $k \ge 1$. Then $\widetilde{H}_n(X;{{\mathbb F}}_p) = 0$ for all $n \in {{\mathbb Z}}$ if ${{\mathbb F}}_p$ is the finite field of order $p$. By Smith theory $\widetilde{H}_n(X^C;{{\mathbb F}}_p) = 0$ for all $n \in {{\mathbb Z}}$ [@Bredon(1972) Theorem 5.2]. This implies by the Bockstein sequence associated to $0 \to {{\mathbb Z}}\xrightarrow{p \cdot {\operatorname{id}}} {{\mathbb Z}}\to {{\mathbb F}}_p \to 0$ that $p \cdot {\operatorname{id}}\colon \widetilde{H}_n(X^C;{{\mathbb Z}}) \to \widetilde{H}_n(X^C;{{\mathbb Z}})$ is bijective for $n \in {{\mathbb Z}}$. Since $\widetilde{H}_n(X^C;{{\mathbb Z}})$ is almost finitely generated, it must be almost trivial for $n \ge 1$. 
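The last step uses the following elementary observation: if $M$ is an almost finitely generated ${{\mathbb Z}}$-module on which $p \cdot {\operatorname{id}}$ acts bijectively, then $M$ is almost trivial, since

```latex
\begin{aligned}
&p \cdot {\operatorname{id}} \text{ bijective on } M
  \;\Longrightarrow\;
  p \cdot {\operatorname{id}} \text{ surjective on } M/{\operatorname{tors}}(M) \cong {{\mathbb Z}}^r\\
&\;\Longrightarrow\; p \cdot {{\mathbb Z}}^r = {{\mathbb Z}}^r
  \;\Longrightarrow\; r = 0
  \;\Longrightarrow\; M = {\operatorname{tors}}(M) \text{ is almost trivial.}
\end{aligned}
```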
This implies that $\widetilde{H}_n(X^C;{{\mathbb Q}}) = 0$ for all $n \in {{\mathbb Z}}$.\ This follows from the Leray-Serre spectral sequence of the fibration $X^C \to EC_GC \times_{C_GC} X^C \to BC_GC$, whose $E^2$-term is $E^2_{s,t} = H_s(BC_GC;H_t(X^C;{{\mathbb Z}}))$ and which converges to $H_{s+t}(EC_GC \times_{C_GC} X^C;{{\mathbb Z}})$, together with the fact that the modules $\widetilde{H}_t(X^C;{{\mathbb Z}})$ are almost trivial and that the full subcategory of almost trivial ${{\mathbb Z}}$-modules is a Serre subcategory of the abelian category of ${{\mathbb Z}}$-modules. Now we can give the proof of Theorem \[the: Weakening the finiteness assumption\]. If one goes through the proofs of Theorem \[the: main theorem\] and Theorem \[the: Multiplicative structure\] one sees that the finiteness condition about $\underline{E}G$ enters only when we apply Lemma \[lem: i\^\*\_G(X;bfE) is bijective for finite X\] to $\underline{E}G$. Hence it suffices to show that under the assumptions appearing in Theorem \[the: Weakening the finiteness assumption\] the map $$i^n_G(\underline{E}G;{\ensuremath{\mathbf{K}}}) \colon H_G^n\left(\underline{E}G;{\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\to H_G^n\left(\underline{E}G;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)$$ is a ${{\mathbb Q}}$-isomorphism for all $n \in {{\mathbb Z}}$. Let $f \colon X \to \underline{E}G$ be the up to $G$-homotopy unique $G$-map. 
We obtain the following commutative diagram $\begin{CD} H_G^n\left(\underline{E}G;{\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}@>i^n_G(\underline{E}G;{\ensuremath{\mathbf{K}}})>> H_G^n\left(\underline{E}G;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)\\ @V{H_G^n\left(f;{\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}}VV @VV{H_G^n\left(f;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right)}V\\ H_G^n\left(X;{\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}@>>i^n_G(X;{\ensuremath{\mathbf{K}}})> H_G^n\left(X;\left({\ensuremath{\mathbf{K}}}_{{\operatorname{Bor}}}\right)_{(0)}\right) \end{CD}$ Since $H_k(f;{{\mathbb Z}}) \colon H_k(X;{{\mathbb Z}}) \to H_k(\underline{E}G;{{\mathbb Z}})$ is bijective for all $k \in {{\mathbb Z}}$, we conclude from the Leray-Serre spectral sequence that $H_k(EG \times_G f;{{\mathbb Z}}) \colon H_k(EG \times_G X;{{\mathbb Z}}) \to H_k(EG \times_G\underline{E}G;{{\mathbb Z}})$ is bijective for all $k \in {{\mathbb Z}}$. This implies that the left vertical arrow in the commutative square above, which can be identified with $K^n(EG \times_G f) \colon K^n(EG \times_G \underline{E}G) \to K^n(EG \times_G X)$, is bijective. The lower horizontal arrow is bijective by Lemma \[lem: i\^\*\_G(X;bfK) is bijective for certain X\]. The right vertical arrow is bijective by Lemma \[H\^\*,-,(K\_Bor)\_(0))-iso\] and Lemma \[lem: Smith theory\] . Hence the upper horizontal arrow is bijective. The claim about the equivalent reformulation of condition follows from Lemma \[lem: almost iso induced by EG times\_G X to X/G\] and Lemma \[lem: Smith theory\] . This finishes the proof of Theorem \[the: Weakening the finiteness assumption\]. 
Examples and Further Remarks {#sec: Examples and Further Remarks} ============================ Some finiteness conditions such as those appearing in Theorem \[the: Weakening the finiteness assumption\] are necessary as the following example shows. [**(Necessity of the finiteness conditions).**]{} \[exa: some finiteness conditions are crucial.\] *Consider $G = \ast_{i=1}^{\infty} {{\mathbb Z}}/p$ for a prime number $p$. Then $BG \simeq \bigvee_{i=1}^{\infty} B{{\mathbb Z}}/p$ and we get $$K^0(BG) ~ \cong ~ K^0({\{\bullet\}}) \times \prod_{i=1}^{\infty} \widetilde{K}^0(B{{\mathbb Z}}/p) ~ \cong ~ {{\mathbb Z}}\times \prod_{i=1}^{\infty} ({{\mathbb Z}}\widehat{_p})^{p-1},$$ where ${\{\bullet\}}$ denotes the one-point space. Since $H^n(BG;M) \cong \prod_{i=1}^{\infty} H^n(B{{\mathbb Z}}/p;M) = 0$ for any ${{\mathbb Q}}$-module $M$ and $n \ge 2$, the cohomological dimension of $G$ over ${{\mathbb Q}}$ is $\le 1$ and hence $G$ acts on a tree $T$ with finite stabilizers [@Dunwoody(1979)]. Then $T$ is a $1$-dimensional model for $\underline{E}G$ (see [@Serre(1980) page 20] or [@Dicks-Dunwoody(1989) Proposition 4.7 on page 17]). By the Kurosh Subgroup Theorem [@Lyndon-Schupp(1977) Theorem 1.10 on page 178] any non-trivial finite subgroup of $G$ is conjugate to precisely one of the summands ${{\mathbb Z}}/p$ and is equal to its centralizer. Hence $p$ is an upper bound on the orders of finite subgroups of $G$. Obviously ${{\mathbb Z}}\widehat{_p} \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ is canonically isomorphic to ${{\mathbb Q}}\widehat{_p}$. If the conclusion of Theorem \[the: main theorem\] were true for $G$, it would predict that the canonical map $$\left(\prod_{i=1}^{\infty} ({{\mathbb Z}}\widehat{_p})^{p-1}\right) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \to ~ \prod_{i=1}^{\infty} \left(({{\mathbb Z}}\widehat{_p})^{p-1} \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\right) ~ = ~ \prod_{i=1}^{\infty} ({{\mathbb Q}}\widehat{_p})^{p-1}$$ is bijective, which is not true. 
For instance, the element $(p^{-i})_{i=1}^{\infty}$ is not contained in its image.* Notice that in this example all conditions appearing in Theorem \[the: Weakening the finiteness assumption\] are satisfied except the condition that the set of conjugacy classes $(C)$ of finite cyclic subgroups $C \subseteq G$ of prime power order with $T^C \not= \emptyset$ is finite. We emphasize that no restrictions (except properness) occur in Lemma \[lem: refined non-multiplicative rational computation of H\^\*\_G(X;(bfK\_[Bor]{})\_[(0)]{})\]. The problem is that in Lemma \[H\^\*,-,(K\_Bor)\_(0))-iso\] some additional finiteness assumptions are needed. ** \[exa: SL\_3(Z)\] *Consider the group $G = SL_3({{\mathbb Z}})$. It is well-known that its rational cohomology satisfies $\widetilde{H}^n(BSL_3({{\mathbb Z}});{{\mathbb Q}}) = 0$ for all $n \in {{\mathbb Z}}$. Actually, we conclude from  [@Soule(1978) Corollary on page 8] that for $G = SL_3({{\mathbb Z}})$ the quotient space $G\backslash\underline{E}G$ is contractible and compact. From the classification of finite subgroups of $SL_3({{\mathbb Z}})$ we see that $SL_3({{\mathbb Z}})$ contains up to conjugacy two elements of order $2$, two elements of order $4$ and two elements of order $3$ and no further conjugacy classes of non-trivial elements of prime power order. The rational homology of all the centralizers of elements in ${\operatorname{con}}_2(G)$ and ${\operatorname{con}}_3(G)$ agrees with that of the trivial group (see [@Adem(1993b) Example 6.6]).
Hence Theorem \[the: main theorem\] shows $$\begin{aligned} K^0(BSL_3({{\mathbb Z}})) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & {{\mathbb Q}}\times ({{\mathbb Q}}\widehat{_2})^4 \times ({{\mathbb Q}}\widehat{_3})^2; \\ K^1(BSL_3({{\mathbb Z}})) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & 0.\end{aligned}$$ The identification of $K^0(BSL_3({{\mathbb Z}})) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ above is compatible with the multiplicative structure on the target described in Example \[exa: including multiplicative structures\].* Actually the computation using Brown-Peterson cohomology and the Conner-Floyd relation by Tezuka and Yagita [@Tezuka-Yagita(1992)] gives the integral computation $$\begin{aligned} K^0(BSL_3({{\mathbb Z}})) & \cong & {{\mathbb Z}}\times ({{\mathbb Z}}\widehat{_2})^4 \times ({{\mathbb Z}}\widehat{_3})^2; \\ K^1(BSL_3({{\mathbb Z}})) & \cong & 0.\end{aligned}$$ ** \[exa: conditions M and NM\] *Let $G$ be a discrete group. Consider the following assertions concerning $G$:* - (M) Every non-trivial finite subgroup of $G$ is contained in a unique maximal finite subgroup; - (NM) If $M \subseteq G$ is maximal finite, then $N_GM = M$; - (C) There is a cocompact model for $\underline{E}G$. The conditions (M) and (NM) imply the following: For any non-trivial finite subgroup $H \subseteq G$ we have $N_GH = N_MH$ if $M$ is a maximal finite subgroup containing $H$. Let $\{M_i \mid i \in I\}$ be a complete set of representatives of the conjugacy classes of maximal finite subgroups of $G$. Fix a prime $p$. Then the obvious map $$\coprod_{i \in I} {\operatorname{con}}_p(M_i) \xrightarrow{\cong} {\operatorname{con}}_p(G)$$ is a bijection. Let $r_p(M_i) = |{\operatorname{con}}_p(M_i)|$ be the number of conjugacy classes of elements in $M_i$ of order $p^k$ for some $k \ge 1$.
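As an illustration of this count (a sketch, not from the paper): for a finite cyclic group every conjugacy class is a single element, so $r_p$ simply counts elements whose order is $p^k$ with $k \ge 1$. In particular $r_p({{\mathbb Z}}/p) = p-1$, matching the $({{\mathbb Z}}\widehat{_p})^{p-1}$ factors in the free-product example above.

```python
from math import gcd

def r_p(n, p):
    """r_p(Z/n): number of elements of the cyclic group Z/n whose order
    is p^k for some k >= 1.  (In an abelian group every conjugacy class
    is a single element, so this equals |con_p(Z/n)|.)"""
    count = 0
    for g in range(1, n):
        order = n // gcd(n, g)   # order of g in Z/n
        while order % p == 0:    # strip all factors of p
            order //= p
        if order == 1:           # the order was a pure power of p
            count += 1
    return count

assert r_p(5, 5) == 4   # r_p(Z/p) = p - 1
assert r_p(4, 2) == 3   # elements 1, 2, 3 have orders 4, 2, 4
assert r_p(6, 3) == 2   # elements 2 and 4 have order 3
```

For non-abelian maximal finite subgroups one would of course have to count genuine conjugacy classes instead of elements.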
Theorem \[the: main theorem\] yields for a group satisfying conditions (M), (NM) and (C) above rational isomorphisms $$\begin{aligned} K^0(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & \prod_{i \in {{\mathbb Z}}} H^{2i}(BG;{{\mathbb Q}}) \times \prod_{p} \left({{\mathbb Q}}\widehat{_p}\right)^{\sum_{i \in I} r_p(M_i)} \\ K^1(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & \prod_{i \in {{\mathbb Z}}} H^{2i+1}(BG;{{\mathbb Q}}).\end{aligned}$$ Here are some examples of groups $G$ which satisfy conditions (M), (NM) and (C): - Extensions $1 \to {{\mathbb Z}}^n \to G \to F \to 1$ for finite $F$ such that the conjugation action of $F$ on ${{\mathbb Z}}^n$ is free outside $0 \in {{\mathbb Z}}^n$.\ The conditions (M), (NM) are satisfied by [@Lueck-Stamm(2000) Lemma 6.3]. There are models for $\underline{E}G$ whose underlying space is ${{\mathbb R}}^n$. The quotient $G\backslash\underline{E}G$ looks like the quotient of $T^n$ by a finite group. - Fuchsian groups $F$\ See for instance [@Lueck-Stamm(2000) Lemma 4.5]. The quotients $G\backslash \underline{E}G$ are closed orientable surfaces. In  [@Lueck-Stamm(2000)] the larger class of cocompact planar groups (sometimes also called cocompact NEC-groups) is treated. - Finitely generated one-relator groups $G$\ Let $G = \langle (q_i)_{i \in I} \mid r \rangle$ be a presentation with one relation. We only have to consider the case where $G$ contains torsion. Let $F$ be the free group with basis $\{q_i \mid i \in I\}$. Then $r$ is an element in $F$. There exists an element $s \in F$ and an integer $m \ge 2$ such that $r = s^m$, the cyclic subgroup $C$ generated by the class $\overline{s} \in G$ represented by $s$ has order $m$, any finite subgroup of $G$ is subconjugated to $C$ and for any $q \in G$ the implication $q^{-1}Cq \cap C \not= 1 \Rightarrow q \in C$ holds. These claims follow from [@Lyndon-Schupp(1977) Propositions 5.17, 5.18 and 5.19 in II.5 on pages 107 and 108]. Hence $G$ satisfies (M) and (NM).
There are explicit two-dimensional models for $\underline{E}G$ with one $0$-cell $G/C \times D^0 $, as many free $1$-cells $G \times D^1$ as there are elements in $I$ and one free $2$-cell $G \times D^2$ (see [@Brown(1982) Exercise 2 (c) II. 5 on page 44]). For the three examples above one can make $H^*(BG;{{\mathbb Q}}) = H^*(G\backslash\underline{E}G;{{\mathbb Q}})$ more explicit. ** \[exa: extensions by Z/p\] *Suppose that $G$ can be written as an extension $1 \to A \to G \to {{\mathbb Z}}/p \to 1$ for some fixed prime number $p$ and for $A = {{\mathbb Z}}^n$ for some integer $n \ge 0$ and that $G$ is not torsionfree. The conjugation action of $G$ on the normal subgroup $A$ yields the structure of a ${{\mathbb Z}}[{{\mathbb Z}}/p]$-module on $A$. Every non-trivial element $g \in G$ of finite order has order $p$ and satisfies $$N_G\langle g \rangle = C_G\langle g \rangle = A^{{{\mathbb Z}}/p} \times \langle g \rangle.$$ There is a bijection $$\mu \colon H^1({{\mathbb Z}}/p;A) \times ({{\mathbb Z}}/p)^\times ~ \xrightarrow{\cong} ~ {\operatorname{con}}_p(G),$$ where $H^1({{\mathbb Z}}/p;A)$ is the first cohomology of ${{\mathbb Z}}/p$ with coefficients in the ${{\mathbb Z}}[{{\mathbb Z}}/p]$-module $A$. If we fix an element $g \in G$ of order $p$ and a generator $s \in {{\mathbb Z}}/p$, the bijection $\mu$ sends $([u],\overline{k}) \in H^1({{\mathbb Z}}/p;A) \times ({{\mathbb Z}}/p)^\times$ to the conjugacy class $(ug^k)$ of $ug^k$ if $[u] \in H^1({{\mathbb Z}}/p;A)$ is represented by the element $u$ in the kernel of the second differential $A \to A, ~ a \mapsto \sum_{i=0}^{p-1} s^i \cdot a$ and $k \in {{\mathbb Z}}$ represents $\overline{k}$. There is a cocompact model for $\underline{E}G$ with $A \otimes_{{{\mathbb Z}}} {{\mathbb R}}$ as underlying space.
Hence Theorem \[the: main theorem\] yields for $G$ as above rational isomorphisms $$K^n(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}~ \cong ~ \prod_{i \in {{\mathbb Z}}} H^{2i+n}(BA;{{\mathbb Q}})^{{{\mathbb Z}}/p} \times \prod_{k=1}^{r} ~ \prod_{i \in {{\mathbb Z}}} H^{2i+n}\left(B(A^{{{\mathbb Z}}/p});{{\mathbb Q}}\widehat{_p}\right),$$ if we put $r = (p-1) \cdot |H^1({{\mathbb Z}}/p;A)|$.* Take for instance $A$ to be the cokernel of the inclusion of ${{\mathbb Z}}[{{\mathbb Z}}/p]$-modules ${{\mathbb Z}}\to {{\mathbb Z}}[{{\mathbb Z}}/p], n \mapsto n \cdot \sum_{i=0}^{p-1} t^i$, where ${{\mathbb Z}}$ carries the trivial ${{\mathbb Z}}/p$-action and $t \in {{\mathbb Z}}/p$ is a fixed generator. One can identify $A$ with the ring obtained from ${{\mathbb Z}}$ by adjoining a primitive $p$-th root of unity. From the long exact cohomology sequence associated to the short exact sequence of ${{\mathbb Z}}[{{\mathbb Z}}/p]$-modules $0 \to {{\mathbb Z}}\to {{\mathbb Z}}[{{\mathbb Z}}/p] \to A \to 0$ one concludes that $H^1({{\mathbb Z}}/p;A)$ is a cyclic group of order $p$. One easily checks that $A^{{{\mathbb Z}}/p} = 0$. Hence we obtain for the semi-direct product $A \rtimes {{\mathbb Z}}/p$ $$\begin{aligned} K^0(B(A \rtimes {{\mathbb Z}}/p)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & {{\mathbb Q}}\times ({{\mathbb Q}}\widehat{_p})^{p^2-p}; \\ K^1(B(A \rtimes {{\mathbb Z}}/p)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}& \cong & 0.\end{aligned}$$ The identification of $K^0(B(A \rtimes {{\mathbb Z}}/p)) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ is compatible with the multiplicative structure on the target described in Example \[exa: including multiplicative structures\]. ** \[rem: comparison with Adem\] *The results and the examples appearing in this paper are consistent with the ones by Adem  [@Adem(1993b)].
Adem requires that $G$ contains a normal torsionfree subgroup $G' \subseteq G$ of finite index and uses the Atiyah-Segal Completion Theorem for the finite group $G/G'$ to compute rationally the $K$-theory with coefficients in the $p$-adic integers ${{\mathbb Z}}\widehat{_p}$. His condition that in his notation $\Gamma'\backslash X$ is compact is precisely the condition that there is a cocompact model for $\underline{E}G$. We can drop the condition of the existence of the normal torsionfree subgroup $G' \subseteq G$ of finite index with our methods.* One can get Adem’s local computations from ours by replacing for a fixed prime $p$ the cohomology $H^*(BG;{{\mathbb Q}})$ by $H^*(BG;{{\mathbb Q}}\widehat{_p})$ and ignoring in the product running over all primes all the contributions coming from primes different from $p$. For instance Example  \[exa: SL\_3(Z)\] implies that the ${{\mathbb Q}}\widehat{_3}$-algebra $K^0(BSL(3,{{\mathbb Z}});{{\mathbb Z}}\widehat{_3}) \otimes_{{{\mathbb Z}}\widehat{_3}} {{\mathbb Q}}\widehat{_3}$ is given by $${{\mathbb Q}}\widehat{_3}[u,v]/(u^2 = u, v^2 = v, uv = 0).$$ If one makes the change of variables $u = (2 - \alpha)/3$ and $v = (2 - \beta)/3$, one obtains the presentation in [@Adem(1993b) Example 6.6] $${{\mathbb Q}}\widehat{_3}[\alpha,\beta]/(\alpha^2 - \alpha -2, \beta^2 -\beta - 2, \alpha\beta-2(\alpha + \beta -2)).$$ Recall that after complexification we can determine the multiplicative structure in general (see Theorem \[the: Multiplicative structure\]). There are interesting discussions about Euler characteristics and maps of groups inducing isomorphisms on homology in [@Adem(1992)] and  [@Adem(1993b)] which also apply to our setting. ** \[rem: Hodgkins computation\] *Let $\Gamma^n$ be the mapping class group of the sphere $S^2$ with $n$ punctures for $n \ge 3$. Hodgkin computes rationally the $K$-theory of $B\Gamma^n$ with coefficients in the $p$-adic integers ${{\mathbb Z}}\widehat{_p}$ using Adem’s formula in [@Adem(1992)].
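The change of variables relating the two presentations of this ${{\mathbb Q}}\widehat{_3}$-algebra can be verified with exact rational arithmetic. A small sketch (note the sign: $u = (2-\alpha)/3$ is idempotent, whereas $(\alpha-2)/3$ squares to its own negative); it models the algebra with basis $1, \alpha, \beta$ and the relations $\alpha^2 = \alpha + 2$, $\beta^2 = \beta + 2$, $\alpha\beta = 2\alpha + 2\beta - 4$:

```python
from fractions import Fraction as F

def mul(x, y):
    """Multiply a1 + b1*alpha + c1*beta by a2 + b2*alpha + c2*beta,
    reducing with alpha^2 = alpha + 2, beta^2 = beta + 2,
    alpha*beta = 2*alpha + 2*beta - 4."""
    a1, b1, c1 = x
    a2, b2, c2 = y
    cross = b1*c2 + c1*b2  # coefficient of alpha*beta before reduction
    return (a1*a2 + 2*b1*b2 + 2*c1*c2 - 4*cross,
            a1*b2 + b1*a2 + b1*b2 + 2*cross,
            a1*c2 + c1*a2 + c1*c2 + 2*cross)

u = (F(2, 3), F(-1, 3), F(0))   # u = (2 - alpha)/3
v = (F(2, 3), F(0), F(-1, 3))   # v = (2 - beta)/3

assert mul(u, u) == u           # u^2 = u
assert mul(v, v) == v           # v^2 = v
assert mul(u, v) == (0, 0, 0)   # uv = 0
```

With $(\alpha-2)/3$ instead, the same multiplication gives $w^2 = -w$, so that substitution does not produce the idempotent presentation.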
The main work done in the paper by Hodgkin [@Hodgkin(1995) Proposition 2.2 and Theorem 2] is to figure out the set of conjugacy classes of elements of order $p^s$ for each prime $p$ and integer $s \ge 1$ and the rank of $K^k(BC_G \langle g \rangle) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}\cong \prod_{i \in {{\mathbb Z}}} H^{2i + k}(BC_G\langle g \rangle;{{\mathbb Q}})$ for each element $g \in \Gamma^n$ of prime power order. One can identify $K^*(BC_G \langle g \rangle) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ with $(K^*(BK^r) \otimes_{{{\mathbb Z}}} {{\mathbb Q}})^{\Sigma_r}$ or with $(K^*(BK^r) \otimes_{{{\mathbb Z}}} {{\mathbb Q}})^{\Sigma_{r-2} \times \Sigma_2}$ for appropriate integers $r$ depending only on the order of $g$, where $K^r$ is the pure mapping class group of $S^2$ with $r$ punctures and $\Sigma_t$ denotes the symmetric group of permutations of a set with $t$ elements. Thus one obtains with Theorem \[the: main theorem\] the precise structure of the ${{\mathbb Q}}$-vector spaces $K^k(B\Gamma^n) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$. It may be worthwhile to investigate the product structure on the cohomology $H^*(C_G\langle g \rangle;{{\mathbb C}})$ since this would lead to a computation of $K^*(B\Gamma^n) \otimes_{{{\mathbb Z}}} {{\mathbb C}}$ including its multiplicative structure by Theorem \[the: Multiplicative structure\]. *** \[rem: characterization of torsionfree\] *Let $G$ be a discrete group with a finite model for $\underline{E}G$. Then the following assertions are equivalent:* 1. $G$ is torsionfree; 2. The abelian group $K^k(BG)$ is finitely generated for $k \in {{\mathbb Z}}$; 3. The rational vector space $K^k(BG) \otimes_{{{\mathbb Z}}} {{\mathbb Q}}$ is finite dimensional for $k \in {{\mathbb Z}}$. An application of the Atiyah-Hirzebruch spectral sequence proves the implication (a) $\Rightarrow$ (b). The implication (b) $\Rightarrow$ (c) is obvious.
The implication (c) $\Rightarrow$ (a) follows from Theorem \[the: main theorem\] since ${{\mathbb Q}}\widehat{_p}$ is an infinite-dimensional ${{\mathbb Q}}$-vector space (indeed ${{\mathbb Q}}\widehat{_p}$ is uncountable, whereas any finite-dimensional ${{\mathbb Q}}$-vector space is countable). ** *Suppose that there is a finite model for $\underline{E}G$. Let the ring $\Lambda^G$ be the subring of ${{\mathbb Q}}$ obtained by inverting the orders of finite subgroups of $G$. We state, without giving the details of the proof but referring to [@Joachim-Lueck(2005)], that one can improve Theorem \[the: main theorem\] to the statement that there is a $\Lambda^G$-isomorphism $$\begin{gathered} \overline{{\operatorname{ch}}}^n_{G,\Lambda^G} \colon K^n(BG) \otimes_{{{\mathbb Z}}} \Lambda^G ~ \xrightarrow{\cong} \\ K^n(G\backslash \underline{E}G) \otimes_{{{\mathbb Z}}} \Lambda^G \times \prod_{p \text{ prime}} ~ \prod_{(g) \in {\operatorname{con}}_p(G)} \left(\prod_{i \in {{\mathbb Z}}} H^{2i+n}(BC_G\langle g \rangle;{{\mathbb Q}}\widehat{_p})\right).\end{gathered}$$ Consider a prime $q$ for which there exists no element of order $q^s$ for any $s \ge 1$ in $G$, in other words, $q$ is not invertible in $\Lambda^G$. Then the projection $BG = EG \times_G \underline{E}G \to G\backslash \underline{E}G$ induces an isomorphism $${\operatorname{tors}}_q\left(K^n(G\backslash \underline{E}G)\right) ~ \xrightarrow{\cong} {\operatorname{tors}}_q\left(K^n(BG)\right),$$ if for an abelian group $A$ we denote by ${\operatorname{tors}}_q(A)$ the subgroup of elements $a \in A$ which are annihilated by some power of $q$. In particular $K^n(BG)$ contains $q$-torsion if and only if $K^n(G\backslash \underline{E}G)$ contains $q$-torsion. It can occur that $K^n(G\backslash \underline{E}G)$ contains elements of order $q^s$ for some $s \ge 1$ (see [@Leary-Nucinkis(2001a)]). We will explain in [@Joachim-Lueck(2005)] that the subgroup of torsion elements in $K^n(BG)$ is finite. *** [^1]: email: lueck@math.uni-muenster.de\ www:  http://www.math.uni-muenster.de/u/lueck/\ FAX: 49 251 8338370
--- abstract: 'In order to understand the Barium abundance distribution in the Galactic disk based on Cepheids, one must first be aware of important effects of the corotation resonance, situated a little beyond the solar orbit. The thin disk of the Galaxy is divided into two regions that are separated by a barrier situated at that radius. Since the gas cannot get across that barrier, the chemical evolution is independent on the two sides of it. The barrier is caused by the opposite directions of the gas flows on the two sides, in addition to a Cassini-like ring void of HI (itself caused by the flows). A step in the metallicity gradient develops at corotation, due to the difference in the average star formation rate on the two sides, and to this lack of communication between them. In connection with this, the existence of this step is evidence that the spiral arms of our Galaxy are long-lived (a few billion years). When one studies the abundance gradients by means of stars which span a range of ages, like the Cepheids, one has to take into account that stars, contrary to the gas, have the possibility of crossing the corotation barrier. A few stars born on the high metallicity side are seen on the low metallicity one, and vice versa. In the present work we re-discuss the data on Barium abundance in Cepheids as a function of Galactic radius, taking into account the scenario described above. The \[Ba/H\] ratio, plotted as a function of Galactic radius, apparently presents a distribution with two branches in the external region (beyond corotation). One can re-interpret the data and attribute the upper branch to the stars that were born on the high metallicity side. The lower branch, analyzed separately, indicates that the stars born beyond corotation have a rising Barium metallicity as a function of Galactic radius.'
--- Introduction ============ The “main stream” chemical evolution models do not recognize the existence of a step of 0.3 dex in the metallicity gradient of the Galactic disk near the solar orbit radius. Moreover, since in the literature one can find many observational papers that ignore this step as well, and fit the radial gradient of the disk by a straight line across the entire range of radius, everything seems to be in order. We have to agree that this step is not easily observed; errors in the measurements and the migration of stars tend to hide it a little bit. However, this step in the gradient reveals the existence of a major effect of the corotation resonance in the disk of the Galaxy. One cannot understand the basic physics that governs the chemical evolution of the disk without taking this resonance into account. We shall first describe the main effects of this resonance before proceeding to the discussion of the Barium abundance distribution. The corotation resonance as a barrier between two independent worlds ==================================================================== The corotation resonance is the place where the rotation speed of the spiral pattern coincides with the rotation speed of the material of the disk. In our Galaxy both are well known. They are shown in Figure 1, in linear (not angular) velocities. Since the spiral pattern rotates with constant angular velocity, it appears in the figure as a straight line with positive slope. The corotation radius lies only slightly beyond the solar radius. ![The rotation curve of the Galaxy. The data points are CO observations by Clemens (1985) fitted by a smooth function. The rotation speed of the spiral pattern is indicated by a dashed line. The adopted galactic parameters here are $r_0 = 7.5$ kpc, $V_0 = 235$ km/s and $\Omega_p = 25$ km/s/kpc.
The rotation curve shows a minimum close to the corotation radius, which is itself a consequence of the presence of the resonance.[]{data-label="fig:fig1"}](fig1.eps) ![The HI surface density of M83 (NGC5236), according to Crosthwaite et al. (2002). At the center, the CO surface density is also shown. The dashed ellipse represents the corotation circle projected on the plane of the galaxy. One can see that it coincides with a minimum of HI density.[]{data-label="fig:"}](fig2.eps){width="40.00000%"} What happens at corotation is that the relative velocity of the gas with respect to the spiral arms reverses its direction. Consequently, the radial flux of gas induced by the spiral potential perturbation also reverses its direction. The gas of the disk flows outwards in the outer regions and inwards in the inner regions (inside corotation). Such flows are observed in external galaxies. For instance, Elmegreen et al. (2009) wrote: “In summary, the gas in NGC 1365 is observed to stream outward outside of corotation and inward inside of corotation, as expected from numerous models and observations of other galaxies”. A consequence of the gas flows in opposite directions is the formation of a ring void of gas at corotation. This is illustrated in Figure 2 in the case of M83. The figure, taken from Crosthwaite et al. (2002), shows the gas distribution in the disk of that galaxy; we superimposed on it a dashed ellipse which indicates the position of the corotation radius. We see the void of gas at that radius. Precisely at the same radius, in M83, a step has been observed in the Oxygen abundance gradient measured by Bresolin et al. (2009), as reproduced in Figure 3. The reason for the step is that the chemical evolution of the gas on one side of corotation is independent from that of the other, because there is no contact between them; the ring void of gas and the gas flows in opposite directions constitute a barrier.
Since the star formation rate is larger inside corotation, the rate of growth of metallicity is larger in the inner region. After a few billion years, a difference in metallicity of a few tenths of a dex builds up. For a discussion of the radius of corotation of M83 see Scarano & Lépine (2013). The case of M83 is not an isolated one. Scarano & Lépine (2013) collected the corotation radii of a sample of galaxies from the literature, and looked for the presence of breaks or steps in the gradients of Oxygen abundances. A strong correlation was found between the corotation radii and the radii of the breaks. The presence of a break (or change of slope of the gradient) is also an indication of independent evolution on the two sides of corotation. ![The Oxygen abundance as a function of radius in M83, measured by Bresolin et al. (2009). The galactocentric distances are in units of R25, equal to 8.4 kpc according to the authors. The O abundance was measured in HII regions of the galaxy.[]{data-label="fig:fig3"}](fig3.eps) The corotation resonance in our Galaxy ====================================== The corotation radius of our Galaxy has been determined by many different methods, with good agreement between them; see for instance a list in the paper by Junqueira et al. (2013). Of course, one can find in the literature a few discrepant values, but these are based on indirect methods (like N-body simulations, for instance) that depend on many hypotheses and uncertain input parameters. Only direct methods must be considered here. One example of such a method is the integration of the orbits of the open clusters backwards to their birthplace in the spiral arms (Dias & Lépine, 2005), which gave the result $R_c$ = 1.08 $\pm$ 0.06 $R_0$. ![The Fe abundance of Open Clusters as a function of Galactic radius, taken from Lépine et al. (2011).
It is possible to see the gap in the density of clusters at corotation (about 8.5 kpc for $R_0$ = 7.5 kpc), and the step down in metallicity at the same radius.[]{data-label="fig:fig4"}](fig4.eps) The existence of a ring void of gas at the corotation radius was shown by Amores et al. (2009). The technique used was to observe the presence of very deep minima in the HI spectra of the LAB survey, that reach almost zero antenna temperature. By computing the kinematic distances of such minima, these authors showed that they are distributed along a circle situated slightly beyond the solar circle. Note that the paper did not present a map of HI, but only the position of the voids for a large number of lines of sight over the whole range of longitudes. This result is in principle more robust than a map. Maps are usually based on kinematic distances of the peaks that appear in the HI spectra, and are constructed ignoring that the widths of the peaks in the spectra are largely due to turbulent velocities, not to the physical widths of the arms. The circular shape of the ring void of gas shows that it cannot be interpreted as an inter-arm region. As in M83, the ring-shaped void of gas is associated with a step in the metallicity distribution. This is shown in Figure 4, where the Fe abundance of Open Clusters is plotted as a function of Galactic radius. One can also see in this figure the gap in the density distribution of the Open Clusters, at corotation. A simple explanation for the gap is that the clusters cannot be born in a region with very low gas density. A more sophisticated cause will be discussed below. Note that the existence of the step in the metallicity distribution of the clusters was discovered by Twarog et al. (1997), but no good explanation for it was available at that time. ![The radial motion of a star born close to the corotation resonance.
The star alternates between regions inside and outside corotation, but passes quickly through the exact resonance radius. This behaviour contributes to the formation of a minimum of stellar density at corotation. []{data-label="fig:fig5"}](fig5.eps) ![The abundance of $\alpha$-elements in Cepheids as a function of Galactic radius. The average of the abundances of the elements Si, S, Ca and Ti, normalized to the Solar abundances, are shown.[]{data-label="fig:fig6b"}](fig6.eps) ![The Barium abundance in Cepheids as a function of Galactic radius. The lower branch on the right side is the one that really represents the stars born on that side, and therefore, the abundance of the local gas. One concludes that the Barium abundance increases with Galactic radius, beyond corotation. In the last two figures the corotation radius is about 9 kpc.[]{data-label="fig:fig7"}](fig7.eps) Stars are forced to cross the corotation ======================================== Figure 5 shows the radial motion of a star born close to the corotation radius, in this case, on the external side. This result was obtained by integrating the orbit of the star in the presence of a spiral potential perturbation, using the new description of this potential proposed by Junqueira et al. (2013). The corotation resonance acts on the stars that are in its neighborhood, making them cross the resonance from time to time, with a short crossing time compared to the time that the star stays on each side. This behaviour maintains a smaller stellar density in a ring around the resonance. A detailed study of the stellar orbits near corotation was performed by Barros et al. (2013). Re-visiting the Barium abundance gradient ========================================= The Barium abundance gradient in the Galaxy was recently investigated by Andrievsky et al. (2013) based on a large sample of Cepheids (270 stars).
That work is a continuation of a long term effort conducted by Andrievsky and collaborators to investigate chemical abundances in the Galaxy (see the references in that last paper). The conclusion of the paper was that the Ba abundance gradient becomes flat in the outer parts of the disk. We present here an alternative conclusion based on the ideas presented above. Figure 6 shows the gradient of $\alpha$-elements in the Galactic disk, based on the same series of papers. We took an average of 4 elements in order to reduce the scatter due to measurement errors. On the right of the corotation resonance one can see two branches in the abundance distribution. The upper branch with larger metallicities is due to the Cepheids that were born on the left side and have moved to the right side as explained in the previous section. The lower branch (smaller abundances) has the real abundances that correspond to the local gas where they are. One can see that the real gradient of the low metallicity side is flat. The overlap of the two sets (high and low metallicity) over a range of radius is due to the migration of stars. Note that this is totally different from the small overlap seen in Figure 3. HII regions have such short lifetimes that they do not migrate; in that case the overlap is related to a small error in the radial distances of the HII regions due to the choice of the inclination of M83. The Barium gradient is shown in Figure 7. By similarity with Figure 6, we recognize the set of stars that were born on the left side (Galactic radii smaller than corotation) and constitute the upper branch on the right side. The lower branch is due to stars that really represent the local (low) abundances. If one focuses on the lower branch, one can see that beyond corotation the Barium abundance in the gas increases at a rate of 0.13 dex/kpc. This result is possibly explained by Travaglio et al. (1997).
The s-process production of elements strongly depends on stellar metallicity in the interval of \[Fe/H\] from 0 to $-0.2$; the more metal-poor stars tend to produce slightly more barium. Another hypothesis is the possible excess of AGB stars (which produce Barium) with respect to hydrogen gas, since the H density decreases with Galactic radius, while the AGB stars can reach large radii due to migration. References ========== 2013, *MNRAS*, 428, 3252\ 2009, *MNRAS*, 400, 1768\ 2013, submitted to *MNRAS*\ 2009, *ApJ*, 695, 580\ 1985, *ApJ*, 295, 422\ 2002, *AJ*, 123, 1892\ 2005, *ApJ*, 629, 825\ 2009, *ApJ*, 703, 1297\ 2013, *A&A*, 550, 91\ 2011, *MNRAS*, 417, 698\ 2013, *MNRAS*, 428, 625\ 2001, *MemSAI*, 72, 38\ 1997, *Astron. J*, 114, 2556
--- abstract: 'The evidence for the low mass $J^{PC}=0^{++}$ states is reconsidered. We suggest classifying the isoscalars $f_0(980)$ and $f_0(1500)$ as members of the $0^{++}$ nonet, with a mixing rather similar to that of the pseudoscalars $\eta''$ and $\eta$. The broad state called $f_0(400-1200)$ or “sigma” and the state $f_0(1370)$ are considered as different signals from a single broad resonance, which we take to be the lowest-lying $0^{++}$ glueball. The main arguments in favor of these hypotheses are presented and compared with theoretical expectations.' author: - | Peter Minkowski\ [*Institute for Theoretical Physics, Univ. of Bern, CH - 3012 Bern, Switzerland*]{}\ Wolfgang Ochs\ [*Max Planck Institut für Physik, D - 80805 Munich, Germany*]{} title: | THE $J^{PC}=0^{++}$ SCALAR MESON NONET AND GLUEBALL\ OF LOWEST MASS --- =14.5pt Introduction ============ This session of the workshop is devoted to the study of the “sigma” particle, which is related to the large $S$-wave $\pi\pi$ scattering amplitude; it peaks around 800 MeV and again near 1300 MeV. The nature of this S-wave enhancement has been under discussion since the very beginning of $\pi\pi$ interaction studies[^1] and the interpretation is still developing. Its role in S-matrix and Regge theory, chiral theories and $q\overline q$ spectroscopy has been considered since then; after the advent of QCD, the possibility of glueball spectroscopy[@HFPM] has opened up as well, and it is the focus of our attention. In order to obtain the proper interpretation of the “sigma”, a classification of all low lying $J^{PC}=0^{++}$ states into the $q\overline q$ nonet and glueball states appears necessary. To this end we first discuss the evidence for the low mass scalar states ($\leq$ 1600 MeV) and then proceed with an attempt at their classification as quarkonium or glueball states from their properties in production and decay. We will argue that the “sigma” is actually the lightest glueball.
The main arguments for our classifications will be presented; further details of this study can be found in the recent publication.[@mo] Evidence for light $0^{++}$ states with $I=0$ ============================================= The Particle Data Group[@PDG] lists the following $I=0$ scalar states: $f_0(400-1200)$ which is related to the “sigma”, $f_0(980)$, $f_0(1370)$ and $f_0(1500)$, not all being firmly established. The existence of a resonance is not signaled only by a peak in the mass spectrum; it requires in addition that the scattering amplitude moves along a full circle in the complex plane (“Argand diagram”). The first two states have been studied in detail in the phase shift analysis of elastic $\pi^+\pi^-$ scattering. As discussed by K. Rybicki[@Rybicki], the results from high statistics experiments with unpolarized[@CERN-Munich] and polarized target[@CKM] have led to an almost unique solution up to 1400 MeV out of a total of four. On the other hand, recent data on the $\pi^0\pi^0$ final state from GAMS[@GAMS] show a different behaviour of the S-D wave phase differences above 1200 MeV. A complete phase shift analysis would provide an important consistency check with the previous $\pi^+\pi^-$ results. Another experiment on $\pi^0\pi^0$ pair production is in progress (BNL–E852[@gunter]); the preliminary mass spectrum is shown in fig.1a. One can see a broad spectrum with two or three peaks (which we refer to as the “red dragon”). There is no question about the existence of $f_0(980)$ which causes the first dip near 1 GeV by its interference with the smooth “background”. More controversial is the interpretation of the second peak which appears in the region 1200-1400 MeV in different experiments.
If we remove the $f_0(980)$ from a global resonance fit of the spectrum, the remaining amplitude phase shift moves slowly through $90{^{\rm o}}$ near 1000 MeV and continues rising up to 1400 MeV, where it has largely completed a full resonance circle (see also[@mp]). A local Breit-Wigner approximation to these phase shifts yields $$\textrm{``sigma'':} \qquad\qquad m\ \sim \ 1000\ \textrm{MeV}, \qquad \Gamma \ \sim \ 1000\ \textrm{MeV}. \qquad \label{sigma}$$ In this interpretation the second peak does not correspond to a second resonance – $f_0(1370)$ – but is another signal from the broad object. A second resonance would require a second circle, which is not seen.[@CERN-Munich; @CKM] Therefore, a complete phase shift analysis of the $\pi^0\pi^0$ data in terms of resonances is important for consolidation. We also investigated whether the state $f_0(1370)$, instead, appears with sizable coupling in the inelastic channels $\pi\pi \to K\overline K,\ \eta\eta$, where peaks in the considered mass region occur as well, although not all at the same position, see fig.1b,c. To this end we constructed the Argand diagrams for these channels in fig.2. A similar result for $K\overline K$ has been found already from earlier data.[@argonne] The movement of the amplitudes in the complex plane (fig.2) can be interpreted in terms of a superposition of a resonance and a slowly varying background. We identify the circles with the $f_0(1500)$ state, which has been studied in great detail by Crystal Barrel.[@CB] This resonance can be seen to interfere with the background with opposite signs in the two channels in figs.2a,b, which also explains the shift of the peak positions in fig.1b,c. Thus, the structures in the 1300 MeV region do not correspond to additional circles; therefore, no additional Breit-Wigner resonance $f_0(1370)$ is associated with the respective peaks.
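As a rough numerical illustration of eq.(\[sigma\]) (a sketch on our part, not the experimental analysis): an elastic Breit-Wigner amplitude with $m \sim \Gamma \sim 1000$ MeV indeed gives a phase that passes slowly through $90{^{\rm o}}$ near 1000 MeV and keeps rising towards 1400 MeV, as described above.

```python
import math

def bw_phase_deg(sqrt_s_mev, m=1000.0, gamma=1000.0):
    # Elastic Breit-Wigner phase shift: tan(delta) = m*Gamma / (m**2 - s)
    s = sqrt_s_mev ** 2
    return math.degrees(math.atan2(m * gamma, m ** 2 - s))

for e in (600, 800, 1000, 1200, 1400):
    print(e, "MeV:", round(bw_phase_deg(e), 1), "deg")
```

With these parameters the phase reaches only about $134{^{\rm o}}$ at 1400 MeV, i.e. most of a single resonance circle and no second loop.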
The $J^{PC}=0^{++}$ nonet of lowest mass ======================================== As members of the nonet we take the two isoscalars $f_0(980)$ and $f_0(1500)$, which are mixtures of flavor singlet and octet states. Furthermore, we include the isovector $a_0(980)$ and the strange $K^*(1430)$. Then the only scalar states with mass below $\sim$ 1600 MeV left out up to now are the broad “sigma”, to which we come back later, and the $a_0(1450)$, which could be a radially excited state. We find the mixing of the $f_0$ states to be like that of the pseudoscalars, namely, with flavour amplitudes $(u\overline u,d\overline d,s\overline s)$, approximately as $$\label{mixing} \begin{array}{l} \begin{array}{llllll} f_0(980) &\leftrightarrow & \eta^{\prime} (958) \ & \sim & \frac{1}{\sqrt{6}}(1,\ 1,\ 2) & \quad \textrm{(near}\ \textrm{singlet)} \vspace{2mm}\\ f_0(1500)& \leftrightarrow & \eta (547) \ & \sim & \frac{1}{\sqrt{3}}(1,\ 1,\ -1) & \quad \textrm{(near}\ \textrm{octet)} \end{array} \end{array}$$ We have been led to this classification and mixing by a number of observations: [*1. $J/\psi\to \omega,\varphi +X$ decays*]{}\ The branching ratios of $J/\psi$ into $\varphi \ \eta^{ \prime} (958)$ and $\varphi \ f_{ 0} (980)$ are of similar size and about twice as large as those into $\omega \ \eta^{ \prime} (958)$ and $\omega \ f_{ 0} (980)$, which is reproduced by the above flavor composition. [*2. Gell-Mann-Okubo mass formula*]{}\ This formula predicts the mass of the octet member $f_0^{(8)}$. With our octet members $a_0$ and $K^*_0$ as input one finds $m(f_0^{(8)})=1550$ MeV, or, with the $\eta$-$\eta^{'}$ type mixing included, $m(f_0^{(8)})=1600$ MeV. The small deviation of $\sim$10% in $m^2$ from the mass of the $f_0(1500)$ is tolerable and can be attributed to strange quark mass effects. [*3.
Two body decays of scalars*]{}\ Given the flavor composition eq.(\[mixing\]) we can derive the decay amplitudes into pairs of pseudoscalars, whereby we allow for an $s\overline s$ relative amplitude $S$ (for a similar analysis, see[@as1]). In particular, the branching ratios $$f_0(980)\to \pi\pi, K\overline K;\quad\ f_0(1500)\to \pi\pi, K\overline K, \eta\eta, \eta\eta^{'};\quad\ a_0(980),f_0(980)\to \gamma\gamma \nonumber$$ are found in satisfactory agreement with the data for values of $S$ around 0.5. [*4. Relative signs of decay amplitudes*]{}\ A striking prediction is the relative sign of the decay amplitudes of the $f_0(1500)$ into pairs of pseudoscalars: because of the negative sign in the $s\overline s$ component, see eq.(\[mixing\]), the sign of the $K\overline K$ decay amplitude is negative with respect to the $\eta\eta$ decay and also to the respective $f_2(1270)$ and glueball decay amplitudes. This prediction is indeed confirmed by the amplitudes in fig.2a,b, which show circles pointing in upward and downward directions, respectively. If $f_0(1500)$ were a glueball, then both circles should have positive sign as in fig.2b, but the experimental results are rather orthogonal to such an expectation. Further tests of our classification are provided by the predictions on the decays $J/\psi\to\varphi/\omega+f_0(1500)$ and the $\gamma\gamma$ decay modes of the scalars. The lightest $0^{++}$ glueball ============================== In the previous analysis we have classified the scalar mesons in the PDG tables below 1600 MeV with the exception of $f_0(400-1200)$ and also of $f_0(1370)$, which we did not accept as a standard Breit-Wigner resonance. We consider the broad spectrum in fig.1a with its two or three peaks as a single very broad object which interferes with the $f_0$ resonances. This “background” with slowly moving phase appears also in the inelastic channels (see fig.2).
It is our hypothesis that this very broad object with parameters eq.(\[sigma\]) is the lightest glueball. We do not exclude some mixing with the scalar nonet states, but it should be sufficiently small such as to preserve the main characteristics outlined before. We discuss next how this glueball assignment fits with phenomenological expectations.[@Close; @mo] [*1. The large width*]{}\ The unique feature of this state is its large width. There are two qualitative arguments[@mo] why this is natural for a light glueball:\ a) For a heavy glueball one expects a small width, as the perturbative analysis involves a small coupling constant $\alpha_s$ at high masses (“gluonic Zweig rule”[@HFPM]). For a light glueball around 1 GeV this argument doesn’t hold any more and a large $\alpha_s$ could yield a large width.\ b) The light $0^{++}$ states are coupled mainly to pairs of pseudoscalar particles. Then, for a scattering process through a $0^{++}$ channel the external particles are in an S-wave state; an intermediate $q\overline q$ resonance will be in a P-wave state but an intermediate $gg$ system in an S-wave again. Therefore the overlap of wave functions in the glueball case is larger and we expect $$\Gamma_{gb_0}\ \gg \ \Gamma_{q\overline q-hadron}. \label{gammaglu}$$ [*2. Reactions favorable for glueball production*]{}\ a) The “red dragon” shows up also in the centrally produced systems in high energy $pp$ collisions[@central] which are dominated by double Pomeron exchange, with new results presented by A.
Kirk.[@Kirk] Because of the gluonic nature of the Pomeron, this strong production coincides with the expectations.\ b) The broad low mass $\pi\pi$ spectrum is also observed in decays of radially excited states $\psi'\to\psi(\pi\pi)_s$ and $Y',Y''\to Y(\pi\pi)_s$, which are expected to be mediated by gluonic exchanges.\ c) The hadrons in the decay $J/\psi \to \gamma+\textrm{hadrons}$ are expected to be produced through 2-gluon intermediate states which could form a scalar glueball. However, in the low mass region $m<$ 1 GeV only little S-wave in the $\pi\pi$ channel is observed. [*3. Flavour properties*]{}\ The branching ratios of the $f_0(1370)$ – which we consider as part of the glueball – into $K\overline K$ and $\eta\eta$ compare favorably with expectations. [*4. Suppression in $\gamma\gamma$ collisions*]{}\ If the mixing of the glueball with charged particles is small, it should be weakly produced in $\gamma\gamma$ collisions. In the process $\gamma\gamma\to \pi^0\pi^0$ there is a dominant peak related to $f_2(1270)$ but, in comparison, a very small cross section in the low mass region around 600 MeV. This could be partly due to hadronic rescattering and absorption, partly due to the smallness of the 2 photon coupling of the intermediate states. Unfortunately, the data in the $f_2$ region leave a large uncertainty on the S-wave fraction ($<19$%[@Cball]). In a fit to the data which takes into account the one-pion-exchange Born terms and $\pi\pi$ rescattering, the two photon widths of the states $f_2(1270)$ and $f_0(400-1200)$ have been determined[@BP] as 2.84$\pm$0.35 and 3.8$\pm$ 1.5 keV, respectively.
If the $f_0$ were a light quark state like the $f_2$ we might expect comparable ratios of $\gamma\gamma$ and $\pi\pi$ decay widths, but we find (in units of $10^{-6}$) $$\displaystyle R_2= \frac{\Gamma(f_2(1270)\to\gamma\gamma)}{\Gamma(f_2(1270)\to\pi\pi)}\ \sim \ 15 ;\quad R_0= \frac{\Gamma(f_0(400-1200)\to\gamma\gamma)} {\Gamma(f_0(400-1200)\to\pi\pi)}\ \sim \ 4-6, \label{R02}$$ thus, for the scalar state, this ratio is 3-4 times smaller, and it could be smaller by another factor 3 at about the 2$\sigma$ level.[^2] A more precise measurement of the S-wave cross section in the $f_2$ region would be very important for this discussion. At present, we conclude that the $2\gamma$ width of the scalar state is indeed surprisingly small. In this model[@BP] an intermediate glueball would couple to photons through the intermediate $\pi^+\pi^-$ channel. [*5. Quark-antiquark and gluonic components in $\pi\pi$ scattering*]{}\ In the dual Regge picture the $2\to 2$ scattering amplitude is built either from the sequence of s-channel resonances or from the sequence of t-channel Regge poles. There is a second component (“two component duality”[@fh]) which corresponds to the Pomeron in the t-channel and is dual to a “background” in the direct s-channel. If the Pomeron is related to glueballs, then one should have, by crossing, a third component with a glueball in the direct s-channel, dual to exotic exchange.[@mo] The existence of the “background” process can be demonstrated by constructing the amplitudes for definite t-channel isospin $I_t$. Such an analysis has been carried out by Quigg[@quigg] for $\pi\pi$ scattering and is shown in fig.3. Similar to what has been found in $\pi N$ scattering[@hz] there are essentially background-free resonance circles for $I_t\neq 0$, but in the $I_t=0$ amplitude (Pomeron exchange) the background rises with energy and is sizable already below 1 GeV. 
We take this result as a further hint that low energy $\pi\pi$ scattering is not dominated by $q\overline q$ resonances alone. Theoretical expectations ======================== QCD results on glueballs ------------------------ [*1. Lattice QCD*]{}\ In the calculation without sea-quarks (“quenched approximation”) one finds the lightest glueball in the $0^{++}$ channel at masses 1500-1700 MeV (recent review[@Teper]). These results have motivated various recent searches and scenarios for the lightest glueball. The identification with the well established $f_0(1500)$ state, either with or without mixing with other states, has some phenomenological difficulties, especially the negative amplitude sign into $K\overline K$ (fig.2a). Some changes of these QCD predictions may occur if the full unquenched calculation is carried out. The first results by Bali et al.[@lattunq] indicate a decrease of the glueball mass with the quark masses; the latter are still rather large and correspond to $m_{\pi}\sim 700 \ldots 1000 $ MeV. For the moment we conclude that our light glueball hypothesis is not necessarily in conflict with the lattice QCD results. [*2. QCD sum rules*]{}\ In a recent analysis,[@Nar] it was found impossible to saturate the sum rules for the $0^{++}$ glueball with a single state near 1500 MeV alone. Rather, the inclusion of a light glueball component was required, assumed to be coupled to the states $\sigma_B(1000)$ and $\sigma_{B'}(1370)$. Already before, a sum rule solution with a light glueball $\sim 500$ MeV was proposed.[@bagan] [*3. Bag model*]{}\ In a model which considers quarks and gluons to be confined in a bag of comparable size, with radiative QCD corrections included,[@Barnes] the lightest glueball was suggested for $0^{++}$ at around 1 GeV mass.
Scalar nonet and effective Sigma variables ------------------------------------------ An important precondition for the assignment of glueball states is the understanding of the low mass $q\overline q$ spectroscopy. [*1. Renormalizable linear sigma models*]{}\ These models realize the spontaneous chiral symmetry breakdown and represent an attractive theoretical approach to the scalar and pseudoscalar mesons. An example is the approach by Törnqvist,[@Torn] which starts from a “bare” nonet respecting the OZI rule, while the observed hadron spectrum is strongly distorted by unitarization effects. In an alternative approach,[@njl] one starts from a 3-flavor Nambu-Jona-Lasinio model but includes a renormalizable effective action for the sigma fields with an instanton induced axial $U(1)$ symmetry-breaking term, following the suggestion by ’t Hooft.[@thooft] In this model $f_0(1500)$ is near the octet and the light isoscalar near the singlet state; different options are pursued[@njl] for $f_0(980)$ and $a_0(980)$, at least one of them should be a non-$q\overline q$ state. This suggestion of a large singlet-octet mixing and the classification of the $f_0(1500)$ is close to our phenomenological findings in sect.3. [*2. General effective QCD potential*]{}\ In our approach[@mo] we do not restrict ourselves to renormalizable interaction terms. In this way the consequences of chiral symmetry in different limits for the quark masses can be explored in a general QCD framework.[@PM] In particular, it is possible to keep both $f_0(980)$ and $a_0(980)$ as $q\overline q$ states. Their degeneracy in mass can be obtained, although not predicted. An expansion to first order in the strange quark mass is investigated. The Gell-Mann-Okubo formula is obtained in this approximation; with an $\eta$-$\eta'$ type mixing the observed states discussed in sect.3, with $f_0(1500)$ as the heaviest member of the nonet near the octet state, can be realized.
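As an arithmetic aside, the Gell-Mann-Okubo estimate quoted in sect.3 can be reproduced directly; we assume here the standard quadratic octet formula $m^2(f_0^{(8)}) = \frac{1}{3}\left(4\, m^2(K^*_0) - m^2(a_0)\right)$ (the value of 1600 MeV requires the $\eta$-$\eta'$ type mixing in addition).

```python
import math

# Quadratic Gell-Mann-Okubo formula for the scalar octet (masses in GeV)
m_a0, m_K0 = 0.980, 1.430
m_f0_octet = math.sqrt((4 * m_K0**2 - m_a0**2) / 3)
print(round(1000 * m_f0_octet), "MeV")   # about 1550 MeV
```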
Conclusions =========== We found a classification of the low lying $J^{PC}=0^{++}$ states which explains a large body of experimental and phenomenological results. The $q\overline q$ nonet includes $f_0(980)$ and $f_0(1500)$ with mixing similar to the pseudoscalars $\eta'$ and $\eta$, furthermore $a_0(980)$ and $K^*(1430)$; $\eta'$ and $f_0(980)$ appear as a genuine parity doublet. The lightest glueball is identified with the broad “sigma” corresponding to $f_0(400-1200)$ and $f_0(1370)$ of the PDG. The basic triplet of light binary glueballs is completed[@mo] by the states $\eta(1440)$ with $0^{-+}$ and $f_J(1710)$ with $2^{++}$, not discussed here. It will be important to further study production and decay of the states under discussion. Some particular questions we came across here include: a) unique phase shift solution for $\pi\pi$ scattering above 1 GeV for both charge modes ($+-$ and $00$), b) production of $f_0(1500)$ in $J/\psi$ decays, c) S-waves in radiative $J/\psi$ decays and d) $\gamma\gamma$ widths of the scalar particles. It remains an open question in this approach, though, what the physical origin of the $a_0 - f_0$ mass degeneracy is and where the mirror symmetry of the mass patterns in the scalar and pseudoscalar nonets comes from. A possible explanation for the latter structure is suggested by a renormalizable model with an instanton induced $U_A(1)$-breaking interaction. [99]{} B. R. Martin, D. Morgan and G. Shaw, Pion-Pion Interactions in Particle Physics (Academic Press, London, 1976). H. Fritzsch and P. Minkowski, Nuovo Cim. [**30A**]{}, 393 (1975). P. Minkowski and W. Ochs, Eur. Phys. J. C (1999) DOI 10.1007/s100529900044, hep-ph/9811518v2. Particle Data Group, C. Caso [*et al*]{}, Eur. Phys. J. [**C3**]{}, 1 (1998). K. Rybicki, these proceedings. B. Hyams [*et al*]{}, Nucl. Phys. [**64B**]{}, 4 (1973); W. Ochs, LMU Munich, thesis 1973. H. Becker [*et al*]{}, Nucl. Phys. [**B150**]{}, 301 (1979); [**B151**]{}, 46 (1979). GAMS Coll., D.
Alde [*et al*]{}, Z. Phys. [**C66**]{}, 375 (1995). BNL – E852 Coll., J. Gunter [*et al*]{}, hep-ex/9609010. D.M. Binnie [*et al*]{}, Phys. Rev. Lett. [**31**]{}, 1534 (1973). D. Morgan and M. R. Pennington, Phys. Rev. [**D48**]{}, 1185 (1993). V. V. Anisovich [*et al*]{}, Phys. Lett. [**B389**]{}, 388 (1996); [**B382**]{}, 429 (1996). BNL Coll., A. Etkin [*et al*]{}, Phys. Rev. [**D25**]{}, 1786 (1982). IHEP-IISN-LAPP Coll., F. Binon [*et al*]{}, Nuov. Cim. [**78A**]{}, 313 (1983). Argonne Coll., D. Cohen [*et al*]{}, Phys. Rev. [**D22**]{}, 2595 (1980). C. Amsler [*et al*]{}, Phys. Lett. [**B342**]{}, 433 (1995); [**B355**]{}, 425 (1995). V.V. Anisovich and A.V. Sarantsev, Phys. Lett. [**B382**]{}, 429 (1996). F.E. Close, Rep. Prog. Phys. [**51**]{}, 833 (1988). AFS Coll., T. Akesson [*et al*]{}, Nucl. Phys. [**B264**]{}, 154 (1986);\ GAMS Coll., D. Alde [*et al*]{}, Phys. Lett. [**B397**]{}, 350 (1997);\ WA76 Coll., T.A. Armstrong [*et al*]{}, Phys. Lett. [**B227**]{}, 186 (1989). A. Kirk, these proceedings. Crystal Ball Coll., H. Marsiske [*et al*]{}, Phys. Rev. [**D41**]{}, 3324 (1990). M. Boglione and M.R. Pennington, hep-ph/9812258, Eur. Phys. J. C (1999) DOI 10.1007/s100529900058, and M.R. Pennington, these proceedings. P.G.O. Freund, Phys. Rev. Lett. [**20**]{}, 235 (1968); H. Harari, [*ibid.*]{}, p. 1395. C. Quigg, in: Proc. 4th Int. Conf. on Experimental Meson Spectroscopy, Boston, 1974 (AIP Conf. Proc. no. 21, particles and fields subseries no. 8) p. 297. H. Harari and Y. Zarmi, Phys. Rev 187, 2230 (1969). M. Teper, in: Confinement, Duality and Nonperturbative Aspects of QCD (Ed. P. van Baal), NATO ASI Series [**B368**]{}, 43 (1998). G. S. Bali [*et al*]{}, Nucl. Phys. B (Proc. Suppl.) [**53**]{}, 239 (1997); [**63**]{}, 209 (1998). S. Narison, Nucl. Phys. [**B509**]{}, 312 (1998); (Proc. Suppl.) [**B64**]{}, 210 (1998). E. Bagan and T.G. Steele, Phys. Lett. [**B243**]{}, 413 (1990). T. Barnes, F.E. Close and S. Monaghan, Nucl. Phys. 
[**B198**]{}, 380 (1982). N. A. Törnqvist, these proceedings, see also Z. Phys. [**C68**]{}, 647 (1995). V. Dmitrašinović, Phys. Rev. [**C53**]{}, 1383 (1996);\ E. Klempt, B.C. Metsch, C.R. Münz and H.R. Petry, Phys. Lett. [**B361**]{}, 160 (1995); L. Burakovsky and T. Goldmann, Nucl. Phys. [**A628**]{}, 87 (1998). G. ’t Hooft, Phys. Rev. [**D14**]{}, 3432 (1976). P. Minkowski, Nucl. Phys. (Proc. Suppl.) [**7A**]{}, 118 (1989). [^1]: For a summary of the early phase of studies in the seventies, see[@mms]. [^2]: We thank Mike Pennington for the discussions about their analysis.
[**X-RAY OBSERVATIONS OF DISTANT LENSING CLUSTERS**]{}\ S. Schindler\ [*Max-Planck-Institut für extraterrestrische Physik, Giessenbachstraße, 85740, Garching, Germany*]{}\ [*Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Straße 1, 85740, Garching, Germany*]{} Abstract {#abstract .unnumbered} ======== X-ray observations of three clusters are presented: RXJ1347.5-1145, Cl0939+47, and Cl0500-24. Although these clusters are in the same redshift range (0.32 - 0.45) and all act as gravitational lenses, they show very different properties. RXJ1347.5-1145 seems to be an old, well relaxed system, with a regular morphology, high X-ray luminosity, high temperature, high metallicity and a strong cooling flow. The other two clusters have the appearance of young systems with substructure and low X-ray luminosity. The optical and X-ray luminosities show hardly any correlation. A comparison with nearby clusters shows that many properties – like e.g. the metallicity or the amount of subclustering – show a large scatter and no clear trend with time. Introduction ============ Distant clusters are important objects to test cosmological models. A comparison of the properties of distant clusters with those of nearby clusters gives insight into when and how the evolution took place. Here we present the X-ray properties of three relatively distant clusters in the redshift range of 0.32-0.45. All show a gravitational lensing effect. RXJ1347.5-1145 and Cl0500-24 show bright arcs (Schindler et al. 1995; Giraud 1988), Cl0939+47 shows a weak lensing signal (Seitz et al. 1996). The presence of a gravitational lensing signal means that they must all be massive clusters. From these characteristics one might expect that they are similar in other properties, too. But already the way they were detected shows that they are far from similar.
While Cl0939+47 and Cl0500-24 were detected optically (Cl0939+47 is actually the most distant Abell cluster), RXJ1347.5-1145 was detected in X-rays in the ROSAT All Sky Survey. In the following we will show that their X-ray properties are very different as well. The most luminous X-ray cluster RXJ1347.5-1145 ============================================== For the analysis of RXJ1347.5-1145 we use a ROSAT/HRI observation (see Fig. 1) of 15760 seconds and an ASCA observation of 58300 seconds. These data reveal several extreme cluster properties. The X-ray luminosity of RXJ1347.5-1145, $7.3\pm0.8\times 10^{45}$ erg/s in the ROSAT band (0.1-2.4 keV) or $2.1\pm0.4\times 10^{46}$ erg/s bolometric, is the highest luminosity of a cluster found so far. In the ASCA spectrum (Fig. 2) an Fe line can be detected. It corresponds to a metallicity of $0.33\pm0.10$ in solar units. As this is a typical value for nearby clusters, it is quite surprising to find it in such a relatively distant cluster. From the ASCA spectrum we can also determine the temperature. With $9.3^{+1.1}_{-1.0}$ keV, RXJ1347.5-1145 is a relatively hot cluster. The strongly peaked emission (see Fig.1) suggests the presence of a cooling flow. We find a central cooling time of $1.2\times10^9$ yr. With the standard assumptions we derive a cooling flow radius of 29 arcseconds (200 kpc) and a mass accretion rate of more than 3000 $\msol$/yr. Obviously, RXJ1347.5-1145 is an extreme case also in terms of its cooling flow. Such a strong cooling flow suggests that there has been no recent merging. Otherwise the merging would have disrupted the cooling flow or at least decreased the mass accretion rate. For a comparison of lensing and X-ray masses we calculate from the X-ray data the projected mass within the radius of the arcs, $2.1\times 10^{14}\msol$. The lensing mass is still preliminary because the redshifts of the arcs are only estimated and the lens model is very simple.
For a redshift range of z=0.7-1.2, we find a lensing mass of 4.4-7.8$\times10^{14}\msol$. This discrepancy with the X-ray value can be removed with a better lens model. Summarizing, although RXJ1347.5-1145 is a distant cluster, it shows the properties of a well evolved, old system: spherically symmetric morphology, high luminosity, high temperature, high metallicity, and obviously no merging in the recent past because of the huge cooling flow (see Table 1). For more details see Schindler et al. (1997). The optically rich cluster Cl0939+47 ==================================== Cl0939+47 is an extremely rich, optically well studied cluster (Dressler & Gunn 1992). For the X-ray analysis we use a ROSAT/PSPC observation of 14350 seconds. Fig. 3 shows the ROSAT/PSPC image of Cl0939+47. It has the appearance of a non-virialized cluster. It is not centrally peaked like RXJ1347.5-1145 but shows substructure. Three maxima are visible. Ellipse fits to different isophote levels yield ellipticities up to 0.75. The X-ray luminosity, $7.9\pm0.3 \times 10^{44}$ erg/s (0.1-2.4 keV), is rather on the low side for such a rich cluster. Also the temperature derived from the PSPC spectrum, $2.9^{+1.3}_{-0.8}$ keV, is relatively low. A mass comparison is difficult for this cluster because, firstly, the weak lensing mass was determined only in the L-shaped region of an HST/WFPC observation (Seitz et al. 1996) and, secondly, the X-ray mass estimate has a large error because spherical symmetry has to be assumed, which is not a good approximation for this cluster. But it seems that the X-ray mass is about a factor of three smaller than the lensing mass. All the X-ray properties of Cl0939+47 (substructure, low X-ray luminosity, low temperature) as well as the large fraction of post-starburst galaxies (Belloni et al. 1995) point to a young, non-relaxed system (for details see Schindler & Wambsganss 1996).
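The total masses quoted here and in Table 1 are consistent with the standard hydrostatic isothermal $\beta$-model estimate $M(<r) = 3\beta k T r^3 / [G \mu m_p (r^2 + r_c^2)]$. The following sketch (our own cross-check; the mean molecular weight $\mu = 0.6$ is an assumption) reproduces the total mass of RXJ1347.5-1145:

```python
# Hydrostatic isothermal beta-model mass (SI units; mu = 0.6 assumed)
G, m_p, M_sun = 6.674e-11, 1.6726e-27, 1.989e30
keV, kpc = 1.602e-16, 3.0857e19   # joule per keV, metre per kpc

def beta_model_mass(beta, T_keV, r_c_kpc, r_kpc, mu=0.6):
    r, r_c = r_kpc * kpc, r_c_kpc * kpc
    return 3.0 * beta * T_keV * keV * r**3 / (G * mu * m_p * (r**2 + r_c**2))

# RXJ1347.5-1145: beta = 0.56, T = 9.3 keV, r_c = 57 kpc (Table 1)
M = beta_model_mass(0.56, 9.3, 57.0, 1000.0) / M_sun
print(f"M(<1 Mpc) = {M:.1e} M_sun")
```

The result, about $5.8\times10^{14}\msol$ within 1 Mpc, agrees with the $M_{tot}$ entry in Table 1.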
Cl0500-24: a cluster with two subclusters in the line of sight ============================================================== The cluster Cl0500-24 is similar to Cl0939+47 in many respects. It is also an optically rich cluster which shows substructure. The substructure is not only found in the ROSAT/HRI image (Fig. 4), but also in the velocity distribution of the cluster galaxies, in which Infante et al. (1994) found two subclusters with a relative velocity of about 3000 km/s. A comparison of the spatial distribution of the subcluster galaxies and the X-ray emission (Fig. 4) suggests that only one of the subclusters is X-ray luminous, the subcluster around galaxy N. This is an indication that the N subcluster is massive. The C subcluster, however, must be massive as well, because the arc has its curvature towards galaxy C. Obviously, there are two components in this cluster which have a very different gas content. The X-ray luminosity, $3.1^{+0.6}_{-0.4}\times10^{44}$ erg/s, is surprisingly low for such a rich cluster. This fits well with the assumption that only part of the cluster is X-ray luminous. An X-ray mass estimate yields a smaller mass than the lensing mass model by Wambsganss et al. (1989). With the new ASCA temperature (Ota et al. 1997; see also Mitsuda, this volume) the X-ray mass is $0.5\times10^{14}\msol$ at 22 arcmin, while the lensing model gives a mass of $1.4\times10^{14}\msol$ at the same radius. This discrepancy can be explained easily if one assumes that the cluster consists of two subclusters, out of which only one is X-ray luminous. The X-ray measurement traces only the potential well filled with gas, while lensing is sensitive to all the mass along the line of sight. Furthermore, a discrepancy can arise because the two mass estimates have different centres: the X-ray mass is centred on the X-ray maximum (i.e. close to galaxy N) while the mass model is centred close to galaxy C.
Summarizing, Cl0500-24 shows the characteristics of a young system: it has substructure and a low X-ray luminosity (for more details see Schindler & Wambsganss 1997).

Table 1: X-ray properties of the three clusters.

\begin{tabular}{|l|c|c|c|}
\hline
 & RXJ1347.5-1145 & Cl0939+47 & Cl0500-24 \\
\hline
redshift & 0.45 & 0.41 & 0.32 \\
$L_X$(0.1-2.4 keV) [erg/s] & $7.3\pm0.8\times10^{45}$ & $7.9\pm0.3\times10^{44}$ & $3.1^{+0.6}_{-0.4}\times10^{44}$ \\
$r_c$ [kpc] & 57 & 1100 & 30 \\
$\beta$ & 0.56 & 1.9 & 0.36 \\
metallicity (solar) & $0.33\pm0.10$ & -- & -- \\
$M_{gas}$($<1$ Mpc) & $2.0\times 10^{14}\msol$ & $0.8\times10^{14}\msol$ & $0.5\times 10^{14}\msol$ \\
$M_{tot}$($<1$ Mpc) & $5.8\pm1.2\times 10^{14}\msol$ & $2.6^{+1.2}_{-0.6}\times10^{14}\msol$ & $2.7^{+1.4}_{-0.7}\times 10^{14}\msol$ \\
gas mass fraction ($<1$ Mpc) & 30-40\% & 25-50\% & 12-25\% \\
cooling flow radius & 200 kpc & -- & -- \\
central cooling time & $1.2\times10^9$ yr & $3\times10^{10}$ yr & -- \\
mass accretion rate & $\grsim 3000\msol$/yr & -- & -- \\
\hline
\end{tabular}

Conclusions =========== Although the presented clusters are all massive and at about the same distance, they are far from similar. Some have the appearance of young systems, still far away from virial equilibrium (Cl0939+47, Cl0500-24); others seem to be already quite old systems (RXJ1347.5-1145). This difference is not only evident from the amount of substructure but also from the X-ray luminosity, the gas temperature or the metallicity (see Tables 1 and 2). These three clusters show hardly any correlation of optical and X-ray luminosity. A comparison of the properties of several distant clusters with average properties of nearby clusters is shown in Table 2.

Table 2: Comparison of distant clusters with average properties of nearby clusters.

\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
 & nearby & Cl0500-24 & Cl0939+47 & RXJ1347 & Cl0016+16$^a$ & AXJ2019$^b$ \\
\hline
redshift & & 0.32 & 0.41 & 0.45 & 0.55 & 1.0 \\
$L_X$(bol) [$10^{45}$ erg/s] & 0.05-5 & 0.6 & 1.1 & 21 & 5.0 & 1.9 \\
metallicity (solar) & 0.2-0.5 & 0.0-1.5$^c$ & small & 0.33 & small & $\approx 1.7$ \\
temperature [keV] & 2-10 & 7.2$^c$ & 2.9 & 9.3 & 8.2$^d$ & 8.6 \\
substructure & in 25\%$^e$ & yes & yes & no & yes & ? \\
\hline
\end{tabular}
Also the clusters Cl0016+16 (Neumann & Böhringer 1997) and AXJ2019 (Hattori et al. 1997) are included. The clusters are sorted according to their distance. The comparison shows that there is a large scatter in the properties but no clear evolutionary trend (see also Mushotzky & Loewenstein 1997). This result points to a very early evolution. Obviously, for studying evolutionary effects in clusters one has to observe large samples of clusters which are even more distant. It is a pleasure to thank Hans Böhringer, Makoto Hattori, Doris Neumann, and Joachim Wambsganss for inspiring collaborations. I acknowledge financial support by the Verbundforschung. References {#references .unnumbered} ========== Belloni, P., Bruzual, A.G., Thimm, G.J., Röser, H.-J. 1995, A&A, 297, 61 Dressler, A., Gunn, J.E. 1992, ApJS, 78, 1 Hattori, M., Ikebe, Y., Asaoka, I., Takeshima, T., Böhringer, H., Mihara, T., Neumann, D.M., Schindler, S., Tsuru, T., Tamura, T. 1997, Nature, in press Giraud, E. 1988, ApJ, 334, L69 Infante, L., Fouqué, P., Hertling, G., Way, M.J., Giraud, E., Quintana H. 1994, A&A, 289, 381 Mushotzky, R.F. & Loewenstein, M. 1997, ApJ, 481, L63 Neumann, D.M., Böhringer, H. 1997, MNRAS, 289, 123 Neumann, D.M. 1997, Ph.D. Thesis, Ludwigs-Maximilians-Universität, München Ota, N., Mitsuda, K., Fukazawa, Y., 1997, preprint Schindler, S., Guzzo, L., Ebeling, H., Böhringer, H., Chincarini, G., Collins, C.A., De Grandi, S., Neumann, D.M., Briel U.G., Shaver, P., Vettolani, G. 1995, A&A, 299, L9 Schindler, S., Wambsganss, J. 1996, A&A, 313, 113 Schindler, S., Wambsganss, J. 1997, A&A, 322, 66 Schindler, S., Hattori, M., Neumann, D.M., Böhringer, H. 1997, A&A, 317, 646 Seitz, C., Kneib J.-P., Schneider P., Seitz, S. 1996, A&A, 314, 707 Tsuru, T., Koyama, K., Hughes, J.P., Arimoto, N., Kii, T., Hattori, M. 1996, in UV and X-Ray Spectroscopy of Astrophysical and Laboratory Plasmas, ed. K. Yamashita and T.
Watanabe (Tokyo: Universal Academic Press), 375 Wambsganss, J., Giraud, E., Schneider, P., Weiss, A. 1989, ApJ, 337, L73
--- abstract: 'We study the orthogonal complement of the Hilbert subspace considered by van Eijndhoven and Meyers in [@Van1990] and associated to holomorphic Hermite polynomials. A polyanalytic orthonormal basis is given and the explicit expressions of the corresponding reproducing kernel functions and Segal–Bargmann integral transforms are provided.' address: 'A.G.S.-L.A.M.A., CeReMAR, Department of Mathematics, P.O. Box 1014, Faculty of Sciences, Mohammed V University in Rabat, Morocco' author: - 'A. Benahmadi' - 'A. Ghanmi' - 'M. Souid El Ainin' title: The orthogonal complement of the Hilbert space associated to holomorphic Hermite polynomials --- Introduction ============ The well-known Hermite polynomials and their various generalizations have been one of the most interesting fields of research since their introduction by Lagrange and Chebyshev. They appear in a wide spectrum of research domains including engineering, pure and applied mathematics, and different branches of physics. The classical ones on the real line ${\mathbb R}$ are defined by ([@Rainville71; @Szego75; @Thangavelu93]) $$H_m(x) = (-1)^m e^{x^2} \partial_x^m e^{-x^2} = m!\sum_{k=0}^{[m/2]} \frac{(-1)^k}{k!} \frac{(2x)^{m-2k}}{(m-2k)!} .$$ Here and in what follows, we use $\partial_x$ to denote the partial differential operator $\partial/\partial x$. They can be extended to the whole complex plane ${\mathbb C}$ by replacing the real $x$ by the complex variable $z$, leading to the class of holomorphic Hermite polynomials $ H_m(z)$. The latter inherit most of the algebraic properties of $H_m(x)$ by analytic continuation. Moreover, they possess further interesting analytic properties.
The associated functions $$\begin{aligned} \label{onset} \psi^s_m (z) = \left( \frac{1-s}{\pi \sqrt{s}} \right)^{1/2} \left(\frac{1-s}{1+s}\right)^{m/2}\frac{e^{-\frac{z^2}{2}}}{\sqrt{2^m m!}} H_m(z) , \end{aligned}$$ for given fixed $ 0<s<1$, satisfy the orthogonality relation ([@Van1990]) $$\begin{aligned} \label{orthRelation} \int_{{\mathbb C}}\psi^s_n (z)\overline{ \psi^s_m(z)}e^{-\frac{1-s^2}{2s}|z|^2}e^{\frac{1+s^2}{4s}(z^2+\overline{z}^2)} d\lambda(z) = \delta_{n,m},\end{aligned}$$ where $d\lambda(z)=dxdy$ is the Lebesgue measure on ${\mathbb C}\equiv {\mathbb R}^2$. This is to say that the functions $\psi^s_n (z)$ form an orthonormal system in the Hilbert space ${\mathscr{H}^{2,s}({\mathbb C})}:=L^2({\mathbb C},\omega_s d\lambda)$, where the weight function $\omega_s$ is given by $$\omega_s(z,{\overline{z}})=e^{\frac{1+s^2}{4s}(z^2+\overline{z}^2)-\frac{1-s^2}{2s}|z|^2} .$$ Equivalently, if $M_\alpha$ denotes the multiplication operator $$\begin{aligned} \label{MultOpMs} [ M_\alpha f](z) := M_\alpha(z) f(z)= e^{\frac{1+s^2}{4s} z^2 }f(z) , \quad M_\alpha(z):= e^{\frac{1+s^2}{4s} z^2 },\end{aligned}$$ with $\alpha=\alpha_s =\frac{1+s^2}{4s}$, then the functions $$\begin{aligned} \label{onset2} \widetilde{\psi}^s_m (z) = [ M_\alpha \psi^s_m ](z) \end{aligned}$$ form an orthonormal system in ${\mathcal{L}^{2,{\nu}}({\mathbb C})}:= L^2({\mathbb C},e^{-{\nu}|z|^2}d\lambda)$, where $ \nu={\nu_s}= \frac{1-s^2}{2s}$. Accordingly, we define the Hilbert subspace ${\mathcal{X}_{s}({\mathbb C})}$ as in [@Van1990] by ${\mathcal{X}_{s}({\mathbb C})}=\mathcal{H}ol\cap {\mathscr{H}^{2,s}({\mathbb C})}$. Its companion ${\mathcal{F}^{2,{\nu}}({\mathbb C})}=\mathcal{H}ol\cap {\mathcal{L}^{2,{\nu}}({\mathbb C})}= M_\alpha ({\mathcal{X}_{s}({\mathbb C})})$ is the classical Bargmann–Fock space of weight ${{\nu}}$ (see e.g. [@BenahmadiG2018; @Folland89]). The aim of the present paper is threefold: 1. Review and complete the study of the space ${\mathcal{X}_{s}({\mathbb C})}$.
In particular, we provide the associated Segal–Bargmann transforms for the configuration space $L^2({\mathbb R})$. See Section 2. 2. Study a Hilbertian decomposition of ${\mathscr{H}^{2,s}({\mathbb C})}$ in terms of some reproducing kernel Hilbert subspaces ${\mathcal{X}_{n,s}({\mathbb C})}$, and provide for each ${\mathcal{X}_{n,s}({\mathbb C})}$ an orthonormal basis generalizing the holomorphic one to the polyanalytic setting, as well as the explicit expression of the reproducing kernel of ${\mathcal{X}_{n,s}({\mathbb C})}$. See Section 3. 3. We also give the corresponding Segal–Bargmann integral transform. See Section 3. Complements on ${\mathcal{X}_{s}({\mathbb C})}$ ================================================ We begin with the following \[RepKer0\] The functions $\psi^s_n $ constitute an orthonormal basis of the reproducing kernel Hilbert space ${\mathcal{X}_{s}({\mathbb C})}$ with kernel given explicitly by $$\begin{aligned} \label{RepKern} K^s(z,w)=\frac{1-s^2}{2\pi s} e^{-\frac{1+s^2}{4s}(z^2+\overline{w}^2)+\frac{1-s^2}{2s}z\overline{w}}. \end{aligned}$$ The proof can be handled by invoking the unitary operator $M_\alpha$ and observing that the functions $$\begin{aligned} \label{scaledbasis} \phi^s_m(z) = \frac{1}{\sqrt{\pi m!}} \left(\frac{1-s^2}{2s}\right)^{(m+1)/2} e^{-\frac{1+s^2}{4s}z^2}z^m \end{aligned}$$ form an orthonormal basis of ${\mathcal{X}_{s}({\mathbb C})}$, so that one obtains the explicit expression of $ K^s(z,w)$ by computing $ K^s(z,w)=\sum\limits_{m=0}^{+\infty} \phi^s_m(z)\overline{\phi^s_m(w)}$ and then using the generating function of the Hermite polynomials $H_n(z)$ ([@Rainville71 p. 130]). \[RemRepKer\] The expression of the reproducing kernel can also be derived in an easy way by appealing to the following general principle. Let $\mathcal{H}$ be a separable reproducing kernel Hilbert space (RKHS) on the complex plane and denote by $K^{\mathcal{H}}$ its reproducing kernel function.
If $M$ is the multiplication operator by a function $M(z):= e^{\psi(z)}$, then $\mathcal{H}'= M\mathcal{H}$ is a RKHS whose kernel function is given by $$\begin{aligned} \label{RemRepKerF} K^{\mathcal{H}'}(z,w)=e^{\psi(z)} K^{\mathcal{H}} (z,w) e^{\overline{\psi(w)}}. \end{aligned}$$ The space $\mathop{\cup}\limits_{0<s<1} {\mathcal{X}_{s}({\mathbb C})}=S^{1/2}_{1/2}$ is the Gelfand–Shilov space (of holomorphic functions) extended to ${\mathbb C}$ (see [@Van1990]). In the sequel, we consider the integral transform of Segal–Bargmann type $$\begin{aligned} \label{SBT} [\mathscr{B}_s f] (z) := \int_{{\mathbb R}} B_s(t,z) f(t) dt \end{aligned}$$ associated to the kernel function $$\begin{aligned} \label{KerFct} B_s(t,z) := \left( \frac{1-s^2}{2\pi s\sqrt{s \pi}} \right)^{1/2} \exp\left( - \frac{1}{2s} t^2 - \frac{1}{2s} z^2 + \frac{\sqrt{1-s^2}}{s} tz \right) . \end{aligned}$$ Then, we assert \[thmSBT\] The transform $\mathscr{B}_s$ defines a unitary isometric integral transform from the configuration Hilbert space $L^{2}({\mathbb R})$ onto ${\mathcal{X}_{s}({\mathbb C})}$. The kernel function $ B_s(t,z)$ can be rewritten as $$\begin{aligned} \label{KerFctExp} B_s(t,z) := \sum_{m=0}^\infty f_m(t) \psi^s_m (z), \end{aligned}$$ where the functions $$\label{Orthonbasis} f_m(t) = \frac{e^{-\frac{t^2}2}}{\sqrt{2^m m!\sqrt{\pi} }} H_m(t)$$ form an orthonormal basis of $L^{2}({\mathbb R})$. Indeed, we have $$\begin{aligned} \sum_{m=0}^\infty f_m(t) \psi^s_m (z) &= \left( \frac{1-s}{\pi \sqrt{s \pi}} \right)^{1/2} e^{-\frac{1}{2}(t^2+z^2)} \sum_{m=0}^\infty \left(\frac{1-s}{1+s}\right)^{m/2} \frac{H_m(t) H_m(z)}{2^m m!}. \end{aligned}$$ The rest of the proof is straightforward, making use of the Mehler formula for the Hermite polynomials extended to the complex plane, to wit ([@Mehler1866 p.174, Eq. (18)], see also [@Rainville71 p.198, Eq.
(2)]) $$\label{MehlerkernelHnsigma} \sum_{m=0}^\infty \frac{\lambda^m }{2^m m!} H_m (t) H_m (z) = \frac{1}{\sqrt{1 - \lambda^2}} \exp\left( \frac{- \lambda^2 (t^2 + z^2) + 2 \lambda tz }{1 - \lambda^2} \right)$$ valid for every fixed $0<\lambda<1$. It follows that $[\mathscr{B}_s f_m] (z) = \psi^s_m (z)$. Moreover, the inversion formula of $\mathscr{B}_s$ is given by $$[\mathscr{B}_s^{-1} \varphi] (t) = \int_{{\mathbb C}} \varphi (z) B_s(t,{\overline{z}}) \omega_s(z,{\overline{z}}) d\lambda(z).$$ By considering $\widetilde{B_s}(t,z):= s^{1/4} B_s(s^{1/2} t,z) $, we define an integral transform $\widetilde{\mathscr{B}}_s$ from $L^{2}({\mathbb R})$ onto ${\mathcal{X}_{s}({\mathbb C})}$ such that $[\widetilde{\mathscr{B}}_s f_m] (z) = \phi^s_m (z)$, where the $\phi^s_m$ are as above, since $$\widetilde{B_s}(t,z) = \sum_{m=0}^\infty f_m(t) \phi^s_m(z).$$ A special orthonormal basis of ${\mathscr{H}^{2,s}({\mathbb C})}$ ================================================================= The multiplication operator $M_\alpha: f \longmapsto M_\alpha f=e^{\alpha z^2}f $ defines a unitary operator from ${\mathscr{H}^{2,s}({\mathbb C})}$ onto ${\mathcal{L}^{2,{\nu}}({\mathbb C})}$. Moreover, it maps isometrically the Hilbert subspace ${\mathcal{X}_{s}({\mathbb C})}$ onto the Bargmann–Fock space ${\mathcal{F}^{2,{\nu}}({\mathbb C})}$.
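These unitarity claims rest on the orthogonality relation recalled in the introduction. As a numerical sanity check (ours, not from the paper), a few inner products of the $\psi^s_m$ against the weight $\omega_s$ can be tested on a truncated grid:

```python
# Check a few inner products <psi_n, psi_m> in L^2(C, omega_s dlambda).
import math
import numpy as np

s = 0.4  # any fixed 0 < s < 1

def hermite(m, z):
    # physicists' Hermite polynomials via H_{k+1} = 2 z H_k - 2 k H_{k-1}
    h0, h1 = np.ones_like(z), 2 * z
    if m == 0:
        return h0
    for k in range(1, m):
        h0, h1 = h1, 2 * z * h1 - 2 * k * h0
    return h1

def psi(m, z):
    c = math.sqrt((1 - s) / (math.pi * math.sqrt(s))) * ((1 - s) / (1 + s)) ** (m / 2)
    return c * np.exp(-z**2 / 2) * hermite(m, z) / math.sqrt(2**m * math.factorial(m))

x = np.linspace(-8, 8, 1601)
y = np.linspace(-8, 8, 1601)
X, Y = np.meshgrid(x, y)
Z = X + 1j * Y
# weight omega_s(z, zbar); z^2 + zbar^2 = 2(x^2 - y^2) is real
w = np.exp((1 + s**2) / (4 * s) * (Z**2 + np.conj(Z)**2).real
           - (1 - s**2) / (2 * s) * np.abs(Z)**2)
dA = (x[1] - x[0]) * (y[1] - y[0])

def inner(n, m):
    return np.sum(psi(n, Z) * np.conj(psi(m, Z)) * w) * dA

print(round(abs(inner(0, 0)), 4), round(abs(inner(1, 1)), 4), round(abs(inner(1, 0)), 6))
# -> 1.0 1.0 0.0
```

The grid sum converges quickly here because the combined integrand decays like a Gaussian in both $x$ and $y$ for $0<s<1$.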
Therefore, an orthogonal decomposition of ${\mathscr{H}^{2,s}({\mathbb C})}$ can be deduced easily from the one of ${\mathcal{L}^{2,{\nu}}({\mathbb C})}$, $ {\mathcal{L}^{2,{\nu}}({\mathbb C})}= \bigoplus_{n=0}^\infty{\mathcal{F}^{2,{\nu}}_n({\mathbb C})}, $ given in terms of the polyanalytic Hilbert spaces $$\begin{aligned} {\mathcal{F}^{2,{\nu}}_n({\mathbb C})}= Ker|_{{\mathcal{L}^{2,{\nu}}({\mathbb C})}} \left( \Delta_{\nu} - n{{\nu}} Id \right) \end{aligned}$$ where $\Delta_{\nu} := -\partial_z\partial_{{\overline{z}}}+\nu{\overline{z}}\partial_{{\overline{z}}}$ and with $ {\mathcal{F}^{2,{\nu}}_0({\mathbb C})}= {\mathcal{F}^{2,{\nu}}({\mathbb C})}$. See e.g. [@GhIn2005JMP] for details. In fact, the consideration of ${\mathcal{X}_{n,s}({\mathbb C})}:=M_{-\alpha}{\mathcal{F}^{2,{\nu}}_n({\mathbb C})}$ leads to the orthogonal decomposition $$\begin{aligned} {\mathscr{H}^{2,s}({\mathbb C})}= \bigoplus_{n=0}^\infty {\mathcal{X}_{n,s}({\mathbb C})}.\end{aligned}$$ An immediate orthonormal basis of ${\mathcal{X}_{n,s}({\mathbb C})}$ is then given by $ e^{-\alpha z^2} H_{m,n}^{{{\nu}}}(z,{\overline{z}})$ for varying $m,n=0,1,2,\cdots $, where $$H_{m,n}^{{{\nu}}}(z,{\overline{z}}) := (-1)^{m+n} e^{{{\nu}}|z|^2} \partial_{{\overline{z}}}^m \partial_z^n \left( e^{-{{\nu}}|z|^2}\right)$$ denotes the weighted polyanalytic complex Hermite polynomials [@Gh13ITSF; @Gh2017; @Ito52], generalizing the monomials ${{\nu}}^m z^m=H_{m,0}^{{{\nu}}}(z,{\overline{z}})$. The main aim in this section is to provide another “nontrivial” orthonormal basis $\psi^s_{m,n}(z,{\overline{z}})$ of ${\mathscr{H}^{2,s}({\mathbb C})}$, consisting of polyanalytic functions generalizing $\psi^s_m $ and whose first elements are the holomorphic functions $\psi^s_m (z)$, i.e., $\psi^s_{m,0}(z,{\overline{z}})=\psi^s_m (z)$, thereby obtaining an appropriate basis of the space ${\mathcal{X}_{n,s}({\mathbb C})}$.
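Treating $z$ and $\overline{z}$ as independent variables (Wirtinger calculus), the Rodrigues-type definition of $H^{\nu}_{m,n}$ can be checked symbolically; the following sketch (ours, not from the paper) confirms in particular that $H^{\nu}_{m,0}=\nu^m z^m$:

```python
# Generate the weighted complex Hermite polynomials H_{m,n}^nu symbolically,
# with z and zbar treated as independent symbols.
import sympy as sp

z, zb, nu = sp.symbols('z zbar nu')

def H(m, n):
    # H_{m,n}(z, zbar) = (-1)^{m+n} e^{nu |z|^2} d^m/dzbar^m d^n/dz^n e^{-nu |z|^2}
    g = sp.exp(-nu * z * zb)
    return sp.simplify((-1)**(m + n) * sp.exp(nu * z * zb)
                       * sp.diff(sp.diff(g, z, n), zb, m))

# H_{m,0} reduces to the monomials nu^m z^m, as stated above
for m in range(4):
    assert sp.simplify(H(m, 0) - nu**m * z**m) == 0

# first genuinely polyanalytic example: H_{1,1} = nu^2 z zbar - nu
assert sp.simplify(H(1, 1) - (nu**2 * z * zb - nu)) == 0
```

For $n>0$ these polynomials depend on $\overline{z}$, which is exactly the polyanalytic feature of the spaces ${\mathcal{X}_{n,s}({\mathbb C})}$.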
The introduction of ${\mathcal{X}_{n,s}({\mathbb C})}$ entails the consideration of the integral transform $$\begin{aligned} [\mathscr{W}^s_{n} \psi] (z,{\overline{z}}) = \left(\frac{\nu}{\pi}\right) \left(\frac{\nu^n}{n!}\right)^{1/2} e^{-\alpha z^2} \int_{{\mathbb C}} e^{-\nu|\xi|^2 + \alpha \xi^2 + {{\nu}} \overline{\xi} z } ({\overline{z}}-{\overline{\xi}})^n \psi(\xi) d\lambda(\xi).\end{aligned}$$ Then, we can prove the following \[OrthBasis\] The transform $\mathscr{W}^s_{n}$ is a unitary integral transform from ${\mathcal{X}_{s}({\mathbb C})}$ onto ${\mathcal{X}_{n,s}({\mathbb C})}$. Moreover, the functions $$\begin{aligned} \label{nesfct} \psi^s_{m,n}(z,{\overline{z}})= \left( \frac{1-s}{\pi\nu^n n! \sqrt{s}} \right)^{1/2} \left(\frac{1-s}{1+s}\right)^{m/2} \frac{e^{-\frac{z^2}{2}}}{\sqrt{2^m m!} } \left( \nabla_{\nu,\alpha-\frac 12}^n H_{m}\right) (z) , \end{aligned}$$ where $\nabla_{\nu,\alpha} := -\partial_z +\nu {\overline{z}}-2\alpha z$, form an orthonormal basis of ${\mathcal{X}_{n,s}({\mathbb C})}$. The proof relies essentially on the observation that the unitary operator $\mathscr{W}^s_{n}$ can be rewritten as $\mathscr{W}^s_{n} = M_{-\alpha} \mathscr{T}^{\nu}_{0,n} M_{\alpha}$, where $\mathscr{T}^{\nu}_{k,n}$ is the integral transform considered in [@BenahmadiG2018 Eq. (2.17)] and given by $$[\mathscr{T}^{\nu}_{k,n}\psi](z,{\overline{z}})= \left(\frac{(-1)^n \nu}{\pi \sqrt{k! n! \nu^{k+n} }}\right) \int_{{\mathbb C}} e^{-\nu|\xi|^2 + \nu \overline{\xi} z } H^\nu_{k,n}(\xi-z, {\overline{\xi}}-{\overline{z}}) \psi(\xi) d\lambda(\xi),$$ as well as on the fact that $\psi^s_{m,n}(z,{\overline{z}}):= [\mathscr{W}^s_{n} \psi^s_m](z,{\overline{z}})$.
Thus, by means of [@BenahmadiG2018 Theorem 2.12], keeping in mind that the polynomials $H^\nu_{m,n}(z,{\overline{z}})=\nu^m\nabla_{\nu,0}^n (z^m)$ form an orthogonal basis of ${\mathcal{L}^{2,{\nu}}({\mathbb C})}$ [@Ito52; @Gh13ITSF], the identity $$[\mathscr{T}^{\nu}_{0,n}\psi](z,{\overline{z}}) = \left(\frac{1}{ \nu^n n!}\right)^{1/2} \nabla_{\nu,0}^{n} \psi,$$ holds true for every nonnegative integer $n$ and any $\psi \in {\mathcal{L}^{2,{\nu}}({\mathbb C})}\cap \mathcal{C}^{n}({\mathbb C})$. The rest of the second assertion is straightforward since the functions $ \psi^s_m$ form an orthonormal basis of ${\mathcal{X}_{s}({\mathbb C})}$. The explicit expression of $\psi^s_{m,n}(z,{\overline{z}})$ follows by direct computation. Indeed, we have $$\begin{aligned} \psi^s_{m,n}(z,{\overline{z}}) &= M_{-\alpha} \mathscr{T}^{\nu}_{0,n} M_{\alpha} \psi^s_{m}(z) \\& = \left(\frac{1}{ \nu^n n!}\right)^{1/2} M_{-\alpha} \nabla_{\nu,0}^{n} \left( M_{\alpha} \psi^s_{m}\right) (z)\\ &= \left(\frac{1}{ \nu^n n!}\right)^{1/2} \nabla_{\nu,\alpha}^n \left(\psi^s_{m}\right) (z) \\&= \left(\frac{1}{ \nu^n n!}\right)^{1/2} e^{\frac{-z^2}{2}} \nabla_{\nu,\alpha-\frac 12}^n \left(e^{\frac{z^2}{2}} \psi^s_{m}\right) (z) , \end{aligned}$$ since $\nabla_{\nu,a} \left( M_\gamma \psi\right) = M_\gamma \nabla_{\nu,a+\gamma} \psi$ and $ \nabla_{\nu,0}^n \left( M_\gamma \psi\right) = M_\gamma \nabla_{\nu,\gamma}^n \psi$.
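The two commutation rules invoked at the end of the proof can be verified symbolically for a generic function $\psi$; a short sketch (ours), again with $z$ and $\overline{z}$ treated as independent variables:

```python
# Verify nabla_{nu,a}(M_gamma psi) = M_gamma nabla_{nu,a+gamma} psi and the
# iterated version for a generic psi(z, zbar).
import sympy as sp

z, zb, nu, a, g = sp.symbols('z zbar nu a gamma')
psi = sp.Function('psi')(z, zb)

def nabla(alpha, f):
    # nabla_{nu,alpha} = -d/dz + nu*zbar - 2*alpha*z
    return -sp.diff(f, z) + (nu * zb - 2 * alpha * z) * f

M = sp.exp(g * z**2)  # the multiplication operator M_gamma

# first-order rule
assert sp.simplify(nabla(a, M * psi) - M * nabla(a + g, psi)) == 0

# iterated rule nabla_{nu,0}^n (M_gamma psi) = M_gamma nabla_{nu,gamma}^n psi, n = 2
lhs = nabla(0, nabla(0, M * psi))
rhs = M * nabla(g, nabla(g, psi))
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

The $n$-fold rule follows by induction from the first-order one, exactly as used in the chain of equalities above.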
The inverse of $ \mathscr{W}^s_{n}: {\mathcal{X}_{s}({\mathbb C})}\longrightarrow {\mathcal{X}_{n,s}({\mathbb C})}$ is given by $[\mathscr{W}^s_{n}]^{-1}=M_{\alpha} \mathscr{T}^{\nu}_{n,0}M_{-\alpha}$; more explicitly, $$[\mathscr{W}^s_{n}]^{-1}\psi(z) = \left( \frac{\nu} {\pi} \right) \left( \frac{\nu^{n}}{n!}\right)^{1/2} e^{\alpha z^2 } \int_{{\mathbb C}} e^{-\nu|\xi|^2 -\alpha \xi^2 + \nu \overline{\xi} z } (\xi-z)^n \psi(\xi) d\lambda(\xi).$$ The new class of functions generalizes the one studied in [@BenahmadiG2019], and the previous theorem provides an integral representation of the special functions $\psi^s_{m,n}(z,{\overline{z}})$. Moreover, it is closely connected to the polynomials $$\begin{aligned} \label{Datt1997} H'_{m,n}(x,y;z,w|\tau)=m!n!\sum\limits_{k=0}^{\min(n,m)}\frac{(-\tau)^k}{k!}\frac{H'_{n-k}(x,y)}{(n-k)!}\frac{H'_{m-k}(z,w)}{(m-k)!} \end{aligned}$$ in [@DattoliLorenzuttaMainoTorre1997], where $H'_n(x,y):=i^n y^{\frac{n}{2}} H_n\left( \frac{x}{2 i} y^{-\frac{1}{2} }\right) .$ The space ${\mathcal{X}_{n,s}({\mathbb C})}$ is a reproducing kernel Hilbert space, since the point evaluation maps on ${\mathcal{X}_{n,s}({\mathbb C})}$ are continuous. This can be recovered easily by means of Remark \[RemRepKer\]. Thus, we assert \[ExRepKer\] The explicit expression of the reproducing kernel of ${\mathcal{X}_{n,s}({\mathbb C})}$ is given by $$K^s_n(z,w)=\left( \frac{1-s^2}{2\pi s}\right) \frac{(-1)^n}{n!\nu^n}e^{\nu z\overline{w}-\alpha(z^2+\overline{w}^2)}H^\nu_{n,n}(z-w,\overline{z}-\overline{w}).$$ By means of Remark \[RemRepKer\], we have $ K^s_n(z,w)=M_{-\alpha}(z) K^{{\mathcal{F}^{2,{\nu}}_n({\mathbb C})}} (z,w) \overline{M_{-\alpha}(w)}$.
Here $K^{{\mathcal{F}^{2,{\nu}}_n({\mathbb C})}} $ is the reproducing kernel of the generalized Bargmann space ${\mathcal{F}^{2,{\nu}}_n({\mathbb C})}$ given by [@GhIn2005JMP] $$\begin{aligned} K^{{\mathcal{F}^{2,{\nu}}_n({\mathbb C})}} (z,w)=\left( \frac{\nu}{\pi}\right) \frac{(-1)^n}{n!\nu^n}e^{\nu z\overline{w}}H^\nu_{n,n}(z-w,\overline{z}-\overline{w}). \end{aligned}$$ For $n=0$ we recover the reproducing kernel of the Hilbert space ${\mathcal{X}_{s}({\mathbb C})}$ in Proposition \[RepKer0\]. The identity $$\begin{aligned} H^\nu_{n,n}(z-w,\overline{z}-\overline{w}) = (-1)^n e^{ \left( \alpha -\frac{1}{2}\right) (z^2+\overline{w}^2) -\nu z\overline{w} } \nabla_{\nu,\alpha-\frac 12}^{n_z} \overline{\nabla_{\nu,\alpha-\frac 12}^{n_w}} e^{- \left( \alpha-\frac 12\right) (z^2+\overline{w}^2) + {\nu}z \overline{w} }\end{aligned}$$ or equivalently $$\begin{aligned} H^\nu_{n,n}(z-w,\overline{z}-\overline{w})= (-1)^n e^{\nu|z-w|^2}\partial^n_z\partial^n_{\overline{w}}\, e^{-\nu|z-w|^2}\end{aligned}$$ holds true by comparing the result of Theorem \[ExRepKer\] to the fact that the reproducing kernel $K^s_n$ can be rewritten as $K^s_n(z,w) = \sum\limits_{m=0}^{+\infty} \psi^s_{m,n}(z)\overline{\psi^s_{m,n}(w)} $, since $\{\psi^s_{m,n}(z,{\overline{z}}) ,\ m=0,1,2,\cdots\}$ is an orthonormal basis of ${\mathcal{X}_{n,s}({\mathbb C})}$. We conclude this section by giving the explicit expression of the generalized Segal–Bargmann integral transform for the spaces ${\mathcal{X}_{n,s}({\mathbb C})}$. We have to consider the weighted configuration space $L^{2,\nu}({\mathbb R})$ instead of $L^{2}({\mathbb R})$, where $\nu >0$.
It is the Hilbert space of all square integrable ${\mathbb C}$-valued functions on ${\mathbb R}$ with respect to the Gaussian measure $e^{-\nu x^2}dx$, for which the rescaled Hermite polynomials $$\begin{aligned} \label{basis2nu} g^\nu_m(x) = \left(\frac{\nu}{\pi}\right)^{\frac{1}{4}} \frac{H_m(\sqrt{\nu}x)}{\sqrt{2^m m!}} \end{aligned}$$ form an orthonormal basis. The associated coherent states transform from $L^{2,\nu}({\mathbb R})$ onto ${\mathcal{X}_{n,s}({\mathbb C})}$ mapping $g^\nu_m$ to $\psi^s_{m,n}$ is given by $$\mathscr{S}^s_n f(z):= {\left< f, \overline{S^s_n(.,z)} \right>} _{L^{2,\nu}({\mathbb R})} = \int_{{\mathbb R}} f(x) S^s_n(x,z) e^{-\nu x^2} dx,$$ where the kernel function $S^s_n(x,z)$ is given by $$S^s_n(x,z)=\sum\limits_{m=0}^{+\infty} g^\nu_m(x) \psi^s_{m,n}(z,\overline{z}) .$$ For fixed $a>0$, $b\in {\mathbb R}$ and $c\in{\mathbb C}$, we define $I_n^{a,b}(z,{\overline{z}}|c) $ to be the class of polyanalytic polynomials in [@BenahmadiG2019], $$I_n^{a,b}(z,\overline{z}|c) := (-1)^n e^{a|z|^2 -b z^2 -c z} \partial_z^n \left( e^{-a|z|^2 + b z^2 + c z}\right) .$$ We have $$\begin{aligned} \label{closed} S^s_n(x,z)= \left(\frac{\nu}{\pi s}\right)^{\frac{1}{4}} \left( \frac{1-s^2}{2\pi s \nu^n n! } \right)^{1/2} e^{ - \frac{1}{2s} z^2 - \frac{\nu(1-s)}{2s} x^2+ \frac{{\nu}\sqrt{2s}}{s} x z} I^{\nu, -\frac {\nu}2 }_n\left( z,{\overline{z}}\Big | \frac{{\nu}\sqrt{2s}}{s} x \right) . \end{aligned}$$ Moreover, the transform $\mathscr{S}^s_n$ defines an isometric transform from $L^{2,\nu}({\mathbb R})$ onto ${\mathcal{X}_{n,s}({\mathbb C})}$. We need only to prove the closed formula for $S^s_n(x,z)$. The rest holds true for general coherent state transforms on reproducing kernel Hilbert spaces like ${\mathcal{X}_{n,s}({\mathbb C})}$.
Indeed, starting from the expansion of $S^s_n(x,z)$ and the explicit expression of $\psi^s_{m,n}$, and applying the Mehler formula, the expression of $S^s_n(x,z)$ reduces further to $$\begin{aligned} S^s_n(x,z) &= \left(\frac{\nu}{\pi s}\right)^{\frac{1}{4}} \left( \frac{1-s^2}{2\pi s \nu^n n! } \right)^{1/2} e^{-\frac{z^2}{2} - \frac{\nu(1-s)}{2s} x^2 } \nabla_{\nu,\alpha-\frac 12}^{n_z} \exp\left( - \frac{1-s}{2s} z^2 + \frac{{\nu}\sqrt{2s}}{s} x z \right) .\end{aligned}$$ Using the fact that $\nabla_{\nu,\gamma} f = - e^{\nu|z|^2 -\gamma z^2} \partial_z \left( e^{-\nu|z|^2 +\gamma z^2} f\right) $, we get $$\nabla_{\nu,\gamma}^n f = (-1)^n e^{\nu|z|^2 -\gamma z^2} \partial_z^n \left( e^{-\nu|z|^2 +\gamma z^2} f\right)$$ by induction, and therefore $$\begin{aligned} S^s_n(x,z) &= \left(\frac{\nu}{\pi s}\right)^{\frac{1}{4}} \left( \frac{1-s^2}{2\pi s \nu^n n! } \right)^{1/2} e^{ \nu|z|^2 - \alpha z^2 - \frac{\nu(1-s)}{2s} x^2} (-1)^n \partial_z^n \left( e^{-\nu|z|^2 -\frac {\nu}2 z^2 + \frac{{\nu}\sqrt{2s}}{s} x z} \right) . \end{aligned}$$ Subsequently, $$\begin{aligned} S^s_n(x,z)= \left(\frac{\nu}{\pi s}\right)^{\frac{1}{4}} \left( \frac{1-s^2}{2\pi s \nu^n n! } \right)^{1/2} e^{ - \frac{1}{2s} z^2 - \frac{\nu(1-s)}{2s} x^2+ \frac{{\nu}\sqrt{2s}}{s} x z} I^{\nu, -\frac {\nu}2 }_n\left( z,{\overline{z}}\Big | \frac{{\nu}\sqrt{2s}}{s} x \right) .\end{aligned}$$ Concluding remarks ================== In the previous section, the spaces ${\mathcal{X}_{n,s}({\mathbb C})}$ were realized as the image of ${\mathcal{X}_{s}({\mathbb C})}$ under the integral transform $\mathscr{W}^s_{n}$, or also as the image of $L^{2,\nu}({\mathbb R})$ by the generalized Segal–Bargmann transform $\mathscr{S}^s_n$.
Another realization of ${\mathcal{X}_{n,s}({\mathbb C})}$ is obtained by considering the $n$-th standard Segal–Bargmann transform [@BenahmadiG2019] $$\mathscr{B}^{\nu}_n\varphi(z)= \frac{\left(\frac{\nu}{\pi}\right)^{\frac{3}{4}}}{\sqrt{2^n\nu^nn!}}\int_{\mathbb{R}}e^{-\nu(x-\frac{z}{\sqrt{2}})^2}H^\nu_n\left( \frac{z+\overline{z}}{\sqrt{2}}-x\right) \varphi(x)dx$$ from $L^{2,\nu}({\mathbb R})$ onto ${\mathcal{F}^{2,{\nu}}_n({\mathbb C})}$. Indeed, one has to deal with $ \mathscr{B}'_{\nu,n}: L^{2,\nu}({\mathbb R}) \longrightarrow {\mathcal{X}_{n,s}({\mathbb C})}$, $$\begin{aligned} \mathscr{B}'_{\nu,n} f(z,{\overline{z}}) = \left( M_{-\alpha}\mathscr{B}^{\nu}_n f \right) (z,{\overline{z}}). \end{aligned}$$ It is clear that for every fixed $n$, the functions $[\mathscr{B}'_{\nu,n}]^{-1}\psi_{m,n}$ form an orthonormal basis of $ L^{2,\nu}({\mathbb R})$, but it is not obvious whether these bases coincide for different $n$. We claim that the functions $\left( [\mathscr{B}'_{\nu,n}]^{-1}\psi_{m,n}\right) $ do not depend on $n$. The corresponding Poisson kernel can be given explicitly, leading to a nontrivial $1d$-fractional Fourier transform for the Hilbert space $L^{2,\nu}({\mathbb R})$.

Benahmadi A., Ghanmi A., Non-trivial 1d and 2d Segal–Bargmann transforms. Integral Transforms Spec. Funct. 30 (2019), no. 7, 547–563.

Benahmadi A., Ghanmi A., On a novel class of polyanalytic Hermite polynomials. Preprint.

Dattoli G., Lorenzutta S., Maino G., Torre A., Theory of multiindex multivariable Bessel functions and Hermite polynomials. Le Matematiche, Vol. LII (1997), Fasc. I, pp. 177–195.

Folland G.B., *Harmonic analysis in phase space*. Princeton University Press, New Jersey, 1989.

Ghanmi A., Operational formulae for the complex Hermite polynomials $H_{p,q}(z, {\overline{z}})$. Integral Transforms Spec. Funct., Volume 24, Issue 11 (2013), pp. 884–895.

Ghanmi A., Mehler’s formulas for the univariate complex Hermite polynomials and applications. Math. Methods Appl. Sci. 40 (2017), no. 18, 7540–7545.

Ghanmi A., Intissar A., Asymptotic of complex hyperbolic geometry and $L^2$-spectral analysis of Landau-like Hamiltonians. J. Math. Phys. 46 (2005), no. 3, 032107, 26 pp.

Itô K., *Complex multiple Wiener integral*. Jap. J. Math. 22 (1952), 63–86.

Mehler F.G., Ueber die Entwicklung einer Function von beliebig vielen Variabeln nach Laplaceschen Functionen höherer Ordnung. *J. Reine Angew. Math.* 1866; 66:161–176.

Rainville E.D., *Special functions*. Chelsea Publishing Co., Bronx, N.Y., 1971.

Szegö G., *Orthogonal polynomials*. Fourth edition, American Mathematical Society, Providence, R.I., 1975.

Thangavelu S., *Lectures on Hermite and Laguerre Expansions*. Princeton University Press, 1993.

van Eijndhoven S.J.L., Meyers J.L.H., *New orthogonality relations for the Hermite polynomials and related Hilbert spaces*. J. Math. Anal. Appl. 146 (1990), no. 1, 89–98.
--- abstract: 'BTZ black holes are excellent laboratories for studying black hole thermodynamics, which is a bridge between classical general relativity and the quantum nature of gravitation. In addition, three-dimensional gravity equips us to explore some of the ideas behind two-dimensional conformal field theory through the $AdS_{3}/CFT_{2}$ correspondence. Considering the significant interest in these regards, we examine charged BTZ black holes. To enrich the results, we consider a system containing massive gravity with an energy-dependent spacetime. In order to make high-curvature (energy) BTZ black holes more realistic, we modify the theory by energy-dependent constants. We investigate thermodynamic properties of the solutions by calculating the heat capacity and free energy. We also analyze thermal stability and study the possibility of a Hawking-Page phase transition. Finally, we study the geometrical thermodynamics of these black holes and compare the results of various approaches.' author: - 'S. H. Hendi$^{1,2}$[^1], S. Panahiyan$^{1,3}$[^2], S. Upadhyay$^{4}$[^3] and B. Eslam Panah$^{1}$[^4]' title: 'Charged BTZ black holes in the context of massive gravity’s rainbow' --- Introduction ============ General relativity (GR) has been very successful in describing different phenomena in the low energy limit. Despite its success in this regime, there are several issues which signal the necessity of modifying this theory. Among them, one can name the accelerated expansion of the universe, the existence of dark matter/dark energy [@nj] and several other problems. GR predicts that gravitons, the intermediate particles mediating gravitational interactions, are massless. In order to solve the mentioned problems, it has been proposed that Einstein gravity can be modified to include massive gravitons. In other words, both massive and massless modes must appear in the gravitational theory describing the system.
This proposal has been put to the test and the results were encouraging. For example, the current acceleration of the universe without considering a cosmological constant was explained by massive gravity [@kur; @Expand1; @Expand2]. In addition, it was shown that massive spin-$2$ particles could be a candidate for dark matter since their energy-momentum tensor behaves as that of a dark matter fluid [@Aoki; @BG]. Furthermore, the solutions to the hierarchy problem point out the existence of massive modes, hence massive gravity [@DvaliGP; @DvaliGPI; @DvaliG]. It is worthwhile to mention that studies conducted in the context of string theory and quantum gravity predict the presence of massive gravitons as well [@string1; @string2; @string3]. The effects of massive gravity have been explored in the context of astrophysical objects. For example, one can obtain a maximum mass of neutron stars of more than $3M_{sun}$ [@HBEslam] in the context of massive gravity. In addition, it can modify the thermodynamical quantities (behavior) of black holes as well [@Bouchareb; @Capela; @Volkov; @Babichev; @Ghosh; @ThermoMassive1; @ThermoMassive2]. Especially, the existence of van der Waals like behavior for non-spherical black holes [@PRLwithMann], a remnant of temperature [@BTZMassive] and an anti-evaporation process for them were reported [@anti1; @anti2]. The possible effects of massive gravitons on gravitational waves produced during inflation were also studied [@Gumrukcuoglu]. Fierz and Pauli were the first to investigate a theory describing a possible free massive graviton [@fr; @fr1]. Later, it was found that this theory of massive gravity suffers from the Boulware-Deser (BD) ghost instability at the non-linear level [@dg; @dg1]. Recently, significant progress has been made toward constructing massive gravity theories without such instability [@de]. Furthermore, the nonlinear massive modifications to GR were also studied by many people from various perspectives. Particularly, Refs.
[@de; @de1; @kur] study a class of nonlinear massive gravity theories in which the ghost field is absent [@has; @has1]. The simplest way to construct a massive gravity is to simply add a mass term to the Einstein-Hilbert action, giving the graviton a mass $m$ in such a way that GR is recovered when $m\rightarrow 0$. Since a mass term breaks the diffeomorphism invariance of the theory, the energy momentum is no longer conserved in this class of massive gravity. In this paper, we employ a type of massive gravity which was introduced by Vegh in Ref. [@Vegh]. This massive gravity has specific applications in the context of lattice physics through the concept of holography. This means that black hole solutions in this gravity could have superconductor properties on the boundary and massive gravitons could have lattice-like behavior. While the black hole solutions in the mentioned paper are obtained in $4$ dimensions, here we conduct our study in $3$ dimensions with two other generalizations: energy dependent constants and gravity’s rainbow. The usual energy-momentum relation or dispersion relation of special relativity may be modified with corrections of the order of the Planck length by modifying the Lorentz–Poincaré symmetry. This deformed formalism of special relativity is known as “Doubly Special Relativity” [@ame; @ame1; @mag; @kow]. The generalization of this idea to curved spacetime was done by Magueijo and Smolin [@mag1]. This formalism is commonly known as gravity’s rainbow. The idea of the gravity’s rainbow formalism is that free-falling observers who make measurements with energy $E$ will observe the same laws of physics as in modified special relativity. In fact, gravity’s rainbow produces a correction to the spacetime metric which becomes significant as soon as the particle’s energy/momentum approaches the Planck energy.
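As a concrete illustration (our choice of rainbow functions, not one made in this paper), one may take Magueijo–Smolin-type functions $f(\varepsilon)=1/(1-\varepsilon)$, $g(\varepsilon)=1$ with $\varepsilon=E/E_{P}$ in the modified dispersion relation $E^{2}f^{2}-p^{2}g^{2}=m^{2}$ and expand for $E\ll E_{P}$:

```python
# Sketch: expand a modified dispersion relation in powers of eps = E/E_P.
import sympy as sp

E, m, eps = sp.symbols('E m epsilon', positive=True)  # eps = E/E_P

# Magueijo-Smolin-type rainbow functions (illustrative choice)
f = 1 / (1 - eps)
g = 1

# E^2 f^2 - p^2 g^2 = m^2, solved for p^2
p2 = E**2 * f**2 / g**2 - m**2 / g**2

# eps -> 0 recovers the standard relation p^2 = E^2 - m^2 ...
assert sp.simplify(p2.subs(eps, 0) - (E**2 - m**2)) == 0
# ... and the leading Planck-scale correction is 2 E^2 * eps
assert sp.simplify(sp.diff(p2, eps).subs(eps, 0) - 2 * E**2) == 0

print(sp.series(p2, eps, 0, 2))
```

The correction term is suppressed by a factor $E/E_{P}$, so the standard dispersion relation is recovered at low energies, as stated above.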
In this formalism, the connection and curvature depend on energy in such a way that the usual Einstein equations are replaced by a one-parameter family of equations. In this context, the Gauss-Bonnet and dilaton gravities were generalized to energy-dependent Gauss-Bonnet and dilatonic theories of gravity and their black hole solutions were studied [@fai; @faifai]. Recently, the critical behavior of uncharged and charged black holes in Gauss-Bonnet gravity’s rainbow was analyzed and it was found that the generalization to a charged case puts an energy dependent restriction on different parameters [@fai1]. Two classes of $F(R)$ gravity’s rainbow solutions were also investigated [@hend00]. In the first case, the energy dependent $F(R)$ gravity without energy momentum tensor was studied and, secondly, $F(R)$ gravity’s rainbow in the presence of a conformally invariant Maxwell source was analyzed. In addition, the Starobinsky model of inflation in the context of gravity’s rainbow was investigated where rainbow functions are written in the power-law form of the Hubble parameter [@aut]. In this context, the spectral index of curvature perturbation, the tensor-to-scalar ratio and the consistency of these models with Planck 2015 data are also discussed. Moreover, Galileon gravity’s rainbow in a Vaidya spacetime has been studied in [@Vaidya]. Also, the Unruh, Hawking, fiducial and free-fall temperatures of the black hole in gravity’s rainbow have been investigated in Refs. [@Bibhas1; @Gim]. The absence of black holes at LHC [@AliPLB], remnants of black objects [@AliJHEP], and nonsingular universes in Einstein and Gauss-Bonnet gravities [@AliHendi1; @AliHendi2] have been analyzed in the gravity’s rainbow background. From an astrophysical perspective, it was shown that the existence of an energy dependent spacetime can modify the hydrostatic equilibrium equation of stars [@HendiJCAP].
In addition, the modifications on the Hawking-Page phase transition [@HW1; @HW2], the wave function of the universe [@Wave] and the generalization of black hole thermodynamics [@Mod] have been investigated in the context of gravity’s rainbow. To explore the foundations of classical and quantum gravity, GR in $3$ dimensions has become a very popular model [@car]. Among the drawbacks of the GR model in three dimensions were the absence of a Newtonian limit [@jd] and of propagating degrees of freedom. In 1992, Bañados, Teitelboim and Zanelli (BTZ) came up with the surprising result that three dimensional gravity with a negative cosmological constant has a black hole solution [@btz]. BTZ black holes provide a good understanding of certain central issues like black hole thermodynamics [@car1; @ast; @sar], quantum gravity, string and gauge theory and, more importantly, the AdS/CFT conjecture [@wit; @car2]. Furthermore, BTZ solutions play a crucial role in improving our understanding of gravitational interactions in low dimensional spacetimes [@wit1]. The charged BTZ black hole is the analogous solution of AdS-Maxwell gravity in three dimensions [@car1; @mar; @cle]. Recently, the thermodynamics and phase structure of the charged black hole solutions in both grand canonical and canonical ensembles were studied [@CaiMassive; @hend1; @BTZMassive]. Furthermore, the thermodynamical phase transition of BTZ black holes through the Landau-Lifshitz theory [@Purt] and the quantum correction of the entropy in noncommutative BTZ black holes [@correction] have been investigated. As we mentioned before, the cornerstone of gravity’s rainbow is doubly special relativity (DSR). In fact, gravity’s rainbow in its first proposal was introduced as “doubly general relativity” [@mag1]. Therefore, in order to outline the properties of gravity’s rainbow in three dimensions, one should regard the DSR in three dimensions.
Specifically speaking, in a pioneering work by Freidel, *et al* [@Freidel], it was shown that gravity in $2+1$ dimensions coupled to point particles results in a nontrivial example of DSR. Therefore, it is stated that (quantum) gravity in $2+1$ dimensions coupled to point particles is indeed just a DSR theory. This point was shown by the fact that the symmetry algebra of quantum gravity in $2+1$ dimensions is not Poincaré, but a (quantum) $\kappa $-deformed Poincaré [@Welling]. On the other hand, the symmetry algebra of a DSR theory is also $\kappa $-deformed Poincaré. It is possible to explicitly map the phase space of quantum gravity in $2+1$ dimensions coupled to a single point particle to the symmetry algebra generators of a DSR theory. In addition, in another pioneering work, Blaut, *et al* showed that, depending on the direction of deformation of the $\kappa $-deformed Poincaré algebra, the phase spaces of a single particle in DSR theories have energy-momentum spaces of the form of de Sitter, anti de Sitter and flat space [@Blaut]. The study was conducted for arbitrary dimensions including $3$ dimensions. Now, remembering that gravity’s rainbow is essentially a generalization of the DSR, it is expected that it preserves the fundamental properties of the DSR, which enables us to recognize the origin of gravity’s rainbow and the presence of its effects. In addition, following the same method as Ref. [@Assanioussi], one can find that the effective metric of a quantum cosmological model describing the emergent spacetime in arbitrary dimensions should indeed be of the rainbow type, without the need for any ad hoc input. Nevertheless, in this paper, we take into account the three dimensional energy dependent spacetime as a toy model and investigate its nontrivial thermodynamic properties caused by the energy functions.
Furthermore, it is worthwhile to mention that in $2+1$ dimensions, the Planck energy is defined as $E_{P}^{\left( 2+1\right) }=c^{4}/G^{\left( 2+1\right) }$, in which $c$ is the speed of light and $G$ is the gravitational constant in $2+1$ dimensions. Thus, the dimension of $G$ is inverse mass and, therefore, it may seem that the theory under consideration is a classical one, but that is not the case. The existence of matter in $2+1$ gravity causes the geometry of the spacetime to be a conical one with specific deformed asymptotic conditions which depend on $G$. This deformation has specific effects on the algebra of the classical phase space, which highlights yet another reason why $2+1$ gravity is a DSR theory. In fact, $G$ here is identified with the inverse of the $\kappa$ deformation parameter of the (quantum) $\kappa $-Poincaré algebra. As was pointed out, essentially, $3$-dimensional quantum gravity coupled with a point particle is a DSR theory. In Ref. [@Kowalski-Glikman], the relation between $2+1$ quantum gravity and DSR was described in detail. Livine, *et al* studied the quantum geometry of a $3$-dimensional DSR [@Livine]. For further studies regarding the quantum applications of DSR theory in $3$ dimensions, we refer the readers to Refs. [@q1; @q2; @q3; @q4]. This shows that DSR theory could be a quantum one. Since gravity’s rainbow is a generalization of DSR, we expect that it preserves its properties as well, which indicates that the effects of gravity’s rainbow could be quantum-like ones. We should emphasize, however, that here our focus and main motivation concern the effects of gravity’s rainbow, alongside massive gravity, on the semi-classical thermodynamical behavior of black holes in three dimensions. Geometrical thermodynamics (GTs) is one of the interesting methods for studying the properties of thermodynamical systems. In this method, Riemannian geometry is used to construct a phase space.
The Ricci scalar of this phase space is employed to extract information regarding the thermodynamical behavior of the system. In other words, GTs is a bridge between geometry and thermodynamics. The geometrical information of the Ricci scalar is obtained through its divergencies. These divergencies determine three important points; I\) Bound points, which separate solutions with positive temperature (physical systems) from those with negative temperature (non-physical systems). II\) Phase transition points, which represent discontinuities in thermodynamical quantities such as the heat capacity. III\) The sign of the Ricci scalar around divergence points, which determines the nature of the interaction on the molecular level [@Rupp]. Weinhold introduced the first geometrical thermodynamical approach in 1975. Weinhold’s approach was based on the internal energy as thermodynamical potential [@WeinholdI; @WeinholdII]. Then, Ruppeiner proposed an alternative approach which has the entropy as its thermodynamical potential [@RuppeinerI; @RuppeinerII]. Since Weinhold’s and Ruppeiner’s approaches are not Legendre invariant, Quevedo introduced another approach to GTs [@QuevedoI; @QuevedoII]. Several investigations regarding the thermodynamics of black holes through these methods were carried out in Refs. [@HanC; @BravettiMMA; @Ma; @GarciaMC; @ZhangCY; @MoLW2016; @Sanchez; @Soroushfar1]. On the other hand, it was shown that the mentioned methods may face specific problems in describing the thermodynamical properties of black holes (see Refs. [@HPEMI; @HPEMII; @HPEMIII; @HPEMIV] for more details). In other words, the results of these three approaches were not consistent with those extracted from other methods. Therefore, in order to remove the shortcomings of the other methods, Hendi *et al* proposed a new thermodynamical metric (HPEM) [@HPEMI]. It was shown that employing this new metric leads to consistent results regarding the thermodynamical properties of black holes. We refer the reader to Ref.
[@Wen] for a comparative study of these four thermodynamical metrics and Ref. [@zhang] for the application of the HPEM metric in studying the critical behavior of the system. The paper at hand considers three dimensional charged black holes with three generalizations: energy dependent constants, gravity’s rainbow and massive gravity. Recently, three dimensional charged black holes in the presence of massive gravity have been investigated [@BTZMassive]. Here, we apply the generalization to gravity’s rainbow to understand how it would modify the previous results. In fact, we would like to see how the energy dependent spacetime affects the thermodynamical structure of the massive charged BTZ black holes. Such a generalization is necessary from different aspects. First of all, black holes and their physics are governed by high energy physics. This indicates that it is necessary to include the upper limit of the Planck energy on the energies that particles can acquire. This is the prescription of gravity’s rainbow. On the other hand, it is stated that quantum corrections of quantum gravity could be observed as an energy dependency of the spacetime [@quantum1; @quantum2]. In other words, one could include the quantum corrections in the form of an energy dependency of the spacetime, which leads to gravity’s rainbow. These provide us with motivations to consider gravity’s rainbow alongside massive gravity. The consideration of the energy dependency of the constants is rooted in studies conducted in the context of the renormalization group flow [@flow]. These studies emphasize that the constants depend on the energy scale at which the theory is probed. Through several studies, the flow of the cosmological [@cosmflow] and Newton [@newtonflow] constants was examined. Since the measured scale of the theory under consideration depends on the energy that the probe can acquire, it is logical to consider all the constants as energy dependent ones.
Such a consideration has been taken into account in the context of Gauss-Bonnet gravity, and it was shown that it enriches both the geometrical and thermodynamical aspects of the black holes [@faifai; @fai1]. Here too, we employ this consideration and treat all the constants as energy dependent. Such an idea provides different perspectives for observers who are at different distances from the black holes under consideration. In addition, it would have specific contributions to other studies that could be conducted in the context of these types of black holes (we refer the reader to Refs. [@suresh; @prasia] for some examples). Our other motivation for considering such a setup for black holes is to provide a number of generalizations. These specific generalizations are effective in specific regions of energy, which provides a better picture of the nature of black holes. The outline of the paper is as follows. First, we will introduce the basic field equations and metric, and extract the black hole solutions. Next, the thermodynamical quantities are calculated and the first law of thermodynamics for the black holes is examined. Then the thermodynamical properties of the black holes are studied through the mass, temperature, heat capacity and free energy. Next, we will study the thermodynamics of these black holes in the context of GTs and show the consistency of its results with the divergencies and bound points of the heat capacity. The paper is finished with some closing remarks. Black hole solutions {#FieldEq} ==================== The general formalism of gravity’s rainbow can be obtained by using a deformation of the standard energy-momentum relation $$E^{2}f^{2}(\varepsilon )-p^{2}g^{2}(\varepsilon )=m^{2}, \label{MDR}$$where the dimensionless energy ratio is $\varepsilon =E/E_{P}$, in which $E$ and $E_{P}$ are, respectively, the energy of the test particle and the Planck energy.
Since the energy of a test particle cannot exceed the Planck energy, we have $0<\varepsilon \leq 1$. Here, $f(\varepsilon )$ and $g(\varepsilon )$ are energy functions which are restricted by the following conditions in the infrared limit $$\lim\limits_{\varepsilon \rightarrow 0}f(\varepsilon )=1,\qquad \lim\limits_{\varepsilon \rightarrow 0}g(\varepsilon )=1.$$Regarding the analogy between the energy-momentum four-vector $(E,\vec{p})$ and the spacetime one $(t,\vec{x})$, it is possible to use the energy functions to build an energy dependent spacetime with the following recipe $$\hat{g}(\varepsilon )=\eta ^{ab}e_{a}(\varepsilon )\otimes e_{b}(\varepsilon ), \label{rainmetric}$$where $$e_{0}(\varepsilon )=\frac{1}{f(\varepsilon )}\tilde{e}_{0},\qquad e_{i}(\varepsilon )=\frac{1}{g(\varepsilon )}\tilde{e}_{i},$$with $\tilde{e}_{0}$ and $\tilde{e}_{i}$ being the energy independent frame fields (the recipe (\[rainmetric\]) may originate from the analogy between two invariant relations: the energy-momentum relation and the invariant line element).
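As a concrete illustration of the infrared condition, one may check it for a rainbow-function ansatz frequently used in the literature, e.g. $f(\varepsilon )=1$, $g(\varepsilon )=\sqrt{1-\eta \varepsilon ^{2}}$; this particular ansatz and the use of sympy are illustrative assumptions, not choices made in this paper:

```python
# Hedged sketch: a sample pair of rainbow functions. The ansatz
# f = 1, g = sqrt(1 - eta*eps^2) is an assumption for illustration only.
import sympy as sp

eps, eta = sp.symbols('varepsilon eta', positive=True)

f = sp.Integer(1)
g = sp.sqrt(1 - eta*eps**2)

# both energy functions must reduce to 1 in the infrared limit eps -> 0
assert sp.limit(f, eps, 0) == 1
assert sp.limit(g, eps, 0) == 1
```

Any other pair of energy functions satisfying the same limits is equally admissible at this stage.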
The $3$-dimensional form of the massive gravity Lagrangian is $$L_{massive}=m\left( \varepsilon \right) ^{2}\sum_{i=1}^{3}c_{i}(\varepsilon )\mathcal{U}_{i}(g,f),$$in which the $c_{i}(\varepsilon )$’s are energy dependent constants and the $\mathcal{U}_{i}$’s are symmetric polynomials of the eigenvalues of the $3\times 3$ matrix $\mathcal{K}_{\nu }^{\mu }=\sqrt{g^{\mu \alpha }f_{\alpha \nu }}$, which can be written as follows $$\begin{aligned} \mathcal{U}_{1} &=&\left[ \mathcal{K}\right] ,\;\;\;\;\;\mathcal{U}_{2}=\left[ \mathcal{K}\right] ^{2}-\left[ \mathcal{K}^{2}\right] ,\;\;\;\;\;\mathcal{U}_{3}=\left[ \mathcal{K}\right] ^{3}-3\left[ \mathcal{K}\right] \left[ \mathcal{K}^{2}\right] +2\left[ \mathcal{K}^{3}\right] .\end{aligned}$$ Using the variational principle, one finds $$\begin{aligned} \chi _{\mu \nu } &=&-\frac{c_{1}(\varepsilon )}{2}\left( \mathcal{U}_{1}g_{\mu \nu }-\mathcal{K}_{\mu \nu }\right) -\frac{c_{2}(\varepsilon )}{2}\left( \mathcal{U}_{2}g_{\mu \nu }-2\mathcal{U}_{1}\mathcal{K}_{\mu \nu }+2\mathcal{K}_{\mu \nu }^{2}\right) \nonumber \\ &&-\frac{c_{3}(\varepsilon )}{2}(\mathcal{U}_{3}g_{\mu \nu }-3\mathcal{U}_{2}\mathcal{K}_{\mu \nu }+6\mathcal{U}_{1}\mathcal{K}_{\mu \nu }^{2}-6\mathcal{K}_{\mu \nu }^{3}). \label{massiveTerm}\end{aligned}$$ The only non-zero term of massive gravity is $\mathcal{U}_{1}$, while the other higher order terms vanish.
By taking this fact into account, one can find the following action governing our black holes of interest $$\mathcal{I}=-\frac{1}{16\pi G(\varepsilon )}\int d^{3}x\sqrt{-g}\left[ \mathcal{R}-2\Lambda \left( \varepsilon \right) -\mathcal{F}+m\left( \varepsilon \right) ^{2}c_{1}(\varepsilon )\mathcal{U}_{1}(g,f)\right] , \label{Action}$$where $\mathcal{R}$ and $\mathcal{F}$ are, respectively, the scalar curvature and the Lagrangian of Maxwell electrodynamics, $G(\varepsilon )$ is the energy dependent gravitational constant, $\Lambda \left( \varepsilon \right) $ is the energy dependent cosmological constant and $f$ is an energy dependent fixed symmetric tensor. In addition, $\mathcal{F}=F_{\mu \nu }F^{\mu \nu }$ is the Maxwell invariant, in which $F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }$ is the Faraday tensor with $A_{\mu }$ as the gauge potential. Taking the action (\[Action\]) into account and using the variational principle, we obtain the field equations corresponding to the gravitational and gauge fields as $$R_{\mu \nu }-\left( \frac{R}{2}-\Lambda \left( \varepsilon \right) \right) g_{\mu \nu }+G\left( \varepsilon \right) \left( \frac{1}{2}g_{\mu \nu }\mathcal{F}-2L_{\mathcal{F}}F_{\mu \rho }F_{\nu }^{\rho }\right) +m\left( \varepsilon \right) ^{2}\chi _{\mu \nu }=0, \label{Field equation}$$$$\partial _{\mu }\left( \sqrt{-g}F^{\mu \nu }\right) =0. \label{Maxwell equation}$$ Here, we are interested in static charged black hole solutions, and therefore, we consider the metric of the $3$-dimensional spacetime with the following energy dependent line element $$ds^{2}=-\frac{\psi (r)}{f(\varepsilon )^{2}}dt^{2}+\frac{1}{g(\varepsilon )^{2}}\left( \frac{dr^{2}}{\psi (r)}+r^{2}d\varphi ^{2}\right) , \label{metric}$$in which $\psi (r)$ is the metric function of our black holes. Our main motivation is to obtain massive black holes in the context of gravity’s rainbow.
This requires specific modifications of the reference metric in the form of $$f_{\mu \nu }=diag\left( 0,0,\frac{c(\varepsilon )^{2}}{g(\varepsilon )^{2}}\right) , \label{f11}$$where $c(\varepsilon )$ is an arbitrary energy dependent positive constant. This choice of reference metric is motivated by the holographic perspective on strongly interacting quantum field theories. Vegh showed that this choice of reference metric allows the graviton to exhibit lattice-like behavior, producing a Drude peak which approaches a delta function in the massless gravity limit [@Vegh]. Using this metric ansatz (\[f11\]), $\mathcal{U}_{1}$ is calculated as [@CaiMassive] $$\mathcal{U}_{1}=\frac{c(\varepsilon )}{r}. \label{U}$$ In order to have a radial electric field, we consider the following gauge potential $$A_{\mu }=h(r) \delta _{\mu }^{t}, \label{gauge potential}$$where by using the metric (\[metric\]) with the Maxwell field equation (\[Maxwell equation\]), one finds the following differential equation $$h^{\prime }(r)+rh^{\prime \prime }(r)=0, \label{heq}$$in which the prime and double prime represent the first and second derivatives with respect to $r$, respectively. It is a matter of calculation to solve Eq. (\[heq\]), yielding $$h(r)=q\left( \varepsilon \right) \ln \left(\frac{r}{l(\varepsilon )}\right), \label{h(r)}$$where $q(\varepsilon )$ is an energy dependent integration constant related to the electric charge and $l(\varepsilon )$ is an arbitrary energy dependent constant with the dimension of length, introduced for the sake of a dimensionless logarithmic argument. It is worthwhile to mention that the corresponding electromagnetic field tensor is $F_{tr}=\frac{q(\varepsilon )}{r}$, which is independent of $l(\varepsilon )$. In order to obtain the metric function, $\psi (r)$, we use Eq. (\[Field equation\]) with Eq.
(\[metric\]), and obtain the following differential equations $$\begin{aligned} &&rg(\varepsilon )^{2}\psi ^{\prime }(r)+2r^{2}\Lambda \left( \varepsilon \right) +2G(\varepsilon )g(\varepsilon )^{2}f(\varepsilon )^{2}q\left( \varepsilon \right) ^{2}-m\left( \varepsilon \right) ^{2}c(\varepsilon )c_{1}(\varepsilon )r=0, \label{eqENMax1} \\ &&\frac{r^{2}}{2}g(\varepsilon )^{2}\psi ^{\prime \prime }(r)+\Lambda \left( \varepsilon \right) r^{2}-G(\varepsilon )g(\varepsilon )^{2}f(\varepsilon )^{2}q\left( \varepsilon \right) ^{2}=0, \label{eqENMax2}\end{aligned}$$which correspond to the $tt$ (or $rr$) and $\varphi \varphi $ components of Eq. (\[Field equation\]), respectively. It is straightforward to show that the metric function is obtained as $$\psi (r)=-\frac{\Lambda \left( \varepsilon \right) r^{2}}{g(\varepsilon )^{2}}-m_{0}\left( \varepsilon \right) -2G\left( \varepsilon \right) f(\varepsilon )^{2}q\left( \varepsilon \right) ^{2}\ln \left(\frac{r}{l(\varepsilon )}\right)+\frac{m\left( \varepsilon \right) ^{2}c(\varepsilon )c_{1}(\varepsilon )r}{g(\varepsilon )^{2}}, \label{f(r)ENMax}$$where $m_{0}(\varepsilon )$ is an energy dependent integration constant related to the total mass of the black holes. It is worthwhile to mention that the resulting metric function (\[f(r)ENMax\]) satisfies all the components of the field equation (\[Field equation\]) simultaneously. In the absence of the massive parameter (i.e. $m(\varepsilon )=0$), the metric function Eq. (\[f(r)ENMax\]) reduces to $$\psi (r)=-\frac{\Lambda \left( \varepsilon \right) r^{2}}{g(\varepsilon )^{2}}-m_{0}\left( \varepsilon \right) -2G\left( \varepsilon \right) f(\varepsilon )^{2}q\left( \varepsilon \right) ^{2}\ln \left(\frac{r}{l(\varepsilon )}\right).$$ Our next step is the examination of the geometrical structure of the solutions. First, we should look for the existence of essential singularity(ies).
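The claims above can be verified by direct substitution: the logarithmic gauge potential solves Eq. (\[heq\]), and the metric function (\[f(r)ENMax\]) solves Eqs. (\[eqENMax1\]) and (\[eqENMax2\]). A small symbolic sketch of this check (sympy is assumed as tooling and is not part of the derivation; all rainbow quantities are kept as free symbols):

```python
# Consistency check of the field-equation solutions; sympy is an
# illustrative tooling assumption.
import sympy as sp

r, G, f, g, q, m, c, c1, m0, l = sp.symbols('r G f g q m c c_1 m_0 l',
                                            positive=True)
Lam = sp.symbols('Lambda')  # cosmological constant, either sign

# gauge potential h(r) = q*ln(r/l) solves h'(r) + r h''(r) = 0,
# and F_tr = h'(r) = q/r is independent of l
h = q*sp.log(r/l)
assert sp.simplify(h.diff(r) + r*h.diff(r, 2)) == 0
assert sp.diff(h, r) == q/r

# metric function, Eq. (f(r)ENMax)
psi = (-Lam*r**2/g**2 - m0 - 2*G*f**2*q**2*sp.log(r/l)
       + m**2*c*c1*r/g**2)

# tt/rr component, Eq. (eqENMax1), and phi-phi component, Eq. (eqENMax2)
eq1 = r*g**2*psi.diff(r) + 2*r**2*Lam + 2*G*g**2*f**2*q**2 - m**2*c*c1*r
eq2 = r**2/2*g**2*psi.diff(r, 2) + Lam*r**2 - G*g**2*f**2*q**2

assert sp.simplify(eq1) == 0
assert sp.simplify(eq2) == 0
```

Both components vanish identically, confirming that (\[f(r)ENMax\]) solves the full system for any choice of the energy functions.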
The Ricci and Kretschmann scalars of the solutions are, respectively, $$\begin{aligned} R &=&6\Lambda \left( \varepsilon \right) +\frac{2G\left( \varepsilon\right) f(\varepsilon )^{2}q\left( \varepsilon \right) ^{2}}{r^{2}}-\frac{2m\left( \varepsilon \right) ^{2}c(\varepsilon )c_{1}(\varepsilon )}{r}, \\ R_{\alpha \beta \gamma \delta }R^{\alpha \beta \gamma \delta } &=&12\Lambda \left( \varepsilon \right) ^{2}-\frac{8\Lambda \left( \varepsilon \right) m\left( \varepsilon \right) ^{2}c(\varepsilon )c_{1}(\varepsilon )}{r}+\frac{2\left[ m\left( \varepsilon \right) ^{4}c(\varepsilon )^{2}c_{1}(\varepsilon )^{2}+4G\left( \varepsilon \right) g\left( \varepsilon \right) ^{2}f(\varepsilon )^{2}\Lambda \left( \varepsilon \right) q\left( \varepsilon \right) ^{2}\right] }{r^{2}} \nonumber \\ &-&\frac{8G\left( \varepsilon \right) g\left( \varepsilon \right) ^{2}f(\varepsilon )^{2}q\left( \varepsilon \right) ^{2}m\left( \varepsilon \right) ^{2}c(\varepsilon )c_{1}(\varepsilon )}{r^{3}}+\frac{12G\left( \varepsilon \right) ^{2}g\left( \varepsilon \right) ^{4}f(\varepsilon )^{4}q\left( \varepsilon \right) ^{4}}{r^{4}}.\end{aligned}$$ These relations confirm that there is an essential curvature singularity at $r=0$. In the limit $r\longrightarrow \infty $, the Ricci and Kretschmann scalars approach the values $6\Lambda \left( \varepsilon \right) $ and $12\Lambda \left( \varepsilon \right) ^{2}$, respectively, which shows that for $\Lambda \left( \varepsilon \right) >0$ ($\Lambda \left( \varepsilon \right) <0$) the asymptotic behavior of the solution is dS (adS) with an energy dependent cosmological constant. Our final step in this section is the investigation of other geometrical properties, such as the existence of a regular horizon. For this purpose, we have plotted Fig. \[Fig1\] to find the real positive roots of the metric function.
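The two limits quoted for the Ricci scalar follow from its explicit form: the $1/r^{2}$ charge term dominates as $r\rightarrow 0$, while all $r$-dependent terms die off at large $r$. A symbolic spot-check (sympy assumed as tooling):

```python
# Limits of the Ricci scalar of the solution; sympy is a tooling assumption.
import sympy as sp

r = sp.symbols('r', positive=True)
Lam, G, f, g, q, m, c, c1 = sp.symbols('Lambda G f g q m c c_1',
                                       positive=True)

R = 6*Lam + 2*G*f**2*q**2/r**2 - 2*m**2*c*c1/r

# asymptotic value: R -> 6*Lambda(eps) as r -> infinity
assert sp.limit(R, r, sp.oo) == 6*Lam

# near the origin the charge term dominates: r^2 * R -> 2*G*f^2*q^2,
# so R diverges, signalling the essential singularity at r = 0
assert sp.limit(r**2*R, r, 0) == 2*G*f**2*q**2
```

The Kretschmann scalar can be treated in the same way, with limiting value $12\Lambda (\varepsilon )^{2}$ at large $r$.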
Evidently, depending on the choices of the different parameters, it is possible to obtain two horizons, one extreme horizon or no horizon (naked singularity) for these solutions (see Fig. \[Fig1\] for more details). This confirms that the singularity can be covered by an event horizon, and therefore, our solutions indeed represent black holes. (Fig. \[Fig1\]: plots of the metric function $\psi (r)$ versus $r$ for different values of the parameters.) Thermodynamics -------------- Now, we intend to calculate the conserved and thermodynamic quantities of the solutions and examine the validity of the first law of thermodynamics. Using the standard definition of the Hawking temperature and its relation to the surface gravity on the outer horizon $r_{+}$, we obtain $$T=-\frac{\Lambda \left( \varepsilon \right) r_{+}}{2\pi f\left( \varepsilon \right) g\left( \varepsilon \right) }+\frac{m\left( \varepsilon \right) ^{2}c(\varepsilon )c_{1}(\varepsilon )}{4\pi f\left( \varepsilon \right) g\left( \varepsilon \right) }-\frac{f\left( \varepsilon \right) g\left( \varepsilon \right) G\left( \varepsilon \right) q\left( \varepsilon \right) ^{2}}{2\pi r_{+}}. \label{TotalTT}$$ Furthermore, calculating the flux of the electric field at infinity and using Gauss’s law, we can compute the electric charge, $Q$, as $$Q=\frac{1}{2}f\left( \varepsilon \right) G\left( \varepsilon \right) q\left( \varepsilon \right) . \label{TotalQ}$$ Since we are working in Einstein gravity, the entropy of the black holes can be obtained by employing the area law. According to this law, the entropy is a quarter of the event horizon area [@hawking1; @hawking2; @hawking3; @Hunter1; @Hunter2; @Hunter3] $$S=\frac{\pi }{2g\left( \varepsilon \right) }r_{+}.
\label{TotalS}$$ Also, we can obtain the total mass of the solutions by using the Hamiltonian approach and/or the counterterm method, with the following explicit form $$M=\frac{m_{0}\left( \varepsilon \right) }{8f\left( \varepsilon \right) }, \label{TotalM}$$ where $m_{0}\left( \varepsilon \right) $ can be computed from the metric function (\[f(r)ENMax\]) on the horizon ($\psi \left( r=r_{+},\varepsilon\right) =0$), and consequently, it may be presented as $$m_{0}\left( \varepsilon \right) =-\frac{\Lambda (\varepsilon )r_{+}^{2}}{g(\varepsilon )^{2}}-2G(\varepsilon)f(\varepsilon )^{2}q(\varepsilon )^{2} \ln \left( \frac{r_{+}}{l(\varepsilon )}\right) +\frac{ m(\varepsilon )^{2}c(\varepsilon )c_{1}(\varepsilon )r_{+}}{g(\varepsilon )^{2}}. \nonumber$$ The electric potential, $U$, is calculated through the difference of the gauge potential between the reference point and the horizon, and is given by $$U=A_{\mu }\chi ^{\mu }\left\vert _{r\rightarrow reference}\right. -A_{\mu }\chi ^{\mu }\left\vert _{r\rightarrow r_{+}}\right. =-q\left( \varepsilon \right) \ln \left( \frac{r_{+}}{l(\varepsilon )}\right) . \label{TotalU}$$ Now, we are in a position to check the validity of the first law of thermodynamics. Exploiting thermodynamic quantities such as the electric charge (\[TotalQ\]), entropy (\[TotalS\]) and mass (\[TotalM\]), with the first law of black hole thermodynamics $$dM=TdS+UdQ,$$one can define the intensive parameters conjugate to $S$ and $Q$. These quantities are the temperature and the electric potential $$T=\left( \frac{\partial M}{\partial S}\right) _{Q}\ \ \ \mbox{and} \ \ \ \ \ \ \ \ \ U=\left( \frac{\partial M}{\partial Q}\right) _{S}.
\label{TU}$$ Using the obtained electric charge (\[TotalQ\]) and entropy (\[TotalS\]) with the total mass of the black holes (\[TotalM\]), one can find the following Smarr-type formula $$M\left( S,Q\right) =-\,{\frac{\Lambda \left( \varepsilon \right) {S}^{2}}{2f\left( \varepsilon \right) {\pi }^{2}}}+\,{\frac{m\left( \varepsilon \right) ^{2}S\,c(\varepsilon )c_{1}(\varepsilon )}{4g\left( \varepsilon \right) f\left( \varepsilon \right) \pi }}-\frac{{Q}^{2}}{G\left( \varepsilon \right) f\left( \varepsilon \right) }\ln \left( 2\,{\frac{Sg\left( \varepsilon \right) }{\pi \,l(\varepsilon )}}\right) . \label{MSmarr}$$ It is a matter of calculation to show that the temperature and electric potential calculated by using Eq. (\[TU\]) are the same as those obtained for the temperature (\[TotalTT\]) and the electric potential (\[TotalU\]). In other words, although the massive term and gravity’s rainbow modify some of the thermodynamic quantities, the first law of thermodynamics is still valid. Our next thermodynamical quantity of interest is the heat capacity. This quantity contains information regarding the phase transition points and the conditions for thermal stability. The stability of the solutions is governed by the sign of the heat capacity; its positivity indicates thermal stability while the opposite represents instability. The phase transition points are extracted by finding the divergencies of the heat capacity. In other words, divergencies of the heat capacity may be characterized as second order phase transitions. In addition, since the roots of the heat capacity and temperature are the same (due to the form of the heat capacity), the roots of the heat capacity are denoted as bound points (separating positive/negative temperature regions from each other). In particular, we will analyze the heat capacity with fixed charge (in the canonical ensemble) and with fixed chemical potential (in the grand canonical ensemble).
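The consistency claim can be verified directly: differentiating the Smarr-type formula (\[MSmarr\]) and substituting $S=\pi r_{+}/2g(\varepsilon )$ and $Q=f(\varepsilon )G(\varepsilon )q(\varepsilon )/2$ reproduces Eqs. (\[TotalTT\]) and (\[TotalU\]). A sketch of this check (sympy assumed as tooling; `rp` stands for $r_{+}$):

```python
# First-law cross-check: T = dM/dS and U = dM/dQ from the Smarr formula
# must match Eqs. (TotalTT) and (TotalU). sympy is a tooling assumption.
import sympy as sp

S, Q, rp = sp.symbols('S Q r_p', positive=True)
G, f, g, q, l, m, c, c1 = sp.symbols('G f g q l m c c_1', positive=True)
Lam = sp.symbols('Lambda')

# Smarr-type formula, Eq. (MSmarr)
M = (-Lam*S**2/(2*f*sp.pi**2) + m**2*S*c*c1/(4*g*f*sp.pi)
     - Q**2/(G*f)*sp.log(2*S*g/(sp.pi*l)))

T = sp.diff(M, S)   # temperature from Eq. (TU)
U = sp.diff(M, Q)   # electric potential from Eq. (TU)

# substitute S = pi*r_+/(2g) and Q = f*G*q/2
sub = {S: sp.pi*rp/(2*g), Q: f*G*q/2}

T_direct = (-Lam*rp/(2*sp.pi*f*g) + m**2*c*c1/(4*sp.pi*f*g)
            - f*g*G*q**2/(2*sp.pi*rp))        # Eq. (TotalTT)
U_direct = -q*sp.log(rp/l)                    # Eq. (TotalU)

assert sp.simplify(T.subs(sub) - T_direct) == 0
assert sp.simplify(U.subs(sub) - U_direct) == 0
```

Both differences vanish identically, confirming the validity of the first law for these solutions.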
### Canonical ensemble Let us begin this subsection by computing the free energy of the system, which provides information regarding the amount of work that a thermodynamical system can perform. This quantity is obtained by subtracting from the total internal energy the amount of energy that cannot be used to perform work. The unusable energy is the product of the temperature and entropy of the system. Therefore, for these black holes (in a canonical ensemble with a fixed charge $Q$), we have the following Helmholtz free energy $$F=M-TS=\frac{\Lambda \left( \varepsilon \right) r_{+}^{2}}{8f\left( \varepsilon \right) g\left( \varepsilon \right) ^{2}}-\frac{G\left( \varepsilon \right) f\left( \varepsilon \right) q\left( \varepsilon \right) ^{2}}{4}\left[ \ln \left( \frac{r_{+}}{l(\varepsilon )}\right) -1\right] . \label{free}$$ The heat capacity with fixed charge is given by $$C_{Q}=T\frac{\left( \frac{\partial S}{\partial r_{+}}\right) _{Q}}{\left( \frac{\partial T}{\partial r_{+}}\right) _{Q}}=\frac{\pi r_{+}\left[ 2\Lambda (\varepsilon )r_{+}^{2}-m(\varepsilon )^{2}c(\varepsilon )c_{1}(\varepsilon )r_{+}+2f(\varepsilon )^{2}g(\varepsilon )^{2}G(\varepsilon )q(\varepsilon )^{2}\right] }{4g(\varepsilon )\left[ \Lambda (\varepsilon )r_{+}^{2}-f(\varepsilon )^{2}g(\varepsilon )^{2}G(\varepsilon )q(\varepsilon )^{2}\right] }. \label{heat}$$ From the above expression, the effects of gravity’s rainbow on the specific heat can easily be seen.
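As a cross-check, computing $C_{Q}=T\left( \partial S/\partial r_{+}\right) /\left( \partial T/\partial r_{+}\right) $ from Eqs. (\[TotalTT\]) and (\[TotalS\]) reproduces the closed form of Eq. (\[heat\]); a symbolic sketch (sympy assumed as tooling, `rp` standing for $r_{+}$):

```python
# Heat capacity at fixed charge from T(r_+) and S(r_+);
# sympy is a tooling assumption.
import sympy as sp

rp = sp.symbols('r_p', positive=True)
G, f, g, q, m, c, c1 = sp.symbols('G f g q m c c_1', positive=True)
Lam = sp.symbols('Lambda')

T = (-Lam*rp/(2*sp.pi*f*g) + m**2*c*c1/(4*sp.pi*f*g)
     - f*g*G*q**2/(2*sp.pi*rp))               # Eq. (TotalTT)
S = sp.pi*rp/(2*g)                            # Eq. (TotalS)

CQ = T*sp.diff(S, rp)/sp.diff(T, rp)

# closed form quoted in Eq. (heat)
CQ_text = (sp.pi*rp*(2*Lam*rp**2 - m**2*c*c1*rp + 2*f**2*g**2*G*q**2)
           / (4*g*(Lam*rp**2 - f**2*g**2*G*q**2)))

assert sp.simplify(CQ - CQ_text) == 0
```

The divergence of $C_{Q}$ at $\Lambda (\varepsilon )r_{+}^{2}=f(\varepsilon )^{2}g(\varepsilon )^{2}G(\varepsilon )q(\varepsilon )^{2}$ is read off directly from the denominator.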
### Grand canonical ensemble In the grand canonical ensemble with a fixed chemical potential (the electric potential, $U$, in this case) associated with the charge, the Gibbs free energy for such black holes is given by $$\begin{aligned} \mathbb{G} &=&M-TS-\mu Q, \nonumber \\ &=&\frac{\Lambda (\varepsilon )r_{+}^{2}}{4g(\varepsilon )^{2}}\left( \frac{1}{f(\varepsilon )}-\frac{1}{2}\right) +\frac{m(\varepsilon )^{2}c(\varepsilon )c_{1}(\varepsilon )r_{+}}{8g(\varepsilon )^{2}}\left( 1-\frac{1}{f(\varepsilon )}\right) \nonumber \\ &+&\frac{1}{4}\left( \frac{\mu (\varepsilon )}{\ln \left( \frac{r_{+}}{l(\varepsilon )}\right) }\right) ^{2}f(\varepsilon )\left[ G(\varepsilon )-\frac{f(\varepsilon )}{g(\varepsilon )^{2}}\ln \left( \frac{r_{+}}{l(\varepsilon )}\right) -2G(\varepsilon )\ln \left( \frac{r_{+}}{l(\varepsilon )}\right) \right] .\end{aligned}$$ The Hawking temperature for the black holes in the grand canonical ensemble is given by $$T=-\frac{\Lambda \left( \varepsilon \right) r_{+}}{2\pi f\left( \varepsilon \right) g\left( \varepsilon \right) }+\frac{m\left( \varepsilon \right) ^{2}c(\varepsilon )c_{1}(\varepsilon )}{4\pi f\left( \varepsilon \right) g\left( \varepsilon \right) }-\frac{f\left( \varepsilon \right) g\left( \varepsilon \right) G\left( \varepsilon \right) }{2\pi r_{+}}\left( \frac{\mu (\varepsilon )}{\ln \left( \frac{r_{+}}{l(\varepsilon )}\right) }\right) ^{2}.$$ Now, the heat capacity with a fixed chemical potential is calculated as $$C_{\mu }=T\left( \frac{\partial S}{\partial T}\right) _{\mu }=\frac{\pi ^{2}f(\varepsilon )g(\varepsilon )\left( \ln \left( \frac{r_{+}}{l(\varepsilon)}\right) \right) ^{3} \ r_{+}^{2} \ T}{f(\varepsilon )^{2}g(\varepsilon )^{2}G(\varepsilon)\mu (\varepsilon )^{2}\left( \ln \left( \frac{r_{+}}{l(\varepsilon )}\right) +2\right) -\Lambda (\varepsilon )r_{+}^{2}\left( \ln \left( \frac{r_{+}}{l(\varepsilon )}\right) \right) ^{3}}.$$ Next, we will study the thermodynamical aspects of these black holes with the help of
the obtained thermodynamical quantities. Thermodynamical aspects of charged massive BTZ black holes in gravity’s rainbow =============================================================================== In this section, we are interested in studying, in particular, the mass, temperature and heat capacity of the charged massive BTZ black holes in gravity’s rainbow. We will discuss the free energy and phase diagram for such black holes as well. Mass/Internal energy -------------------- Our first item of interest is the mass of the black holes. The total mass of black holes is usually interpreted as the internal energy of the system. Evidently, the mass has three distinctive terms: the cosmological constant term, $\Lambda(\varepsilon)$, the massive term, $m(\varepsilon)$, and the charge term, $q(\varepsilon)$. Depending on the choices of values for these terms, the internal energy exhibits one of the following behaviors: I\) It is a positive definite function with a minimum. II\) It is a positive definite function everywhere except at a single point, which is an extreme root. III\) It may have two roots with a region of negativity between these roots and a minimum. IV\) It is a monotonically increasing function of the horizon radius without any minimum. Considering the positive nature of the energy functions and the other energy dependent constants, we find that the charge term contributes negatively to the internal energy. As for $\Lambda(\varepsilon)$, its contribution depends on the type of spacetime we are working in. For anti-de Sitter spacetime, this term has constructive effects on the values of the internal energy, whereas for the de Sitter case, the internal energy is a decreasing function of this term. For the mass term, one finds that its effect depends on the choice of $c_{1}(\varepsilon)$. For negative values of this parameter, the mass term has negative effects on the values of the internal energy, while the opposite is true for positive $c_{1}(\varepsilon)$.
In general, it is not possible to obtain the roots of the internal energy analytically. However, when one of the terms vanishes, the roots can be obtained analytically as $$\begin{aligned} r_{1}|_{q(\varepsilon)=0} &=&{\frac{m( \varepsilon) ^{2}{c(\varepsilon )c}_{1}(\varepsilon )}{\Lambda ( \varepsilon) }}, \label{root of M chargeless} \\ r_{2}|_{m(\varepsilon)=0} &=&l(\varepsilon )\ \exp \left[ -\frac{1}{2}\,\mathit{LambertW}\left( {\frac{\Lambda ( \varepsilon) {l}( \varepsilon) ^{2}}{G( \varepsilon) q( \varepsilon) ^{2}f( \varepsilon) ^{2}g( \varepsilon) ^{2}}}\right) \right], \label{root of M massiveless} \\ r_{3}|_{\Lambda ( \varepsilon) =0} &=&\frac{-2\,G( \varepsilon) q( \varepsilon) ^{2}f( \varepsilon) ^{2}g( \varepsilon) ^{2}\mathit{LambertW}\left( -{\frac{l( \varepsilon) m( \varepsilon) ^{2}{c(\varepsilon )c}_{1}(\varepsilon )}{2G( \varepsilon) q( \varepsilon) ^{2}f( \varepsilon) ^{2}g( \varepsilon) ^{2}}}\right) }{m( \varepsilon) ^{2}{c( \varepsilon) c}_{1}( \varepsilon) }. \label{root of M Lambdaless}\end{aligned}$$ As for the high energy limit of the internal energy, the dominant term is the charge term. In the absence of the electric part of the solutions, the total mass of the black holes vanishes as the horizon radius goes to zero, i.e., upon complete evaporation. Interestingly, it is possible to eliminate the effects of the electric part by setting $l(\varepsilon )=r_{+}$. Therefore, to include the effects of the electric charge, the condition $l(\varepsilon )\neq r_{+}$ must be satisfied. The second dominant term after the electric charge is the massive term, which highlights the effects of massive gravity in the high energy regime. On the other hand, for the asymptotic behavior, the leading term is the cosmological constant term. In the absence of this term, the asymptotic behavior of the system is governed by the massive term, which again represents the effects of the generalization to massive gravity.
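The Lambert-W roots can be spot-checked numerically, e.g., substituting $r_{2}$ of Eq. (\[root of M massiveless\]) back into the $m(\varepsilon )=0$ mass function; the numerical values below are arbitrary illustrative choices (sympy assumed as tooling):

```python
# Numerical spot-check of the Lambert-W root r_2 (massless graviton case);
# the parameter values are arbitrary illustrative assumptions.
import sympy as sp

# adS choice (Lambda < 0); the Lambert-W argument must lie above -1/e
# for the real principal branch
Lam = -sp.Rational(1, 10)
G = f = g = q = l = sp.Integer(1)

# r_2 from Eq. (root of M massiveless)
r2 = l*sp.exp(-sp.Rational(1, 2)*sp.LambertW(Lam*l**2/(G*q**2*f**2*g**2)))

# the m(eps) = 0 integration constant m_0 must vanish at r = r_2
m0 = -Lam*r2**2/g**2 - 2*G*f**2*q**2*sp.log(r2/l)
assert abs(sp.N(m0)) < 1e-10
```

The residual vanishes to numerical precision, confirming that $r_{2}$ is indeed a root of the internal energy in this case.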
Considering these two cases, one can conclude that for medium black holes, the internal energy is highly affected by massive gravity. This means that for medium black holes, the effects of the presence of massive gravitons would be detectable within the internal energy. As for gravity’s rainbow, except for the root of the mass in the absence of the electric charge, the obtained roots, the high energy limit and the asymptotic behavior are all highly affected by the generalization to gravity’s rainbow. Consequently, the extracted properties and behaviors of the internal energy (the existence of roots and of negative values of the internal energy) depend on the choices of the rainbow functions of the metric, hence on gravity’s rainbow. Coupling different orders of the rainbow functions with different parameters makes it possible to manipulate the properties and behaviors of the internal energy. Temperature ----------- Now, let us focus on the temperature of these black holes. Here too, the charge term has a negative contribution to the values of the temperature. The effects of the cosmological term depend on the spacetime under consideration. For adS black holes, the cosmological term has positive effects on the temperature, while for dS spacetime, the temperature is a decreasing function of this term. Interestingly, the massive term is not coupled with any order of the horizon radius and behaves as a constant. The roots of the temperature mark bound points. The reason for this naming is as follows: in classical thermodynamics, negative values of the temperature are interpreted as non-physical solutions. Therefore, the roots of the temperature separate physical solutions from non-physical ones.
It is a matter of calculation to show that these black holes have the following roots $$r|_{T=0}=\,{\frac{m(\varepsilon )^{2}\,c(\varepsilon )c_{1}(\varepsilon )\pm \sqrt{m(\varepsilon )^{4}\,c(\varepsilon )^{2}c_{1}(\varepsilon )^{2}-16\,\Lambda (\varepsilon )G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}g(\varepsilon )^{2}}}{4\Lambda (\varepsilon )}}.  \label{root of Temperature}$$The existence of real valued roots for the temperature is restricted by the following condition $$m(\varepsilon )^{4}\,c(\varepsilon )^{2}c_{1}(\varepsilon )^{2}-16\,\Lambda (\varepsilon )G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}g(\varepsilon )^{2}\geq 0,  \label{condition1}$$which can be used to extract a restriction on the mass of the graviton in terms of the other parameters (for positive $\Lambda (\varepsilon )$, a lower bound) $$m(\varepsilon )\geq \left( {\frac{16\,\Lambda (\varepsilon )G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}g(\varepsilon )^{2}}{c(\varepsilon )^{2}c_{1}(\varepsilon )^{2}}}\right) ^{\frac{1}{4}}.  \label{mass of graviton}$$ Once more, we emphasize that when this condition is satisfied, the roots of the temperature are real valued. For adS black holes, only one positive valued root exists for the temperature, namely the negative branch of the obtained roots. On the contrary, for dS black holes, two positive valued roots may exist. The existence of the second positive valued root depends on the following condition $$0<\sqrt{m(\varepsilon )^{4}\,c(\varepsilon )^{2}c_{1}(\varepsilon )^{2}-16\,\Lambda (\varepsilon )G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}g(\varepsilon )^{2}}<m(\varepsilon )^{2}\,c(\varepsilon )c_{1}(\varepsilon ),  \label{condition2}$$which is partly similar to the condition for having real valued roots. The resulting roots show that the contributions of the rainbow functions of the metric appear only in the electric charge term.
In other words, in the roots of the temperature, the energy functions of the metric are coupled only to the electric charge term. In the absence of electric charge, the root of the temperature is given by $$r_{T=0\text{ }with\text{ }q(\varepsilon )=0}={\frac{m(\varepsilon )^{2}\,c(\varepsilon )c_{1}(\varepsilon )}{2\Lambda (\varepsilon )}},  \label{root of Temperature chargeless}$$which shows that a positive valued root exists only for the dS spacetime; this root is an increasing function of the graviton's mass and a decreasing function of the cosmological constant. Interestingly, in this case, the rainbow functions have no effect on the root of the temperature. In the massless-graviton case, the root of the temperature is obtained as $$r_{T=0\text{ }with\text{ }m(\varepsilon )=0}=-{\frac{\sqrt{-\Lambda (\varepsilon )G(\varepsilon )}g(\varepsilon )f(\varepsilon )q(\varepsilon )}{\Lambda (\varepsilon )}}.  \label{root of Temperature massiveless}$$ Evidently, in this case a real valued root exists only for adS black holes, and only one positive root can be extracted. Here, the root is an increasing function of the electric charge and of the rainbow functions. In the absence of the cosmological constant, the root of the temperature can be calculated as $$r_{T=0\text{ }with\text{ }\Lambda \left( \varepsilon \right) =0}=2\,{\frac{G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}g(\varepsilon )^{2}}{m(\varepsilon )^{2}\,c(\varepsilon )c_{1}(\varepsilon )}},  \label{root of Temperature Lambdaless}$$which, contrary to the chargeless case, is a decreasing function of the graviton's mass. It is worthwhile to mention that in this case too, the root is an increasing function of the electric charge and of the rainbow functions. The high temperature limit of these black holes is governed by the electric charge term. On the other hand, the leading order of the asymptotic behavior is the cosmological constant term.
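The root structure above admits a simple numerical cross-check: up to a positive overall factor, the roots in Eq. (\[root of Temperature\]) are those of the quadratic $2\Lambda (\varepsilon )r_{+}^{2}-m(\varepsilon )^{2}c(\varepsilon )c_{1}(\varepsilon )r_{+}+2G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}g(\varepsilon )^{2}$, the same factor that appears later in the numerator of the heat capacity. A minimal Python sketch, using the adS parameter values of table \[tab1\] as illustrative inputs:

```python
import math

# Illustrative adS parameter values from table 1:
# Lambda = -1, G = 1, q = 0.5, f = 0.9, g = 1.9, m = 1, c = c1 = 2
Lam, G, q, f, g = -1.0, 1.0, 0.5, 0.9, 1.9
m2cc1 = 1.0**2 * 2.0 * 2.0            # m(e)^2 c(e) c1(e)
A = G * q**2 * f**2 * g**2            # shorthand for G q^2 f^2 g^2

def T_numerator(r):
    """Quadratic whose roots are the bound points (up to a positive factor)."""
    return 2.0 * Lam * r**2 - m2cc1 * r + 2.0 * A

disc = m2cc1**2 - 16.0 * Lam * A      # reality condition, Eq. (condition1)
roots = [(m2cc1 + s * math.sqrt(disc)) / (4.0 * Lam) for s in (1.0, -1.0)]
positive_roots = [r for r in roots if r > 0]

# Limiting cases: chargeless (A = 0) and Lambda = 0 roots
r_chargeless = m2cc1 / (2.0 * Lam)    # Eq. (root of Temperature chargeless)
r_flat = 2.0 * A / m2cc1              # Eq. (root of Temperature Lambdaless)
```

For these adS values the discriminant is automatically positive, a single positive root survives (the negative branch), and its value reproduces the $T_{0}$ entry $0.31568$ of table \[tab1\].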
Interestingly, in the absence of electric charge (whether by setting the electric charge to zero or by considering the case $l(\varepsilon)=r_{+}$), the next-to-leading order at high temperature is the massive term. Remembering that this term in the temperature is not coupled to any power of the horizon radius, one sees that upon the evaporation of the black holes (vanishing horizon radius) the temperature remains non-zero. In other words, in the evaporation of these black holes, a trace of their existence is left behind, which presents itself as a fluctuation in the temperature of the spacetime. This provides the possibility of the survival of black hole information after evaporation. This remnant of the temperature is an increasing function of the graviton's mass and a decreasing function of the rainbow functions. Considering the effects of massive gravity, one can state that the thermodynamical behavior of the temperature of medium black holes is governed by the mass of the graviton. On the other hand, the effects of gravity's rainbow on the temperature can be observed for all three cases of small, medium and large black holes. In other words, the generalization to gravity's rainbow, contrary to massive gravity, affects the temperature of black holes of all sizes. It is worthwhile to mention that the effects of gravity's rainbow on the temperature of medium and large black holes are the same, while for small black holes these effects are opposite.

Heat capacity
-------------

Taking a closer look at the heat capacity, one can see that its numerator is the same as that of the temperature. Therefore, the roots of the heat capacity and of the temperature coincide, and the arguments stated in the last subsection (for the temperature and its roots) apply to the heat capacity and its roots as well. On the other hand, the denominator of the heat capacity contains information regarding the phase transition points.
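Up to a positive overall factor, this denominator reduces to $\Lambda (\varepsilon )r_{+}^{2}-f(\varepsilon )^{2}g(\varepsilon )^{2}G(\varepsilon )q(\varepsilon )^{2}$, the same factor that later appears squared in Eq. (\[denomR\]). A minimal Python sketch locating its positive root, using the dS parameter values of table \[tab1\] as illustrative inputs:

```python
import math

# Illustrative dS parameter values from table 1:
# Lambda = 1, G = 1, q = 0.5, f = 0.9, g = 1.9
Lam, G, q, f, g = 1.0, 1.0, 0.5, 0.9, 1.9

def CQ_denominator(r):
    """Denominator factor of the heat capacity, up to a positive function."""
    return Lam * r**2 - f**2 * g**2 * G * q**2

# Positive root = divergence point of C_Q; cf. Eq. (divergency of heat)
r_div = math.sqrt(Lam * G) * g * f * q / Lam
```

For these values $r_{div}=0.855$, matching the $C_{\infty }$ entries of table \[tab1\]; for $\Lambda (\varepsilon )<0$ the factor has no real root, consistent with the absence of $C_{\infty }$ in the adS rows.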
In other words, the divergence points of the heat capacity (the roots of its denominator) can be characterized as phase transitions. The denominator of the heat capacity contains only the electric charge and cosmological constant terms, coupled to the rainbow functions, and so it does not depend on the massive gravity parameter. In other words, the generalization to gravity's rainbow affects the divergences of the heat capacity, hence the phase transitions of the black holes, whereas the existence of massive gravitons does not affect them. It is a matter of calculation to show that, using Eq. (\[heat\]), the positive valued divergence point of the heat capacity is obtained as $$r_{_{C_{Q}\rightarrow \infty }}={\frac{\sqrt{\Lambda (\varepsilon )G(\varepsilon )}g(\varepsilon )f(\varepsilon )q(\varepsilon )}{\Lambda (\varepsilon )}}.  \label{divergency of heat}$$ Evidently, real valued phase transition points are observed for dS black holes only. The existence of the phase transition point depends on the electric charge and cosmological terms, and the point itself is an increasing function of the electric charge and of the rainbow functions. The high energy limit of the heat capacity is given by $$\lim_{\text{very small }r_{+}}C_{Q}=-\,{\frac{\pi }{2g(\varepsilon )}}\,r_{+}+\,{\frac{\pi m(\varepsilon )^{2}\,c(\varepsilon )c_{1}(\varepsilon )}{4g(\varepsilon )^{3}G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}}}r_{+}^{2}-{\frac{\Lambda (\varepsilon )\pi }{g(\varepsilon )^{3}G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}}}r_{+}^{3}+O\left( r_{+}^{4}\right) ,  \label{high energy limit heat}$$in which the effects of the generalization to gravity's rainbow can be observed in the dominant term. The presence of the electric charge and of massive gravity can be detected from the second dominant term of the high energy limit, where the massive parameter and the electric charge are coupled to each other.
The contribution of the cosmological constant appears at third leading order, where it is coupled to the electric charge as well. Taking a closer look, one can see that the effects of gravity's rainbow are present in all three leading orders of the high energy limit of the heat capacity. As one can see, the effects of massive gravity are most pronounced for medium black holes, while for large black holes the cosmological constant governs the behavior of the heat capacity. Interestingly, in the absence of electric charge (similar to the cases studied for the temperature), the dominant term of the high energy limit behaves as a constant involving massive gravity and the cosmological constant, in the following form $$\lim_{\text{very small }r_{+}}C_{Q}=-\,{\frac{\pi m(\varepsilon )^{2}\,c(\varepsilon )c_{1}(\varepsilon )}{4g(\varepsilon )\Lambda (\varepsilon )}}+\,{\frac{\pi }{2g(\varepsilon )}}\,r_{+}+O\left( r_{+}^{2}\right) ,\qquad \text{for}\ \ q(\varepsilon )=0.  \label{high energy limit heat chargeless}$$ Similar to the temperature, here too, for vanishing horizon radius (evaporation of these black holes), the heat capacity is non-zero. This shows that traces of the existence of black holes after their evaporation can be observed as differences in the heat capacity at the place where the black holes existed. In this case, one can see that the graviton's mass modifies the thermodynamical behavior of the black holes in the last stage of their existence. The modifications due to gravity's rainbow appear through the rainbow function $g(\varepsilon)$.
On the other hand, in the asymptotic limit one can derive the following relation for the heat capacity $$\lim_{\text{very large }r_{+}}C_{Q}=\,{\frac{\pi }{2g(\varepsilon )}}r_{+}-{\frac{\pi m(\varepsilon )^{2}\,c(\varepsilon )c_{1}(\varepsilon )}{4g(\varepsilon )\Lambda (\varepsilon )}}+{\frac{G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}g(\varepsilon )\pi }{\Lambda (\varepsilon )r_{+}}}+O\left( r_{+}^{-2}\right) .  \label{asymptotic heat}$$ First of all, the dominant term of the asymptotic behavior of the heat capacity is the same as the one extracted in the high energy limit, with opposite sign. The second leading term behaves as a constant involving massive gravity which, opposite to the high energy limit, is coupled to the cosmological constant. The electric part of the solutions appears in the third leading term. As before, gravity's rainbow is present in all three leading terms of the asymptotic behavior of the heat capacity. One difference from the high energy limit is that there the electric charge (cosmological constant) appeared only in the denominator (numerator) of the leading terms, whereas in the asymptotic case the electric charge (cosmological constant) appears only in the numerator (denominator).
In the absence of the cosmological constant, the asymptotic behavior of the heat capacity is modified into $$\lim_{\text{very large }r_{+}}C_{Q}=\,{\frac{\pi m(\varepsilon )^{2}\,c(\varepsilon )c_{1}(\varepsilon )}{4g(\varepsilon )^{3}G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}}}r_{+}^{2}-\,{\frac{\pi }{2g(\varepsilon )}}r_{+},\qquad \text{for}\ \ \Lambda (\varepsilon )=0,  \label{asymptotic Lambdaless}$$which shows that the dominant term of the asymptotic behavior is a coupling between massive gravity and the electric part of the solutions, together with the effects of gravity's rainbow. Considering the effects of massive gravity in both the high energy regime and the asymptotic limit, one can see that the graviton's mass highly modifies the behavior of the heat capacity in both regimes. This highlights the contribution of massive gravity to the thermodynamical behavior of these black holes. The same can be stated for gravity's rainbow, due to the presence of the rainbow functions in the leading terms of both regimes.

### Free energy

The free energy contains information regarding the phase transition points. Chemical equilibrium is reached for a system when its free energy is minimized; in other words, the first-order derivative of the free energy with respect to the thermodynamical quantities vanishes at the equilibrium point. This equilibrium point marks the place where the system undergoes a phase transition. Other important information regarding the free energy is stored in its roots. In general, the root of the free energy for these black holes is $$r_{_{F=0}}=l(\varepsilon )\ \exp \left[ -\frac{1}{2}\mathit{LambertW}\left( -{\frac{8\Lambda (\varepsilon )l(\varepsilon )^{2}\,{\mathrm{\exp }}\left( 2\right) }{G(\varepsilon )q(\varepsilon )^{2}f(\varepsilon )^{2}g(\varepsilon )^{2}}}\right) +1\right] .
\label{root of free}$$ Using the criterion introduced above for obtaining the phase transition point from the free energy, one can show that the positive valued extremum point of the free energy is obtained as $$r_{_{F,ext}}={\frac{\sqrt{\Lambda (\varepsilon )G(\varepsilon )}g(\varepsilon )f(\varepsilon )q(\varepsilon )}{\Lambda (\varepsilon )}},$$which is exactly the phase transition point obtained for the heat capacity. Therefore, the divergences of the heat capacity and the extrema of the free energy coincide. The high energy limit of the free energy is governed by the electric charge term; but here, similar to the case of the internal energy, it is possible to cancel the effects of the charge term by setting $l(\varepsilon)=r_{+}$. On the contrary, the leading order of the asymptotic behavior of the free energy is the cosmological constant term. In both regimes, the effects of gravity's rainbow can be observed through the coupling of the energy functions to the different parameters. It is worthwhile to mention that the free energy is independent of the generalization to massive gravity; in other words, the free energy (the energy which can be converted to work) is independent of the graviton's mass. Using the obtained extremum (critical horizon radius), one can obtain the internal energy, temperature and free energy at the phase transition point as $$\begin{aligned} M_{_{Phase\text{ }Transition}} &=&\frac{\,q(\varepsilon )}{8g(\varepsilon )\Lambda (\varepsilon )}\left( m(\varepsilon )^{2}c(\varepsilon )c_{1}(\varepsilon )\sqrt{\Lambda (\varepsilon )G(\varepsilon )}-f(\varepsilon )G(\varepsilon )q(\varepsilon )g(\varepsilon )\Lambda (\varepsilon )\right. \nonumber \\ &&\left.
-\left[ 1+2\,\ln \left( {\frac{\sqrt{\Lambda (\varepsilon )G(\varepsilon )}g(\varepsilon )f(\varepsilon )q(\varepsilon )}{\Lambda (\varepsilon )l(\varepsilon )}}\right) \right] \right) ,  \label{Phase mass} \\ T_{_{Phase\text{ }Transition}} &=&\,{\frac{m(\varepsilon )^{2}c(\varepsilon )c_{1}(\varepsilon )\sqrt{\Lambda (\varepsilon )G(\varepsilon )}-4\,f(\varepsilon )G(\varepsilon )q(\varepsilon )g(\varepsilon )\Lambda (\varepsilon )}{4\pi \,f(\varepsilon )g(\varepsilon )\sqrt{\Lambda (\varepsilon )G(\varepsilon )}}},  \label{phase temp} \\ F_{_{Phase\text{ }Transition}} &=&\frac{\,f(\varepsilon )G(\varepsilon )q(\varepsilon )^{2}}{8}\left[ 3-2\,\ln \left( {\frac{\sqrt{\Lambda (\varepsilon )G(\varepsilon )}g(\varepsilon )f(\varepsilon )q(\varepsilon )}{\Lambda (\varepsilon )l(\varepsilon )}}\right) \right] ,  \label{phase free}\end{aligned}$$ where the cosmological constant should be taken positive in order to obtain real valued quantities.

### Phase diagrams

In order to complete our discussion of the thermodynamical structure of these black holes, we have plotted a series of diagrams for the mass/internal energy (Figs. \[Fig3\] and \[Fig4\]), the temperature and heat capacity (Figs. \[Fig5\] and \[Fig6\]) and the free energy (Fig. \[Fig7\]) for the two cases of dS and adS spacetime. In Ref. [@Mamasani], it was shown that, in order to remove the ensemble dependency, $l(\varepsilon )$, which was inserted for the sake of a dimensionless argument, should be related to the cosmological constant by $$\Lambda (\varepsilon )=\pm \frac{1}{l(\varepsilon )},  \label{Lam}$$in which the positive branch corresponds to the dS spacetime while the negative one corresponds to the adS solutions. Hereafter, we use Eq. (\[Lam\]) to plot the phase diagrams.
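Before turning to the diagrams, Eq. (\[phase temp\]) admits a consistency check: evaluating the heat-capacity numerator factor $2\Lambda r_{+}^{2}-m^{2}cc_{1}r_{+}+2Gq^{2}f^{2}g^{2}$ at the critical radius reproduces the phase transition temperature. The proportionality factor used below is our own bookkeeping, not taken from the paper; the parameter values are the illustrative dS inputs of table \[tab1\]:

```python
import math

# Illustrative dS parameter values from table 1:
# Lambda = 1, G = 1, q = 0.5, f = 0.9, g = 1.9, m = 1, c = c1 = 2
Lam, G, q, f, g = 1.0, 1.0, 0.5, 0.9, 1.9
m2cc1 = 1.0**2 * 2.0 * 2.0
A = G * q**2 * f**2 * g**2
sLG = math.sqrt(Lam * G)

r_c = sLG * g * f * q / Lam                    # critical horizon radius

# Eq. (phase temp):
T_phase = (m2cc1 * sLG - 4.0 * f * G * q * g * Lam) / (4.0 * math.pi * f * g * sLG)

# Same quantity from the heat-capacity numerator factor evaluated at r_c
# (the proportionality factor here is an assumed bookkeeping convention):
num_CQ = 2.0 * Lam * r_c**2 - m2cc1 * r_c + 2.0 * A
T_check = -Lam * num_CQ / (4.0 * math.pi * f**2 * g**2 * q * sLG)
```

The agreement of the two expressions confirms that the critical radius of the free energy sits exactly where the bound-point factor of the temperature is evaluated in Eq. (\[phase temp\]).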
$\begin{array}{ccc} \epsfxsize=5cm \epsffile{M-AdS-diff-m.eps} & \epsfxsize=5cm \epsffile{M-AdS-diff-fe.eps} & \epsfxsize=5cm \epsffile{M-AdS-diff-q.eps} \end{array}$

$\begin{array}{ccc} \epsfxsize=5cm \epsffile{M-dS-diff-m.eps} & \epsfxsize=5cm \epsffile{M-dS-diff-fe.eps} & \epsfxsize=5cm \epsffile{M-dS-diff-q.eps} \end{array}$

$\begin{array}{ccc} \epsfxsize=5cm \epsffile{CQ-T-AdS-diff-m.eps} & \epsfxsize=5cm \epsffile{CQ-T-AdS-diff-fe.eps} & \epsfxsize=5cm \epsffile{CQ-T-AdS-diff-q.eps} \end{array}$

$\begin{array}{ccc} \epsfxsize=5cm \epsffile{CQ-T-dS-diff-m.eps} & \epsfxsize=5cm \epsffile{CQ-T-dS-diff-fe.eps} & \epsfxsize=5cm \epsffile{CQ-T-dS-diff-q.eps} \end{array}$

$\begin{array}{ccc} \epsfxsize=6cm \epsffile{F-AdS-diff-fe.eps} & \epsfxsize=6cm \epsffile{F-AdS-diff-q.eps} & \\ \epsfxsize=6cm \epsffile{F-dS-diff-fe.eps} & \epsfxsize=6cm \epsffile{F-dS-diff-q.eps} & \end{array}$

Studying the mass/internal energy diagrams for adS black holes shows that, depending on the choice of parameters, this quantity can have a minimum. This minimum can lie at negative mass/internal energy, in which case two roots exist with a region of negative mass/internal energy between them. The exception is the absence, or small values, of the electric charge: in these cases the black holes have a finite mass in the limit of vanishing horizon radius (see Fig. \[Fig3\] for more details). As for the dS case, the mass/internal energy is a decreasing function of the horizon radius with one root. The only exception is the absence of electric charge, in which case a maximum is formed for the mass/internal energy with one root (see Fig. \[Fig4\] for more details). For dS black holes, the region of positive mass/internal energy is located before the root. The temperature and heat capacity have the same roots. Before the root, for adS black holes, the temperature and heat capacity are negative and the solutions are non-physical.
After the root, both the temperature and heat capacity are positive valued, and the solutions are physical and enjoy thermal stability. Interestingly, in the absence of electric charge, the temperature and heat capacity are positive and non-zero for vanishing horizon radius, which confirms our earlier discussion regarding the existence of a remnant in these quantities (see Fig. \[Fig5\] for more details). For dS black holes, the behaviors of the temperature and heat capacity are completely different. Here, the temperature has one maximum. If the maximum is located at a positive temperature, the heat capacity (and temperature) will have two roots; between these roots the temperature is positive and hence the solutions are physical. Otherwise, the temperature is negative and the solutions are non-physical. At the maximum of the temperature, the heat capacity diverges, which marks a phase transition point. The phase transition is between large and small black holes. This shows that thermally stable black holes exist only between the smaller root and the divergence point. The only exception to the presence of a maximum in the temperature is the absence of electric charge. In that case, the temperature is a decreasing function of the horizon radius with one root, and in the positive region of the temperature the heat capacity is negative; therefore, the solutions are thermally unstable (see Fig. \[Fig6\] for more details). Finally, for the adS case, the free energy is a strictly decreasing function of the horizon radius. The only exception is the absence of electric charge, where the free energy is negative valued without any root (see the upper panels of Fig. \[Fig7\] for more details). For the dS case, a minimum is formed in the free energy. This minimum represents the point at which the black holes undergo a second order phase transition. The only exception, again, is the absence of electric charge.
In this case, the free energy is an increasing function of the horizon radius (see the lower panels of Fig. \[Fig7\] for more details).

Geometrical thermodynamics
==========================

In this section, we employ the GTs approach to investigate the thermodynamical properties of the black holes by using the so-called HPEM metric. Applying the GTs approach, we can extract information regarding the thermodynamical behavior of the system by studying the Ricci scalar of the constructed phase space. In this method, the phase transition and bound points should be represented as divergences of the Ricci scalar. Recent studies in the context of GTs approaches to black hole thermodynamics have shown that the Ricci scalars of the Weinhold, Ruppeiner and Quevedo metrics may exhibit extra divergences which do not match the bound points and the phase transitions [@HPEMI; @HPEMII; @HPEMIII; @HPEMIV]. In other words, cases of mismatch between the divergences of the Ricci scalar and the mentioned points (bound and phase transition points), as well as extra divergences unrelated to these points, were reported [@HPEMI; @HPEMII; @HPEMIII; @HPEMIV]. In order to overcome these shortcomings of the Weinhold, Ruppeiner and Quevedo metrics, the HPEM method was introduced, and it was shown that the specific structure of this metric provides satisfactory results for the GTs of different classes of black holes. In addition, this metric contains information which enables one to distinguish the divergences related to the bound points from those corresponding to the phase transition points. The HPEM metric of a charged black hole has the following form $$ds^{2}=S\frac{M_{S}}{M_{QQ}^{3}}\left( -M_{SS}dS^{2}+M_{QQ}dQ^{2}\right) ,$$in which $M_{X}=\partial M/\partial X$ and $M_{XX}=\partial ^{2}M/\partial X^{2}$.
The denominator of the Ricci scalar of this phase space is [@HPEMI] $$\begin{aligned} \text{denominator}(\mathcal{R}) &=&2S^{3}M_{SS}^{2}M_{S}^{3}, \nonumber \\ && \nonumber \\ &=&\frac{\pi ^{10}G(\varepsilon )^{6}f(\varepsilon )}{128g(\varepsilon )^{7}}\left[ \Lambda (\varepsilon )r_{+}^{2}-f(\varepsilon )^{2}g(\varepsilon )^{2}G(\varepsilon )q(\varepsilon )^{2}\right] ^{2} \nonumber \\ && \nonumber \\ &&\times \left[ 2\Lambda (\varepsilon )r_{+}^{2}-m(\varepsilon )^{2}c(\varepsilon )c_{1}(\varepsilon )r_{+}+2f(\varepsilon )^{2}g(\varepsilon )^{2}G(\varepsilon )q(\varepsilon )^{2}\right] ^{3},\end{aligned}$$and, using Eq. (\[heat\]), we can rewrite the above equation in the following form $$\text{denominator}(\mathcal{R})=\frac{\pi ^{7}G(\varepsilon )^{6}f(\varepsilon )}{2048r_{+}^{3}g(\varepsilon )^{9}}\left[ \text{denominator}\left( C_{Q}\right) \right] ^{2}\times \left[ \text{numerator}\left( C_{Q}\right) \right] ^{3}.  \label{denomR}$$ Comparing Eq. (\[denomR\]) with the obtained heat capacity (\[heat\]), it is evident that the bound points and phase transition points of the heat capacity match the divergences of the Ricci scalar of the HPEM metric for different parameters. In order to have a better picture, we employ the HPEM metric, present table \[tab1\] and plot two diagrams (Figs. \[Fig8\] and \[Fig9\]). In table \[tab1\], $R_{\infty }$ and $C_{\infty }$ denote, respectively, the divergences of the Ricci scalar and of the heat capacity, while $C_{0}$ and $T_{0}$ are the roots of the heat capacity and temperature, respectively. A comparison of Figs. \[Fig8\] and \[Fig9\] with the diagrams plotted in Figs. \[Fig5\] and \[Fig6\] (or see table \[tab1\] for more details) shows that all the bound and phase transition points match the divergences of the Ricci scalar of the HPEM metric for different parameters.
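As a numerical illustration of Eq. (\[denomR\]), the sketch below (again with the illustrative dS parameter values of table \[tab1\]) checks that the denominator of the Ricci scalar vanishes, so that $\mathcal{R}$ diverges, both at the divergence point of the heat capacity and at the roots of the temperature:

```python
import math

# Illustrative dS parameter values from table 1:
# Lambda = 1, G = 1, q = 0.5, f = 0.9, g = 1.9, m = 1, c = c1 = 2
Lam, G, q, f, g = 1.0, 1.0, 0.5, 0.9, 1.9
m2cc1 = 1.0**2 * 2.0 * 2.0
A = G * q**2 * f**2 * g**2

def num_CQ(r):      # numerator factor of the heat capacity
    return 2.0 * Lam * r**2 - m2cc1 * r + 2.0 * A

def den_CQ(r):      # denominator factor of the heat capacity
    return Lam * r**2 - A

def den_Ricci(r):
    """Structure of Eq. (denomR): [den(C_Q)]^2 x [num(C_Q)]^3,
    dropping the positive prefactor."""
    return den_CQ(r)**2 * num_CQ(r)**3

r_div = math.sqrt(Lam * G) * g * f * q / Lam               # C_Q divergence
disc = m2cc1**2 - 16.0 * Lam * A
r_bounds = sorted((m2cc1 + s * math.sqrt(disc)) / (4.0 * Lam)
                  for s in (1.0, -1.0))                    # temperature roots
divergences = sorted(r_bounds + [r_div])
```

The three divergence radii reproduce the triple $R_{\infty }=(0.48137,0.85500,1.51862)$ listed in table \[tab1\] for this parameter set.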
These coincidences between the divergences of the Ricci scalar of the HPEM metric and the bound and phase transition points of the heat capacity and the temperature confirm the validity of this thermodynamical metric. One can therefore use this method as an independent approach for studying the thermodynamical properties of black holes. Another interesting property of the HPEM metric is the sign of its Ricci scalar around the bound and phase transition points, which depends on the type of point. As one can see, the sign of the Ricci scalar changes around the bound point, while around the phase transition point it does not (see Figs. \[Fig8\] and \[Fig9\] for more details). Therefore, by studying the sign of the Ricci scalar of the HPEM metric, we can characterize the type of divergence. On the other hand, in GTs, the sign of the Ricci scalar determines whether the system has attractive (negative sign) or repulsive (positive sign) interactions around the bound and phase transition points. Here, we see that before the bound point the system has repulsive interactions, and upon crossing the bound point the interactions become attractive (see Figs. \[Fig8\] and \[Fig9\] for more details). On the other hand, at the phase transition point, the sign of the Ricci scalar is positive (see Figs. \[Fig8\] and \[Fig9\] for more details). It is notable that this information cannot be extracted from the temperature and heat capacity of the system. Therefore, employing the HPEM metric provides extra information regarding the nature of the interactions around the bound and phase transition points.
$\begin{array}{ccc} \epsfxsize=5cm \epsffile{GTDLambda1diffm.eps} & \epsfxsize=5cm \epsffile{GTDLambda1difff.eps} & \epsfxsize=5cm \epsffile{GTDLambda1diffq.eps} \end{array}$

$\begin{array}{ccc} \epsfxsize=5cm \epsffile{GTDLambda-1diffm.eps} & \epsfxsize=5cm \epsffile{GTDLambda-1difff.eps} & \epsfxsize=5cm \epsffile{GTDLambda-1diffq.eps} \end{array}$

  $\Lambda(\varepsilon)$   $f(\varepsilon)$   $m(\varepsilon)$   $q(\varepsilon)$   $R_{\infty}$                  $C_{\infty}$   $C_{0}$               $T_{0}$
  ------------------------ ------------------ ------------------ ------------------ ----------------------------- -------------- --------------------- ---------------------
  $-1$                     $0.9$              $0$                $0.5$              $0.85500$                     $-$            $0.85500$             $0.85500$
  $-1$                     $0.9$              $0.92$             $0.5$              $0.35668$                     $-$            $0.35668$             $0.35668$
  $-1$                     $0.9$              $1$                $0.5$              $0.31568$                     $-$            $0.31568$             $0.31568$
  $-1$                     $0.9$              $1$                $0.5$              $0.31568$                     $-$            $0.31568$             $0.31568$
  $-1$                     $1.07$             $1$                $0.5$              $0.42592$                     $-$            $0.42592$             $0.42592$
  $-1$                     $1.5$              $1$                $0.5$              $0.74086$                     $-$            $0.74086$             $0.74086$
  $-1$                     $0.9$              $1$                $0$                $-$                           $-$            $-$                   $-$
  $-1$                     $0.9$              $1$                $0.5$              $0.31568$                     $-$            $0.31568$             $0.31568$
  $-1$                     $0.9$              $1$                $1.1$              $1.13029$                     $-$            $1.13029$             $1.13029$
  $-1$                     $0.9$              $1$                $1.2$              $1.28269$                     $-$            $1.28269$             $1.28269$
  $1$                      $0.9$              $0$                $0.5$              $0.85500$                     $0.85500$      $-$                   $-$
  $1$                      $0.9$              $0.92$             $0.5$              $0.85500$                     $0.85500$      $-$                   $-$
  $1$                      $0.9$              $1$                $0.5$              $(0.48137,0.85500,1.51862)$   $0.85500$      $(0.48137,1.51862)$   $(0.48137,1.51862)$
  $1$                      $0.9$              $1$                $0.5$              $(0.48137,0.85500,1.51862)$   $0.85500$      $(0.48137,1.51862)$   $(0.48137,1.51862)$
  $1$                      $1.07$             $1$                $0.5$              $1.01650$                     $1.01650$      $-$                   $-$
  $1$                      $1.5$              $1$                $0.5$              $1.42500$                     $1.42500$      $-$                   $-$
  $1$                      $0.9$              $1$                $0$                $2.00000$                     $-$            $2.00000$             $2.00000$
  $1$                      $0.9$              $1$                $0.5$              $(0.48137,0.85500,1.51862)$   $0.85500$      $(0.48137,1.51862)$   $(0.48137,1.51862)$
  $1$                      $0.9$              $1$                $1.1$              $1.88100$                     $1.88100$      $-$                   $-$
  $1$                      $0.9$              $1$                $1.2$              $2.05200$                     $2.05200$      $-$                   $-$
  ------------------------ ------------------ ------------------ ------------------ ----------------------------- -------------- --------------------- ---------------------
  : Roots and divergences of the Ricci scalar, heat capacity and temperature for $G(\varepsilon)=l(\varepsilon)=1$, $g(\varepsilon)=1.9$ and $c(\varepsilon)=c_{1}(\varepsilon)=2$.[]{data-label="tab1"}

Conclusions
===========

In this paper, we have considered three dimensional black holes in the presence of massive gravity's rainbow. The solutions were extracted, and the effects of massive gravity and gravity's rainbow on the geometrical structure of the black holes were studied. Furthermore, the conserved and thermodynamical quantities of these black holes were obtained in both the canonical and grand canonical ensembles. It was shown that the existence of massive gravity and the generalization to gravity's rainbow modify the thermodynamical quantities of the black holes. The effects of these generalizations were studied for different thermodynamical quantities. It was shown that, by suitable choices of the parameters, it is possible to obtain a minimum for the mass. This minimum can be located at negative values of the mass/internal energy, which leads to the existence of a region of negative mass/internal energy and two roots. Negative mass/internal energy is not admissible in the classical thermodynamics of black holes; therefore, one can conclude that black hole solutions do not exist in this region, which puts a limitation on the values the different parameters can acquire. Next, the temperature was taken into account, and it was pointed out that this quantity acquires a root. This root separates physical solutions with positive temperature from non-physical ones with negative temperature.
Remarkably, we observed that the existence of the root is restricted by a condition depending on the values of the different parameters. In addition, it was shown that in a specific limit there may exist a temperature remnant for these black holes. In other words, after the evaporation of the black holes, traces of their existence survive in the form of fluctuations in the temperature of the spacetime. Subsequently, the heat capacity was investigated and the possibility of a divergence point was pointed out. This divergence marks the phase transition point of these black holes. The existence of a real valued positive divergence point depends on whether the spacetime is dS or adS. The asymptotic and high energy limits of the heat capacity were studied as well, and it was shown that these limits are governed by the gravity's rainbow and massive gravity generalizations. After that, the free energy was studied and its root and extremum were extracted. It was shown that the extremum of the free energy and the divergence point of the heat capacity coincide. The resulting critical horizon radius was used to obtain the temperature, the mass/internal energy and the free energy at the phase transition. Finally, we used the HPEM metric in the context of GTs in order to study the thermodynamical structure of these black holes. It was shown that this metric can describe the bound points and phase transition points of these black holes. Besides, by studying the sign of the Ricci scalar of the HPEM metric around the bound and phase transition points, it was possible to distinguish repulsive and attractive interactions of the thermodynamical system. The rainbow functions of the metric originate from quantum corrections. The study conducted in this paper showed that taking such corrections into account modifies the thermodynamics of the black holes significantly.
In other words, it was shown that in the semi-classical/quantum regime, the thermodynamics of the black holes is modified to a degree that differs from the classical case. We observed that different orders of the rainbow functions affect the high energy and asymptotic behaviors of the solutions and their leading terms. On the other hand, we found that the largest contribution of the massive gravitons appears for medium black holes. The only cases in which the effects of massive gravity could be observed for small (large) black holes were those of vanishing electric charge (cosmological constant). The results obtained here could be employed to study lattice-like behavior in the context of the AdS/QCD correspondence. In addition, it is possible to employ the results of this paper to investigate the entropy spectrum and quasinormal modes; in fact, it would be interesting to investigate the effects of the energy functions on these quantities. The results of this paper could also be employed in the context of the AdS/CFT correspondence and central charges, especially considering the energy dependence of the constants and its effect on the calculation of the central charges.

We would like to thank the referee for his/her insightful comments. We also thank Shiraz University Research Council. This work has been supported financially by the Research Institute for Astronomy and Astrophysics of Maragha, Iran.

[999]{} N. Jarosik et al., Astrophys. J. Supp. **192**, 14 (2011). K. Hinterbichler, Rev. Mod. Phys. **84**, 671 (2012). C. Deffayet, Phys. Lett. B **502**, 199 (2001). C. Deffayet, G. Dvali and G. Gabadadze, Phys. Rev. D **65**, 044023 (2002). K. Aoki and S. Mukohyama, Phys. Rev. D **94**, 024001 (2016). E. Babichev, L. Marzola, M. Raidal, A. Schmidt-May, F. Urban, H. Veermäe and M. von Strauss, Phys. Rev. D **94**, 084055 (2016). G. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B **485**, 208 (2000). G. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B **484**, 112 (2000). G.
Dvali and G. Gabadadze, Phys. Rev. D **63**, 065007 (2001). G. ’t Hooft, \[arXiv:0708.3184v4\]. C. P. Burgess, P. G. Camara, S. P. de Alwis, S. B. Giddings, A. Maharana, F. Quevedo and K. Suruliz, JHEP **04**, 053 (2008). V. Niarchos,Fortsch. Phys. **57**, 646 (2009). S. H. Hendi, G. H. Bordbar, B. Eslam Panah and S. Panahiyan, \[arXiv:1701.01039\]. A. Bouchareb and G. Clement, Class. Quantum Gravit. **24**, 5581 (2007). F. Capela and P. G. Tinyakov, JHEP **04**, 042 (2011). M. S. Volkov, Lect. Notes Phys. **892**, 161 (2015). E. Babichev and R. Brito, Class. Quantum Grav. **32**, 154001 (2015). S. G. Ghosh, L. Tannukij, P. Wongjun, Eur. Phys. J. C **76**, 119 (2016). M. Zhang and W. B. Liu, \[arXiv:1610.03648\]. Y.P. Hu, X. X. Zeng and H. Q. Zhang, Phys. Lett. B **765**, 120 (2017). S. H. Hendi, R. B. Mann, S. Panahiyan and B. Eslam Panah, Phys. Rev. D **95**, 021501(R) (2017). S. H. Hendi, B. Eslam Panah and S. Panahiyan, JHEP **05**, 029 (2016). T. Katsuragawa, Universe **1**, 158 (2015). T. Katsuragawa and S. Nojiri, Phys. Rev. D **91**, 084001 (2015). A. E. Gumrukcuoglu, S. Kuroyanagi, C. Lin, S. Mukohyama and N. Tanahashi, Class. Quantum Gravit. **29**, 235026 (2012). M. Fierz, Helv. Phys. Acta **12**, 3 (1939). M. Fierz and W. Pauli, Proc. R. Soc. A **173**, 211 (1939). D. G. Boulware and S. Deser, Phys. Rev. D **6**, 3368 (1972). D. G. Boulware and S. Deser, Phys. Lett. B **40**, 227 (1972). C. de Rham and G. Gabadadze, Phys. Rev. D **82**, 044020 (2010). C. de Rham, G. Gabadadze and A. J. Tolley, Phys. Rev. Lett. **106**, 231101 (2011). S. F. Hassan and R. A. Rosen, Phys. Rev. Lett. **108**, 041101 (2012). S. F. Hassan, R. A. Rosen and A. Schmidt-May, JHEP **02**, 026 (2012). D. Vegh, \[arXiv:1301.0537\]. G. Amelino-Camelia, Int. J. Mod. Phys. D **11**, 35 (2002). G. Amelino-Camelia, Phys. Lett. B **510**, 255 (2001). J. Magueijo and L. Smolin, Phys. Rev. Lett. **88**, 190403 (2002). J. Kowalski-Glikman, Lect. Notes Phys. **669**, 131 (2005). J. 
Magueijo and L. Smolin, Class. Quantum Gravit. **21**, 1725 (2004). S. H. Hendi and M. Faizal, Phys. Rev. D **92**, 044027 (2015). S. H. Hendi, M. Faizal, B. Eslam Panah and S. Panahiyan, Eur. Phys. J. C **76**, 296 (2016). S. H. Hendi, S. Panahiyan, B. Eslam Panah, M. Faizal and M. Momennia, Phys. Rev. D **94**, 024028 (2016). S. H. Hendi, B. Eslam Panah, S. Panahiyan and M. Momennia, Adv. High Energy Phys. **2016**, 9813582 (2016). A. Chatrabhuti, V. Yingcharoenrat and P. Channuie, Phys. Rev. D **93**, 043515 (2016). P. Rudra, M. Faizal and A. F. Ali, Nucl. Phys. B **909**, 725 (2016). G. Yadav, B. Komal and B. R. Majhi, \[arXiv:1605.01499\]. Y. Gim and W. Kim, Eur. Phys. J. C **76**, 166 (2016). A. F. Ali, M. Faizal and M. M. Khalil, Phys. Lett. B **743**, 295 (2015). A. F. Ali, M. Faizal and M. M. Khalil, JHEP **12**, 159 (2014). A. Awad, A. F. Ali and B. Majumder, JCAP **10**, 052 (2013). S. H. Hendi, M. Momennia, B. Eslam Panah and M. Faizal, Astrophys. J. **827**, 153 (2016). S. H. Hendi, G. H. Bordbar, B. Eslam Panah and S. Panahiyan, JCAP **09**, 013 (2016). Z. W. Feng, S. Z. Yang, H. L. Li and X. T. Zu, \[arXiv:1608.06824\]. Y. W. Kim, S. K. Kim and Y. J. Park, Eur. Phys. J. C **76**, 557 (2016). M. Khodadi, K. Nozari and B. Vakili, Gen. Rel. Grav. **48**, 64 (2016). R. Banerjee and R. Biswas, \[arXiv:1610.08090\]. S. Carlip, J. Korean Phys. Soc. **28**, S447 (1995). J. D. Barrow, A. B. Burd and D. Lancaster, Class. Quantum Gravit. **3**, 551 (1986). M. Bañados, C. Teitelboim and J. Zanelli, Phys. Rev. Lett. **69**, 1849 (1992). S. Carlip, Class. Quantum Gravit. **12**, 2853 (1995). A. Ashtekar, Adv. Theor. Math. Phys. **6**, 507 (2002). T. Sarkar, G. Sengupta and B. Nath Tiwari, JHEP **11**, 015 (2006). E. Witten, Adv. Theor. Math. Phys. **2**, 505 (1998). S. Carlip, Class. Quantum Gravit. **22**, R85 (2005). E. Witten, \[arXiv:07063359\]. C. Martinez, C. Teitelboim and J. Zanelli, Phys. Rev. D **61**, 104013 (2000). G. Clement, Phys. Lett. 
B **367**, 70 (1996). R. G. Cai, Y. P. Hu, Q. Y. Pan and Y. L. Zhang, Phys. Rev. D **91**, 024032 (2015). S. H. Hendi, B. Eslam Panah and S. Panahiyan, \[arXiv:1602.01832\]. Z. Y. Tang, C. Y. Zhang, M. Kord Zangeneh, B. Wang and J. Saavedra, \[arXiv:1610.01744\]. M. A. Anacleto, F. A. Brito, E. Passos, A. G. Cavalcanti and J. Spinelly, \[arXiv:1510.08444\]. L. Freidel, J. Kowalski-Glikman and L. Smolin Phys. Rev. D **69**, 044001 (2004). H. J. Matschull and M. Welling, Class. Quantum Gravit. **15**, 2981(1998). A. Blaut, M. Daszkiewicz, J. Kowalski-Glikman and S. Nowak, Phys. Lett. B **582**, 82 (2004). M. Assanioussi, A. Dapor and J. Lewandowski, \[arXiv:1412.6000\]. J. Kowalski-Glikman, \[arXiv: gr-qc/0603022\] E. R. Livine and D. Oriti, JHEP **11**, 050 (2005). L. Freidel and E. R. Livine, Class. Quantum Gravit. **23**, 2021 (2006). E. R. Livine, S. Speziale and J. L. Willis, Phys. Rev. D **75**, 024038 (2007). G. Amelino-Camelia, Symmetry **2**, 230 (2010). G. Amelino-Camelia, M. Arzano, S. Bianco and R. J. Buonocore, Class. Quantum Gravit. **30**, 065012 (2013). G. Ruppeiner, Phys. Rev. E **86**, 021130 (2012). F. Weinhold, J. Chem. Phys. **63**, 2479 (1975). F. Weinhold, J. Chem. Phys. **63**, 2484 (1975). G. Ruppeiner, Phys. Rev. A **20**, 1608 (1979). G. Ruppeiner, Rev. Mod. Phys. **67**, 605 (1995). H. Quevedo, J. Math. Phys. **48**, 013506 (2007). H. Quevedo and A. Sanchez, JHEP **09**, 034 (2008). Y. W. Han and G. Chen, Phys. Lett. B **714**, 127 (2012). A. Bravetti, D. Momeni, R. Myrzakulov and A. Altaibayeva, Adv. High Energy Phys. **2013**, 549808 (2013). M. S. Ma, Phys. Lett. B **735**, 45 (2014). M. A. García-Ariza, M. Montesinos and G. F. T. d. Castillo, Entropy **16**, 6515 (2014). J. L. Zhang, R. G. Cai and H. Yu, JHEP **02**, 143 (2015). J. X. Mo, G. Q. Li and Y. C. Wu, JCAP **04**, 045 (2016). H. Quevedo, M. N. Quevedo and A. Sanchez, Phys. Rev. D **94**, 024057 (2016). S. Soroushfar, R. Saffari and N. Kamvar, Eur. Phys. J. 
C **76**, 476 (2016). S. H. Hendi, S. Panahiyan, B. Eslam Panah and M. Momennia, Eur. Phys. J. C **75**, 507 (2015). S. H. Hendi, S. Panahiyan and B. Eslam Panah, Adv. High Energy Phys. **2015**, 743086 (2015). S. H. Hendi, A. Sheykhi, S. Panahiyan and B. Eslam Panah, Phys. Rev. D **92**, 064028 (2015). S. H. Hendi, S. Panahiyan, B. Eslam Panah and Z. Armanfard, Eur. Phys. J. C **76**, 396 (2016). W. Y. Wen, \[arXiv:1602.08848\]. M. Zhang and W. B. Liu, \[arXiv:1610.03648\]. L. Smolin, Nucl. Phys. B **742**, 142 (2006). R. Garattini and G. Mandanici, Phys. Rev. D **85**, 023507 (2012). O. J. Rosten, Phys. Rep. **511**, 177 (2012). A. Kaya, Phys. Rev. D **87**, 123501 (2013). K. Groh and F. Saueressig, J. Phys. A **43**, 365403 (2010). J. Suresh and V. C. Kuriakose, \[arXiv:1605.00142\]. P. Prasia and V. C. Kuriakose, \[arXiv:1608.05299\]. S. W. Hawking, Phys. Rev. Lett. **26**, 1344 (1971). J. M. Bardeen, B. Carter and S. W. Hawking, Commun. Math. Phys. **31**, 161 (1973). J. D. Beckenstein, Phys. Rev. D **7**, 2333 (1973). S. W. Hawking and C. J. Hunter, Phys. Rev. D **59**, 044025 (1999). C. J. Hunter, Phys. Rev. D **59**, 024009 (1998). S. W. Hawking, C. J. Hunter and D. N. Page, Phys. Rev. D **59**, 044033 (1999). S. H. Hendi, S. Panahiyan and R. Mamasani, Gen. Relativ. Gravit. **47**, 91 (2015). [^1]: email address: hendi@shirazu.ac.ir [^2]: email address: sh.panahiyan@gmail.com [^3]: email address: sudhakerupadhyay@gmail.com [^4]: email address: behzad.eslampanah@gmail.com
--- abstract: 'The luminosity function (LF) for stars is here fitted by a Schechter function and by a Gamma probability density function. The dependence of the number of stars on the distance, both in the low and in the high luminosity regions, requires the inclusion of a lower and an upper boundary in the Schechter and Gamma LFs. Three astrophysical applications for stars are provided: deduction of the parameters at low distances, the behavior of the average absolute magnitude with distance, and the location of the photometric maximum as a function of the selected flux. The use of the truncated LFs allows one to model the Malmquist bias.' address: | Physics Department, via P.Giuria 1,\ I-10125 Turin, Italy author: - Lorenzo Zaninetti title: Standard and Truncated Luminosity Functions for stars in the Gaia Era --- [*Keywords*]{}: stars: fundamental parameters; stars: luminosity function, mass function Introduction ============ The stellar luminosity function (LF) gives the relative numbers of stars of different luminosities in a standard volume of space, usually a cubic parsec. The determination of the LF for stars is complicated at a local level by the presence of five luminosity classes for the stars, as given by the MK system, and by the mass-luminosity relation. The presence of the Malmquist bias [@Malmquist_1920; @Malmquist_1922; @Malmquist_1936] (for an introduction, see section 3.6 in [@Binney1998] or the historical section 2 in [@Butkevich2005]) modifies the distribution in absolute magnitude as a function of the distance and therefore complicates the modeling of the LF for stars. The LF for stars was first fitted by a Gaussian probability density function (PDF) in absolute magnitude, see [@Eddington1914]. In order to deal with the boundaries, a doubly truncated Gaussian in absolute magnitude has been considered, see [@Jaschek1985]. The astronomical derivation of the LF considers a standard volume with a radius of $\approx 20\,pc$. 
As an example, [@Wielen1974] derived the first local LF for stars in a spherical volume having a radius of $22\,pc$, and more recently [@Flynn2006] measured the volume luminosity density and the surface luminosity density generated by the Galactic disc, using accurate data on the local luminosity function and the vertical structure of the disc. A new sample of stars, representative of the solar neighborhood LF, has been constructed from the Hipparcos (HIP) catalogue and the Fifth Catalogue of Nearby Stars, see [@Just2015]. From the previous analysis, the following questions can be raised. - Is it possible to model the LF for stars with the Schechter function and the Gamma LF? - Is it possible to model the absolute magnitude-distance plane with the truncated Schechter function or the truncated Gamma LF? - Is it possible to model the observational maximum in the number of stars and the average number of stars versus distance at a given flux? The Gaia Catalog ================ A great number of stars, $\approx$ two million, with mean apparent magnitude in the G-band, flux $f$ expressed in electrons per second (e-/s), and parallax are available in the Gaia Data Release 1 (Gaia DR1) astrometric catalogs, see [@GAIA2016a; @GAIA2016b], with data at <http://vizier.u-strasbg.fr/viz-bin/VizieR> and specific Table I/337/tgasptyc. The above catalog gives the stellar parallax, the G-band flux, the G-band magnitude, the Tycho-2 or HIP BT magnitude and the Tycho-2 or HIP VT magnitude. As pointed out by [@Stassun2016], there is an average offset of $-0.25 \pm 0.05$ mas in the Gaia parallaxes and therefore we increased the parallaxes by 0.25 mas. According to Gaia DR1, the luminosity as deduced from the flux will be expressed in Gaia units, namely $e-/s\,pc^2$. The $G$ magnitude, see [@GAIA2017b], is $$G = -2.5 \log (f) + zp \quad ,$$ where $zp$ is the photometric zero point derived as in [@GAIA2016c]; we found numerically $zp=25.52$. 
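The flux-to-magnitude conversion and the parallax offset correction above can be collected in a short Python sketch. This is an illustration only, not Gaia pipeline code; the function names are ours, while $zp=25.52$ and the $0.25$ mas offset are the values quoted in the text.

```python
import math

ZP = 25.52           # photometric zero point found numerically in the text
OFFSET_MAS = 0.25    # average Gaia DR1 parallax offset reported by Stassun et al.

def g_magnitude(flux):
    """Apparent G magnitude from the G-band flux in e-/s: G = -2.5 log10(f) + zp."""
    return -2.5 * math.log10(flux) + ZP

def distance_pc(parallax_mas):
    """Distance in pc after adding the 0.25 mas offset to the catalogued parallax."""
    return 1000.0 / (parallax_mas + OFFSET_MAS)

def absolute_g(flux, parallax_mas):
    """Absolute magnitude M_G = m_G - 5 log10(d) + 5, with d from the corrected parallax."""
    d = distance_pc(parallax_mas)
    return g_magnitude(flux) - 5.0 * math.log10(d) + 5.0
```

For instance, a catalogued parallax of 99.75 mas is corrected to 100 mas, i.e. a distance of 10 pc, for which the distance modulus vanishes and $M_G = m_G$.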
The distribution of all Gaia DR1 sources in the sky is illustrated in Figure \[gaia\_mollweide\]. ![ Mollweide projection of the sky density of all Gaia DR1 sources in Galactic coordinates. []{data-label="gaia_mollweide"}](f01.eps){width="7cm"} The observational Hertzsprung-Russell (H-R) diagram in $M_G$, as obtained from the Gaia DR1 parallaxes, versus $(B-V)$, evaluated as BT-VT, is presented in Figure \[gaia\_hr\] and in a contour density version in Figure \[gaia\_hr\_contour\], see also figure 1 in [@GAIA2017a]. ![ $M_{\mathrm G}$ against $(B-V)$, evaluated as BT-VT (H-R diagram), in the first 100 pc. []{data-label="gaia_hr"}](f02.eps){width="7cm"} ![ Contour density of stars for the H-R diagram in a logarithmic scale. []{data-label="gaia_hr_contour"}](f03.eps){width="7cm"} The distance modulus is $$m_G - M_G = 5\,\log (d) -5 \quad , \label{distancemodulusstars}$$ where $m_G$ is the apparent magnitude in the G-band, $M_G$ is the absolute magnitude in the G-band and $d$ is the distance in pc. Isolating $M_G$ in the above equation, we obtain the theoretical curve for the upper observable absolute magnitude $$M_G =-5\,\log (d) +5 +m_G \quad , \label{mabsgupper}$$ once the maximum apparent magnitude in the G-band, $m_{lim}$, is inserted, i.e. $m_G=12.71$. Figure \[gaia\_lower\] presents the absolute magnitude as a function of the distance as well as the theoretical upper curve in magnitude. ![ $M_{\mathrm G}$ versus distance in pc in the first 100 pc (green points) and the theoretical upper curve in magnitude (the lower curve in the plot, red line) when $m_G$=12.71. 
[]{data-label="gaia_lower"}](f04.eps){width="7cm"} The completeness of the sample can be evaluated by the following relationship for the absolute magnitude $$M_G= m_{lim} - 5\,\log \left( d \right) + 5 \quad .$$ On inserting in the above formula $m_{lim}$=12.71, we obtain a numerical relationship between the selected absolute magnitude and the distance over which the sample is complete, see Figure \[sample\_complete\]. ![ The relationship of completeness for $M_{\mathrm G}$ versus distance in pc. []{data-label="sample_complete"}](f05.eps){width="7cm"} In the case considered here the absolute magnitude covers the range $[3 , 12 \, mag ]$ and therefore we deal with a complete sample. Standard LFs ============ Here we introduce an algorithm to build the LF, the statistical tests adopted, as well as the Schechter and Gamma LFs. The derived parameters for the local LF will be applied in Section \[secaverage\] according to the general principle that the LF is the same everywhere but the upper observable absolute magnitude decreases with distance. The astronomical LF ------------------- A LF for stars is built according to the following points 1. A standard distance is chosen, i.e. $20\,pc$, 2. The Gaia stars are selected according to the following ranges of existence: $-5 \leq M_V \leq 15$, where $M_V$ is the absolute visual magnitude, and $-0.3 \leq (B-V) \leq 3$, 3. We build a histogram with bins 1 mag wide, 4. We divide the obtained frequencies by the involved volume, 5. We do not apply the $1/V_{a}$ method because our sample is complete at $20\,pc$, 6. The error of the LF is evaluated as the square root of the frequencies divided by the involved volume. The LF for the Gaia stars is reported in Figure \[gaia\_lf\_due\] together with the main sequence LF in the V band as extracted from Table 2, column 9, in [@Just2015]. 
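The six points above can be sketched as follows. This is an illustrative re-implementation under the stated assumptions (complete sample inside a sphere, 1 mag bins, Poisson errors); the function name `build_lf` and the default parameters are ours.

```python
import math

def build_lf(abs_mags, radius_pc=20.0, m_min=-5.0, m_max=15.0):
    """LF of a complete sample inside a sphere of radius radius_pc:
    bins 1 mag wide, frequencies divided by the sampled volume
    (no 1/V_a correction), errors sqrt(n) divided by the volume."""
    volume = 4.0 / 3.0 * math.pi * radius_pc ** 3      # pc^3
    nbins = int(round(m_max - m_min))                  # bins 1 mag wide
    counts = [0] * nbins
    for m in abs_mags:
        if m_min <= m < m_max:
            counts[int(m - m_min)] += 1                # index of the 1 mag bin
    centers = [m_min + 0.5 + i for i in range(nbins)]
    lf = [n / volume for n in counts]                  # stars pc^-3 mag^-1
    lf_err = [math.sqrt(n) / volume for n in counts]   # Poisson error
    return centers, lf, lf_err
```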
![ LF in the V band main sequence, empty stars, and Gaia’s LF, filled circles. []{data-label="gaia_lf_due"}](f06.eps){width="7cm"} Statistical Tests ----------------- The merit function $\chi^2$ is computed as $$\chi^2 = \sum_{j=1}^n ( \frac {LF_{theo} - LF_{astr} } {\sigma_{LF_{astr}}})^2 \quad , \label{chisquare}$$ where $n$ is the number of bins for the LF of the stars and the two indices $theo$ and $astr$ stand for ‘theoretical’ and ‘astronomical’, respectively. The reduced merit function $\chi_{red}^2$ is evaluated by $$\chi_{red}^2 = \chi^2/NF \quad, \label{chisquarereduced}$$ where $NF=n-k$ is the number of degrees of freedom and $k$ is the number of parameters. The goodness of the fit can be expressed by the probability $Q$, see equation 15.2.12 in [@press], which involves the number of degrees of freedom and $\chi^2$. According to [@press], the fit “may be acceptable” if $Q \geq 0.001$. The Akaike information criterion (AIC), see [@Akaike1974], is defined by $$AIC = 2k - 2\ln(L) \quad,$$ where $L$ is the likelihood function and $k$ is the number of free parameters in the model. We assume a Gaussian distribution for the errors, and the likelihood function can be derived from the $\chi^2$ statistic, $L \propto \exp (- \frac{\chi^2}{2} ) $, where $\chi^2$ has been computed by Equation (\[chisquare\]), see [@Liddle2004], [@Godlowski2005]. Now the AIC becomes $$AIC = 2k + \chi^2 \quad. \label{AIC}$$ The Schechter LF ---------------- Let $L$, the luminosity of a star, be defined in $[0, \infty]$. The Schechter LF, $\Phi$, originally introduced for galaxies, see [@schechter], is $$\Phi (L;\Phi^*,\alpha,L^*) dL = (\frac {\Phi^*}{L^*}) (\frac {L}{L^*})^{\alpha} \exp \bigl ( {- \frac {L}{L^*}} \bigr ) dL \quad, \label{lf_schechter}$$ where $\alpha$ sets the slope for low values of $L$, $L^*$ is the characteristic luminosity, and $\Phi^*$ represents the number of stars per pc$^3$. 
The normalization is $$\int_0^{\infty} \Phi (L;\Phi^*,\alpha,L^*) dL = \rm \Phi^*\, \Gamma \left( \alpha+1 \right) \quad , \label{norma_schechter}$$ where $$\rm \Gamma \, (z ) =\int_{0}^{\infty}e^{{-t}}t^{{z-1}}dt \quad ,$$ is the Gamma function. The average luminosity, $ { \langle L \rangle } $, is $${ \langle L \rangle } = \rm L^* \,{\rm \Phi^* }\,\Gamma \left( \alpha+2 \right) \quad . \label{ave_schechter}$$ An equivalent form in absolute magnitude of the Schechter LF is $$\begin{aligned} \Phi (M;\Phi^*,\alpha,M^*)dM= \nonumber\\ 0.921 \Phi^* 10^{0.4(\alpha +1 ) (M^*-M)} \exp \bigl ({- 10^{0.4(M^*-M)}} \bigr) dM \, , \label{lfstandard}\end{aligned}$$ where $M^*$ is the characteristic magnitude. The resulting fitted curve is displayed in Figure \[gaia\_lf\_schechter\_der\] with parameters as in Table \[schechterfit\]. ![ The observed LF for stars, empty stars with error bar, and the fit by the Schechter LF when the distance covers the range $[0\,pc , 20 \, pc ]$. []{data-label="gaia_lf_schechter_der"}](f07.eps){width="7cm"} $$\begin{array}{ccccccc} \hline \hline \noalign{\smallskip} M^*\, (mag) & \Phi^* \,(pc^{-3}) & \alpha & \chi^2 & \chi_{red}^2 & Q & AIC \\ \noalign{\smallskip} \hline 5.59 & 0.0085 & -0.26 & 166.97 & 33.39 & 3.23\,10^{-34} & 172.97 \\ \hline \hline \end{array}$$ The Gamma LF ------------ The [*Gamma* ]{} LF, defined in $[0, \infty]$, is $$f(L;\Psi^*,L^*,c) = \Psi^* \frac { \left( {\frac {L}{L^*}} \right) ^{c-1}{{\rm e}^{-{\frac {L}{L^*}}}} } { L^*\Gamma \left( c \right) } \label{Gammastandard}$$ where $\Psi^*$ is the total number of stars per pc$^3$, $$\mathop{\Gamma\/}\nolimits\!\left(z\right) =\int_{0}^{\infty}e^{{-t}}t^{{z-1}}dt \quad ,$$ where $L^*\, > \, 0$ is the scale and $c > \, 0$ is the shape, see formula (17.23) in [@univariate1]. 
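The magnitude form of the Schechter LF and the fit statistics of the previous subsection can be sketched in Python. The code below is an illustrative re-implementation, not the code used for the fits; the function names are ours, and the parameter values in the comment are only those quoted in Table \[schechterfit\].

```python
import math

def schechter_mag(M, phi_star, alpha, M_star):
    """Schechter LF in absolute magnitude:
    0.921 Phi* 10^{0.4(alpha+1)(M*-M)} exp(-10^{0.4(M*-M)})."""
    x = 10.0 ** (0.4 * (M_star - M))   # the ratio L/L*
    return 0.921 * phi_star * x ** (alpha + 1.0) * math.exp(-x)

def fit_stats(lf_astr, sigma, lf_theo, k):
    """chi^2, reduced chi^2 = chi^2/(n-k), and AIC = 2k + chi^2
    for a fit with k free parameters over n bins."""
    chi2 = sum(((t - a) / s) ** 2
               for a, s, t in zip(lf_astr, sigma, lf_theo))
    nf = len(lf_astr) - k              # degrees of freedom
    return chi2, chi2 / nf, 2 * k + chi2

# Example (parameters of Table [schechterfit]): at M = M* the ratio L/L* is 1,
# so the LF reduces to 0.921 Phi* exp(-1).
```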
The average luminosity is $$\langle f(L;\Psi^*,L^*,c) \rangle = \Psi^* L^*c \quad.$$ The change of parameter $(c-1)=\alpha$ allows obtaining the same scaling as for the Schechter LF (\[lf\_schechter\]), for more details, see [@Zaninetti2016a]. The version in absolute magnitude is $$\begin{aligned} \rm \Psi (M;\Psi^*,c,M^*)dM= \nonumber \\ \frac { 0.4\,{\it \Psi^*}\, \left( {\frac {{10}^{- 0.4\,M}}{{10}^{- 0.4\,{M}^ {{\it star}}}}} \right) ^{c-1}{{\rm e}^{-{\frac {{10}^{- 0.4\,M}}{{10} ^{- 0.4\,{M}^{{\it star}}}}}}}{10}^{- 0.4\,M}\ln \left( 10 \right) } { {10}^{- 0.4\,{M}^{{\it star}}}\Gamma \left( c \right) } dM \quad .\end{aligned}$$ The resulting fitted curve is displayed in Figure \[gaia\_lf\_gammac\] with parameters as in Table \[gammacfit\]. ![ The observed LF for stars, empty stars with error bar, and the fit by the Gamma LF when the distance covers the range $[0\,pc , 20 \, pc ]$. []{data-label="gaia_lf_gammac"}](f08.eps){width="7cm"} $$\begin{array}{ccccccc} \hline \hline \noalign{\smallskip} M^*\, (mag) & \Psi^* \,(pc^{-3}) & c & \chi^2 & \chi_{red}^2 & Q & AIC \\ \noalign{\smallskip} \hline 5.59 & 0.01 & 0.73 & 166.9 & 33.39 & 3.23\,10^{-34} & 172.9 \\ \hline \hline \end{array}$$ Truncated LFs ============= Here we derive the truncated version of the Schechter and Gamma LFs. 
\[sectiontruncated\] The truncated Schechter LF -------------------------- The luminosity $L$ is defined in the interval $[L_l, L_u ]$, where the indices $l$ and $u$ mean ‘lower’ and ‘upper’; the truncated Schechter LF, $S_T$, is $$S_T(L;\Psi^*,\alpha,L^*,L_l,L_u)= \frac { - \left( {\frac {L}{{\it L^*}}} \right) ^{\alpha}{{\rm e}^{-{\frac { L}{{\it L^*}}}}}{\it \Psi^*}\,\Gamma \left( \alpha+1 \right) } { {\it L^*}\, \left( \Gamma \left( \alpha+1,{\frac {L_{{u}}}{{\it L^*}}} \right) -\Gamma \left( \alpha+1,{\frac {L_{{l}}}{{\it L^*}} } \right) \right) } \label{lf_trunc_schechter} \quad ,$$ where $\Gamma(a, z)$ is the incomplete Gamma function, defined by $$\mathop{\Gamma\/}\nolimits\!\left(a,z\right) =\int_{z}^{\infty}t^{a-1}e^{-t}dt \quad ,$$ see [@NIST2010]. The average value is $${ \langle S_T(L;\Psi^*,\alpha,L^*,L_l,L_u) \rangle } = \frac{ N } { {\it L^*} \left( \Gamma \left( \alpha+1,{\frac {L_{{u}}}{{\it L^*}}} \right) -\Gamma \left( \alpha+1,{\frac {L_{{l}}}{{\it L^*}} } \right) \right) }$$ with $$\begin{aligned} N= {\it \Psi^*} \Bigg ( {{\it L^*}}^{2}\Gamma \big ( \alpha+1,{\frac {L_{{u}}}{{\it L^*}}} \big ) \alpha-{{\it L^*}}^{2}\Gamma \big ( \alpha+1,{\frac {L_{{l}}}{{\it L^*}}} \big ) \alpha+{{\it L^*}}^{ 2}\Gamma \big ( \alpha+1,{\frac {L_{{u}}}{{\it L^*}}} \big ) \nonumber \\ -{{ \it L^*}}^{2}\Gamma \big ( \alpha+1,{\frac {L_{{l}}}{{\it L^*}}} \big ) -{{\it L^*}}^{-\alpha+1}{{\rm e}^{-{\frac {L_{{l}}}{{\it L^*}}}}}{L_{{l}}}^{\alpha+1}+{{\it L^*}}^{-\alpha+1}{{\rm e}^{-{ \frac {L_{{u}}}{{\it L^*}}}}}{L_{{u}}}^{\alpha+1} \Bigg ) \times \nonumber \\ \Gamma \big ( \alpha+1 \big ) \quad .\end{aligned}$$ The four luminosities $L,L_l,L^*$ and $L_u$ are connected with the absolute magnitudes $M$, $M_l$, $M_u$ and $M^*$ through the following relation, $$\begin{aligned} \frac {L}{L_{\sun}} = 10^{0.4(M_{\sun} - M)} \quad , \frac {L_l}{L_{\sun}} = 10^{0.4(M_{\sun} - M_u)} \quad , \nonumber \\ \frac {L^*}{L_{\sun}} = 10^{0.4(M_{\sun} - M^*)} \, \quad , \frac 
{L_u}{L_{\sun}} = 10^{0.4(M_{\sun} - M_l)} \label{magnitudes}\end{aligned}$$ where the indices $u$ and $l$ are inverted in the transformation from luminosity to absolute magnitude and $L_{\sun}$ and $M_{\sun}$ are the luminosity and absolute magnitude of the sun in the considered band. The equivalent form in absolute magnitude of the truncated Schechter LF is therefore $$\Psi (M;\Psi^*,\alpha,M^*,M_l,M_u)dM = \frac{AS}{DS} \quad ,$$ with $$\begin{aligned} AS = - 0.4 \left( {10}^{ 0.4 {\it M^*}- 0.4 M} \right) ^{\alpha}{ {\rm e}^{-{10}^{ 0.4 {\it M^*}- 0.4 M}}} \times \nonumber\\ {\it \Psi^*} \Gamma \left( \alpha+1 \right) {10}^{ 0.4 {\it M^*}- 0.4 M} \left( \ln \left( 2 \right) +\ln \left( 5 \right) \right)\end{aligned}$$ and $$\begin{aligned} DS= \Gamma \left( \alpha+1,{10}^{- 0.4 M_{{l}}+ 0.4 {\it M^*}} \right) -\Gamma \left( \alpha+1,{10}^{ 0.4 {\it M^*}- 0.4 M_{{u} }} \right) \quad .\end{aligned}$$ The averaged absolute magnitude, $\langle M \rangle$, is $$\begin{aligned} { \langle M \rangle } = \frac{ \int_{M_l}^{M_u} \Psi (M;\Psi^*,\alpha,M^*,M_l,M_u)\, M \,dM } { \int_{M_l}^{M_u} \Psi (M;\Psi^*,\alpha,M^*,M_l,M_u) \,dM } \quad . \label{xmtruncated}\end{aligned}$$ More details can be found in [@Zaninetti2017a]. The resulting fitted curve is displayed in Figure \[gaia\_lf\_schechter\_noder\_trunc\] with parameters as in Table \[truncschechterfit\]. ![ The observed LF for stars, empty stars with error bar, and the fit by the truncated Schechter LF when the distance covers the range $[0\,pc , 20 \, pc ]$. 
[]{data-label="gaia_lf_schechter_noder_trunc"}](f09.eps){width="7cm"} $$\begin{array}{ccccccccc} \hline \hline \noalign{\smallskip} M^* & M_l & M_u & \Psi^* & \alpha & \chi^2 & \chi_{red}^2 & Q & AIC \\ \noalign{\smallskip} \hline 5.6 & 3.56 & 11.89 & 0.0083 & -0.26 & 167 & 55.67 & 5.54\,10^{-36} & 177 \\ \hline \hline \end{array}$$ The truncated Gamma LF ---------------------- The truncated Gamma LF is defined in the interval $[L_l, L_u ]$ $$f(L;\Psi^*,L^*,c,L_l,L_u) = \Psi^*\, k\;\left( {\frac {L}{{\it L^*}}} \right) ^{c-1}{{\rm e}^{-{\frac {L}{{ \it L^*}}}}} \label{Gammatruncated}$$ where the constant $k$ is $$\begin{aligned} k = \nonumber \\ \frac{c} { {\it L^*}\, \left( \left( {\frac {L_{{u}}}{{\it L^*}}} \right) ^{ c}{{\rm e}^{-{\frac {L_{{u}}}{{\it L^*}}}}}-\Gamma \left( 1+c,{ \frac {L_{{u}}}{{\it L^*}}} \right) +\Gamma \left( 1+c,{\frac {L_{{ l}}}{{\it L^*}}} \right) - \left( {\frac {L_{{l}}}{{\it L^*}}} \right) ^{c}{{\rm e}^{-{\frac {L_{{l}}}{{\it L^*}}}}} \right) } \, . \label{constant}\end{aligned}$$ Its expected value is $$\begin{aligned} \langle f(L;\Psi^*,L^*,c,L_l,L_u) \rangle = \nonumber \\ \Psi^* \frac { -c \left( \Gamma \left( 1+c,{\frac {L_{{u}}}{{\it L^*}}} \right) - \Gamma \left( 1+c,{\frac {L_{{l}}}{{\it L^*}}} \right) \right) { \it L^*} } { \left( {\frac {L_{{u}}}{{\it L^*}}} \right) ^{c}{{\rm e}^{-{\frac { L_{{u}}}{{\it L^*}}}}}-\Gamma \left( 1+c,{\frac {L_{{u}}}{{\it L^*}}} \right) +\Gamma \left( 1+c,{\frac {L_{{l}}}{{\it L^*}}} \right) - \left( {\frac {L_{{l}}}{{\it L^*}}} \right) ^{c}{{\rm e}^ {-{\frac {L_{{l}}}{{\it L^*}}}}} } \quad . \label{meanGammatruncated}\end{aligned}$$ More details on the truncated Gamma PDF can be found in [@Zaninetti2013e; @Okasha2014; @Zaninetti2016a]. 
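The normalization constant $k$ of the truncated Gamma LF can be checked numerically: by construction, the LF must integrate to $\Psi^*$ over $[L_l, L_u]$. The sketch below is illustrative only; the function names are ours, and the upper incomplete Gamma function is evaluated by a crude trapezoidal quadrature (in practice a library routine would be used).

```python
import math

def upper_gamma(a, z, n=200000, t_max=60.0):
    """Upper incomplete Gamma function Gamma(a, z) = int_z^inf t^(a-1) e^(-t) dt,
    approximated by a trapezoidal rule on [z, t_max] (the tail beyond is negligible)."""
    h = (t_max - z) / n
    total = 0.5 * (z ** (a - 1) * math.exp(-z) + t_max ** (a - 1) * math.exp(-t_max))
    for i in range(1, n):
        t = z + i * h
        total += t ** (a - 1) * math.exp(-t)
    return total * h

def trunc_gamma_norm(L_star, c, L_l, L_u):
    """The constant k of the truncated Gamma LF."""
    xl, xu = L_l / L_star, L_u / L_star
    return c / (L_star * (xu ** c * math.exp(-xu) - upper_gamma(1 + c, xu)
                          + upper_gamma(1 + c, xl) - xl ** c * math.exp(-xl)))

def trunc_gamma_lf(L, psi_star, L_star, c, k):
    """Truncated Gamma LF: Psi* k (L/L*)^(c-1) exp(-L/L*) on [L_l, L_u]."""
    x = L / L_star
    return psi_star * k * x ** (c - 1.0) * math.exp(-x)
```

Integrating `trunc_gamma_lf` numerically over $[L_l, L_u]$ reproduces $\Psi^*$, which confirms the analytic expression for $k$.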
The Gamma truncated LF in magnitude is $$\begin{aligned} \label{lfgtmagni} \Psi (M;\Psi^*,c,M^*,M_l,M_u) dM = \nonumber \\ \frac { 0.4\,c \left( {10}^{ 0.4\,{\it M^*}- 0.4\,M} \right) ^{c}{{\rm e}^ {-{10}^{ 0.4\,{\it M^*}- 0.4\,M}}}{\it \Psi^*}\, \left( \ln \left( 2 \right) +\ln \left( 5 \right) \right) } { D }\end{aligned}$$ where $$\begin{aligned} D = \nonumber \\ {{\rm e}^{-{10}^{- 0.4\,M_{{l}}+ 0.4\,{\it M^*}}}} \left( {10}^{- 0.4\,M_{{l}}+ 0.4\,{\it M^*}} \right) ^{c} \nonumber \\ -{{\rm e}^{-{10}^{ 0.4\, {\it M^*}- 0.4\,M_{{u}}}}} \left( {10}^{ 0.4\,{\it M^*}- 0.4\,M_ {{u}}} \right) ^{c} \nonumber \\ -\Gamma \left( 1+c,{10}^{- 0.4\,M_{{l}}+ 0.4\,{ \it M^*}} \right) +\Gamma \left( 1+c,{10}^{ 0.4\,{\it M^*}- 0.4 \,M_{{u}}} \right) \quad .\end{aligned}$$ The averaged absolute magnitude, $\langle M \rangle$, is defined numerically as in Equation \[xmtruncated\]. The resulting fitted curve is displayed in Figure \[gaia\_lf\_gamma\_trunc\] with parameters as in Table \[truncGammafit\]. ![ The observed LF for stars, empty stars with error bar, and the fit by the truncated Gamma LF when the distance covers the range $[0\,pc , 20 \, pc ]$. []{data-label="gaia_lf_gamma_trunc"}](f10.eps){width="7cm"} $$\begin{array}{ccccccccc} \hline \hline \noalign{\smallskip} M^* & M_l & M_u & \Psi^* & c & \chi^2 & \chi_{red}^2 & Q & AIC \\ \noalign{\smallskip} \hline 5.3 & 3.56 & 11.89 & 0.01 & 0.67 & 169 & 56.33 & 2.02\,10^{-36} & 179 \\ \hline \hline \end{array}$$ Distance effects ================ We model the average absolute magnitude of the stars as a function of the distance, the photometric maximum in the number of stars for a given flux as a function of the distance, and the average distance of the stars for a given flux in the framework of the two truncated LFs here considered. 
\[secdistance\] Averaged absolute magnitude {#secaverage} --------------------------- In order to model the influence of the distance $d$ in pc on the LF, an empirical variable lower bound in absolute magnitude, $M_l$, has been introduced, $$M_l(d) =5.53 -0.27 \, d ^{0.7} \label{mld} \quad .$$ The upper bound, $M_u$, was already fixed by the nonlinear equation (\[mabsgupper\]). A second distance correction is $$M^*= M_u(d) - 2.8 - 5.2 \exp \left( - \frac{d}{100} \right) \quad , \label{mstarcorrection}$$ where $M_u(d)$ has been defined in Equation (\[mabsgupper\]). Figure \[gaia\_xmd\_schechter\_trunc\] compares the theoretical average absolute magnitudes for the truncated Schechter LF with the observed ones; the value of $M^*$ in Equation (\[mstarcorrection\]) minimizes the difference between the two curves. ![ Average observed absolute G-magnitude versus distance for Gaia (green points), average theoretical absolute magnitude for truncated Schechter LF with $\alpha=-0.61$ as given by Equation (\[xmtruncated\]) (dot-dash-dot red line), curve for the empirical lowest absolute magnitude at a given distance, see Equation (\[mld\]) (full black line) and the theoretical curve for the highest absolute magnitude at a given distance (dashed black line), see Equation (\[mabsgupper\]). []{data-label="gaia_xmd_schechter_trunc"}](f11.eps){width="10cm"} Conversely, Figure \[gaia\_xmd\_gamma\_trunc\] compares the theoretical average absolute magnitudes for the truncated Gamma LF with the observed ones; also here the value of $M^*$ obtained from Equation (\[mstarcorrection\]) minimizes the difference between the two curves. 
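The two distance corrections, together with the upper bound of Equation (\[mabsgupper\]), can be collected in a short sketch (illustrative only; the function names are ours, and $m_{lim}=12.71$ is the value used in the text):

```python
import math

M_LIM = 12.71   # limiting apparent G magnitude adopted in the text

def M_upper(d):
    """Upper observable absolute magnitude: M_u(d) = m_lim - 5 log10(d) + 5."""
    return M_LIM - 5.0 * math.log10(d) + 5.0

def M_lower(d):
    """Empirical lower bound: M_l(d) = 5.53 - 0.27 d^0.7."""
    return 5.53 - 0.27 * d ** 0.7

def M_star(d):
    """Distance-corrected characteristic magnitude:
    M* = M_u(d) - 2.8 - 5.2 exp(-d/100)."""
    return M_upper(d) - 2.8 - 5.2 * math.exp(-d / 100.0)
```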
![ Average observed absolute G-magnitude versus distance for Gaia (green points), average theoretical absolute magnitude for truncated Gamma LF with $c=0.38$ as given by the analogue of equation (\[xmtruncated\]) (dot-dash-dot red line), theoretical curve for the empirical lowest absolute magnitude at a given distance, see Equation (\[mld\]) (full black line) and the theoretical curve for the highest absolute magnitude at a given distance (dashed black line), see Equation (\[mabsgupper\]). []{data-label="gaia_xmd_gamma_trunc"}](f12.eps){width="10cm"} The photometric maximum ----------------------- The definition of the flux, $f$, is $$f = \frac{L}{4 \pi r^2} \quad , \label{flux}$$ where $r$ is the distance and $L$ the luminosity of the star. The joint distribution in distance, [*r*]{}, and flux, [*f*]{}, for the number of stars is $$\frac{dN}{d\Omega dr df} = \frac{1}{4 \,\pi} \int_0^{\infty } 4 \pi r^2 dr \Phi(\frac{L}{L^*}) \delta (f-\frac{L}{4\,\pi\,r^2} ) \quad , \label{nldef}$$ where the factor ($\frac{1}{4 \pi}$) converts the number density into density for solid angle and the Dirac delta function selects the required flux. We now apply the sifting properties of the delta function, see [@Bracewell2000], to the case of the Schechter LF as given by formula \[lf\_schechter\] $$\frac{dN}{d\Omega dr df} = \frac{1}{L^*}\, 4\,\pi\,{r}^{4}{\it \Phi^*}\, \left( 4\,{\frac {\pi\,f{r}^{2}}{{\it L^*}} } \right) ^{\alpha}{{\rm e}^{-4\,{\frac {\pi\,f{r}^{2}}{{\it L^*}}}} } \quad . \label{nfunctionrschechter}$$ We now introduce the critical radius $r_{crit}$ $$r_{crit}= \frac{1}{2} \,{\frac {\sqrt {{\it L^* }}}{\sqrt {\pi}\sqrt {f}}} \quad .$$ Therefore the joint distribution in distance and flux becomes $$\frac{dN}{d\Omega dr df} = \frac{1}{L^*} \, 4\,\pi\,{r}^{4}{\it \Phi^*}\, \left( {\frac {{r}^{2}}{{{\it r_{crit}}}^{2}}} \right) ^{\alpha}{{\rm e}^{-{\frac {{r}^{2}}{{{\it r_{crit}}}^{2}}}}} \quad . 
\label{nfunctionrschechter_rcrit}$$ The above number of stars has a maximum at $r=r_{max}$: $$r_{max}= \sqrt {2+\alpha}{\it r_{crit}} \quad , \label{rmaxrcrit}$$ and the average distance of the stars, $ { \langle r \rangle} $, is $$\langle r \rangle ={\frac {{\it r_{crit}}\,\Gamma \left( 3+\alpha \right) } {\Gamma \left( \frac{5}{2} +\alpha \right) }} \quad . \label{raverageflux}$$ Figure \[gaia\_max\_uno\] presents the number of stars observed in Gaia as a function of the distance for a given window in the flux, as well as the theoretical curve. ![ The stars of Gaia with $ 1243361.38 \,(e-/s) \leq f \leq 1450291.3 \, (e-/s) $ or $ 10.121 \, (mag) \leq G \,(mag) \, \leq 10.288$ are organized by frequency versus distance (empty circles); the error bar is given by the square root of the frequency. The maximum frequency of the observed stars is at $d=247$ pc. The full line is the theoretical curve generated by $\frac{dN}{d\Omega dr df}$ as given by the application of the Schechter LF, which is Equation (\[nfunctionrschechter\]), and the theoretical maximum is at $d=274$ pc. The parameters are $L^*= 5 \,10^{11}$ (e-/s)$\times$ pc$^2$ and $\alpha$ =-0.62. Case of the Schechter LF. []{data-label="gaia_max_uno"}](f13.eps){width="6cm"} Figure \[gaia\_max\_molti\] presents the observed position of the maximum of the number of stars as a function of the flux. ![ Position of the observed maximum as a function of the flux (empty stars) and the theoretical average value as given by Equation (\[raverageflux\]) for the Schechter LF (full line). The parameters are $L^*= 5.5 \,10^{11}$ (e-/s)$\times$ pc$^2$ and $\alpha$ =-0.62. Case of the Schechter LF. 
[]{data-label="gaia_max_molti"}](f14.eps){width="6cm"}

In order to shift to more familiar variables, Figure \[molti\_max\_g\] reports the position of the above maximum as a function of the apparent Gaia magnitude.

![ Position of the observed maximum as a function of the apparent magnitude, G (empty stars), and theoretical average value as given by Equation (\[raverageflux\]) for the Schechter LF (full line). The parameters are the same as in Figure \[gaia\_max\_molti\]. Case of the Schechter LF. []{data-label="molti_max_g"}](f15.eps){width="6cm"}

Figures \[gaia\_ave\_schechter\] and \[ave\_schechter\_g\] present the observed average distance of the stars as a function of the flux and of the apparent magnitude.

![ Position of the average distance of the stars as a function of the flux (empty stars) and theoretical curve as given by Equation (\[rmaxrcrit\]) (full line) for the Schechter LF. The parameters are $L^*= 1.3 \times 10^{13}$ (e-/s)$\times$ pc$^2$ and $\alpha=-0.62$. Case of the Schechter LF. []{data-label="gaia_ave_schechter"}](f16.eps){width="6cm"}

![ Position of the average distance of the stars as a function of the apparent magnitude, G (empty stars), and theoretical curve as given by Equation (\[rmaxrcrit\]) (full line) for the Schechter LF. The parameters are the same as in Figure \[gaia\_ave\_schechter\]. Case of the Schechter LF. []{data-label="ave_schechter_g"}](f17.eps){width="6cm"}

In the case of the Gamma LF, the maximum in the number of stars is at $$r_{max}= \sqrt {c+1}{ r_{crit}} \quad ,
\label{rmaxrcritGamma}$$ and the average distance of the stars, $\langle r \rangle$, is $$\langle r \rangle ={\frac {{ r_{crit}}\,c \left( c+1 \right) \Gamma \left( c \right) }{ \Gamma \left( \frac{3}{2}+c \right) }} \quad .
\label{raveragefluxGamma}$$

Conclusions
===========

[*Standard LFs*]{}.
The Schechter function and the Gamma PDF can model the LF for stars, see Tables \[schechterfit\] and \[gammacfit\] as well as Figures \[gaia\_lf\_schechter\_der\] and \[gaia\_lf\_gammac\], but the values of the involved parameters depend on the chosen distance. The truncated Schechter function and the truncated Gamma LF can model the averaged absolute magnitude as a function of the distance, see Figures \[gaia\_xmd\_schechter\_trunc\] and \[gaia\_xmd\_gamma\_trunc\]. As an example, four analytical equations have been used in the case of the truncated Schechter LF: (i) the average theoretical absolute magnitude for the truncated Schechter LF, see Equation (\[xmtruncated\]), (ii) an empirical expression for the lowest absolute magnitude at a given distance, see Equation (\[mld\]), (iii) a theoretical curve for the highest absolute magnitude at a given distance, see Equation (\[mabsgupper\]), and (iv) a distance-dependent expression for $M^*$, as given by Equation (\[mstarcorrection\]). The above four equations model the Malmquist bias. The number of stars as a function of the distance presents a maximum which is a function of the flux, see Figures \[gaia\_max\_uno\] and \[gaia\_max\_molti\] for the Schechter LF. The theoretical and observed average distances of the stars are also functions of the selected flux, see Figure \[gaia\_ave\_schechter\]. The treatment adopted here deals with a homogeneous distribution of stars; the vertical scale heights are therefore not covered, see [@Just2015].

Acknowledgments {#acknowledgments .unnumbered}
===============

This work has made use of data from the European Space Agency (ESA) mission [*Gaia*]{}\
(<https://www.cosmos.esa.int/gaia>), processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>).
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the [*Gaia*]{} Multilateral Agreement.\

Bibliography {#bibliography .unnumbered}
============

Malmquist K G 1920 A study of the stars of spectral type A [*Lund Medd. Ser. II*]{} [**22**]{}, 1

Malmquist K G 1922 On some relations in stellar statistics [*Lund Medd. Ser. I*]{} [**100**]{}, 1

Malmquist K G 1936 Investigations on the stars in high galactic latitudes II. Photographic magnitudes and colour indices of about 4500 stars near the north galactic pole [*Stockholms Observatoriums Annaler*]{} [**12**]{}, 7.1

Binney J and [Merrifield]{} M 1998 [*Galactic Astronomy*]{} (Princeton, NJ: Princeton University Press)

Butkevich A G, [Berdyugin]{} A V and [Teerikorpi]{} P 2005 Statistical biases in stellar astronomy: the Malmquist bias revisited [*MNRAS*]{} [**362**]{}, 321

Eddington A S 1914 [*Stellar Movements and the Structure of the Universe*]{} (London: Macmillan)

C and [Gomez]{} A E 1985 The Malmquist correction [**146**]{}, 387

Wielen R 1974 The kinematics and ages of stars in Gliese’s catalogue [*Highlights of Astronomy*]{} [**3**]{}, 395

Flynn C, [Holmberg]{} J, [Portinari]{} L, [Fuchs]{} B and [Jahrei[ß]{}]{} H 2006 On the mass-to-light ratio of the local Galactic disc and the optical luminosity of the Galaxy [*MNRAS*]{} [**372**]{}, 1149

Just A, [Fuchs]{} B, [Jahrei[ß]{}]{} H, [Flynn]{} C, [Dettbarn]{} C and [Rybizki]{} J 2015 The local stellar luminosity function and mass-to-light ratio in the near-infrared [*MNRAS*]{} [**451**]{}, 149

Gaia Collaboration, [Prusti]{} T, [de Bruijne]{} J H J, [Brown]{} A G A, [Vallenari]{} A, [Babusiaux]{} C, [Bailer-Jones]{} C A L, [Bastian]{} U, [Biermann]{} M, [Evans]{} D W et al 2016 The Gaia mission [*A&A*]{} [**595**]{} A1

Gaia Collaboration, [Brown]{} A G A, [Vallenari]{} A, [Prusti]{} T, [de Bruijne]{} J H J, [Mignard]{} F, [Drimmel]{} R, [Babusiaux]{} C, [Bailer-Jones]{} C A L, [Bastian]{} U et al 2016 Gaia Data Release 1. Summary of the astrometric, photometric, and survey properties [*A&A*]{} [**595**]{} A2

Stassun K G and [Torres]{} G 2016 Evidence for a systematic offset of $-0.25$ mas in the Gaia DR1 parallaxes [*ApJ*]{} [**831**]{} L6

Evans D W, [Riello]{} M, [De Angeli]{} F, [Busso]{} G, [van Leeuwen]{} F, [Jordi]{} C, [Fabricius]{} C, [Brown]{} A G A, [Carrasco]{} J M, [Voss]{} H et al 2017 Gaia Data Release 1. Validation of the photometry [*A&A*]{} [**600**]{} A51

Carrasco J M, [Evans]{} D W, [Montegriffo]{} P, [Jordi]{} C, [van Leeuwen]{} F, [Riello]{} M, [Voss]{} H, [De Angeli]{} F, [Busso]{} G et al 2016 Gaia Data Release 1. Principles of the photometric calibration of the G band [*A&A*]{} [**595**]{} A7

van Leeuwen F, [Evans]{} D W, [De Angeli]{} F, [Jordi]{} C, [Busso]{} G, [Cacciari]{} C, [Riello]{} M, [Pancino]{} E, [Altavilla]{} G et al 2017 Gaia Data Release 1. The photometric data [*A&A*]{} [**599**]{} A32

Press W H, [Teukolsky]{} S A, [Vetterling]{} W T and [Flannery]{} B P 1992 [*Numerical Recipes in FORTRAN. The Art of Scientific Computing*]{} (Cambridge, UK: Cambridge University Press)

Akaike H 1974 A new look at the statistical model identification [*IEEE Transactions on Automatic Control*]{} [**19**]{}, 716

Liddle A R 2004 How many cosmological parameters? [*MNRAS*]{} [**351**]{}, L49

God[ł]{}owski W and [Szyd[ł]{}owski]{} M 2005 Constraints on dark energy models from supernovae in M [Turatto]{}, S [Benetti]{}, L [Zampieri]{} and W [Shea]{}, eds, [*1604–2004: Supernovae as Cosmological Lighthouses*]{} vol 342 of [*Astronomical Society of the Pacific Conference Series*]{} pp 508–516

Schechter P 1976 An analytic expression for the luminosity function for galaxies [*ApJ*]{} [**203**]{}, 297

Johnson N L, [Kotz]{} S and [Balakrishnan]{} N 1994 [*Continuous Univariate Distributions. Vol. 1*]{} 2nd ed (New York: Wiley)

Zaninetti L 2016 Pade approximant and minimax rational approximation in standard cosmology [*Galaxies*]{} [**4**]{}(1), 4 <http://www.mdpi.com/2075-4434/4/1/4>

Olver F W J, Lozier D W, Boisvert R F and Clark C W (eds) 2010 [*NIST Handbook of Mathematical Functions*]{} (Cambridge: Cambridge University Press)

Zaninetti L 2017 A left and right truncated Schechter luminosity function for quasars [*Galaxies*]{} [**5**]{}(2), 25

Zaninetti L 2013 A right and left truncated gamma distribution with application to the stars [*Advanced Studies in Theoretical Physics*]{} [**23**]{}, 1139

Okasha M K and [Alqanoo]{} I M 2014 Inference on the doubly truncated gamma distribution for lifetime data [*International Journal of Mathematics and Statistics Invention*]{} [**2**]{}, 1

Bracewell R N 2000 [*The Fourier Transform and Its Applications*]{} (New York: McGraw-Hill)
--- abstract: 'We discuss a new class of solutions to the Einstein equations which describe a primordial black hole (PBH) in a flat Friedmann background. Such solutions arise if a Schwarzschild black hole is patched onto a Friedmann background via a transition region. They are possible providing the black hole event horizon is larger than the cosmological apparent horizon. Such solutions have a number of strange features. In particular, one has to define the black hole and cosmological horizons carefully and one then finds that the mass contained within the black hole event horizon decreases when it is larger than the Friedmann cosmological apparent horizon, although its area always increases. These solutions involve two distinct future null infinities and are interpreted as the conversion of a white hole into a black hole. Although such solutions may not form from gravitational collapse in the same way as standard PBHs, there is nothing unphysical about them, since all energy and causality conditions are satisfied. Their conformal diagram is a natural amalgamation of the Kruskal diagram for the extended Schwarzschild solution and the conformal diagram for a black hole in a flat Friedmann background. In this paper, such solutions are obtained numerically for a spherically symmetric universe containing a massless scalar field, but it is likely that they exist for more general matter fields and less symmetric systems.' author: - '$^{1,2}$Tomohiro Harada[^1] and $^{1}$B. J. Carr[^2]' title: ' Super-horizon primordial black holes' --- Introduction ============ In a recent paper [@hc2004b] (henceforth Paper I) we studied numerically the growth of primordial black holes (PBHs) in a universe containing a massless scalar field but no other matter. Following Hamadé and Stewart [@hs1996], the double-null formulation of the Einstein equations was used. 
This is a powerful tool for investigating regions outside the cosmological horizon and inside the black hole horizon simultaneously. On the assumption that the PBH is formed from a local initial density perturbation which propagates causally, the black hole was modelled by matching a Schwarzschild solution to an exact flat Friedmann solution across a null surface. Initial data were specified on an outgoing and ingoing null surface. They were assumed to be exactly Friedmann on the outgoing surface and outside the matching boundary on the ingoing surface but some perturbation of Friedmann inside the matching boundary. In all the solutions considered in Paper I, the black hole event horizon (BHEH) was assumed to be smaller than the cosmological apparent horizon. However, in some circumstances (e.g. in the inflationary scenario), the perturbation - and perhaps even the black hole itself - may extend well beyond it. In this case, the only upper limit on its size comes from requiring that the perturbed region be part of our universe rather than a separate closed universe. The condition for this has been derived precisely for the situation in which the collapsing region is homogeneous and the equation of state is $p=k\rho$ [@hc2004a]. In particular, this includes the massless scalar field case, since this is equivalent to a stiff fluid with $k=1$ providing the gradient of the field is everywhere timelike and vorticity free (as expected). The peculiarity of the formation of PBHs in a universe with a stiff fluid was discussed in Ref. [@zn1980], and this suggests that the gradient of the scalar field could have an important hydrodynamical effect. We study such “super-horizon” solutions in this paper, although our numerical results only cover the scalar field case. It turns out that these scalar field solutions have some rather strange properties. 
For example, the mass contained within the BHEH decreases when the PBH is larger than the cosmological apparent horizon, although the area always increases. Since our system satisfies all energy conditions, the mass decrease of super-horizon PBHs we show here is completely different from the black hole mass decrease due to phantom energy accretion [@bde2004]. Also the conformal diagram for these solutions is interesting, being a natural extension of the Kruskal diagram to the cosmological context. The scalar field usually has a timelike gradient in the numerical simulations shown here and this means that it is equivalent to a stiff fluid. It is likely that there are analogues of these solutions for more general fluids with $p=k\rho$ providing $k>1/3$. However, we do not study these solutions here. In all these cases, we need to define cosmological and black hole horizons very carefully when their sizes are all of the same order.

Formulation
===========

Double-Null Formulation of Einstein equations {#sub:double-null}
---------------------------------------------

As in Paper I, we consider a massless scalar field $\Psi$ in general relativity, for which the stress-energy tensor is $$T_{ab}=\Psi_{,a}\Psi_{,b}-\frac{1}{2}g_{ab}\Psi^{,c}\Psi_{,c}.$$ The Einstein equations are $$R_{ab}-\frac{1}{2}g_{ab}R=8\pi T_{ab}, \label{eq:einstein}$$ and the equation of motion for the scalar field is $$\Box\Psi=\Psi^{;a}_{~~;a}=0. \label{eq:eom}$$ We focus on a spherically symmetric system, for which the line element can be written in the form $$ds^{2}=-a^{2}(u,v)dudv+r^{2}(u,v)(d\theta^{2}+\sin^{2}\theta d\phi^{2}),$$ where $u$ and $v$ are retarded and advanced time coordinates, respectively, $a$ is the metric function and $r$ is the “area radius” (the proper area of the sphere of constant $r$ being $4\pi r^2$). Equations (\[eq:einstein\]) and (\[eq:eom\]) then imply that we have 14 first-order partial differential equations and two auxiliary equations.
These equations are given explicitly in Section 2 of [@hs1996]. We adopt units in which $G=c=1$. Misner-Sharp mass and trapping horizons {#sec:mass_horizon} --------------------------------------- The existence and position of marginal surfaces can be inferred from the form of the Misner-Sharp mass [@ms1964]. This is a well-behaved quasi-local mass in spherically symmetric spacetimes [@hayward1996], which can be written as $$m=\frac{r}{2}\left(1+\frac{4r_{,u}r_{,v}}{a^{2}}\right). \label{eq:misner_sharp}$$ Combining this with the equations in Section 2 of [@hs1996], we can derive the following useful relations: $$\begin{aligned} m_{,u}&=&-\frac{8\pi r^{2}r_{,v}(\Psi_{,u})^{2}}{a^{2}} , \label{eq:m_u}\\ m_{,v}&=&-\frac{8\pi r^{2} r_{,u}(\Psi_{,v})^{2}}{a^{2}}. \label{eq:m_v}\end{aligned}$$ In this paper, we adopt the “trapping horizon” framework introduced by Hayward [@hayward1993; @hayward1996] because this provides a systematic and mathematically transparent view of the sort of cosmological black holes treated here. In the context of this paper, trapping horizons and conventional apparent horizons are almost equivalent but they need not be in more general situations. Although the values for $r_{,v}$ and $r_{,u}$ are not geometrical invariants, their signs are, which leads to the following definitions. (i) A metric sphere is said to be trapped if $r_{,v}r_{,u}>0$. A sphere with $r_{,u}<0$ and $r_{,v}<0$ is future trapped, while one with $r_{,u}>0$ and $r_{,v}>0$ is past trapped. (ii) A metric sphere is said to be untrapped if $r_{,v}r_{,u}<0$. On an untrapped surface, $\partial_{v}$ is outgoing if $r_{,v}>0$ and ingoing if $r_{,v}<0$. More generally, a spacelike or null normal vector $z^{a}$ is outgoing if $z^{a}r_{,a}>0$ and ingoing if $z^{a}r_{,a}<0$. (iii) A metric sphere is said to be marginal if $r_{,v}r_{,u}=0$. A sphere with $r_{,v}=0$ is future marginal if $r_{,u}<0$, past marginal if $r_{,u}>0$ and bifurcating marginal if $r_{,u}=0$. 
It is described as outer marginal if $r_{,uv}<0$, inner marginal if $r_{,uv}>0$ and degenerate marginal if $r_{,uv}=0$. It is easily seen that a sphere is marginal if and only if $r=2m$, trapped if and only if $r<2m$ and untrapped if and only if $r>2m$. The closure of a hypersurface foliated by a future or past, outer or inner marginal sphere is called a (nondegenerate) trapping horizon. In Hayward’s approach, the black hole apparent horizon is replaced with a “future outer trapping horizon” (FOTH), while the cosmological (white hole) apparent horizon is replaced with a “past outer trapping horizon” (POTH). There is a critical difference between an apparent horizon and a trapping horizon. An apparent horizon is defined only on a prescribed spacelike hypersurface, which is usually required to be a Cauchy surface, so it exists only if the spacetime is strongly asymptotically predictable [@wald1983]. On the other hand, a trapping horizon is the trajectory of a marginal surface in the whole spacetime, and it is defined locally whenever a null foliation is possible. The spacetime does not need to be strongly asymptotically predictable. It should be stressed that Hawking’s “area theorem” [@hawking1971b] only refers to the BHEH. On the other hand, Hayward has shown that it also applies for a FOTH providing the null energy condition holds. Providing the black hole horizon is within the cosmological horizon and $u$ and $v$ are the standard double-null coordinates in the asymptotic Friedmann region, the FOTH and the POTH correspond to the conditions $r_{,v}=0$ and $r_{,u}=0$, respectively. However, the situation is more complicated if the black hole horizon is outside the cosmological horizon. In this case, we can still define trapping horizons by the conditions $r_{,v}=0$ and $r_{,u}=0$, but these are no longer everywhere identified with a FOTH and a POTH. 
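For readers implementing these diagnostics, the sign conventions above translate directly into code. The following minimal Python sketch (our own illustration, not code from the paper) classifies a metric sphere from the signs of $r_{,u}$ and $r_{,v}$ and evaluates the Misner-Sharp mass of Eq. (\[eq:misner\_sharp\]):

```python
def classify_sphere(r_u, r_v):
    """Classify a metric sphere from the signs of r_,u and r_,v,
    following definitions (i)-(iii) above.  Spheres with r_,u = 0 and
    r_,v != 0 are simply reported as 'marginal'."""
    prod = r_u * r_v
    if prod > 0:
        return "future trapped" if r_v < 0 else "past trapped"
    if prod < 0:
        return "untrapped"
    # prod == 0: marginal; subclassify the r_,v = 0 case by the sign of r_,u
    if r_v == 0:
        if r_u < 0:
            return "future marginal"
        return "past marginal" if r_u > 0 else "bifurcating marginal"
    return "marginal"

def misner_sharp_mass(r, a, r_u, r_v):
    """Misner-Sharp mass, Eq. (misner_sharp): m = (r/2)(1 + 4 r_,u r_,v / a^2)."""
    return 0.5 * r * (1.0 + 4.0 * r_u * r_v / a ** 2)
```

Note how the equivalences quoted above fall out of the mass formula: for a trapped sphere $r_{,u}r_{,v}>0$, so $2m/r=1+4r_{,u}r_{,v}/a^{2}>1$ and hence $r<2m$; for an untrapped sphere the product is negative and $r>2m$; marginal spheres give $r=2m$ exactly.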
Using the above equations, we can show that along a trapping horizon, on which $r=2m$, we have $$\left[a^{2}r_{,u}+16\pi r^{2}r_{,v}(\Psi_{,u})^{2}\right]du +\left[a^{2}r_{,v}+16\pi r^{2}r_{,u}(\Psi_{,v})^{2}\right]dv =0.$$ On trapping horizons, which have $r_{,v}=0$ and $r_{,u}=0$, we therefore have $$\begin{aligned} &&a^{2}du+16\pi r^{2}(\Psi_{,v})^{2}dv=0, \\ &&16\pi r^{2}(\Psi_{,u})^{2}du+a^{2}dv=0,\end{aligned}$$ respectively. We conclude that trapping horizons are non-timelike in this system. More precisely, a trapping horizon with $r_{,v}=0$ has $u=\mbox{const}$ if and only if $\Psi_{,v}=0$, while a trapping horizon with $r_{,u}=0$ has $v=\mbox{const}$ if and only if $\Psi_{,u}=0$. Except for these special cases, trapping horizons are spacelike. The form of $a(u,v)$ and $r(u,v)$ in the exact flat Friedmann model is given in Appendix B of Paper I. This implies that the cosmological particle horizon has $u=0$, while the cosmological apparent horizon, which is a POTH, has $3u+v=0$. This shows that the cosmological apparent horizon is spacelike and outside the particle horizon. The conformal diagram of the spacetime is indicated in Fig. \[fig:flat\_friedmann\], which shows the initial (big bang) spacelike singularity. The cosmological apparent horizon also coincides with the Hubble horizon in this case. If one has a black hole embedded in an exact or asymptotically flat Friedmann model and smaller than the cosmological particle horizon, the conformal diagram will change to the form indicated in Fig. \[fig:pbh\_clean\]. This now contains a BHEH, a FOTH and a final (black hole) spacelike singularity. Initial data for PBHs ===================== Structure of initial data ------------------------- The initial data are prescribed on the two null surfaces $u=u_{0}$ and $v=v_{0}$, with the region of calculation being the diamond $[u_{0},u_{1}]\times [v_{0},v_{1}]$. We have three independent functions on the two null surfaces: $a$, $\Psi$ and $r$. 
Two of them can be chosen freely and the third is determined by the constraint equations on the null surfaces. It is convenient to choose $$a(u_{0},v),a(u,v_{0}),\Psi(u_{0},v),\Psi(u,v_{0})$$ as the free initial data and to regard $$r(u_{0},v),r(u,v_{0})$$ as being determined by the initial value equations. We can regard $\Psi(u_{0},v)$ and $\Psi(u,v_{0})$ as the physical degrees of freedom in the initial data, while the choice for $a(u_{0},v)$ and $a(u,v_{0})$ fixes the gauge. In the flat Friedmann region, we adopt the coordinate system given in Appendix B1 of Paper I and impose flat Friedmann initial data for $a$ and $\Psi$ on $u=u_{0}$ for $v_{0}\le v\le v_{1}$. On $v=v_{0}$, we also choose flat Friedmann data for $a$ for $u_{0}\le u \le u_{1}$. For $\Psi$, we use the same data on the initial null surface $v=v_{0}$ for $u_{0}\le u\le u_{\rm m}$, but $\Psi=\mbox{const}$ for $u_{\rm m}+\Delta u<u\le u_{1}$. This is equivalent to Schwarzschild data in coordinates penetrating the black hole. The sudden transition from flat Friedmann data to Schwarzschild data results in a discontinuity at $u=u_{\rm m}$, which reduces the numerical accuracy. Hence we smooth the transition with some smoothing length $\Delta u$; we use a quadratic function between $u_{\rm m}$ and $u_{\rm m}+\Delta u$, so that $\Psi$ and $\Psi_{,u}$ are continuous. 
More precisely, we impose the following initial data for $a$ and $s\equiv \sqrt{4\pi}\Psi$: $$\begin{aligned} a^{2}(u,v_{0})&=&C^{2}\left(\frac{u+v_{0}}{2}\right) \label{eq:A_on_v0},\\ s(u,v_{0})&=& \left\{ \begin{array}{ll} \displaystyle\frac{\sqrt{3}}{2}\ln\left(\displaystyle\frac{u+v_{0}}{2}\right)+s_{0} & \quad (u<u_{\rm m})\\ \displaystyle\frac{\sqrt{3}}{2}\left[\displaystyle\frac{(\Delta u)^{2}-(u_{\rm m}+\Delta u-u)^{2}} {2\Delta u (u_{\rm m}+v_{0})}+\ln\left( \displaystyle\frac{u_{\rm m}+v_{0}}{2}\right)\right]+s_{0} & \quad (u_{\rm m}\le u<u_{\rm m}+\Delta u) \\ \displaystyle\frac{\sqrt{3}}{2}\left[\displaystyle\frac{\Delta u} {2(u_{\rm m}+v_{0})}+ \ln\left(\displaystyle\frac{u_{\rm m}+v_{0}}{2}\right)\right]+s_{0} & \quad (u\ge u_{\rm m}+\Delta u) \end{array} \right. , \label{eq:s_on_v0}\end{aligned}$$ on the initial null surface $v=v_{0}$, and $$\begin{aligned} a^{2}(u_{0},v)&=&C^{2}\left(\frac{u_{0}+v}{2}\right), \\ s(u_{0},v)&=&\frac{\sqrt{3}}{2}\ln\left(\frac{u_{0}+v}{2}\right)+s_{0},\end{aligned}$$ on the initial null surface $u=u_{0}$. Here $C$ and $s_0$ are constants and, without loss of generality, we can choose $C=1$ and $s_{0}=0$. Note that although one has a Schwarzschild vacuum for $u\ge u_{\rm m}+\Delta u$, this situation only applies instantaneously at $v=v_0$, since the inflowing matter will fill this up immediately. Figure \[fig:local\_perturbation\] depicts the initial data for our numerical simulations, these being completely determined by the two parameters $u_{\rm m}$ and $\Delta u$. When the matching region is very narrow, the BHEH is smaller than the Friedmann cosmological apparent horizon if $u_{\rm m}>u_{\rm CAH}(v_{0})$. This corresponds to Fig. \[fig:local\_perturbation\](a), where $u=u_{\rm CAH}(v)$ denotes the trajectory of the cosmological apparent horizon. However, if $u_{\rm m}<u_{\rm CAH}(v_{0})$, the BHEH is larger than the Friedmann cosmological horizon, which corresponds to Fig. \[fig:local\_perturbation\](b).
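As a concrete check, the piecewise data of Eq. (\[eq:s\_on\_v0\]) can be coded up directly. In the sketch below (our own illustration, with $C=1$ and $s_{0}=0$ as defaults) we write the denominator of the quadratic branch as $2\Delta u\,(u_{\rm m}+v_{0})$, which is the choice that makes both $s$ and $s_{,u}$ continuous at $u_{\rm m}$ and $u_{\rm m}+\Delta u$:

```python
import math

SQRT3_OVER_2 = math.sqrt(3.0) / 2.0

def s_initial(u, v0, u_m, du, s0=0.0):
    """Initial data s(u, v0) on the null surface v = v0, Eq. (s_on_v0):
    flat Friedmann for u < u_m, a quadratic transition of width du,
    and constant (Schwarzschild) data for u >= u_m + du."""
    if u < u_m:
        return SQRT3_OVER_2 * math.log((u + v0) / 2.0) + s0
    if u < u_m + du:
        quad = (du ** 2 - (u_m + du - u) ** 2) / (2.0 * du * (u_m + v0))
        return SQRT3_OVER_2 * (quad + math.log((u_m + v0) / 2.0)) + s0
    return SQRT3_OVER_2 * (du / (2.0 * (u_m + v0))
                           + math.log((u_m + v0) / 2.0)) + s0
```

With the Model F parameters ($u_{\rm m}=-1/2$, $\Delta u=0.02$, $v_{0}=1$) one can verify numerically that the values and one-sided $u$-derivatives of $s$ agree at both matching points.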
Note that the cosmological apparent horizon would always go within the matching radius if one took $v_0$ small enough, so the epoch at which the black hole forms is crucial. Note also that when we refer to a POTH in the background Friedmann model, we refer to it as the (Friedmann) cosmological apparent horizon. As discussed later, this relates to the distinction between a black hole that forms from collapse and an eternal black hole that exists “ab initio”.

Results
=======

Black hole event horizon
------------------------

Our numerical code is based on that of Hamadé and Stewart [@hs1996] with a modification, described in Appendix A of Paper I, to ensure greater accuracy. The initial data are prescribed on the two null surfaces $v=v_{0}=1$ and $u=u_{0}=-2/3$, this also fixing the units. The calculated region is the diamond bounded by $u=u_{0}$, $v=v_{0}$, $u=u_{1}$ and $v=v_{1}$. On the initial null surface $v=v_{0}$, we make the matching at $u=u_{\rm m}$ and use the smoothing length $\Delta u$. As time proceeds, the $u=\mbox{const}$ null rays will become ever more sensitive to $r$ near the BHEH, so the calculation is stopped at $v=v_{1}$, when the null rays become too coarse to be resolved. The parameters used are summarised in Table \[tb:models\] for six models. The BHEH is identified as the critical null ray $u=u_{\rm BHEH}$, the null rays with $u<u_{\rm BHEH}$ going to $r=\infty $ and those with $u>u_{\rm BHEH}$ returning to $r=0$. Therefore, the BHEH is only identified at the end of the calculation. Although the identification of the BHEH is rather imprecise, because of numerical errors and the finiteness of the calculated regions, it suffices for the physical interpretation of the results given below. It is found that all six models have a BHEH and Table \[tb:models\] indicates the initial ratio of the sizes of the BHEH and the Friedmann cosmological apparent horizon at $v=v_{0}$.
This complements Table I of Paper I, which only shows cases (A to D) in which the ratio is less than $1$. All models except E have an initial BHEH larger than the Friedmann cosmological apparent horizon. In Model E the initial BHEH is slightly smaller than the Friedmann cosmological apparent horizon.

  Models   $u_{\rm m}$   $\Delta u$   $u_{0}$   $u_{1}$   $v_{0}$   $v_{1}$   $m_{\rm BHEH}/m_{\rm CAH}$
  -------- ------------- ------------ --------- --------- --------- --------- ----------------------------
  E        $-1/2$        0.4          $-2/3$    2         1         4.5       0.969
  F        $-1/2$        0.02         $-2/3$    2         1         4.5       2.12
  G        $-1/2$        0.05         $-2/3$    2         1         4.5       2.01
  H        $-1/2$        0.2          $-2/3$    2         1         4.5       1.52
  I        $-0.6$        0.02         $-2/3$    2         1         4.5       3.58
  J        $-0.4$        0.02         $-2/3$    2         1         4.5       1.31

  : \[tb:models\] Model parameters and the initial mass ratios of the BHEH to the Friedmann cosmological apparent horizon.

Location of horizons
--------------------

Figure \[fig:horizons\_uv\] shows the locations of the BHEH and trapping horizons in the $(u,v)$ plane. It also indicates the signs of $r_{,v}$ and $r_{,u}$. The region is future trapped if the signs are $(-,-)$, past trapped if they are $(+,+)$ and untrapped if they are $(+,-)$ or $(-,+)$. Trapping horizons may have either $r_{,v}=0$ or $r_{,u}=0$. There are two qualitatively different classes of models. In the first class, which includes all models except E, the initial BHEH is larger than the Friedmann cosmological apparent horizon. As seen from the figure, as $v$ increases from $v_{0}=1$, two trapping horizons, one with $r_{,v}=0$ and the other with $r_{,u}=0$, appear and cross each other in the future of the BHEH. In terms of the evolution with respect to $v$, before the crossing the trapping horizons with $r_{,v}=0$ and $r_{,u}=0$ correspond to a POTH and a FOTH, respectively. After the crossing, the situation is reversed, so the trapping horizons with $r_{,v}=0$ and $r_{,u}=0$ correspond to a FOTH and a POTH, respectively. After further evolution, the POTH crosses the BHEH.
Thereafter it will coincide with the Friedmann cosmological apparent horizon, while the FOTH corresponds to the black hole apparent horizon. In the second class of models, which includes E, the initial BHEH is smaller than the Friedmann cosmological apparent horizon. Trapping horizons with $r_{,v}=0$ and $r_{,u}=0$ appear and remain in the future and past of the BHEH, respectively. These two trapping horizons do not cross each other and always correspond to the FOTH and POTH, respectively.

PBH mass change
---------------

The area radii of the BHEH and trapping horizons for these models are shown in Fig. \[fig:eh\_rad\]. It is seen that the area of the BHEH always increases for these models, which is consistent with the black hole area theorem [@hawking1971b]. If the initial BHEH is larger than the Friedmann cosmological apparent horizon, i.e., for all models other than E, the areas of the trapping horizons with both $r_{,v}=0$ and $r_{,u}=0$ first decrease as $v$ increases and then increase after crossing each other. It is interesting that, in terms of $v$, the two trapping horizons cross at a radius slightly larger than the BHEH at the crossing time. After the BHEH enters the future of the POTH, the BHEH soon becomes much smaller than the POTH. For these models, the FOTH appears just before the trapping horizons cross, although this is not so clear in the figure. For models where the initial BHEH is smaller than the Friedmann cosmological apparent horizon, i.e., Model E, the situation is standard. In terms of $r$, the FOTH is inside the BHEH, while the POTH is outside it. The BHEH soon gets much smaller than the POTH. Figure \[fig:eh\_mass\] shows the evolution of the Misner-Sharp mass of the BHEH and trapping horizons. For models where the initial BHEH is larger than the Friedmann cosmological apparent horizon, the mass of the BHEH first decreases and then increases. As we will see later, the mass of the BHEH decreases if and only if it is in the past trapped region.
When the black hole gets out of the past trapped region, its mass starts to increase. The mass of the trapping horizon with $r_{,v}=0$ is only slightly smaller than that of the BHEH, despite the radii being considerably different. This is because the density inside the perturbed region is very small. After the BHEH crosses the trapping horizon with $r_{,u}=0$, which is a POTH at the crossing, the mass of the BHEH soon gets much smaller than the mass of the cosmological apparent horizon. The change in the BHEH mass may be seen more clearly in Fig. \[fig:mass\_accretion\], which shows the rate of mass increase $dm_{\rm BHEH}/dv$. For models other than E, this is initially negative but it then increases, crosses zero, reaches a positive maximum and then decreases. The larger the initial mass ratio of the BHEH to the Friedmann cosmological apparent horizon, the more negative the initial mass increase rate is. For Model E, where the initial BHEH is slightly smaller than the Friedmann cosmological apparent horizon, the accretion rate starts with a very small positive value, increases to a maximum and then decreases. The behaviour of the BHEH mass is clearly explained by Eq. (\[eq:m\_v\]), which can be rewritten as a black hole mass equation: $$\frac{dm_{\rm BHEH}}{dt}=-8\pi r^{2} \frac{(\Psi_{,v})^{2}}{a^{2}} \left(\frac{dr}{dt}\right)_{v=\mbox{const}}, \label{eq:accretion_t}$$ where $t=t(u+v)$ is any time coordinate which depends on $u+v$. Whether the black hole mass increases or decreases depends solely on the sign of $r_{,u}$, which is the null expansion along the $v=\mbox{const}$ direction. In the usual situation, the BHEH is in a region where $r_{,u}<0$ and its mass monotonically increases. However, when the BHEH is in a past trapped region, its mass monotonically decreases because $r_{,u}>0$. Equation (\[eq:accretion\_t\]) (or (\[eq:m\_v\])) also explains why the mass accretion rate starts very small for Model E. 
Since the BHEH is inside but very close to the cosmological apparent horizon, $r_{,u}$ starts off negative but very close to zero. After some evolution, $r_{,u}$ decreases well below zero and the accretion then increases. This suppression of accretion for a PBH as large as a cosmological apparent horizon is due purely to general relativistic effects, or relativistic cosmological expansion. Paper I discusses the qualitative difference between PBHs whose size is comparable to and much smaller than the cosmological apparent horizon. For models with the initial BHEH larger than the Friedmann cosmological apparent horizon, the areas and masses of both trapping horizons decrease as $v$ increases before they cross each other. After the crossing, they increase. This behaviour is completely consistent with the “second law” for trapping horizons, as formulated by Hayward [@hayward1996]. The theorem states, for example, that if the null energy condition holds, then the area and mass of the FOTH (POTH) with $r_{,v}=0$ do not decrease (increase) along the vector $z$ tangent to the horizon. This vector has the form $z=\beta \partial_{v}-\alpha \partial _{u}$ where $\beta>0$. Conversion from a white hole to a black hole -------------------------------------------- To understand the spacetime structure for solutions in which the PBH is larger than the cosmological apparent horizon, we concentrate on Model F. Figure \[fig:model\_e\_detail\] gives the detailed numerical results for this model. Figures \[fig:model\_e\_detail\](a) and (b) give the evolution of $r$ along the null geodesics with $v=\mbox{const}$ and $u=\mbox{const}$, respectively, while Figs. \[fig:model\_e\_detail\] (c) and (d) give that of $2m/r$ along the null geodesics with $v=\mbox{const}$ and $u=\mbox{const}$, respectively. It is seen that $r$ continues to increase along both the earlier null geodesics with $u=\mbox{const}$ and $v=\mbox{const}$. This is also consistent with Fig. \[fig:horizons\_uv\] (a). 
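Returning to the accretion law: the sign argument based on Eq. (\[eq:m\_v\]) is easy to check numerically. A minimal sketch (our own illustration; the function name is not from the paper):

```python
import math

def dm_dv(r, a, r_u, psi_v):
    """Right-hand side of Eq. (m_v): m_,v = -8 pi r^2 r_,u (Psi_,v)^2 / a^2.
    The BHEH mass grows along v where r_,u < 0 (the usual case) and
    shrinks where r_,u > 0, i.e. while the BHEH lies in a past trapped
    region."""
    return -8.0 * math.pi * r ** 2 * r_u * psi_v ** 2 / a ** 2
```

The prefactor $-8\pi r^{2}(\Psi_{,v})^{2}/a^{2}\le 0$ never changes sign, so the sign of $m_{,v}$ is controlled entirely by $r_{,u}$, exactly as argued above.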
On the initial null ray $v=v_{0}=1$, $2m/r$ decreases monotonically as $u$ increases and crosses unity from above, as can be seen in Fig. \[fig:horizons\_uv\](c). Since we have chosen $\Psi=\mbox{const}$ for $u\geq u_{\rm m}+\Delta u$ on the initial null ray $v=v_{0}$, Eq. (\[eq:m\_u\]) implies $m=\mbox{const}$ there. Since $r_{,u}=0$ would imply $2m/r=1$, while $2m/r$ remains below unity, the area radius along the null ray $v=v_{0}$ must increase monotonically as $u$ increases. This means that this null ray does not reach $r=0$ but another null infinity, different from the one which the null rays with $u=\mbox{const}$ reach. We can now see that the standard conformal diagram for PBHs, which is given by Fig. \[fig:pbh\_clean\], will not apply if the BHEH is larger than the cosmological apparent horizon. Rather, the conformal diagram is given by Fig. \[fig:pbh\_big\]. The FOTH and POTH intersect at one point in this diagram. This point corresponds to a sphere which is a bifurcating marginal surface. It is clear that the spacetime can be interpreted as the conversion of a white hole to a black hole, since a PBH larger than the cosmological apparent horizon involves a past trapped region changing to a future trapped one. There are two distinct asymptotic regions: the future null infinity for rays with $u=\mbox{const}$ and the one for rays with $v=\mbox{const}$. These two asymptotic regions are associated with two disconnected untrapped regions. It is impossible for an observer to pass from one untrapped region to the other, whether through the crossing point, the future trapped region or the past trapped region. The spacetime has no regular centre. Figure \[fig:pbh\_big\] is naturally interpreted as an amalgamation of the Kruskal diagram for the extended Schwarzschild solution, shown in Fig. \[fig:schwarzschild\_kruskal\], and the standard conformal diagram for a black hole in a flat Friedmann background, shown in Fig. \[fig:pbh\_clean\].
It will be recalled that the Kruskal diagram also contains a white hole changing into a black hole and two asymptotically flat regions. The analogue of this extension for the conformal diagram shown in Fig. \[fig:pbh\_clean\] is shown in Fig. \[fig:pbh\_big\_complete\]. This contains the big bang singularity, the black hole singularity and two asymptotically flat Friedmann regions. There are two BHEHs and two POTHs, which cross in the black hole region. The regions below the two POTHs are past trapped but not precisely equivalent to a white hole since, unlike the situation in Fig. \[fig:schwarzschild\_kruskal\], there are no past null infinities and no past event horizons. Since the spacetime has no regular centre, it cannot contain a particle horizon, defined as the light cone which emanates from the regular centre just after the big bang. Figure \[fig:pbh\_big\] is obviously contained within Fig. \[fig:pbh\_big\_complete\] but the present numerical calculations do not determine what happens in the region $v<v_0$.

Discussion
==========

If we accept both the conventional conformal diagram for PBHs shown in Fig. \[fig:pbh\_clean\] and the new one shown in Fig. \[fig:pbh\_big\] or Fig. \[fig:pbh\_big\_complete\], we have two distinct PBH causal structures. The conventional diagram has a regular centre $r=0$ and a single null infinity, while the new one has no regular centre and two null infinities. This raises the issue of what determines the causal structure of PBH spacetimes. If we consider standard PBH formation from initial data with a regular centre, then the causal structure will be described by the conventional diagram. Another issue concerns the transition between the two diagrams. If this transition is governed by parameters which can be changed continuously, it would be natural to expect that there is a threshold spacetime which separates them.
We should note that the BHEH is null, while the cosmological apparent horizon is spacelike in the massless scalar field case (see Fig. \[fig:local\_perturbation\]). Hence, if $u_{\rm BHEH}$ is smaller than the value $u=u_{\rm CPH}$ of the cosmological particle horizon ($0$ in the present coordinates), the BHEH can always lie outside the cosmological apparent horizon if we take $v_{0}$ sufficiently small. In this case, $m_{\rm BHEH}(v_{0})>m_{\rm CAH}(v_{0})$ and hence the causal structure should be given by Fig. \[fig:pbh\_big\] or Fig. \[fig:pbh\_big\_complete\]. On the other hand, if $u_{\rm BHEH}>u_{\rm CPH}$, the causal structure is depicted by Fig. \[fig:pbh\_clean\] and the spacetime has a regular centre before the BHEH appears. Therefore, the transition spacetime corresponds to a PBH whose event horizon coincides with the cosmological particle horizon of the background cosmological solution. The causal structure of this critical spacetime is depicted in Fig. \[fig:pbh\_transition\]. The discussion of the mass variation of the BHEH can be understood in the more general spherically symmetric context from the counterparts of Eqs. (\[eq:m\_u\]) and (\[eq:m\_v\]) [@hayward1996; @hc2004a]. In our notation, these equations become $$\begin{aligned} m_{,u}&=&\frac{8 \pi r^{2}}{a^{2}}(T_{uv}r_{,u}-T_{uu}r_{,v}), \label{eq:m_u_general} \\ m_{,v}&=&\frac{8 \pi r^{2}}{a^{2}}(T_{uv}r_{,v}-T_{vv}r_{,u}). \label{eq:m_v_general}\end{aligned}$$ The combination of the two terms in parentheses on the right-hand side of Eq. (\[eq:m\_v\_general\]), related to null expansions, determines the time variation of the BHEH mass. In the case of a massless scalar field, the situation is simplified because $T_{uv}=0$ and $T_{vv}=(\Psi_{,v})^{2}\geq 0$. We therefore conclude that the black hole accretion vanishes when the event horizon coincides with the POTH with $r_{,u}=0$ and that it becomes negative when the event horizon is in a past trapped region.
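The competition between the two terms in parentheses in Eq. (\[eq:m\_v\_general\]) can be sketched numerically (the stress-energy components and gradients below are hypothetical inputs chosen for illustration, not simulation data):

```python
import math

def m_v(r, a, T_uv, T_vv, r_u, r_v):
    """Right-hand side of m_{,v} = (8 pi r^2 / a^2) * (T_uv r_{,v} - T_vv r_{,u})."""
    return 8.0 * math.pi * r**2 / a**2 * (T_uv * r_v - T_vv * r_u)

# Massless scalar field: T_uv = 0 and T_vv >= 0, so the sign of m_{,v}
# is opposite to that of r_{,u}; in a past trapped region (r_u, r_v > 0)
# the mass necessarily decreases.
assert m_v(1.0, 1.0, T_uv=0.0, T_vv=0.01, r_u=0.2, r_v=1.0) < 0

# More general matter with T_uv > 0: the mass can still grow in a past
# trapped region provided the first term dominates.
assert m_v(1.0, 1.0, T_uv=0.05, T_vv=0.01, r_u=0.2, r_v=1.0) > 0
```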
For general matter fields, it can be proved that the BHEH mass is non-decreasing if $r_{,v}>0$, $r_{,u}<0$ and the dominant energy condition hold on the event horizon (cf. Proposition 5 of [@hayward1996]). However, if these assumptions are not satisfied, the mass variation of the BHEH is non-trivial. If the dominant energy condition holds, then the conditions $T_{uv}\ge 0$ and $T_{vv}\ge 0$ are necessarily satisfied. In the general case with $T_{uv}>0$, the BHEH mass still increases even when it is on a POTH with $r_{,u}=0$. When the BHEH is in a past trapped region, the first and second terms in parentheses on the right-hand side of Eq. (\[eq:m\_v\_general\]) make positive and negative contributions, respectively, to the black hole mass change. Even when the BHEH is in a past trapped region, its mass can still increase if the first term is dominant. This suggests that the BHEH has to be well beyond the POTH, deep inside the past trapped region, in order to have suppressed accretion or mass decrease for general matter fields satisfying the dominant energy condition. On the other hand, substantial accretion could still occur for some other kinds of matter field. We have shown that the mass of the BHEH can decrease even if all possible energy and causality conditions hold. This suggests that the usual concept of event horizons is not appropriate for a proper discussion of black hole thermodynamics. This is why Hayward [@hayward1996; @hayward1993; @hayward1994; @hayward1998] introduced the concept of the FOTH (POTH) as a useful generalisation of the idea of a black hole (white hole) event horizon. As we saw in Section \[sec:mass\_horizon\], the idea is that all definitions should be quasi-local, and this is useful in proving black hole properties. For example, when the black hole (white hole) horizon is defined as a FOTH (POTH), one can prove the monotonicity of the area and mass in the spherically symmetric situation. From this point of view, Fig.
\[fig:pbh\_big\] describes the conversion of a white hole to a black hole by definition. Although the analysis in this paper has assumed spherical symmetry, it is likely that similar considerations apply more generally. If we consider slightly nonspherical PBH spacetimes, the strange properties discussed in this paper should still hold. However, when the system is very far from spherical symmetry, we need to re-analyse the Einstein equations. Although the conditions for suppressed accretion or mass reduction may change, we believe the appearance of these properties is robust. Since a spacetime which contains a PBH larger than the cosmological apparent horizon cannot have a regular centre, such PBHs could not form through classical processes from a Friedmann background. This implies that a PBH formed through classical processes always increases its mass and soon becomes much smaller than the cosmological horizon, even if it is comparable with the cosmological scale at its formation. On the other hand, if we take into account quantum processes or inflation preceding the massless-scalar-field-dominated stage, the formation of such PBHs may be allowed. Moreover, there is a possibility that the universe initially contained a past trapped region which was converted into a future trapped region, as indicated in Fig. \[fig:pbh\_big\_complete\]. This kind of PBH might be described as “ab initio” or “eternal”.

Conclusion
==========

We have investigated PBHs in a flat Friedmann universe with a massless scalar field which are larger than the cosmological apparent horizon. Through fully general relativistic numerical calculations, we have found that there are two trapping horizons which cross. This corresponds to the conversion of a white hole into a black hole and to an initial reduction in the mass of the black hole event horizon. We have seen that these unusual features are peculiar to PBHs larger than the cosmological apparent horizon.
The present analysis is confined to the spherically symmetric situation with a massless scalar field, so it is clearly important to examine how sensitive these peculiar features are to the assumed symmetry and type of matter field. In particular, it is interesting to study whether the condition for black hole mass reduction applies more generally. If so, this will be important for black hole physics and black hole thermodynamics. It is also interesting to study under what circumstances such a PBH can arise in the early universe, especially in the context of inflationary cosmology, and how this relates to the separate universe condition. To make such an extension, numerically or analytically, it would be very important to adopt appropriate definitions of the cosmological and black hole horizons; the trapping horizon framework should be suitable for the case in which the black hole is as large as the cosmological horizon. We will study these issues in a future paper. The formation of PBHs arising from the evolution of an effectively massless scalar field during inflation, and its implication for the formation of galaxies and supermassive black holes in galactic nuclei, has recently been discussed in Ref. [@rsk2001]. In this scenario, super-horizon PBHs may naturally arise due to closed domain walls whose size exceeds the cosmological horizon. Finally, it should be noted that the conversion of a wormhole to a black hole in a first-order phase transition was studied in Ref. [@kssm1981] in the context of bubble nucleation. The authors showed that if a wormhole is created in the transition from a false to a true vacuum, it necessarily collapses to a black hole. Since our system only contains a scalar field with no potential and therefore no phase transition, the situation is rather different. However, it is quite interesting that similar phenomena are seen in different systems. We would like to thank S. A. Hayward for helpful comments and for clarifying the trapping horizon notation.
TH is grateful to K. Nakao for helpful comments. TH was partly supported by JSPS. This work was partly supported by the Grant-in-Aid for the 21st Century COE “Center for Diversity and Universality in Physics” from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.

[99]{} T. Harada and B. J. Carr, Phys. Rev. D [**71**]{}, 104010 (2005). R. S. Hamadé and J. M. Stewart, Class. Quantum Grav. [**13**]{}, 497 (1996). T. Harada and B. J. Carr, Phys. Rev. D [**71**]{}, 104009 (2005). N. A. Zabotin and P. D. Nasel’skii, Sov. Astron. Lett. [**6**]{}, 7 (1980). E. Babichev, V. Dokuchaev and Yu. Eroshenko, Phys. Rev. Lett. [**93**]{}, 021102 (2004); J. Exp. Theor. Phys. [**100**]{}, 528 (2005). C. W. Misner and D. H. Sharp, Phys. Rev. [**136**]{}, B571 (1964). S. A. Hayward, Phys. Rev. D [**53**]{}, 1938 (1996). S. A. Hayward, gr-qc/9303006. R. M. Wald, [*General Relativity*]{} (University of Chicago Press, Chicago, 1984). S. W. Hawking, Phys. Rev. Lett. [**26**]{}, 1344 (1971). S. A. Hayward, Phys. Rev. D [**49**]{}, 831 (1994). S. A. Hayward, Class. Quantum Grav. [**15**]{}, 3147 (1998). S. G. Rubin, A. S. Sakharov and M. Yu. Khlopov, J. Exp. Theor. Phys. [**92**]{}, 921 (2001); M. Yu. Khlopov, S. G. Rubin and A. S. Sakharov, Astropart. Phys. [**23**]{}, 265 (2005). H. Kodama, M. Sasaki, K. Sato and K. Maeda, Prog. Theor. Phys. [**66**]{}, 2052 (1981).

![\[fig:flat\_friedmann\] The conformal diagram for a flat Friedmann spacetime with a massless scalar field. The cosmological apparent horizon, which is a past outer trapping horizon (POTH), is spacelike and outside the cosmological particle horizon (CPH).](1)

![\[fig:pbh\_clean\] The conformal diagram for a PBH smaller than the Friedmann cosmological particle horizon.](2)

![image](7)

![\[fig:pbh\_big\] Conformal diagram showing the possible causal structure for a PBH larger than the Friedmann cosmological apparent horizon.
There are two distinct future null infinities and a past trapped (white hole) region is converted into a future trapped (black hole) region. The region with $u<u_{\rm m}$ is the usual flat Friedmann spacetime but the region $v<v_{0}$ is not calculated. See text for details.](9)

![\[fig:schwarzschild\_kruskal\] Conformal diagram of the maximally extended Schwarzschild spacetime, i.e., the Kruskal diagram. The black hole event horizon (BHEH) coincides with the future outer trapping horizon and the white hole event horizon (WHEH) coincides with the past outer trapping horizon.](10)

![\[fig:pbh\_big\_complete\] Conformal diagram of a possible causal structure of the maximally extended spacetime for a PBH larger than the Friedmann cosmological apparent horizon. There are two distinct future null infinities, corresponding to the conversion of a past trapped (white hole) region into a future trapped (black hole) region.](11)

![\[fig:pbh\_transition\] Conformal diagram of the critical PBH spacetime, where the black hole event horizon (BHEH) coincides with the cosmological particle horizon (CPH).](12)

[^1]: Electronic address: harada@scphys.kyoto-u.ac.jp

[^2]: Electronic address: B.J.Carr@qmul.ac.uk
--- abstract: | Simulation and bisimulation metrics for stochastic systems provide a quantitative generalization of the classical simulation and bisimulation relations. These metrics capture the similarity of states with respect to quantitative specifications written in the quantitative $\mu$-calculus and related probabilistic logics. We first show that the metrics provide a bound for the difference in [*long-run average*]{} and [*discounted average*]{} behavior across states, indicating that the metrics can be used both in system verification and in performance evaluation. For turn-based games and MDPs, we provide a polynomial-time algorithm for the computation of the one-step metric distance between states. The algorithm is based on linear programming; it improves on the previously known exponential-time algorithm based on a reduction to the theory of reals. We then present PSPACE algorithms for both the decision problem and the problem of approximating the metric distance between two states, matching the best known algorithms for Markov chains. For the bisimulation kernel of the metric, our algorithm works in time $\calo(n^4)$ for both turn-based games and MDPs, improving on the previously best known $\calo(n^9\cdot\log(n))$ time algorithm for MDPs. For a concurrent game $G$, we show that computing the exact distance between states is at least as hard as computing the value of concurrent reachability games and the square-root-sum problem in computational geometry. We show that checking whether the metric distance is bounded by a rational $r$ can be done via a reduction to the theory of real closed fields, involving a formula with three quantifier alternations, yielding $\calo(|G|^{\calo(|G|^5)})$ time complexity, improving the previously known reduction, which yielded $\calo(|G|^{\calo(|G|^7)})$ time complexity. These algorithms can be iterated to approximate the metrics using binary search.
address: - '[a]{}IST Austria (Institute of Science and Technology Austria)' - '[b]{}Computer Science Department, University of California, Santa Cruz' - '[c]{}Department of Computer Science, University of California, Los Angeles' author: - 'Krishnendu Chatterjee$^{a}$' - 'Luca de Alfaro$^{b}$' - 'Rupak Majumdar$^{c}$' - 'Vishwanath Raman$^{d}$' bibliography: - 'dvlab.bib' title: Algorithms for Game Metrics ---

Acknowledgments. {#acknowledgments. .unnumbered}
----------------

The first, second and fourth authors were supported in part by the National Science Foundation grants CNS-0720884 and CCR-0132780. The third author was supported in part by the National Science Foundation grants CCF-0427202 and CCF-0546170. We would like to thank the reviewers for their detailed comments that helped us improve the paper.
BI-TP 2007/35\ arXiv:0711.1743\ Y. Burnier, M. Laine, M. Vepsäläinen [*Faculty of Physics, University of Bielefeld, D-33501 Bielefeld, Germany\ *]{} We elaborate on the fact that quarkonium in hot QCD should not be thought of as a stationary bound state in a temperature-dependent real potential, but as a short-lived transient, with an exponentially decaying wave function. The reason is the existence of an imaginary part in the pertinent static potential, signalling the “disappearance”, due to inelastic scatterings with hard particles in the plasma, of the off-shell gluons that bind the quarks together. By solving the corresponding Schrödinger equation, we estimate numerically the near-threshold spectral functions in scalar, pseudoscalar, vector and axial vector channels, as a function of the temperature and of the heavy quark mass. In particular, we point out a subtlety in the determination of the scalar channel spectral function and, resolving it to the best of our understanding, suggest that at least in the bottomonium case, a resonance peak [*can*]{} be observed also in the scalar channel, even though it is strongly suppressed with respect to the peak in the vector channel. Finally, we plot the physical dilepton production rate, stressing that despite the eventual disappearance of the resonance peak from the corresponding spectral function, the quarkonium contribution to the dilepton rate becomes [*more pronounced*]{} with increasing temperature, because of the yield from free heavy quarks. December 2007 Introduction ============ Assuming the existence of a thermalized medium, with a temperature $T$, and of heavy quarks, with a mass $M \gg T$, there is a finite probability, given by the Boltzmann factor $\exp(-2 M/T)$, that an on-shell quark and antiquark are generated through thermal fluctuations. 
They could then annihilate, creating an off-shell photon, which may escape from the thermal system, and subsequently decay into a dilepton pair (for instance, $e^-e^+$ or $\mu^-\mu^+$). The characteristics of the energy distribution of these pairs offer an indirect probe on the strongly interacting dynamics taking place within the thermalized system. As a concrete application, properties of lepton pairs can be observed in heavy ion collision experiments (see, e.g., refs. [@exp]), and may serve as an indication of whether a thermalized state with a temperature above the deconfinement transition was momentarily reached during the evolution [@ms]. Given that various properties of the quarkonium system can be understood in great detail at zero temperature [@quarkonium], it could be assumed that describing quantitatively the heavy quark-antiquark system in a thermalized medium is a relatively simple task. After all, QCD is asymptotically free, so the effective coupling should decrease with the temperature, and ultimately confinement is lost as well. Somewhat surprisingly, this expectation appears to be overly optimistic. In fact, all standard approximation methods develop further systematic uncertainties at $T > 0$. For instance, direct lattice QCD reconstructions of the quarkonium [*spectral function*]{} [@lattold; @latt1; @latt2], which is a quantity determining the dilepton production rate, develop the new problem that an analytic continuation is needed from data collected on a short Euclidean time interval, to the observable defined in Minkowskian spacetime. Another popular class of approaches, so-called potential models [@models; @mp3], suffers from the proliferation of many independent non-perturbative definitions of a “static potential” which could be measured on the lattice [@jp; @lattpot] and inserted into a Schrödinger equation. 
A recently introduced method, the determination of the corresponding observable in strongly coupled $\mathcal{N} = 4$ Super-Yang-Mills theory [@ads], also contains unknown systematic errors from the point of view of QCD, which cannot be reduced by increasing the temperature, because the QCD coupling soon becomes weak [@gE2]. The method employed in this paper is resummed weak-coupling perturbation theory. It again suffers from novel difficulties at finite temperatures: curing infrared divergences necessitates carrying out complicated resummations [@dr; @htl], and even though a weak-coupling expansion in the QCD coupling constant $g$ can subsequently be defined, it has a strange structure, with relative corrections suppressed only by odd powers of $g$ [@jk; @chm], by logarithms like $g^n\ln(1/g)$ [@tt; @mdlog], or by powers of $g$ multiplied by non-perturbative coefficients [@linde]–[@nspt_mass]. Moreover, even if a number of coefficients were known, the convergence of the series could be slow [@bn] (see, however, refs. [@conv; @gE2]). Given all these problems, a suitable practical approach at the present date might be to compute the quarkonium spectral function and the dilepton production rate with many different methods, possessing complementary systematic errors, and to look for a consistent pattern, which could then also represent the situation in QCD. It is in this spirit that the purpose of the present paper is to pursue the side of resummed perturbative computations. The resummed perturbative approach to heavy quarkonium in hot QCD was initiated in refs. [@static]–[@imV], of which the present paper is a direct continuation. In particular, we expand and improve on the analysis of ref. [@og2]. We consider, first of all, the same observable as in ref. [@og2] (quarkonium spectral function in the vector channel), but discuss more extensively the dependence of the result on the temperature and on the heavy quark mass. 
Second, we carry out a new analysis for the spectral function in the scalar channel. This turns out to require more advanced numerical techniques than those used in ref. [@og2]. We also relate the pseudoscalar and axial vector spectral functions to the vector and scalar spectral functions. Finally, we elaborate on the physics implications of the results in more detail than before, both conceptually, i.e. with regard to the picture they suggest for the quarkonium system in a deconfined environment, and from the practical point of view, i.e. with regard to the dilepton production rate.

General framework
=================

We start by specifying somewhat more quantitatively the main ideas and equations of the resummed perturbative approach. A detailed derivation follows in \[se:Schr\], \[se:rho\], while a reader only interested in the numerical results could skip directly to \[se:num\] after the present section. Let $\hat\psi$ be a generic heavy quark field operator in the Heisenberg picture. The basic correlation function we consider is of the form $$C^V_{>}(t;\vec{r},\vec{r}\,') \equiv \int\!{\rm d}^3\vec{x}\, \Big\langle \hat{\bar\psi}\Big(t,\vec{x}+\frac{\vec{r}}{2}\Big)\, \gamma^\mu\, W\, \hat\psi\Big(t,\vec{x}-\frac{\vec{r}}{2}\Big)\; \hat{\bar\psi}\Big(0,-\frac{\vec{r}\,'}{2}\Big)\, \gamma_\mu\, W'\, \hat\psi\Big(0,\frac{\vec{r}\,'}{2}\Big) \Big\rangle\,,$$ where $W$, $W'$ are Wilson lines connecting the adjacent operators, inserted in order to keep the Green’s function gauge-invariant; the metric is $\eta_{\mu\nu} = \mathop{\mbox{diag}}(+,-,-,-)$; and the expectation value refers to $\langle...\rangle\equiv \mathcal{Z}^{-1} \tr [\exp(-\hat H/T)(...)]$, where $\mathcal{Z}$ is the partition function, $\hat H$ is the QCD Hamiltonian, and $T$ is the temperature. The superscript in $C^V_{>}$ refers to the vector channel; the subscript refers to the time-ordering.
We also consider scalar, pseudoscalar and axial vector correlators below; their precise definitions are given in \[se:Schr\].[^1] The significance of the Green’s function defined above is that if we take the limit $\vec{r},\vec{r}\,'\to\vec{0}$, and subsequently Fourier transform with respect to the time $t$, then we obtain a function which is trivially related to the heavy quarkonium spectral function, $\rho^V(\omega)$, in the vector channel: $$\rho^V(\omega) = \frac{1}{2}\Big( 1 - e^{-\omega/T}\Big) \int_{-\infty}^{\infty}\!{\rm d}t\; e^{i \omega t}\, C_{>}^V(t;\vec{0},\vec{0})\,.$$ This quantity is physically important, given that the production rate of $\mu^-\mu^+$ pairs (with a vanishing total spatial momentum $\vec{0} = \vec{q}_{\mu^-} + \vec{q}_{\mu^+}$ and a non-vanishing total energy $\omega = E_{\mu^-} + E_{\mu^+}$) from a system at a temperature $T$ is directly proportional to $\rho^V(\omega)$ [@dilepton]: $$\frac{{\rm d}N_{\mu^-\mu^+}}{{\rm d}^4x\, {\rm d}\omega} = -\frac{e^4 c^2}{3 \pi^3 \omega^2} \Big( 1 + \frac{2 m_\mu^2}{\omega^2} \Big) \Big( 1 - \frac{4 m_\mu^2}{\omega^2} \Big)^{\!\fr12}\, {n_\rmi{B}}(\omega)\, \rho^V(\omega)\,,$$ where we assumed $\omega \ge 2 m_\mu$; $e$ is the electromagnetic coupling; $c\in\{\fr23,-\fr13\}$ is the electric charge of the heavy quark; and ${n_\rmi{B}}(\omega) \equiv 1/[\exp({\omega}/{T}) - 1]$ is the Bose distribution function. Now, a systematic perturbative determination of this Green’s function, and of the corresponding spectral function, is quite difficult for energies $\omega$ close to the quark-antiquark threshold, $\omega\sim 2 M$. The reason is that in this regime infinitely many graphs, particularly so-called ladders, contribute at the same order. A further problem is that at finite temperatures, the rungs of the ladders, containing gluons, need to be dressed by thermal corrections. A way to resum these infinitely many dressed contributions is not to compute the correlator directly, but rather to find a partial differential equation satisfied by this correlator, and then to solve this equation numerically. The partial differential equation in question is just the Schrödinger equation.
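As an aside, the generic effect of an exponentially decaying correlator on the spectral function can be illustrated with a toy transform: for a hypothetical single-resonance model $C(t)=e^{-i\omega_0 t-\Gamma|t|}$ (not the actual solution discussed in this paper, and with the thermal factor $\fr12(1-e^{-\omega/T})$ dropped), the Fourier transform above is a Lorentzian of width $\Gamma$ centred at $\omega_0$. A minimal numerical sketch:

```python
import cmath

def toy_spectral(omega, omega0, gamma, t_max=60.0, n=24000):
    """Fourier transform of C(t) = exp(-i*omega0*t - gamma*|t|), trapezoidal rule.
    The exact result is the Lorentzian 2*gamma/((omega - omega0)^2 + gamma^2)."""
    h = 2.0 * t_max / n
    s = 0.0 + 0.0j
    for i in range(n + 1):
        t = -t_max + i * h
        w = 0.5 if i in (0, n) else 1.0    # trapezoidal end-point weights
        s += w * cmath.exp(1j * (omega - omega0) * t - gamma * abs(t))
    return (s * h).real

omega0, gamma = 3.0, 0.2                   # illustrative position and width
peak = toy_spectral(omega0, omega0, gamma)
assert abs(peak - 2.0 / gamma) < 1e-2      # height 2/Gamma at omega = omega0
half = toy_spectral(omega0 + gamma, omega0, gamma)
assert abs(half - peak / 2.0) < 1e-2       # half height at omega = omega0 +/- Gamma
```

A non-zero decay rate $\Gamma$ thus replaces a sharp spectral line by a peak of finite width, which is the qualitative fate of the quarkonium resonance below.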
Indeed, it is for the sake of being able to write a Schrödinger equation that we have introduced $\vec{r},\vec{r}\,'\neq \vec{0}$ in the correlator. To be more precise, let us consider $C_{>}^V$ in the limit that the heavy quark mass $M$ is very large. Then, as we will see, $C_{>}^V$ obeys $$\Big\{ i \partial_t - \Big[\, 2 M + V_{>}(t,r) - \frac{\nabla_{\vec{r}}^2}{M}\, \Big] \Big\}\, C_{>}^V(t;\vec{r},\vec{r}\,') = 0\,,$$ with the initial condition $$C_{>}^V(0;\vec{r},\vec{r}\,') = - 6 {N_{\rm c}}\, \delta^{(3)}(\vec{r}-\vec{r}\,') + {{\mathcal{O}}}\Big( \frac{1}{M} \Big)\,,$$ where ${N_{\rm c}}=3$. The terms specified explicitly here result from a tree-level computation; in contrast, the potential denoted by $V_{>}(t,r)$ originates only at 1-loop order. It can be defined as the coefficient scaling as ${{\mathcal{O}}}(M^0)$, after acting on $C_{>}^V(t;\vec{r},\vec{r}\,')$ with the time derivative $i\partial_t$. The potential $V_{>}(t,r)$ depends, in general, on the temperature; we assume that $T$ is parametrically low compared with the heavy quark mass, $T\sim (g^2 ... g) M$ (cf. \[se:power\]). Now, it can be argued that in order to be parametrically consistent, the static potential should be evaluated in the limit $t\gg r$ (cf. \[se:power\]). Then it takes a simple form: in dimensional regularization (cf. Eqs. (4.3), (4.4) of ref. [@static]), $$\lim_{t\to\infty} V_{>}(t,r) = - \frac{g^2 C_F}{4\pi} \Big[\, {m_\rmi{D}}+ \frac{e^{-{m_\rmi{D}}r}}{r}\, \Big] - \frac{i\, g^2 T C_F}{4 \pi}\, \phi({m_\rmi{D}}r) + {{\mathcal{O}}}(g^4)\,,$$ where $C_F\equiv ({N_{\rm c}}^2-1)/2{N_{\rm c}}$; ${m_\rmi{D}}$ is the Debye mass parameter; and the function $$\phi(x) \equiv 2 \int_0^{\infty} \frac{{\rm d}z\, z}{(z^2+1)^2} \Big[\, 1 - \frac{\sin(z x)}{z x}\, \Big]$$ is finite and strictly increasing, with the limiting values $\phi(0) = 0$, $\phi(\infty) = 1$. The first term corresponds to twice a thermal mass correction for the heavy quarks. The second term is a standard $r$-dependent Debye-screened potential. The third term represents an imaginary part: its physics is that almost static (off-shell) gluons may disappear due to inelastic scatterings with hard particles in the plasma. This is the phenomenon of Landau-damping, well-known in plasma physics.
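The function $\phi(x)$ can be evaluated by elementary quadrature; the following sketch (Simpson's rule, with an ad hoc truncation of the $z$ integral at $z=100$) confirms the quoted limits $\phi(0)=0$ and $\phi(\infty)=1$ and the monotone growth:

```python
import math

def phi(x, z_max=100.0, n=20000):
    """phi(x) = 2 * int_0^inf dz z/(z^2+1)^2 * [1 - sin(z x)/(z x)], Simpson's rule.
    Truncation at z_max is harmless: the tail of the integrand falls off like 1/z^3."""
    def integrand(z):
        if z == 0.0:
            return 0.0
        zx = z * x
        sinc = 1.0 if zx == 0.0 else math.sin(zx) / zx
        return 2.0 * z / (z * z + 1.0) ** 2 * (1.0 - sinc)
    h = z_max / n
    s = integrand(0.0) + integrand(z_max)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return s * h / 3.0

assert abs(phi(0.0)) < 1e-12                  # no damping at vanishing separation
assert phi(0.5) < phi(1.0) < phi(5.0) < 1.0   # strictly increasing towards unity
assert abs(phi(50.0) - 1.0) < 0.05            # phi(x) -> 1 for m_D r >> 1
```

Physically, the imaginary part of the potential therefore switches off at short quark-antiquark separations and saturates once the separation exceeds the Debye length.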
As a consequence of the imaginary part, the solution of the Schrödinger equation does not lead to a stationary wave function: rather, the bound state decays exponentially with time, representing a short-lived transient. In the following two sections, we discuss the origin of the formulae presented here, and their practical evaluation, in some more detail. We also extend the discussion to the other channels. We return to the numerical solution of the Schrödinger equation in \[se:num\].

Schrödinger equation and initial conditions
===========================================

Our strategy for the derivation of the Schrödinger equation satisfied by the two-point correlation functions in various channels will be quite straightforward and “modest” here[^2]: we first compute the correlation functions in tree-level perturbation theory, and then expand in inverse powers of the heavy quark mass. At this point we can identify the Schrödinger equation and the initial condition for its solution. Subsequently, radiative corrections are expected to multiplicatively correct the terms that already appear at tree-level, and to add other terms which are allowed by symmetries, even if they would not appear at tree-level; the most important of these is the static potential. As long as there is a hierarchy between the different physical scales relevant for the problem (cf. \[se:power\]), and we are only sensitive to perturbative scales, general principles suggest that the system should remain local in the presence of radiative corrections, and that a truncation to a certain order is possible.

Power counting
--------------

Let us consider the parametric orders of magnitude of the various terms in the Schrödinger equation, given the form of the static potential. We recall, first of all, that the term $2 M$ plays no role, since it can always be eliminated through a trivial phase factor.
Around the quarkonium peak, the time derivative (or energy) is then of the order of the kinetic terms, i.e. $\partial_t \sim \partial_r^2 / M$. If we, furthermore, equate the kinetic energy with the Coulomb potential energy (assuming ${m_\rmi{D}}r {\mathop{\lsi}}1$, cf. below), we are led to $$\partial_r \sim \frac{1}{r} \sim g^2 M\,, \qquad \partial_t \sim \frac{\partial_r^2}{M} \sim g^4 M\,.$$ An essential question is now to decide how the temperature, $T$, is to be compared with these scales. Let us assume, first of all, that $$T \sim g^2 M\,.$$ Then ${m_\rmi{D}}r \sim g T r \sim g \ll 1$, and Debye screening plays essentially no role yet: we may assume the bound state to exist. In this limit, $${\mathop{\mbox{Re}}}\, V_{>} \sim \frac{g^2}{r} \sim g^4 M\,, \qquad {\mathop{\mbox{Im}}}\, V_{>} \sim g^2 T\, ({m_\rmi{D}}r)^2 \sim g^6 M\,,$$ and the imaginary part can indeed be neglected. On the other hand, let us increase the temperature to $$T \sim g M\,.$$ Then Debye screening plays an essential role, ${m_\rmi{D}}r \sim g T r \sim 1$, and we may assume that the bound state has melted: indeed, in this limit, $${\mathop{\mbox{Re}}}\, V_{>} \sim \frac{g^2}{r} \sim g^4 M\,, \qquad {\mathop{\mbox{Im}}}\, V_{>} \sim g^2 T \sim g^3 M\,,$$ so that the imaginary part of the potential, or the width of the state, dominates over the real part of the potential, or the binding energy. To summarise, the interesting temperature range is $T \sim (g^2 ... g) M$. In principle, parametrically consistent analyses in the two limiting cases may require different methods. In practice, we would like to have phenomenological access to the whole range; therefore, in the present paper we work (implicitly) in the situation where ${\mathop{\mbox{Re}}}V_{>} \sim {\mathop{\mbox{Im}}}V_{>}$, setting us somewhere in the middle of the range. For further reference, let us point out that in this situation, $r \nabla A \sim r {m_\rmi{D}}A {\mathop{\lsi}}A$, where $A$ is some gauge field component: the variation of the infrared gauge fields is parametrically small on the length scales set by the bound state radius.
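The parametric ordering above can be made explicit by inserting numbers; here $g = 0.1$ is an illustrative value chosen only to expose the hierarchy (the physical coupling is of course larger):

```python
# Parametric estimates from the power counting, in units where M = 1.
g = 0.1  # illustrative weak coupling, chosen to make the hierarchy visible

# Bound-state scales: 1/r ~ g^2 M (Bohr radius), binding energy ~ g^4 M.
inv_r, binding = g**2, g**4
assert inv_r > binding

# Regime T ~ g^2 M: m_D r ~ g << 1, and Im V is a g^2 correction to Re V.
re_V, im_V = g**4, g**6
assert abs(im_V / re_V - g**2) < 1e-12 and im_V < re_V

# Regime T ~ g M: m_D r ~ 1, and the width dominates the binding energy.
re_V, im_V = g**4, g**3
assert abs(im_V / re_V - 1.0 / g) < 1e-9 and im_V > re_V
```

At any fixed small $g$, the two regimes therefore sit on opposite sides of the melting point, with the ratio ${\mathop{\mbox{Im}}}V_>/{\mathop{\mbox{Re}}}V_>$ sweeping from $g^2$ to $1/g$ as $T$ increases.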
Vector channel -------------- Denoting V\^(x;) |(t,+) \^ W (t,-) , where $x\equiv (t,\vec{x})$, the vector channel correlator we consider is in general of the type C\_[&gt;]{}\^[V]{}(x;,’) = V\^(x;) V\_(0;-’) . For simplicity, we have left out hats from the fields in , as is appropriate once we go over to the path integral formulation in Euclidean spacetime. Now, even though we will carry out the computation of within QCD below, it will be useful to rewrite the operators considered in the language of NRQCD [@nrqcd0] (for a review, see ref. [@nrqcd]), because this allows one to see immediately their scaling with the heavy quark mass $M$, and to relate various operators to each other in the large-$M$ limit. Following ref. [@fwt], we can start by carrying out a Foldy-Wouthuysen transformation, ( ) , | |( - ) , where $\overrightarrow{\!D}_{\!\!j} \,\equiv\, \overrightarrow{\!\partial}_{\!\!j}\! - i g A_j$, $\overleftarrow{\!D}_{\!\!j} \,\equiv\, \overleftarrow{\!\partial}_{\!\!j}\! + i g A_j$, and we assume a summation over spatial indices, $j=1,2,3$. Afterwards, we go over to the non-relativistic two-component notation by writing ( [c]{}\ ) , | ( \^, - \^) , where we already assumed a representation for the Dirac matrices with \^0 ( [cc]{} & 0\ 0 & - ) , \^k ( [cc]{} 0 & \_k\ -\_k & 0 ) , k = 1,2,3 . Here $\sigma_k$ are the Pauli matrices. Furthermore, it is useful to note that in NRQCD, the actions for $\phi$ and $\theta$ are of first order in time derivatives; consequently, one of the degrees of freedom propagates strictly forward in time, the other strictly backward in time, and a non-zero mesonic correlator at $t\neq 0$ is only obtained from structures like $\langle \phi^\dagger(...)\theta \; \theta^\dagger(...)\phi \rangle$. We now find that for $V^0$, the leading term with the desired structure is ${{\mathcal{O}}}(1/M)$ (this term is also a total derivative in the limit $\vec{r}\to\vec{0}$). 
Therefore, the correlator $C_{>}^{V}$ is dominated by the spatial components $V^k$. At ${{\mathcal{O}}}(M^0)$, these become V\^k(x;) = \^( t,+ ) \_k W ( t, - ) + \^( t,+ ) \_k W ( t, - ) . To the extent that interactions between the quark and antiquark are spin-independent (this is violated only at ${{\mathcal{O}}}(1/M)$), the Pauli-matrices play a trivial role in the two-point correlator made out of these operators, yielding eventually $\tr[\sigma_k \sigma_l]$, if $V^k$ and $V^l$ are being correlated. We now proceed to compute the correlator in at tree-level. We start in Euclidean spacetime[^3] and after a spatial Fourier transform (for the moment we keep, for generality, the spatial momentum non-zero, $\vec{q}\neq \vec{0}$, unlike in ), whereby & & C\_E\^[V]{}(,;,’) & & \^3 e\^[-i ]{} (, + ) \^ (,- ) (0,-) \_(0,+)\ & = & - [N\_[c]{}]{}\^3 e\^[-i ]{} [\_[-0.9ex]{}]{} e\^[i (p\_- s\_)+ i (-) + i (+) ]{} & = & - 8 [N\_[c]{}]{} [\_[-0.9ex]{}]{} (2)\^3 \^[(3)]{}( - -) e\^[i (p\_- s\_) + i (2 + ) ]{} & = & - 4 [N\_[c]{}]{} T\^2 \_[p\_, s\_]{} e\^[i (p\_- s\_) + i (2 + ) ]{} , where $\tilde p_{\rmi{0f}}= 2\pi T(n+\fr12) - i \mu$, $n \in {{\mathbb{Z}}}$, denotes fermionic Matsubara frequencies ($\mu$ is the quark chemical potential), and we have introduced the notation E\_ . The Matsubara sums can be carried out, by making use of T\_[p\_]{} & = & ,\ T\_[p\_]{} & = & - , valid for $0<\tau< \beta$. This yields && C\_E\^V(,;) = - [N\_[c]{}]{} e\^[i () ]{} && { [n\_]{}(E\_+) [n\_]{}(E\_-) e\^[(-)(E\_+E\_)]{} + && + [n\_]{}(E\_+) [n\_]{}(E\_+) e\^[(-)E\_+E\_+]{} + && + [n\_]{}(E\_-) [n\_]{}(E\_-) e\^[(-)E\_+E\_-]{} + && + [n\_]{}(E\_-) [n\_]{}(E\_+) e\^[(E\_+E\_)]{} } . In order to simplify the expression somewhat, we note that once we go over into the spectral function[^4], and restrict to frequencies (energies) around the quark-antiquark threshold, $|\omega - 2M| \ll M$, then only the first of the structures in contributes. 
Second, close enough to the threshold, the $\delta$-function expressing energy conservation, $\delta(\omega - E_\vec{p} - E_\vec{p+q})$, forces the loop momentum $\vec{p}$ to be small, $|\vec{p}| \ll M$. We also assume the external momentum to be small, $|\vec{q}| \ll M$. Under these circumstances, we can expand $E_\vec{p} \approx M + {\vec{p}^2}/{2M}$, $E_{\vec{p}+\vec{q}} \approx M + {(\vec{p}+\vec{q})^2}/{2M}$, and the relevant part of $C_E^{V}(\tau,\vec{q};\vec{r},\vec{r}')$ becomes && C\_E\^[V]{}(,;,’) & & - 6 [N\_[c]{}]{} e\^[i (2 + ) -]{} . Here we have also omitted effects of relative order $\exp(-[M\pm\mu]/T)$, by keeping only the leading terms in the exponentials. We note that after these simplifications, all dependence on the temperature and on the chemical potential has disappeared from the tree-level result. The real-time object we are ultimately interested in is the analytic continuation C\_[&gt;]{}\^[V]{}(t,;) = C\_E\^[V]{}(it,;) . Noting from that -i \_ + , the dependence on $\vec{r}$ and $t$ in the exponential amounts to satisfying the Schrödinger equation { i \_t - } C\_[&gt;]{}\^[V]{}(t,;) = 0 . The initial condition for the solution is obtained by setting $t=0$ in (after use of ): we find C\_[&gt;]{}\^[V]{}(0,;) & = & - 6 [N\_[c]{}]{} e\^[i (2 + ) ]{} + ( ) & = & - 6 [N\_[c]{}]{} \^[(3)]{}() + ( ) . These results justify the Schrödinger equation and its initial condition for the vector channel in the free limit. For future reference, let us also compute $\rho^V(\omega)$ explicitly (general expressions for free spectral functions can be found in refs. [@free]). (after $\tau\to it$) already shows the solution of , , and we can then directly remove the point-splitting, setting $\vec{r,r'} = \vec{0}$. Shifting $\vec{p}\to \vec{p} - \vec{q}/2$; taking the steps in footnote \[rhorec\]; and ignoring exponentially small terms and terms suppressed by ${{\mathcal{O}}}(1/M^2)$, we find \^V() & & - [6[N\_[c]{}]{}]{} ( ’ - ) = - (’) M\^[32]{} (’)\^12 , where $\omega' \equiv \omega - 2M - {\vec{q}^2}/{4M}$. 
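The square-root threshold behaviour of the free result just quoted, $\rho^V \propto \theta(\omega')\, M^{3/2} (\omega')^{1/2}$, can be sketched numerically as follows; the overall normalization is left out (we have not tracked the prefactor here), and the mass value is an arbitrary assumption.

```python
import math

# Free vector-channel spectral function near threshold, up to an overall
# constant: rho^V(omega) ∝ theta(omega') M^(3/2) sqrt(omega'), with
# omega' = omega - 2M at vanishing external momentum q.
def rho_V_free(omega, M):
    omega_p = omega - 2.0 * M          # distance from the threshold
    if omega_p <= 0.0:
        return 0.0                     # theta-function: no support below 2M
    return M**1.5 * math.sqrt(omega_p)

M = 4.8                                # toy mass value (an assumption)
eps = 1e-2
# square-root law: quadrupling the distance from threshold doubles rho
ratio = rho_V_free(2*M + 4*eps, M) / rho_V_free(2*M + eps, M)
print(round(ratio, 6))                 # 2.0
print(rho_V_free(2*M - eps, M))        # 0.0
```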
In the following, we will often for simplicity restrict to $\vec{q} = \vec{0}$ (like already in ), but we can now observe from , that the main effect of a non-zero $\vec{q} \neq \vec{0}$ is simply to shift the threshold location $2M$ by the center-of-mass kinetic energy $\vec{q}^2/4 M$. The analysis so far has been at tree-level. As argued in refs. [@static; @og2], however, the essential (temperature and $\omega$-dependent) 1-loop corrections can be taken into account simply by inserting the potential $V_{>}(\infty,r)$, given in , into . There are of course also other loop corrections, related for instance to the renormalization and definition of $M$ as a pole mass, and the overall normalization of the non-relativistic vector current in ; these corrections are in fact known to high order at zero temperature [@cs; @current],[^5] but are not essential at our current resolution, so we mostly omit them here. Scalar channel -------------- Denoting S(x;) |(t,+) W (t,-) , the scalar channel correlator we consider is of the type C\_[&gt;]{}\^[S]{}(x;,’) = S (x;) S (0;-’) . The correlator $C_{>}^{S}(x;\vec{0},\vec{0})$ is not directly physical[^6], but it does have the appropriate quantum numbers to give a contribution to the three-particle production rate $q\bar q\to \mu^-\mu^+\gamma$, i.e. a lepton–antilepton pair together with an on-shell photon. Moreover, it is frequently measured on the lattice, which will be our most direct reference point. We will ignore the issue of overall (re)normalization in the following, and concentrate on the shape of the spectral function (meaning its $\omega$-dependence in frequency space, or its $t$-dependence in coordinate space). It is again helpful to write $S (x;\vec{r})$ with the NRQCD notation. The steps in , indicate that at ${{\mathcal{O}}}(M^0)$, $S = \theta^\dagger \theta - \phi^\dagger \phi$, which does not lead to any non-trivial $t$-dependence. The leading non-trivial term reads S(x;) & = & ... 
+ + ( ) , where $ \overleftrightarrow{\!D}_{\!\!j} \; \equiv \; W \! \overrightarrow{\!D}_{\!\!j}\! ( t,\vec{x} - {\vec{r}} / {2})\; - \overleftarrow{\!D}_{\!\!j}\! ( t,\vec{x} + {\vec{r}} / {2})\, W $. To simplify a bit, let us for now assume that the gauge fields are perturbative, so that the Wilson line can be approximated by the first term in its expansion; and that their variation is slow on the scale set by $|\vec{r}|$, as argued in \[se:power\] (in any case, $|\vec{r}|$ is taken to be zero at the end). Then we may write $ W \approx {{\mathbbm{1}}}+ i g \vec{r} \cdot \vec{A}(t,\vec{x}) $, $ \overrightarrow{\!D}_{\!\!j}\! ( t,\vec{x} - {\vec{r}} / {2})\! \;\approx\; \overrightarrow{\!\partial}_{\!\!j}\! - i g A_j(t,\vec{x}) + ig \vec{r}\cdot \nabla A_j(t,\vec{x}) / 2 $, $ \overleftarrow{\!D}_{\!\!j}\! ( t,\vec{x} + {\vec{r}} / {2})\! \;\approx\; \overleftarrow{\!\partial}_{\!\!j}\! + i g A_j(t,\vec{x}) + ig \vec{r}\cdot \nabla A_j(t,\vec{x}) / 2 $. We now note that \^( t,+ ) \_[j]{} ( t, - ) - 2 { \^( t,+ ) W ( t, - ) } . Therefore, to leading order in the large-$M$ expansion, and at least to some order in the weak-coupling expansion, we can identify S(x;) - \_ (x;) , where the components of $\vec{V}$ are given in . The relation between the vector and scalar channel correlators can be pushed one step further, if we consider directly the correlators, and . To leading order in the large-$M$ expansion, and imply that C\_[&gt;]{}\^[S]{}(x;,’) & = & S (x;) S (0;-’) & & \_[k,l=1]{}\^[3]{} (\_)\_k(\_)\_l V\_k (x;) V\_l (0;-’) & = & \_[k,l=1]{}\^[3]{} (\_)\_k(\_)\_l \_[kl]{} \_[j=1]{}\^[3]{} V\_j (x;) V\_j (0;-’) & = & - \_\_[’]{} \_[j=1]{}\^[3]{} V\^j (x;) V\_j (0;-’) & = & - \_\_[’]{} C\_[&gt;]{}\^[V]{}(x;,’) . We will be making use of this important relation later on. 
We now return to full QCD, and outline the computation of the 2-point scalar density correlator in at tree level, again taking a spatial Fourier transform and, for generality, keeping track of a non-zero spatial momentum $\vec{q}\neq\vec{0}$ for the moment. Then, & & C\_E\^S(,;) & & \^3 e\^[-i ]{} (, + ) (, - ) (0,- ) (0,+ )\ & = & - [N\_[c]{}]{}\^3 e\^[-i ]{} [\_[-0.9ex]{}]{} e\^[i (p\_- s\_)+ i (-) + i (+) ]{} & = & - 4 [N\_[c]{}]{} [\_[-0.9ex]{}]{} (2)\^3 \^[(3)]{}( - -) e\^[i (p\_- s\_) + i (2 + ) ]{} & = & - 2 [N\_[c]{}]{} T\^2 \_[p\_, s\_]{} e\^[i (p\_- s\_) + i (2 + ) ]{} . Making use of , , this can be rewritten as && C\_E\^S(,;) = - [N\_[c]{}]{} e\^[i (2 + ) ]{} && { [n\_]{}(E\_+) [n\_]{}(E\_-) e\^[(-)(E\_+E\_)]{} + && + [n\_]{}(E\_+) [n\_]{}(E\_+) e\^[(-)E\_+E\_+]{} + && + [n\_]{}(E\_-) [n\_]{}(E\_-) e\^[(-)E\_+E\_-]{} + && + [n\_]{}(E\_-) [n\_]{}(E\_+) e\^[(E\_+E\_)]{} } . With the same considerations as between and , the interesting part of $C_E^S$ can be approximated as && C\_E\^S(,;) e\^[i (2 + ) -]{} && . Note again that after these simplifications, all dependence on the temperature and on the chemical potential has disappeared from the tree-level result. The exponential in is the same as in , whereby $C_{>}^S$ obeys the same Schrödinger equation as $C_{>}^V$, . The initial condition is different, however: setting $t=0$ in (after $\tau\to it$), we find C\_[&gt;]{}\^S(0,;) & = & - \_\^2 e\^[i (2 + ) ]{} + ( ) & = & - \_\^2 \^[(3)]{}() + ( ) . This agrees, of course, with what can be deduced from , . We note that all dependence on the external momentum $\vec{q}$ again only appears as a part of the center-of-mass energy $ 2 M + \vec{q}^2/4 M $, inside . For future reference, let us finally determine the spectral function, $\rho^S(\omega)$. (after $\tau\to it$) already shows the solution of , , and we can then directly remove the point-splitting, setting $\vec{r,r'} = \vec{0}$. 
Shifting $\vec{p}\to \vec{p} - \vec{q}/2$; taking the steps in footnote \[rhorec\]; and ignoring exponentially small terms, we find \^S() & & \^2 ( ’ - ) = (’) M\^[12]{} (’)\^[32]{} , where $\omega'$ is from . The analysis so far has been at tree-level. As discussed above , the relation in  is more general, however. Therefore, we can extract a beyond-the-leading order $\rho^S$ by simply applying to a beyond-the-leading order $\rho^V$. Other channels -------------- In Secs. \[ss:rhoV\], \[ss:rhoS\], we have discussed the correlators in the vector and scalar channels. Let us now show that in the limit of a large quark mass, the correlators in the pseudoscalar and axial vector channels are to a good approximation equivalent to either of these two. We note, first of all, that in the basis of , the matrix $\gamma_5$ becomes \_5 = i \^0\^1\^2\^3 = ( [cc]{} 0 &\ & 0 ) . Thereby the pseudoscalar density becomes P(x;) & & |(t,+) i \_5 W (t,-)\ & = & i + ( ) , where again only structures of the type $\theta^\dagger\phi$ and $\phi^\dagger\theta$ have been kept. The non-trivial two-point correlator comes from the cross-term between the two structures in , and ignoring the spin-dependent corrections of ${{\mathcal{O}}}(1/M)$, a comparison with then shows that C\_[&gt;]{}\^[P]{}(x;,’) -13 C\_[&gt;]{}\^[V]{}(x;,’) , where $ C_{>}^{P}(x;\vec{r},\vec{r}') \equiv \langle P(x;\vec{r}) P(0;-\vec{r}') \rangle $, and $C_{>}^{V}$ is defined in . The axial vector, on the other hand, can be defined as A\^(x;) |(t,+) \_5 \^ W (t,-) . In the case of $V^\mu$, we found that the dominant contribution is given by the spatial components, but for the axial vector, the roles have interchanged: the leading term is A\^0(x;) = - + ( ) . Comparing with , we find C\_[&gt;]{}\^[A\^0]{}(x;,’) A\^0(x;) A\^0(0;-’) C\_[&gt;]{}\^[P]{}(x;,’) -13 C\_[&gt;]{}\^[V]{}(x;,’) . 
In lattice studies, however, attention is sometimes restricted to the spatial components $A^k$; repeating the previous steps, we find A\^k(x;) & & - P(x,) + & & + . The first term is a total derivative, and the second term has a structure close to that in , given that only the crossterm contributes in a correlation function. Therefore, paralleling the argument in , we find \^3 C\_[&gt;]{}\^(x;) & & \^3 A\^k(x;) A\^k (0;-’) & & \_[klm]{}\_[kl’m’]{} \^3 V\^l (x;) V\^[l’]{} (0;-’) & = & - \_[klm]{}\_[kl’m’]{} \_[ll’]{} \^3 C\_[&gt;]{}\^[V]{}(x;,’) & = & 2 \^3 C\_[&gt;]{}\^[S]{}(x;,’) . To summarize, , and show that the pseudoscalar and axial correlators do not lead to any qualitatively new structures. Method to construct the spectral functions ========================================== In the previous section, we have set up the Schrödinger equation and initial conditions satisfied by the vector channel correlator $C_{>}^{V}$, and shown that the corresponding correlators in the other channels can be obtained from $C_{>}^{V}$ through various relations. The aim now is to extract the spectral functions corresponding to these correlators. To achieve this goal, it is useful to convert the time-dependent Schrödinger equation directly to frequency space. Let (t;) e\^[i 2 M t]{} C\_[&gt;]{}\^[V]{}(t,;) , and (t;) e\^[i 2 M t]{} C\_[&gt;]{}\^[S]{}(t,;) - 13 (t;) . The corresponding frequency representations are defined by (’;) \_[-]{}\^ t e\^[i ’ t]{} (t;) , (’;) \_[-]{}\^ t e\^[i ’ t]{} (t;) , and the spectral functions are then obtained from (cf. ) \^V(’) & = & \_ 12 (’;) ,\ \^S(’) & = & \_ 12 (’;) , where $\omega'$ is from and we have omitted exponentially small corrections. We now recall from ref. [@static] that the imaginary part of $V_>(t,r)$ () is odd in $t\to -t$. Furthermore, we recall from \[se:power\] that a consistent perturbative solution allows (or, to be more precise, demands) considering the limit $|t| \gg r$. 
Denoting V\_[&gt;]{}(r) \_[t+ ]{} V\_[&gt;]{}(t,r) , the equations to be solved () then read (t;) & = & i \_t (t;) , t &gt; 0 ,\ (t;) & = & i \_t (t;) , t &lt; 0 , where we indicated explicitly that the imaginary part is negative for $t\to+ \infty$ [@static; @imV], and defined a Hermitean differential operator $\hat H$ through H - + V\_[&gt;]{}(r) . Since the effective Hamiltonian is time-independent both for $t<0$ and for $t>0$, we can formally solve , : (t;) = { [ll]{} e\^[-i H t - | V\_[&gt;]{}(r)| t]{} (0;) & , t &gt; 0\ e\^[-i H t + | V\_[&gt;]{}(r)| t]{} (0;) & , t &lt; 0\ . , where, according to , , (0;) = - 6 [N\_[c]{}]{}\^[(3)]{}() . Taking a Fourier-transform, we get (’;) & = & \_[-]{}\^ t e\^[i’ t]{} (t;) & = & { \^[-1]{} . e\^[it(’ - H) - | V\_[&gt;]{}(r)| t]{} |\^\_[0]{} + & & + \^[-1]{} . e\^[it(’ - H) + | V\_[&gt;]{}(r)| t]{} |\^[0]{}\_[-]{} }(0;) & = & { \^[-1]{} - \^[-1]{} } (0;) . To give a concrete meaning to the inverses in , we define a function $\tilde \Psi(\omega';\vec{r,r'})$ as the solution of the equation (’;) = - 6 [N\_[c]{}]{}\^[(3)]{}() . Then the result of can be rewritten as (’;) = - 2 . According to , , , the spectral functions are now obtained from \^V(’) & = & - \_ ,\ \^S(’) & & \_ . To summarize, we have reduced the determination of the spectral functions to the solution of a time-independent inhomogeneous Schrödinger equation, . As the next step, following ref. [@ps], we introduce the ansatz (’;) \_[l=0]{}\^ \_[m=-l]{}\^[l]{} Y\_[lm]{}()Y\_[lm]{}\^\*(’) . Here $Y_{lm}$ are the spherical harmonics, normalised as $ \int \! {\rm d}\Omega \, Y^*_{lm}(\Omega) Y_{l'm'}(\Omega) = \delta_{ll'} \delta_{mm'} $, where $ {\rm d}\Omega = {\rm d}\!\cos\theta\, {\rm d}\phi $, and satisfying \_[lm]{} Y\^\*\_[lm]{}(’) Y\_[lm]{}() = (- ’) (- ’) (-’) . The $\delta$-function can be written as \^[(3)]{}( - ’) = (r-r’) (- ’) , whereby becomes g\_l(’;r,r’) = - 6 [N\_[c]{}]{}(r-r’) , with H\_r - + + V\_[&gt;]{}(r) . 
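The structure of these formulae can be made concrete in a one-level toy model (all numbers below are illustrative assumptions, not input from the text): replace the operator $\hat H$ by a single real energy $E$ and $|{\mathop{\mbox{Im}}}\, V_{>}|$ by a constant width $\Gamma$. Then $\tilde\Psi(t) \propto e^{-iEt - \Gamma|t|}$, and its Fourier transform is a Lorentzian of width $\Gamma$: the would-be sharp resonance is smeared by the imaginary part of the potential.

```python
import cmath

E, Gamma = -0.5, 0.1   # toy binding energy and decay rate (assumptions)

def Psi(t):
    # formal solution with H -> E and |Im V_>| -> Gamma
    return cmath.exp(-1j * E * t - Gamma * abs(t))

def Psi_tilde(omega, tmax=200.0, n=100000):
    # brute-force trapezoidal Fourier transform over t in [-tmax, tmax]
    dt = 2.0 * tmax / n
    s = 0.0 + 0.0j
    for k in range(n + 1):
        t = -tmax + k * dt
        w = 0.5 if k in (0, n) else 1.0
        s += w * cmath.exp(1j * omega * t) * Psi(t)
    return s * dt

def lorentzian(omega):
    # analytic result: 2*Gamma / ((omega - E)^2 + Gamma^2)
    return 2.0 * Gamma / ((omega - E)**2 + Gamma**2)

omega = -0.3
print(abs(Psi_tilde(omega).real - lorentzian(omega)) < 1e-3)  # True
```

The width of the Lorentzian is set entirely by $\Gamma$, mirroring how $|{\mathop{\mbox{Im}}}\, V_{>}|$ controls the smearing of the quarkonium peak.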
The remaining goal is to reduce the problem to the solution of the homogeneous equation. Following refs. [@ps; @mp3], we introduce the ansatz g\_l A g\_[&lt;]{}\^l(r\_[&lt;]{}) g\_[&gt;]{}\^l(r\_[&gt;]{}) , where $g_{<}^l$ is a solution of the homogeneous equation regular at zero; $g_{>}^l$ is a solution of the homogeneous equation regular at infinity; and $ r_{<} = \mathop{\mbox{min}}(r,r'), r_{>} = \mathop{\mbox{max}}(r,r') $. Obviously, the function $\tilde g_l$ is symmetric in $r\leftrightarrow r'$, and continuous at $r=r'$. Given the well-known form of the solution $g_{<}^l$, it must thus behave as g\_l \~\[ r\^[l+1]{} + (r\^[l+2]{})\] \[ (r’)\^[l+1]{} + ((r’)\^[l+2]{}) \] at small $r,r'$. For the vector channel spectral function, , now imply that \^V(’) = - \_[r,r’0]{} , i.e. that only the S-wave ($l=0$) solution of the homogeneous part of contributes. Consider then the scalar channel. According to , the scalar channel spectral function can be extracted from the same function $\tilde\Psi$ as the vector channel one, by taking two derivatives and then extrapolating $r,r'\to 0$. Inspecting , we see that we at least get a contribution from the P-wave ($l=1$). However, according to , it is also possible to get a contribution from the [*subleading S-wave terms*]{}, $\tilde g_0 \sim [r+{{\mathcal{O}}}(r^2)][r'+{{\mathcal{O}}}((r')^2)]$. As far as we can see, this contribution was omitted in ref. [@mp3]. We relegate a more detailed discussion on how to write the solutions for the spectral functions $\rho^V, \rho^S$ to Appendix A, given that the further steps are quite technical in nature, and give here just the final formulae. Introducing the dimensionless variables $\varrho\equiv r \alpha M$ and $\alpha \equiv g^2 C_F/4\pi$, the vector channel spectral function from can be simplified to = - \_[0]{} \_\^ . { } |\_[g\^0\_[&lt;]{}() = - \^2/2 + ...]{} , while the scalar channel spectral function becomes = \_[0]{} \_\^ . 
{ + } |\_[g\^1\_[&lt;]{}() = \^2 - \^3/4 + ...]{} . We remark that because of the factor 36, the first (S-wave) term is numerically subdominant in , and would be totally negligible, were it not for the fact that it does lead to a resonance peak, unlike the second term. Numerical results ================= In the previous section, we have reduced the numerical determination of the vector and scalar channel spectral functions to , , respectively. In these equations the functions $g_{<}^{l}$, $l=0,1$, denote the regular solutions of the homogeneous part of , g\_[&lt;]{}\^l = 0 , where $\hat H_r$ is from . Further details can be found in Appendix A. In practice, the procedure of determining $\rho^V$, $\rho^S$ starts from some small value, $\varrho\equiv \delta$, with for instance $\delta = 10^{-2}$, at which point we impose as initial conditions the properties of the regular solutions at small $\varrho$, $g^0_{<}(\delta) = \delta - \delta^2/2 + ...$ , $g^1_{<}(\delta) = \delta^2 - \delta^3/4 + ...$ . We then integrate towards larger $\varrho$, constructing simultaneously the quantities in , . After a while, $g^0_{<}(\varrho)$ and $g^1_{<}(\varrho)$ start to grow rapidly and the integrals in , settle to their asymptotic values. Subsequently, we check that the results obtained are independent of the starting point $\delta$. The numerics is straightforward and poses no problems. Apart from the pole mass $M$, the solution depends on what is plugged in for $g^2$ and ${m_\rmi{D}}$. We employ here simple analytic expressions that can be extracted from Ref. [@adjoint], g\^2 , [m\_]{}\^2 , . We also fix ${{\Lambda_{{\overline{\mbox{\tiny\rm{MS}}}}}}}\simeq 300$ MeV; for the uncertainties related to this, see 2 of ref. [@static]. 
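A minimal sketch of this outward-integration procedure could look as follows. The potential below is a hypothetical stand-in (a screened Coulomb term plus a constant negative imaginary part, in the dimensionless units $\varrho = r\,\alpha M$), not the $V_{>}$ derived in the text, and the energy variable is an arbitrary toy value; only the algorithmic structure matches the description above.

```python
import cmath

def V(rho):
    # hypothetical complex potential: screened Coulomb plus a constant
    # negative imaginary part; a stand-in, NOT the V_> of the text
    return -cmath.exp(-0.5 * rho) / rho - 0.2j

def rhs(rho, g, dg, l=0, energy=0.3):
    # homogeneous radial equation: g'' = [l(l+1)/rho^2 + V(rho) - E] g
    return dg, (l * (l + 1) / rho**2 + V(rho) - energy) * g

def g_less(delta, rho_max=10.0, h=1e-3):
    # impose the regular small-rho behaviour for l = 0, g ~ rho - rho^2/2,
    # and integrate outwards with a fixed-step RK4
    rho, g, dg = delta, delta - delta**2 / 2, 1.0 - delta
    while rho < rho_max - h / 2:
        k1 = rhs(rho, g, dg)
        k2 = rhs(rho + h/2, g + h/2 * k1[0], dg + h/2 * k1[1])
        k3 = rhs(rho + h/2, g + h/2 * k2[0], dg + h/2 * k2[1])
        k4 = rhs(rho + h, g + h * k3[0], dg + h * k3[1])
        g  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dg += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        rho += h
    return g

# as in the text, the result should be insensitive to the starting point
g1, g2 = g_less(1e-2), g_less(5e-3)
print(abs(g1 - g2) / abs(g1) < 1e-2)  # True
```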
The results for $-\rho^V/\omega^2$ ( divided by $-\omega^2/M^2$) are shown in \[fig:rhoV\_M\], \[fig:rhoV\_T\], and those for $\rho^S/\omega^2$ ( divided by $\omega^2/M^2$) in \[fig:rhoS\_M\], \[fig:rhoS\_T\]. The results are given in a range of $\omega$ where relativistic corrections, i.e. terms of higher order in a Taylor expansion in $(\omega-2 M)/M$, are estimated to be at most at the 10% level. We show a scan of mass values, given that the inherent theoretical uncertainties of the charm and bottom pole masses are several hundred MeV (for a pedagogic discussion, see ref. [@mb]), and that in lattice simulations there are further uncertainties, related to scale setting, etc., which make it difficult to sit precisely at the physical point. As far as the other channels are concerned, we recall from , , that $\rho^P \approx -\frac{1}{3}\,\rho^V$; $\rho^{A^0} \approx -\frac{1}{3}\,\rho^V$; $\sum_{k=1}^{3}\rho^{A^k} \approx 2 \rho^S$. Comparison with lattice ----------------------- As of today, lattice reconstructions of the spectral functions in various channels [@lattold; @latt1; @latt2] suffer from significant uncertainties. Apart from the usual problems, it may be mentioned that the Compton wavelength associated with the heavy quarks tends to be of the order of the lattice spacing, so that we may expect even more significant discretization artifacts than in the usual quenched or 2+1 light flavour simulations; and that the analytic continuation from Euclidean lattice data to the Minkowskian spectral function necessarily involves model input, whose uncertainties are difficult to quantify. Nevertheless, it has been claimed that the latter types of uncertainties may be under reasonable control from a practical point of view [@mem]. The most recent lattice results in this spirit can be found in refs. [@latt1; @latt2]. 
It has become fashionable recently not to compare directly the spectral functions, but the Euclidean correlators for which direct lattice data exists. Though this removes the uncertainties related to the analytic continuation, it also comes with a heavy price: most of the structure in a Euclidean correlator is determined by values of $\omega$ far from the threshold, $\omega\ll 2M$ or $\omega \gg 2 M$, so that the actual physics we are interested in tends to be hidden in tiny effects somewhere in the middle of the Euclidean time interval. For this reason, we do not consider Euclidean correlators to be as interesting as the spectral functions, and touch only the latter in the following. Most of the lattice data exists for the charmonium case. The temperatures where the charmonium peak disappears from the spectral function are rather low, however; in fact they are in a regime where our analysis is probably not yet justified. Assuming the charmonium pole mass to be in the range $M\sim (1.5...2.0)$ GeV, we nevertheless observe from \[fig:rhoV\_M\](left) that at $T\approx 250$ MeV a certain “enhancement” can still be seen in the vector (and thus, in the pseudoscalar) channel. This then disappears at higher temperatures. In contrast, in the scalar channel, \[fig:rhoS\_M\](left), there is practically no structure. These observations are certainly not in conflict with the lattice results of refs. [@latt1; @latt2]. Furthermore, we may note that the absolute magnitudes of $\rho_V$ and $\rho_S$ in \[fig:rhoV\_M\](left), \[fig:rhoS\_M\](left) are qualitatively in a similar relation to each other as the spectral functions measured on the lattice: the difference of about an order of magnitude is due to the $1/M^2$-suppression in the scalar case. 
At the same time, it needs to be kept in mind that in the scalar case the operators require renormalization, and that we have in any case not computed radiative corrections to the absolute magnitudes of the spectral functions, so that the comparison cannot be taken too seriously. Data for the bottomonium case, where our predictions should be more reliable, can be found in ref. [@latt1]. There is again an inherent uncertainty of several hundred MeV in the bottom quark pole mass, but realistic values are presumably in the range $M\sim (4.5...5.0)$ GeV. According to \[fig:rhoV\_M\], \[fig:rhoV\_T\] (middle to right), there is now a clear peak in the vector channel spectral function, up to a temperature of perhaps 500 MeV. In the scalar channel case, \[fig:rhoS\_M\], \[fig:rhoS\_T\] (middle to right), the structure is much less pronounced, but a tiny enhancement can be observed up to a temperature of about 400 MeV. These results are qualitatively in better agreement with the lattice data in ref. [@latt1] than the potential model results of ref. [@mp3], where no peak was found in the scalar channel case; as we have explained in \[se:rho\], the discrepancy can be traced back to a difference in the reconstruction of the spectral function from a Schrödinger equation. Nevertheless, in practice, it should again be stressed that systematic uncertainties of the lattice data are certainly too large to make a quantitative comparison. Dilepton rate ------------- Apart from the spectral functions, it is interesting to plot also the physical observable, the dilepton production rate given in . This is shown in \[fig:rate\]. The significant difference with respect to the vector channel spectral function is the existence of the Boltzmann factor (or, to be more precise, Bose-Einstein factor) in . 
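The strength of this factor can be illustrated with toy numbers (a bottom-like mass and temperatures chosen purely for illustration):

```python
import math

M = 4.8   # GeV, bottom-like pole mass (an assumption)

def boltzmann(omega, T):
    # near threshold the Bose-Einstein factor reduces to exp(-omega/T)
    return math.exp(-omega / T)

# raising T from 0.3 to 0.5 GeV boosts the near-threshold rate by orders
# of magnitude ...
boost = boltzmann(2 * M, 0.5) / boltzmann(2 * M, 0.3)
# ... while at fixed T the rate falls off quickly above the threshold
falloff = boltzmann(2 * M + 1.0, 0.4) / boltzmann(2 * M, 0.4)

print(boost > 1e5, falloff < 0.1)   # True True
```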
Obviously, for a fixed frequency around the threshold, $\omega\sim 2 M$, the Boltzmann factor $\exp(-\omega/T)\sim \exp(-2 M/T)$ introduces a strong dependence of the dilepton rate on the temperature or, for a given temperature, on the mass. The exponential boosts the rate at high temperatures, and makes it decrease rapidly at large frequencies. Thereby the dilepton rate shows a much stronger resonance-like behaviour than the spectral function, \[fig:rhoV\_M\]. In particular, some kind of a peak structure remains visible in the dilepton rate in \[fig:rate\] even at temperatures which are so high that there is only a smooth step-like behaviour visible in the spectral function in \[fig:rhoV\_M\]. Physical picture of heavy quarkonium in a thermal plasma ======================================================== Conceptually, the most important difference between our analysis and traditional potential models [@models; @mp3] is the existence of an imaginary part in the static potential, . Physically, the imaginary part implies that quarkonium at high temperatures should not be thought of as a stationary state. Rather, the norm of its wave function decays exponentially with (Minkowski) time. This is due to the fact that, apart from experiencing Debye screening, there is also a finite probability for the off-shell gluons binding the two quarks to disappear, due to Landau damping, i.e. inelastic scatterings with hard particles in the plasma. Once $T\sim gM$, the imaginary part is in fact parametrically larger than the binding energy (cf. \[se:power\]). At the same time, for low enough temperatures, $T\sim g^2 M$, the imaginary part plays a subdominant role (cf. \[se:power\]). It may be useful to remark that if, on the contrary, one goes to a Euclidean lattice, then a non-zero wave function [*can*]{} be defined at any finite value of the “imaginary time” coordinate $\tau$, $0 < \tau < \beta$. 
Introducing also gauge-fixing, such wave functions have been measured with Monte Carlo simulations in ref. [@wf] (for a recent review, see ref. [@wfrev]). With regard to the discussion above, the physical significance of such wave functions for Minkowski-time observables is not obvious; hence we do not discuss them here. Conclusions =========== The purpose of this paper has been to experiment, as generally as possible, with the resummed perturbative framework that was introduced in refs. [@static; @og2], in order to offer one more handle on the properties of heavy quarkonium in hot QCD, thus supplementing the traditional approaches based on potential models and on lattice QCD. The key ingredient of our approach is a careful definition of a finite-temperature real-time static potential that can be inserted into a Schrödinger equation obeyed by certain heavy quarkonium Green’s functions. The potential in question, denoted by $\lim_{t\to\infty}V_{>}(t,r)$, has both a real and an imaginary part (cf. ). An important conceptual consequence of the existence of an imaginary part is that heavy quarkonium should not be thought of as a stationary state at high temperatures, but as a short-lived transient, with the quark and antiquark binding together only for a brief moment before detaching again. On the more technical level we have noted that, in terms of , the vector channel spectral function gets a contribution only from the S-wave, $l=0$, while the scalar channel spectral function gets a contribution both from the S-wave and the P-wave, $l=0,1$. Here we differ from the potential model analysis in ref. [@mp3] where, as far as we can see, only $l=1$ was considered for the scalar channel. The reason for the difference is discussed at the end of \[se:rho\]. The difference is significant, since the S-wave contribution introduces a small resonance peak to the scalar channel spectral function as well. 
The phenomenological pattern we find for the spectral functions within this framework is not too different from indications from lattice QCD: scalar channel charmonium displays practically no resonance peak above a temperature of 200 MeV; vector channel charmonium has some peak-like structure up to a temperature of about 300 MeV; scalar channel bottomonium is again weakly bound but does show a small enhancement up to a temperature of about 400 MeV; vector channel bottomonium can support a resonance peak up to a temperature of about 500 MeV. (Because of unknown higher order corrections, these numbers are subject to uncertainties of several tens of MeV.) At the same time, we stress that in the physical dilepton rate, \[fig:rate\], the quarkonium peak always becomes [*more pronounced*]{} with increasing temperature, irrespective of the disappearance of the resonance structure from the spectral function. This boost is due to an interplay of the free quark continuum in the spectral function, and the Boltzmann factor $\exp(-\omega/T)$. There are a few directions in which our work could be extended, in order to go beyond a purely perturbative approach. In particular, the imaginary part of the real-time static potential has been measured with classical lattice gauge theory simulations in ref. [@imV], and could thus to some extent be used in a non-perturbative setting. Hopefully, the real part of our static potential could also be related to quantities that are measurable with lattice Monte Carlo methods, thereby allowing us to probe more reliably the phenomenologically interesting temperature regime around a few hundred MeV. Acknowledgements {#acknowledgements .unnumbered} ================ M.L. thanks T. Hatsuda and S. Kim for useful suggestions, and the Isaac Newton Institute for Mathematical Sciences, where part of this work was carried out, for hospitality. 
This work was partially supported by the BMBF project [*Hot Nuclear Matter from Heavy Ion Collisions and its Understanding from QCD*]{}. Numerical method for finding the spectral functions =================================================== In this appendix we provide details concerning the numerical method that we have used for determining the vector and scalar channel spectral functions. The basic approach is from ref. [@ps], where it was applied for the vector channel at zero temperature; the method was extended to the scalar channel case in ref. [@mp3]. Our presentation is rather close to that in ref. [@mp3], but we choose to spell out the details anew due to the fact that, as already mentioned in \[se:rho\], we find one additional term in the scalar channel case. Furthermore, the existence of an imaginary part in our static potential simplifies certain points of the analysis. We should point out that the method presented here appears to be numerically superior to that introduced for the vector channel in ref. [@og2]. Vector channel -------------- We proceed with the evaluation of . Given the ansatz in , it remains to determine $A, g_{<}^{l}, g_{>}^{l}$, and then to extrapolate $r,r'\to 0$. We thus need to know, in particular, the asymptotic behaviours of the functions $g_{<}^{l}, g_{>}^{l}$ near the origin. Let $g^l_\rmi{r}$ and $g^l_\rmi{i}$ be the solutions regular and irregular around the origin, respectively: $$g^l_\rmi{r}(r) = r^{l+1} \sum_{n=0}^{\infty} a_n r^n \approx a_0\, r^{l+1}\,, \qquad g^l_\rmi{i}(r) = g^l_\rmi{r}(r) \int^{r} \! {\rm d}r'\, \frac{1}{[g^l_\rmi{r}(r')]^2}\,. $$ We may then choose $$g^l_{<}(r) = g^l_\rmi{r}(r)\,, \qquad g^l_{>}(r) = g^l_\rmi{i}(r) + B^l\, g^l_\rmi{r}(r)\,, $$ where the coefficient $B^l$ is defined such as to guarantee the regularity of $g^l_{>}(r)$ at infinity, $$B^l = -\lim_{r\to\infty} \frac{g^l_\rmi{i}(r)}{g^l_\rmi{r}(r)} = -\int^{\infty} \! {\rm d}r'\, \frac{1}{[g^l_\rmi{r}(r')]^2}\,. $$ Combining , , , we can write $$g^l_{>}(r) = -\, g^l_\rmi{r}(r) \int_r^{\infty} \! {\rm d}r''\, \frac{1}{[g^l_\rmi{r}(r'')]^2}\,. $$ Let us next compute the coefficient $A$ in . Integrating both sides of with $ \int_{r'-0^+}^{r'+0^+} \! {\rm d}r \, (...) $, yields $$A = \frac{-6 N_{\rm c} M}{g^l_{<}(r')\, {\rm d}_{r'} g^l_{>}(r') - g^l_{>}(r')\, {\rm d}_{r'} g^l_{<}(r')}\,. $$
Involving a Wronskian, this expression is independent of the position $r'$ at which it is evaluated, so we can do this at small $r'$. Then we can use the asymptotic forms from , , to find that $$A = -6\, N_{\rm c}\, M\,. $$ Note that this expression is independent of $l$. We finally take the limit $r,r'\to 0$, while keeping $r < r'$, so that $r_{<} \equiv r$, $r_{>} \equiv r'$. Inserting , , and into , yields $$\rho^V(\omega') = \lim_{r,r'\to 0} \ldots = -\lim_{r'\to 0} {\mathop{\mbox{Im}}}\biggl\{ \ldots \int_{r'}^{\infty} \! {\rm d}r''\, \ldots \biggr\}\,, $$ where we assumed $a_0$ to be real. Let us now analyse the origin of the imaginary part in . It will be convenient to express the $r$-dependence in terms of the dimensionless variable $\varrho \equiv r \alpha M$, where $\alpha \equiv g^2 C_F/4\pi$. In these units, the homogeneous Schrödinger equation () reads $$\biggl[ -\frac{{\rm d}^2}{{\rm d}\varrho^2} + \frac{l(l+1)}{\varrho^2} - \frac{1}{\varrho} + \ldots \biggr]\, g^l_\rmi{r}(\varrho) = 0\,, $$ implying $$g^l_\rmi{r}(\varrho) = \varrho^{l+1} - \frac{\varrho^{l+2}}{2(l+1)} + ...\,. $$ At some order the solution also develops an imaginary part; let us write an ansatz $$g^l_\rmi{r}(\varrho) = \varrho^{l+1} - \frac{\varrho^{l+2}}{2(l+1)} + \ldots + i \gamma_1\, \varrho^x\,, \qquad \gamma_1 \in {{\mathbb R}}\,. $$ The imaginary part in behaves as $\sim i \gamma_2 \varrho^2$ at small $\varrho$. Inserting into the Schrödinger equation, we get for the leading imaginary term $$i \gamma_1\, \varrho^{x-2} \bigl[ x(x-1) - l(l+1) \bigr] + i \gamma_2\, \varrho^2\, \varrho^{l+1} = 0\,, $$ implying $x= l+5$. Returning to , there are in principle two possibilities for the origin of the imaginary part. However, according to , , $ \lim_{r'\to 0} {\mathop{\mbox{Im}}}[g_\rmi{r}^0/r']\int_{r'}^\infty {\rm d}r'' {\mathop{\mbox{Re}}}\{ 1/[g_\rmi{r}^0(r'')]^2 \} \sim \lim_{r'\to 0} (r')^4 /(r') = 0 $. Therefore the imaginary part can only arise from $ {\mathop{\mbox{Im}}}\{ 1/[g_\rmi{r}^0(r'')]^2 \} $. Inserting the asymptotic form of $ {\mathop{\mbox{Re}}}[g_\rmi{r}^0/r'] $ from ; using the variable $\varrho$; and noting that this corresponds to the choice $a_0 = \alpha M$, we then obtain . It is useful to crosscheck that produces the correct result in the free limit.
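The power counting that fixes $x = l+5$ can be spelled out explicitly; the following sketch keeps only the leading terms and uses the conventions above (the overall sign of the bracket only affects the relation between $\gamma_1$ and $\gamma_2$, not the power):

```latex
% Leading imaginary terms of the radial equation, with the ansatz
%   g^l_r(\varrho) = \varrho^{l+1} + i\gamma_1 \varrho^x + ...
% and an imaginary potential behaving as ~ i\gamma_2 \varrho^2:
\begin{align*}
 \Bigl[ -\partial_\varrho^2 + \frac{l(l+1)}{\varrho^2} \Bigr]\,
   i\gamma_1 \varrho^{x}
   &= -\,i\gamma_1 \bigl[ x(x-1) - l(l+1) \bigr] \varrho^{x-2} \,, \\
 i\gamma_2\, \varrho^{2} \times \varrho^{l+1}
   &= i\gamma_2\, \varrho^{l+3} \,.
\end{align*}
% The two terms can only balance if the powers match:
%   x - 2 = l + 3  =>  x = l + 5 ,
% i.e. Im g ~ \varrho^5 for l = 0 and ~ \varrho^6 for l = 1,
% while matching the prefactors fixes \gamma_1 in terms of \gamma_2.
```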
In the free case there is no $i |{\mathop{\mbox{Im}}}V_{>}(r)|$ in , and a factor $i\epsilon \equiv i0^+$ needs to be inserted instead, to pick up the correct (retarded) solution. In dimensionless units, the homogeneous equation then becomes $$\biggl[ -\frac{{\rm d}^2}{{\rm d}\varrho^2} - \frac{\hat\omega' - 2 + i\epsilon}{\alpha^2} \biggr]\, g^0_\rmi{r}(\varrho) = 0\,, $$ where $\hat\omega' \equiv \omega'/M$. We denote $$k \equiv \frac{(\hat\omega' - 2 + i\epsilon)^{1/2}}{\alpha}\,. $$ The solution with the correct behaviour around the origin (with $a_0 = \alpha M$) reads $$g^0_\rmi{r}(\varrho) = \frac{\sin( k\varrho)}{k}\,. $$ We can write $$\rho^V(\omega') = -\ldots\, k\, {\mathop{\mbox{Im}}}\biggl\{ \ldots \biggr\}\,. $$ The integral in can now be carried out; the substitution at the upper end gives a contribution from the exponentially growing terms $\exp(-i k \varrho' )$, present both in the cosine and in the sine. Their ratio gives $-i$, and the total is then $$\rho^V(\omega') = -\ldots\, {\mathop{\mbox{Im}}}\bigl\{ (\ldots + i\epsilon)^{1/2}\, i \bigr\} = -\ldots\, \theta(\ldots)\, (\ldots)^{1/2}\,. $$ This indeed agrees with . Scalar channel -------------- In the scalar channel case, the equations to be solved are , , ; the ansatz for the solution is in , with $A$ given by . Let us first work out the contribution from the mode $l=0$ (S-wave). According to , , , , the relevant term of $\tilde\Psi$, denoted by $\delta_0\tilde\Psi$, is $$\delta_0 \tilde\Psi(\omega';r,r') = -\, A\, g_\rmi{r}^0(r)\, g_\rmi{r}^0(r') \int_{r'}^{\infty} \! {\rm d}r''\, \frac{1}{[g_\rmi{r}^0(r'')]^2}\,. $$ Inserting into , making use of , and going over into the dimensionless variable $\varrho$, we get $$\delta_0\rho^S(\omega') = \lim_{\varrho,\varrho'\to 0} {\mathop{\mbox{Im}}}\biggl\{ \ldots \biggl( \ldots \int_{\varrho'}^{\infty} \! {\rm d}\varrho''\, \ldots \biggr) \biggr\}\,. $$ According to , the first term inside the curly brackets is $ \lim_{\varrho\to 0} {\rm d}_\varrho ( g_\rmi{r}^0 / \varrho) = -1/2 $, so that we get $$\delta_0\rho^S(\omega') = -\ldots \lim_{\varrho'\to 0} {\mathop{\mbox{Im}}}\biggl\{ \ldots \biggl( \ldots \int_{\varrho'}^{\infty} \! {\rm d}\varrho''\, \ldots \biggr) \biggr\}\,. $$ In principle there are again two possible origins for the imaginary part. However, as we saw in the vector channel case, $ {\mathop{\mbox{Im}}}[g_\rmi{r}^0/\varrho'] \int_{\varrho'}^\infty {\rm d}\varrho'' {\mathop{\mbox{Re}}}\{ 1/[g_\rmi{r}^0(\varrho'')]^2 \} \sim (\varrho')^3 $, so that a non-zero contribution can only arise from $ {\mathop{\mbox{Im}}}\{ 1/[g_\rmi{r}^0(\varrho'')]^2 \} $. Furthermore, the derivative can only act on the combination multiplying the integral, since $$ {\mathop{\mbox{Im}}}\bigl\{ 1/[g_\rmi{r}^0(\varrho')]^2 \bigr\} \approx -2\, \gamma_1\, (\varrho')^2\,. $$
Making use of $ \lim_{\varrho'\to 0} {\rm d}_{\varrho'} ( g_\rmi{r}^0 / \varrho') = -1/2 $, the S-wave contribution to the scalar spectral function thus becomes $$\delta_0\rho^S(\omega') = \ldots \lim_{\varrho\to 0} \int_{\varrho}^{\infty} \! {\rm d}\varrho''\, {\mathop{\mbox{Im}}}\biggl\{ \frac{1}{[g^0_\rmi{r}(\varrho'')]^2} \biggr\} \bigg|_{g^0_\rmi{r}(\varrho) = \varrho - \varrho^2/2 + ...}\,. $$ In other words, comparing with , $\delta_0 \rho^S(\omega') = - \alpha^2 \rho^V(\omega')/12$; the factor $\alpha^2$ is a manifestation of the suppression $\sim \nabla_\vec{r}^2/M^2$ apparent in , combined with the parametric order of magnitude of $\nabla_\vec{r}/M$ from . Consider then the contribution from the mode $l=1$ (P-wave). The relevant term from , denoted by $\delta_1\tilde\Psi$, is $$\delta_1 \tilde\Psi(\omega';r,r') = A \sum_{m=-1}^{1} Y_{1m}(\theta,\varphi)\, Y_{1m}^*(\theta',\varphi')\, g^1_{<}(r_{<})\, g^1_{>}(r_{>})\,. $$ Hence we will need $$Y_{10}(\theta,\varphi) = \sqrt{\frac{3}{4\pi}}\, \cos\theta\,, \qquad Y_{11}(\theta,\varphi) = -\sqrt{\frac{3}{8\pi}}\, \sin\theta\, e^{i\varphi}\,. $$ In order to take the derivatives in , we stay with radial coordinates, so that $$\nabla_\vec{r} = \hat e_r\, \partial_r + \frac{\hat e_\theta}{r}\, \partial_\theta + \frac{\hat e_\varphi}{r \sin\theta}\, \partial_\varphi\,. $$ Moreover, we choose again $r<r'$, so that $r_{<} \equiv r$, $r_{>} \equiv r'$. We will set $\Omega' = \Omega$ after taking the derivatives in , so that the basis is orthogonal. Making use of , , the terms $m=\pm 1$ both yield $$\nabla_{\vec{r}'} \cdot \nabla_{\vec{r}}\, \delta_1\tilde\Psi = A\, \frac{3}{8\pi} \biggl\{ \sin^2\!\theta\; {\rm d}_r g^1_{<}(r)\, {\rm d}_{r'} g^1_{>}(r') + \bigl(\cos^2\!\theta + 1\bigr)\, \frac{g^1_{<}(r)\, g^1_{>}(r')}{r r'} \biggr\}\,, $$ while the term $m=0$ yields $$\nabla_{\vec{r}'} \cdot \nabla_{\vec{r}}\, \delta_1\tilde\Psi = A\, \frac{3}{4\pi} \biggl\{ \cos^2\!\theta\; {\rm d}_r g^1_{<}(r)\, {\rm d}_{r'} g^1_{>}(r') + \sin^2\!\theta\; \frac{g^1_{<}(r)\, g^1_{>}(r')}{r r'} \biggr\}\,. $$ Summing together, we get $$\nabla_{\vec{r}'} \cdot \nabla_{\vec{r}}\, \delta_1\tilde\Psi = A\, \frac{3}{4\pi} \biggl\{ {\rm d}_r g^1_{<}(r)\, {\rm d}_{r'} g^1_{>}(r') + \frac{2}{r r'}\, g^1_{<}(r)\, g^1_{>}(r') \biggr\}\,. $$ We now insert $ g^1_{<}(r) = g^1_\rmi{r}(r) $, $ g^1_{>}(r') = - g^1_\rmi{r}(r') \int_{r'}^\infty \! {\rm d} r'' \, {1}/{[g^1_\rmi{r}(r'')]^2} $ from , , and recall that at small $r$, $ g^1_\rmi{r}(r) \approx \varrho^2 = (r \alpha M)^2 $. Thereby $$\lim_{\varrho,\varrho'\to 0} \nabla_{\vec{r}'} \cdot \nabla_{\vec{r}}\, \delta_1\tilde\Psi = -\,\frac{3 A}{4\pi}\, (\alpha M)^3 \lim_{\varrho'\to 0} \biggl\{ \ldots \biggl( \int_{\varrho'}^{\infty} \! {\rm d}\varrho''\, \ldots \biggr) + \ldots \int_{\varrho'}^{\infty} \! {\rm d}\varrho''\, \ldots \biggr\}\,. $$ Inserting this into , and making use of , we get $$\delta_1\rho^S(\omega') = \ldots\, \alpha^3 \lim_{\varrho'\to 0} {\mathop{\mbox{Im}}}\biggl\{ \int_{\varrho'}^{\infty} \! {\rm d}\varrho''\, \ldots - \ldots \biggr\}\,. $$ Once again, we need to inspect the origin of the imaginary part. According to , , $ {\mathop{\mbox{Re}}}[g^1_\rmi{r}(\varrho')] \sim (\varrho')^2 $, $ {\mathop{\mbox{Im}}}[g^1_\rmi{r}(\varrho')] \sim (\varrho')^6 $, and consequently $$ {\mathop{\mbox{Im}}}\{ \ldots \}\, \sim\, \ldots\,, \qquad \ldots\, \sim\, -\gamma_1\, \varrho'\,, $$ so that the only possibility is to consider $ {\mathop{\mbox{Im}}}\{ {1} / {[g^1_\rmi{r}(\varrho'')]^2} \} $.
The prefactor multiplying this can be trivially determined, and we end up with $$\delta_1\rho^S(\omega') = \ldots\, \alpha^3 \lim_{\varrho\to 0} \int_{\varrho}^{\infty} \! {\rm d}\varrho''\, {\mathop{\mbox{Im}}}\biggl\{ \ldots \biggr\} \bigg|_{g^1_\rmi{r}(\varrho) = \varrho^2 - \varrho^3/4 + ...}\,, $$ in analogy with . Combining , , the complete scalar channel spectral function can be written as in . To conclude, let us again check that the procedure introduced does yield the correct tree-level result. Somewhat unfortunately, the first term in does not contribute in this limit: the subleading term in would be of ${{\mathcal{O}}}(\varrho^{l+3})$ in the free case, so that $g_\rmi{r}^0/\varrho\sim\varrho^2$ in , and $\delta_0\rho^S$ vanishes. However, the second term in survives. In dimensionless variables, the homogeneous Schrödinger equation reads $$\biggl[ -\frac{{\rm d}^2}{{\rm d}\varrho^2} + \frac{2}{\varrho^2} - \frac{\hat\omega' - 2 + i\epsilon}{\alpha^2} \biggr]\, g^1_\rmi{r}(\varrho) = 0\,. $$ Since there is no imaginary potential, we have had to introduce $\epsilon \equiv 0^+$ to pick up the retarded solution. The solution normalised to give the desired small-$\varrho$ behaviour \[$g^1_\rmi{r}(\varrho) = \varrho^2 + ...$\] is $$g^1_\rmi{r}(\varrho) = \frac{3}{k} \biggl[ \frac{\sin(k\varrho)}{k^2 \varrho} - \frac{\cos(k\varrho)}{k} \biggr]\,, $$ where $k$ is from . We note that $$ {\mathop{\mbox{Im}}}\bigl\{ \varrho^{-2}\, \ldots \bigr\} = \ldots\,, $$ whereby $$\delta_1\rho^S(\omega') = \ldots\, \alpha^3\, {\mathop{\mbox{Im}}}\biggl\{ \frac{1}{9}\, \ldots\, (\ldots + i\epsilon)^{3/2}\, \ldots \biggr\} = \ldots\, \alpha^3\, {\mathop{\mbox{Im}}}\bigl\{ (\ldots + i\epsilon)^{3/2}\, i \bigr\} = \ldots\, \theta(\ldots)\, (\ldots)^{3/2}\,. $$ This indeed agrees with . [99]{} B. Alessandro [*et al.*]{} \[NA50 Collaboration\], Eur. Phys. J.  C [39]{} (2005) 335 \[hep-ex/0412036\]; A. Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev. Lett.  [98]{} (2007) 232301 \[nucl-ex/0611020\]; R. Arnaldi [*et al.*]{} \[NA60 Collaboration\], Nucl. Phys.  A [783]{} (2007) 261 \[nucl-ex/0701033\]. T. Matsui and H. Satz, Phys. Lett. B [178]{} (1986) 416; H. Satz, J. Phys. G [32]{} (2006) R25 \[hep-ph/0512217\]. N. Brambilla [*et al.*]{},  [*Heavy quarkonium physics,*]{} hep-ph/0412158. T. Umeda, K. Nomura and H. Matsufuru, Eur. Phys. J. C [39S1]{} (2005) 9 \[hep-lat/0211003\]; M. Asakawa and T. Hatsuda, Phys. Rev. Lett.  [92]{} (2004) 012001 \[hep-lat/0308034\]; S. Datta, F. Karsch, P. Petreczky and I. Wetzorke, Phys. Rev. D [69]{} (2004) 094507 \[hep-lat/0312037\]; H. Iida, T. Doi, N. Ishii, H. Suganuma and K. Tsumura, Phys.
Rev. D [74]{} (2006) 074502 \[hep-lat/0602008\]. A. Jakovác, P. Petreczky, K. Petrov and A. Velytsky, Phys. Rev.  [D 75]{} (2007) 014506 \[hep-lat/0611017\]. G. Aarts, C. Allton, M.B. Oktay, M. Peardon and J.I. Skullerud, Phys. Rev.  D [76]{} (2007) 094513 \[arXiv:0705.2198\]. S. Digal, P. Petreczky and H. Satz, Phys. Lett. B [514]{} (2001) 57 \[hep-ph/0105234\]; C.Y. Wong, Phys. Rev. C [72]{} (2005) 034906 \[hep-ph/0408020\]; F. Arleo, J. Cugnon and Y. Kalinovsky, Phys. Lett. B [614]{} (2005) 44 \[hep-ph/0410295\]; D. Cabrera and R. Rapp, Phys. Rev.  D [76]{} (2007) 114506 \[hep-ph/0611134\]; W.M. Alberico, A. Beraudo, A. De Pace and A. Molinari, Phys. Rev.  D [77]{} (2008) 017502 \[arXiv:0706.2846\]. A. Mócsy and P. Petreczky, Phys. Rev.  D [77]{} (2008) 014501 \[arXiv:0705.2559\]. O. Jahn and O. Philipsen, Phys. Rev. D [70]{} (2004) 074504 \[hep-lat/0407042\]. O. Kaczmarek and F. Zantow, hep-lat/0506019; Y. Maezawa, N. Ukita, S. Aoki, S. Ejiri, T. Hatsuda, N. Ishii and K. Kanaya \[WHOT-QCD Collaboration\], Phys. Rev.  D [75]{} (2007) 074501 \[hep-lat/0702004\]; M. Döring, K. Hübner, O. Kaczmarek and F. Karsch, Phys. Rev.  D [75]{} (2007) 054504 \[hep-lat/0702009\]. K. Peeters, J. Sonnenschein and M. Zamaklar, Phys. Rev.  D [74]{} (2006) 106008 \[hep-th/0606195\]; H. Liu, K. Rajagopal and U.A. Wiedemann, Phys. Rev. Lett.  [98]{} (2007) 182301 \[hep-ph/0607062\]; M. Chernicoff, J.A. García and A. Güijosa, JHEP [09]{} (2006) 068 \[hep-th/0607089\]; E. Cáceres, M. Natsuume and T. Okamura, JHEP [10]{} (2006) 011 \[hep-th/0607233\]; S.D. Avramis, K. Sfetsos and D. Zoakos, Phys. Rev.  D [75]{} (2007) 025009 \[hep-th/0609079\]; C. Hoyos, K. Landsteiner and S. Montero, JHEP [04]{} (2007) 031 \[hep-th/0612169\]; R.C. Myers, A.O. Starinets and R.M. Thomson, JHEP [11]{} (2007) 091 \[arXiv:0706.0162\]. M. Laine and Y. Schröder, JHEP [03]{} (2005) 067 \[hep-ph/0503061\]. P. Ginsparg, Nucl. Phys. B 170 (1980) 388; T. Appelquist and R.D. Pisarski, Phys. Rev. 
D 23 (1981) 2305; K. Kajantie, M. Laine, K. Rummukainen and M. Shaposhnikov, Nucl. Phys. [B 458]{} (1996) 90 \[hep-ph/9508379\]. R.D. Pisarski, Phys. Rev. Lett.  [63]{} (1989) 1129; J. Frenkel and J.C. Taylor, Nucl. Phys. B [334]{} (1990) 199; E. Braaten and R.D. Pisarski, Nucl. Phys. B [337]{} (1990) 569; J.C. Taylor and S.M.H. Wong, Nucl. Phys. B [346]{} (1990) 115. J.I. Kapusta, Nucl. Phys. B [148]{} (1979) 461; C. Zhai and B. Kastening, Phys. Rev.  [D 52]{} (1995) 7232 \[hep-ph/9507380\]; A. Vuorinen, Phys. Rev. D [67]{} (2003) 074032 \[hep-ph/0212283\]. S. Caron-Huot and G.D. Moore, arXiv:0708.4232. T. Toimela, Phys. Lett. B [124]{} (1983) 407; P. Arnold and C. Zhai, Phys. Rev.  [D 50]{} (1994) 7603 \[hep-ph/9408276\]; [*ibid.*]{}  [51]{} (1995) 1906 \[hep-ph/9410360\]; K. Kajantie, M. Laine, K. Rummukainen and Y. Schröder, Phys. Rev. D [67]{} (2003) 105008 \[hep-ph/0211321\]. A.K. Rebhan, Phys. Rev.  D [48]{} (1993) 3967 \[hep-ph/9308232\]. A.D. Linde, Phys. Lett. [B 96]{} (1980) 289; D.J. Gross, R.D. Pisarski and L.G. Yaffe, Rev. Mod. Phys. [53]{} (1981) 43. P. Arnold and L.G. Yaffe, Phys. Rev.  D [52]{} (1995) 7208 \[hep-ph/9508280\]; M. Laine and O. Philipsen, Phys. Lett.  B [459]{} (1999) 259 \[hep-lat/9905004\]. A. Hietanen, K. Kajantie, M. Laine, K. Rummukainen and Y. Schröder, [JHEP]{} [01]{} (2005) 013 \[hep-lat/0412008\]; A. Hietanen and A. Kurkela, [JHEP]{} [11]{} (2006) 060 \[hep-lat/0609015\]; F. Di Renzo, M. Laine, V. Miccio, Y. Schröder and C. Torrero, [JHEP]{} [07]{} (2006) 026 \[hep-ph/0605042\]. E. Braaten and A. Nieto, Phys. Rev. D 53 (1996) 3421 \[hep-ph/9510408\]. J.P. Blaizot, E. Iancu and A. Rebhan, [Phys. Rev.]{} [D 68]{} (2003) 025011 \[hep-ph/0303045\]; M. Laine and M. Vepsäläinen, JHEP [02]{} (2004) 004 \[hep-ph/0311268\]; P. Giovannangeli, Nucl. Phys. B [738]{} (2006) 23 \[hep-ph/0506318\]. M. Laine, O. Philipsen, P. Romatschke and M. Tassler, JHEP [03]{} (2007) 054 \[hep-ph/0611300\]. M. 
Laine, JHEP [05]{} (2007) 028 \[arXiv:0704.1720\]. M. Laine, O. Philipsen and M. Tassler, JHEP [09]{} (2007) 066 \[arXiv:0707.2458\]. L.D. McLerran and T. Toimela, Phys. Rev. D [31]{} (1985) 545; H.A. Weldon, Phys. Rev. D [42]{} (1990) 2384; C. Gale and J.I. Kapusta, Nucl. Phys. B [357]{} (1991) 65. A. Pineda and J. Soto, Nucl. Phys. B (Proc. Suppl.)  [64]{} (1998) 428 \[hep-ph/9707481\]; N. Brambilla, A. Pineda, J. Soto and A. Vairo, Nucl. Phys. B [566]{} (2000) 275 \[hep-ph/9907240\]. W.E. Caswell and G.P. Lepage, Phys. Lett. B [167]{} (1986) 437. N. Brambilla, A. Pineda, J. Soto and A. Vairo, Rev. Mod. Phys.  [77]{} (2005) 1423 \[hep-ph/0410047\]. J.G. Körner and G. Thompson, Phys. Lett.  B [264]{} (1991) 185. K.G. Chetyrkin and M. Steinhauser, Phys. Rev. Lett.  [83]{} (1999) 4001 \[hep-ph/9907509\]. A. Czarnecki and K. Melnikov, Phys. Rev. Lett.  [80]{} (1998) 2531 \[hep-ph/9712222\]; M. Beneke, A. Signer and V.A. Smirnov, Phys. Rev. Lett.  [80]{} (1998) 2535 \[hep-ph/9712302\]. T. Umeda, R. Katayama, O. Miyamura and H. Matsufuru, Int. J. Mod. Phys.  A [16]{} (2001) 2215 \[hep-lat/0011085\]. T. Hatsuda, J. Phys. G [34]{} (2007) S287 \[hep-ph/0702293\]. M.J. Strassler and M.E. Peskin, Phys. Rev.  D [43]{} (1991) 1500. K. Kajantie, M. Laine, K. Rummukainen and M. Shaposhnikov, Nucl. Phys. B [503]{} (1997) 357 \[hep-ph/9704416\]. F. Karsch, E. Laermann, P. Petreczky and S. Stickan, Phys. Rev.  D [68]{} (2003) 014504 \[hep-lat/0303017\]; G. Aarts and J.M. Martínez Resco, Nucl. Phys.  B [726]{} (2005) 93 \[hep-lat/0507004\]; A. Mócsy and P. Petreczky, Phys. Rev. D [73]{} (2006) 074007 \[hep-ph/0512156\]; G. Aarts and J. Foley, JHEP [02]{} (2007) 062 \[hep-lat/0612007\]. M. Beneke, hep-ph/9911490. G. Aarts, C. Allton, J. Foley, S. Hands and S. Kim, Phys. Rev. Lett.  [99]{} (2007) 022002 \[hep-lat/0703008\]; H.B. Meyer, Phys. Rev.  D [76]{} (2007) 101701 \[arXiv:0704.1801\]. [^1]: In ref. 
[@static], we set $\vec{r}' = \vec{0}$, and denoted the correlator by $ \check C_{>}(t,\vec{r}) \equiv C_{>}^V(t;\vec{r},\vec{0}) $. However, with certain channels, it will be advantageous to keep $\vec{r}'\neq\vec{0}$, because then the singularities from the static potential at $\vec{r} = \vec{0}$, and from the initial condition of the Schrödinger equation at $\vec{r} = \vec{r}'$, do not overlap. [^2]: A more systematic approach might follow by generalizing the framework of PNRQCD [@pnrqcd] to finite $T$. [^3]: Since both Euclidean and Minkowskian objects appear in this paper, we try to distinguish between them by denoting the former with a tilde. In particular, $\tilde P = (\tilde p_{\rmi{0f}},\mathbf{p})$ denotes fermionic Euclidean four-momenta, while $\tilde\gamma_\mu$ stand for Euclidean Dirac matrices, satisfying $\{\tilde \gamma_\mu, \tilde \gamma_\nu \} = 2 \delta_{\mu\nu}$. Any further unspecified conventions can be found in ref. [@static]. [^4]: Take first a Fourier transform, $ \tilde C_E(\omega_{\rmi{b}}) = \int_0^\beta \! {\rm d}\tau \, e^{i \omega_{\rmi{b}}\tau} C_E(\tau) $, where $\omega_{\rmi{b}}$ is a bosonic Matsubara frequency; then carry out the analytic continuation $ \rho(\omega) = \frac{1}{2i} [ \tilde C_E(-i[\omega+i 0^+]) - \tilde C_E(-i[\omega-i 0^+]) ] $. A typical term in $C_E(\tau)$, of the form $ \exp(\Delta_1 \tau + \Delta_2 (\beta-\tau)) $, becomes $ \rho(\omega) = - \pi ( e^{\beta \Delta_1} - e^{\beta \Delta_2} ) \delta(\omega + \Delta_1 - \Delta_2) $. [^5]: To 1-loop order, $ M = m_{{{\overline{\mbox{\tiny\rm{MS}}}}}}(m_{{{\overline{\mbox{\tiny\rm{MS}}}}}}) (1 + g^2 C_F/4\pi^2) $, $ V^k_{{\mbox{\tiny\rm{NRQCD}}}}(x;\vec{0}) = V^k_{{\mbox{\tiny\rm{QCD}}}}(x;\vec{0}) (1 + g^2 C_F/2\pi^2) $. [^6]: It may be noted, for instance, that the scalar density requires renormalization, unlike the vector current.
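As a cross-check of the analytic continuation recipe quoted in footnote 4, the standard manipulation can be spelled out for the example term given there:

```latex
% For C_E(\tau) = e^{\Delta_1 \tau + \Delta_2 (\beta - \tau)} and a
% bosonic Matsubara frequency \omega_b (so that e^{i\omega_b \beta} = 1):
\begin{align*}
 \tilde C_E(\omega_{\rm b})
  &= \int_0^\beta \! {\rm d}\tau \,
     e^{(i\omega_{\rm b} + \Delta_1 - \Delta_2)\tau}\, e^{\beta \Delta_2}
   = \frac{e^{\beta \Delta_1} - e^{\beta \Delta_2}}
          {i\omega_{\rm b} + \Delta_1 - \Delta_2} \,.
\end{align*}
% Continuing i\omega_b -> \omega \pm i0^+ and using
% 1/(x + i0^+) - 1/(x - i0^+) = -2\pi i \, \delta(x):
\begin{align*}
 \rho(\omega)
  &= \frac{1}{2i}
     \bigl( e^{\beta \Delta_1} - e^{\beta \Delta_2} \bigr)
     \bigl( -2\pi i \bigr)\, \delta(\omega + \Delta_1 - \Delta_2)
   = -\pi \bigl( e^{\beta \Delta_1} - e^{\beta \Delta_2} \bigr)\,
     \delta(\omega + \Delta_1 - \Delta_2) \,,
\end{align*}
% reproducing the expression quoted in the footnote.
```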
--- abstract: 'B cells develop high affinity receptors during the course of affinity maturation, a cyclic process of mutation and selection. At the end of affinity maturation, a number of cells sharing the same ancestor (i.e. in the same “clonal family”) are released from the germinal center; their amino acid frequency profile reflects the allowed and disallowed substitutions at each position. These clonal-family-specific frequency profiles, called “substitution profiles”, are useful for studying the course of affinity maturation as well as for antibody engineering purposes. However, most often only a single sequence is recovered from each clonal family in a sequencing experiment, making it impossible to construct a clonal-family-specific substitution profile. Given the public release of many high-quality large B cell receptor datasets, one may ask whether it is possible to use such data in a prediction model for clonal-family-specific substitution profiles. In this paper, we present the method “Substitution Profiles Using Related Families” (SPURF), a penalized tensor regression framework that integrates information from a rich assemblage of datasets to predict the clonal-family-specific substitution profile for any single input sequence. Using this framework, we show that substitution profiles from similar clonal families can be leveraged together with simulated substitution profiles and germline gene sequence information to improve prediction. We fit this model on a large public dataset and validate the robustness of our approach on an external dataset. Furthermore, we provide a command-line tool in an open-source software package (<https://github.com/krdav/SPURF>) implementing these ideas and providing easy prediction using our pre-fit models.' author: - | Amrit Dhar$^{1,2,*}$, Kristian Davidsen$^{2,*}$, Frederick A. Matsen IV$^{2,\dag}$, Vladimir N. 
Minin$^{3,\dag}$\ \ $^1$Department of Statistics, University of Washington, Seattle\ $^2$Fred Hutchinson Cancer Research Center\ $^3$Department of Statistics, University of California, Irvine\ $^{*}$joint first authors\ $^{\dag}$corresponding authors: [matsen@fredhutch.org](matsen@fredhutch.org), [vminin@uci.edu](vminin@uci.edu) bibliography: - 'main.bib' title: Predicting B Cell Receptor Substitution Profiles Using Public Repertoire Data --- Introduction {#introduction .unnumbered} ============ In the therapeutic antibody discovery and engineering field, researchers commonly isolate antibodies from animal or human immunizations and screen for functional properties such as binding to a target protein. Following the initial screening process, a small number of well-behaving antibodies (hits) are isolated for more rigorous examination of their biophysical properties in order to determine their potential as a therapeutic. After this stage, only a few final antibodies remain as lead candidates. However, even these carefully selected antibodies often have immunogenic peptides or other undesirable properties such as poor thermo/chemical stability and aggregation tendencies. To address these problems, the art of antibody engineering has emerged [@igawa2011engineering], with numerous rational design strategies developed to mitigate these liabilities. Researchers have removed hydrophobic surface patches to avoid aggregation [@clark2014remediating; @casaz2014resolving; @courtois2016rational; @geoghegan2016mitigation], “deimmunized” complementarity-determining regions by screening immunogenic peptides and mutating positions detrimental for peptide MHCII binding [@harding2010immunogenicity], and improved thermostability through stable framework grafting [@mcconnell2014general] and targeted mutagenesis using predictions from proprietary structure/sequence analysis software [@seeliger2015boosting].
Although referred to as “rational”, the choice of which amino acid to use for a site-directed mutation is often made using 1) the germline as a reference, 2) biochemical similarity between amino acids, or 3) the highest probability amino acid from a generic substitution matrix (e.g. BLOSUM) [@henikoff1992amino]. However, none of these three methods is explicitly designed to conserve antibody functionality (i.e. binding to the same epitope with the same kinetics), so mutations are likely to have negative side effects on affinity. These considerations motivate a prediction problem: given a B cell receptor (BCR) sequence, which positions can be modified, and to which amino acids, without drastically changing the binding properties of the resulting BCR? An immunization-derived antibody has already implicitly explored the mutational space through the population of B cells from which it derives, referred to as its clonal family (CF). The members of a CF are raised during affinity maturation in a germinal center and carry fitness information about the effect of amino acid substitutions. A profile of the observed substitutions aggregated over all the B cells in a CF reveals which sites are more conserved, which sites can be more freely edited, and which amino acids can be used for replacements. However, we generally do not sequence all the B cells that are released from a germinal center so the information to make such a substitution profile is lost. Thus, we can formulate a more specific version of our prediction problem: given bulk BCR data and a single input sequence, can we infer the most likely per-site substitutions that are allowed in its true germinal center lineage? We begin by reviewing the natural mutation and selection process of germinal center affinity maturation. The Darwinian selection undertaken inside a germinal center is driven by B cells’ ability to bind the antigen through the membrane-embedded BCR.
The population of B cells in a germinal center is under stringent selection while being highly mutated, driving the cell population towards higher and higher affinity until the germinal center is dissolved. Each germinal center is seeded by around one hundred naive B cells, but eventually internal competition makes one or a few of these lineages take over the whole germinal center [@tas2016visualizing]. Although B cells in the germinal center reaction experience an extraordinarily high mutation rate ($10^6$ fold higher than the regular somatic mutation rate [@victora2012germinal]), they rarely harbor more than 15% mutations at the DNA level [@Briggs2017]. However, since they must maintain some degree of antigen specificity to survive during the course of the germinal center reaction, lineages evolve in small incremental steps [@kepler2014reconstructing; @kuraoka2016complex] and therefore, even lineages that drift far away from their naive B cell ancestor most likely maintain the same epitope specificity throughout the germinal center reaction [@Schmidt2013-jw]. We can describe the combination of germinal center mutation and selection dynamics by computing per-site amino acid frequency vectors from observed BCR sequence data. We follow previous authors in calling site-specific amino acid probability vectors “substitution profiles”, where each vector in a profile stores the probabilities of observing the 20 different amino acids at a given site [@sheng2017gene]. We use the concept of a clonal family, defined by a shared naive DNA sequence, to segment BCR sequences into evolutionarily-related groups [@ralph2016likelihood]. CF inference is highly informed by nucleotide sequences and therefore performed using DNA sequences. 
This makes DNA-level information necessary even though germinal center selection operates at the protein level and synonymous codons do not possess any fitness advantages (modulo transcription rate differences and codon bias, which we follow many others in ignoring here). The per-site amino acid frequency vectors described above form the substitution profile estimates; the substitution profile estimates converge to the true substitution profiles as the number of sequences sampled from the same CF tends to infinity. ![ Amino acid substitution profiles viewed from three different perspectives: High-throughput sequencing data () yields large amounts of VDJ sequences, but because of uneven sampling many CFs will be sampled just once, resulting in poor representations of the amino acid substitution profiles of those true CFs. “Substitution Profiles Using Related Families” () is a statistical framework that integrates large scale Rep-Seq data to predict amino acid substitution profiles for singleton CFs. affinity maturation will test many different mutations and the resulting CFs reflect the amino acid substitution profiles that we attempt to predict. []{data-label="fig:overview_figure"}](figures/overview_figure_i3.pdf){width="\textwidth"} Most CFs do not contain enough sequences in order to get a detailed substitution profile estimate. Indeed, most CFs in repertoire sequencing (Rep-Seq) samples have few members and a large fraction are singletons due to the exponential nature of the CF size distribution [@ralph2016likelihood]. Additionally, many antibody screening methods are not geared towards whole repertoire sequencing. One may wish, then, to enhance the substitution profile estimates for data-sparse CFs with substitution profile information from similar CFs. 
In this paper, we present “Substitution Profiles Using Related Families” (SPURF), a penalized tensor regression framework that integrates multiple sources of information to predict the CF-specific amino acid frequency profile for a single input BCR sequence (). Some of these information sources include substitution profiles for CFs in large, publicly available BCR sequence datasets and germline gene sequence information. We combine the local context-specific profile information with global profile information derived from other related germinal centers by regularizing the noisy local substitution profile estimate and pooling it closer towards more robust global profile estimates. Even though each germinal center focuses on binding to a unique epitope context, there are structural and possibly functional properties associated with BCR sequences that are common across germinal centers that we can leverage. In addition, our inference machinery uses both standard and spatial lasso penalties as model regularizers and, as a result, furnishes sparse, interpretable parameter estimates. While our output type shares some similarities with that described by @sheng2017gene, the proposed objective, approach, and details differ (e.g. they predict substitution profiles for gene families, we predict substitution profiles for CFs). We enable substitution profile prediction for single input BCR sequences based on profiles derived from a high-quality repertoire dataset that contains B cell samples from many human donors. To demonstrate the usefulness of our technique, we validate SPURF on an external dataset containing CFs extracted from a single human donor. Lastly, we implement SPURF in an open-source software package (<https://github.com/krdav/SPURF>), which outputs a predicted CF-specific substitution profile and an associated logo plot based on a single input BCR sequence.
Methods {#methods .unnumbered} ======= Overview {#overview .unnumbered} -------- The aim of our model is to take a single sequence and predict the substitution profile for the CF from which this single sequence derived. For this prediction problem, we have no direct information about this desired substitution profile other than the information contained in the input sequence itself, but we may use other information (e.g. from the inferred germline gene, simulated substitutions, or information derived from published BCR sequence datasets). For large CFs, a CF-specific substitution profile can be constructed simply by counting and making a per-site frequency matrix, with the rows of the matrix representing each of the 20 amino acids, and the columns being the sequence positions. For training, we extract a collection of such large CFs and use them to build “ground truth” CF-specific substitution profiles as a training set for fitting the model. A randomly sampled single sequence is then taken out from each of these large CFs to predict the substitution profile, which is compared to the ground truth. We refer to these single sequences, sampled from large CFs, as subsamples. To make the best possible prediction, we need a flexible model framework that can accommodate different sources of information seamlessly (). For example, previous work by @sheng2016effects and @Kirik2017-bc suggests that the various V genes have different characteristic paths of diversification. We can obtain a data-driven summary of that intuition by building profiles from large Rep-Seq data sets stratified by V gene. We may also think that the neutral substitution process is an important factor in determining substitution profiles [@sheng2016effects]. We can quantify that sort of information by repeatedly simulating the neutral substitution process using a context-sensitive model [@cui2016model]. 
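The "counting" construction of a ground-truth substitution profile described above can be written in a few lines. This is an illustrative stdlib-only sketch, not code from the SPURF package; the function and variable names are our own:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def substitution_profile(aligned_seqs):
    """Per-site amino acid frequency matrix for one clonal family.

    `aligned_seqs` are equal-length amino acid sequences, e.g. placed in
    a fixed coordinate system such as AHo numbering.  Returns a list of
    dicts, one per site, mapping amino acid -> observed frequency
    (non-amino-acid characters such as gaps are ignored in the totals).
    """
    length = len(aligned_seqs[0])
    assert all(len(s) == length for s in aligned_seqs)
    profile = []
    for site in range(length):
        counts = Counter(s[site] for s in aligned_seqs)
        total = sum(counts[aa] for aa in AMINO_ACIDS)
        profile.append({aa: counts[aa] / total for aa in AMINO_ACIDS})
    return profile

# Toy clonal family of three sequences differing at site 2:
cf = ["CARD", "CARD", "CAKD"]
prof = substitution_profile(cf)
print(prof[2]["R"])  # 2/3 of the family carries R at site 2
```

With enough sampled members per CF, these per-site frequencies converge to the true substitution profile, which is exactly what the training set of large CFs provides.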
To make predictions using these types of information, we need a way of describing the various sites, and a way of integrating the information across the sites. We use the AHo numbering scheme [@honegger2001yet] to provide a single coordinate system to all sequences via its fixed-length numbering vector going from 1 to 149. Given this coordinate system, we use a site-wise weighted average of the input predictive profiles using an $\boldsymbol{\alpha}$ weight vector for each source of profile information. To train this model, we fit the $\boldsymbol{\alpha}$ vectors by minimizing some objective function that quantifies the difference between the predicted profiles (where the prediction uses the subsampled sequence and the external profile information) and the “ground truth” substitution profiles from the large CFs. Any objective function could be used, but here we provide implementations of two such functions, a “fine-grained” $L_2$-error-based objective and a “coarse-grained” Jaccard-similarity-based objective [@jaccard1912distribution]. ![ SPURF uses a per-site linear combination of substitution profiles from diverse sources to predict complete substitution profiles from a single member of a CF. At the top are the different profiles that serve as inputs to the model, some directly related to the naive sequence ($\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ and $\widehat{\mathbf{X}}_{{\text{neut}}}$), and others partitions of the public Rep-Seq datasets ($\widehat{\mathbf{X}}_{{\text{vgene}}}$ and $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$). To predict a substitution profile, a weighted average is taken over the input sequence $\mathbf{X}$ and external profiles $\mathbf{X}^* = \bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}, \widehat{\mathbf{X}}_{{\text{neut}}}, \widehat{\mathbf{X}}_{{\text{vsubgrp}}}\bigr\}$ (see the dashed line bubble).
The vertical blue arrow indicates that the weighted average (in the dashed line bubble) occurs at each of the 149 AHo positions. Once a predicted profile is generated, it is compared to ground truth using either $L_2$ error or Jaccard similarity as a performance metric. The $\boldsymbol{\alpha}$ vectors are estimated by optimizing the objective function, which also includes a statistical regularization term to prevent overfitting (not shown for simplicity). []{data-label="fig:model_overview"}](figures/model_overview_i4.pdf){width="100.00000%"}

We use two forms of regularization to avoid overfitting the many parameters of this model. This includes a standard lasso penalty to shrink to zero the weights that do not contribute significantly to prediction performance [@tibshirani1996regression]. We also use a fused lasso penalty [@tibshirani2005sparsity; @tibshirani2014adaptive] to smooth differences between parameters at nearby sites in the sequence. These regularization terms have tuning parameters that regulate the strength of the penalties and are estimated using cross-validation. Given this setup, a forward stepwise selection procedure is run with cross-validation to pick the set of external profiles to use in the final model. As a last check, this model is tested on an external dataset to give a fair estimate of the prediction performance.

Data {#data .unnumbered}
----

We divide the input data into two parts, each serving a distinct purpose: 1) model fitting and model testing, and 2) providing “public” substitution profiles over clustered data to be used by our model. Throughout this work, we are careful not to use the same data for both purposes as this would bias our estimates; as a final validation, we test SPURF on an external dataset which is used only in this validation. Because we do not model sequence error, we include only high-quality data in which we have high confidence.
We collect post-processed data files from 6 published works on Rep-Seq, which we refer to as repertoire data 1 to 6 (RD1-6):

1. RD1 from @gupta2017hierarchical, which is an Illumina MiSeq re-sequencing of the samples in @laserson2014high, a study that sequenced multiple time-points before and after influenza vaccination of 3 donors using the 454 pyrosequencing platform.

2. RD2 from @vander2017dysregulation, from a study of the auto-immune disease Myasthenia Gravis (MG) involving 9 MG patients and 4 healthy donors.

3. RD3 from @stern2014b, containing data from different tissues in a study of B cell response in 4 multiple sclerosis patients.

4. RD4 from @tsioris2015neutralizing, from a study of neutralizing antibodies against the West Nile virus by sequencing naive and memory cells from 7 virus-infected donors.

5. RD5 from @shugay2014towards, from a study of Rep-Seq error correction by sequencing naive, plasma, and memory cells from a single healthy donor.

6. RD6 from @meng2017atlas, from the “B cell tissue atlas” acquired from the ImmuneDB web portal.

All datasets are acquired in their post-processed form with read processing performed as described in their respective publications. The first five datasets (RD1-5) are prepared from unique molecular identifier (UMI) barcoded cDNA spanning the whole VDJ region and sequenced on the Illumina MiSeq platform using overlapping paired-end reads. Using the UMI, these reads are processed to address both PCR and sequencing errors, giving high-confidence reads [@shugay2014towards]. Briefly, UMIs are used for error correction in conjunction with either the pRESTO [@vander2014presto] or MIGEC [@shugay2014towards] processing pipeline and an appropriate Phred quality score cutoff. Paired-end reads are assembled using pRESTO, and only the set of high-confidence assembled reads constitutes the final dataset used in this work.
RD6 is the only dataset not prepared with UMIs; however, it is sequenced directly from genomic DNA (gDNA) instead of the more common practice of sequencing mRNA. Sequencing gDNA has the benefit of avoiding mutations introduced by the transcription machinery as well as mutations introduced in the RT-PCR step. On the other hand, DNA sequencing is not able to discriminate between expressed and unexpressed BCRs (e.g. in the case of faulty VDJ recombination), and therefore we apply aggressive filtering of non-functional BCR sequences. We prefer quality over quantity and therefore avoid datasets from the 454 technology because of their higher indel frequencies compared to those from Illumina technologies [@loman2012performance].

Individual sequence files are merged based on donor identity so that the number of sample files matches the number of donors; this process yields 33 donor files. The donor files are then annotated and partitioned into CFs using the [`partis`]{} software [@ralph2016consistency; @ralph2016likelihood]. Each donor file is run separately from the other files, so CFs are defined by their unique [`partis`]{}-inferred naive sequence and donor identity. To ensure we obtain the highest quality and most biologically relevant sequences, [`partis`]{} is run in its most restrictive mode, discarding all reads with VDJ recombinations that are deemed unproductive because of out-of-frame N/P junction nucleotides, missing invariant codons, or stop codons inside the VDJ region; furthermore, the most accurate [`partis`]{} partitioning mode (“full”) is used to get the best CF estimates. Lastly, productive VDJ-recombined sequences are removed if they contain indels, to ensure concordance between the length of the naive sequence and the length of the read sequences in its CF.
At this stage, some sequences contain ambiguous bases (e.g., because of primer masking); these are allowed to pass only if the ambiguous bases are inside the first or last 30 nucleotides of the VDJ region (equivalent to the length of the potentially masked PCR primers), otherwise they are discarded. This is a way of substituting the error-prone ends with neutral bases that minimize variance and maintain a conservative estimate of the substitutions; we also note that this has no apparent effect on the subsequently-described estimates ( and ). For all sequences that pass this requirement, ambiguous bases are substituted with bases from the naive sequence in batches of 3 nucleotides (i.e. one codon) at a time until all ambiguous bases are resolved. Sequences are then translated into their respective amino acid sequences and de-duplication of repeated amino acid sequences is done within each CF. Because our statistical methodology operates on these amino acid sequences, we use the word “sequence” in subsequent sections to refer to these amino acid sequences. All CFs with fewer than 5 unique sequences are discarded. From these remaining CFs, their inferred naive sequences are used for antibody sequence numbering with the ANARCI software [@dunbar2015anarci] under the AHo numbering scheme [@honegger2001yet]. As a result of our restriction to non-indel sequences, all sequences within a given CF have equal length; thus, the AHo numbering from the naive sequence can be positionally transferred to all its CF-related read sequences. Finally, for each CF, the amino acid usage is extracted as a vector of counts at each AHo position. This overall dataset, which we call the “aggregated” dataset, contains 518,174 sequences distributed over 31,893 CFs and is built as a matrix of counts with rows denoting CFs and columns representing AHo positions and amino acid identities. All data used to build this aggregated dataset is public and freely available. 
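The codon-wise resolution of ambiguous bases described above can be sketched as follows. This is a minimal illustration under our own reading of the procedure (the function name and the `"N"` ambiguity code are our choices, not the SPURF code):

```python
def resolve_ambiguous(read, naive, edge=30):
    """Replace ambiguous bases ('N') with the naive sequence, one codon at a time.

    Reads with an ambiguous base outside the first/last `edge` nucleotides are
    rejected (returns None), mirroring the filtering step described in the text;
    otherwise every codon containing an 'N' is copied whole from the naive
    sequence until no ambiguity remains.
    """
    assert len(read) == len(naive)
    if any(b == "N" and edge <= i < len(read) - edge for i, b in enumerate(read)):
        return None  # ambiguity outside the primer-masked ends: discard the read
    out = list(read)
    for start in range(0, len(read) - len(read) % 3, 3):
        if "N" in out[start:start + 3]:
            out[start:start + 3] = naive[start:start + 3]
    return "".join(out)
```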
We provide processed data partitioned into CFs upon request.

### Model Fitting Dataset {#model-fitting-dataset .unnumbered}

To fit our CF-specific substitution profile prediction model, it is desirable to use the CFs from the aggregated dataset with the most sequence members so we can train using the observed substitution profiles with the least amount of noise; on the other hand, it is also desirable to extract CFs from as many donors as possible to avoid overfitting towards a few similar donors. To achieve both goals, we pick 500 CFs as a “model fitting” dataset as follows. We first exclude any CFs with fewer than 100 sequences from being eligible to be picked. We then cycle through donors, each time picking the largest remaining eligible CF. If a donor does not have any remaining eligible CFs, it is skipped. The process ends when 500 CFs are found; all unpicked CFs are used as the “public” dataset. In addition, we perform subsampling for each CF in the model fitting dataset; this is the information from which we would like to predict the full profile. First, a single sequence is randomly chosen from each CF, then [`partis`]{} is re-run using each of these subsampled sequences to re-do the VDJ annotation and naive sequence inference. For some inferred naive DNA sequences, a stop codon is incidentally present in the N/P nucleotides of the junction region; these are considered spurious and replaced by the identically positioned codon from the input sequence. We stress that the CF-specific annotation and naive sequence are inferred solely based on the subsampled sequence itself and are not determined using information from the other CF sequence members. Additionally, the parameters used within the [`partis`]{} clustering and annotation procedure are derived from an external dataset.
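The donor-cycling selection of large CFs described above can be sketched as a greedy loop. A schematic illustration with hypothetical data structures (a donor-to-CF-list mapping), not the SPURF implementation:

```python
def pick_model_fitting_cfs(cfs_by_donor, n_target=500, min_size=100):
    """Cycle through donors, each time taking the largest remaining eligible CF.

    `cfs_by_donor` maps a donor id to a list of (cf_id, n_sequences) pairs;
    CFs smaller than `min_size` are never eligible.  Donors with no eligible
    CFs left are skipped.  Returns the picked cf_ids, in pick order.
    """
    remaining = {
        donor: sorted((cf for cf in cfs if cf[1] >= min_size),
                      key=lambda cf: cf[1], reverse=True)
        for donor, cfs in cfs_by_donor.items()
    }
    picked = []
    while len(picked) < n_target and any(remaining.values()):
        for donor in sorted(remaining):       # one pass = one cycle over donors
            if remaining[donor]:
                picked.append(remaining[donor].pop(0)[0])
                if len(picked) == n_target:
                    break
    return picked

cfs = {"d1": [("a", 150), ("b", 120)], "d2": [("c", 200)], "d3": [("d", 50)]}
```

With this toy input, donor d3 has no eligible CF (50 < 100) and is skipped on every cycle, while d1 and d2 alternate contributions.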
Once we finish the [`partis`]{} inference process on the subsampled sequences, we construct the amino acid count matrix for these same sequences; we denote these substitution profiles as the “subsampled” profiles because they are subsampled from the “full” profiles in the model fitting dataset.

### Simulation of Neutral Substitution Profiles {#simulation-of-neutral-substitution-profiles .unnumbered}

For each of the 500 subsampled substitution profiles, we also simulate a neutral substitution profile via a context-sensitive model. For each subsampled sequence, we calculate its number of somatic hypermutations (SHMs) and introduce that number of mutations sequentially into the inferred naive DNA sequence according to the BCR-specific neutral substitution model S5F [@cui2016model]. Once the last mutation is introduced, the simulated DNA sequence is translated into an amino acid sequence and stored as a sample of the neutral substitution process. This procedure is repeated 10,000 times and the count profile aggregated over all the samples is referred to as the “neutral” profile.

### External Validation Dataset {#external-validation-dataset .unnumbered}

For validation, a test set, called “Briggs”, is made from the healthy donor single cell droplet sequencing dataset described in [@Briggs2017]. Briefly, the data are generated by partitioning 3 million B cells into droplets across 6 emulsion pools and then reverse transcribing mRNA inside these droplets, attaching both a droplet barcode and a molecular barcode. After breaking the emulsion, cDNA is sequenced and processed with UMI consensus building in pRESTO. The highest-quality UMI consensus sequence is extracted from each droplet and aggregated into the final heavy chain dataset, which is then further partitioned into CFs using [`partis`]{}.
Finally, the validation dataset is built up in the same manner as the model fitting dataset, where the only difference is that we allow smaller CFs to enter this dataset (minimum 28 sequences) in order to increase the number of extracted CFs to 100. For this external dataset, sequences and processed data partitioned into CFs are available upon request.

  --------------- --------------------- ----------------- ------------------------ ---------------------- ------------------------- ----------------------
  Dataset         $N_{\text{donors}}$   $N_{\text{CF}}$   Total $N_{\text{seq}}$   Min $N_{\text{seq}}$   Median $N_{\text{seq}}$   Max $N_{\text{seq}}$
  Aggregated      33                    31,893            518,174                  5                      9                         2,709
  Model fitting   15                    500               98,887                   100                    147                       2,709
  Public          33                    31,393            419,287                  5                      8                         104
  Briggs          1                     100               6,702                    28                     44                        370
  --------------- --------------------- ----------------- ------------------------ ---------------------- ------------------------- ----------------------

  : Number of donors ($N_{\text{donors}}$), number of CFs ($N_{\text{CF}}$), number of sequences from all CFs (Total $N_{\text{seq}}$), smallest CF size (Min $N_{\text{seq}}$), median CF size (Median $N_{\text{seq}}$), and maximum CF size (Max $N_{\text{seq}}$). “Aggregated” is the base dataset aggregating RD1-6. “Model fitting” refers to the dataset with the 500 largest CFs from the “Aggregated” dataset. “Public” is the dataset left after the “Model fitting” dataset is extracted from the “Aggregated” dataset. “Briggs” is the external validation dataset used for testing. []{data-label="table:ds_sumstats"}

Input Data Tensor {#input-data-tensor .unnumbered}
-----------------

Before we present our penalized tensor regression model, we first describe how the input data for the model is constructed, building off the data descriptions in the last subsection. Throughout the rest of this section, we assume the count matrices are normalized to frequencies and reorganized into three-dimensional tensors (i.e. arrays) as follows.
For any substitution profile tensor $\mathbf{T} = \{T_{i,j,k}\}$, let $T_{i,j,k}$ denote the substitution frequency of the $k$th amino acid at the $j$th AHo position for the $i$th CF; we represent the subsampled, full, and public substitution profile tensors as $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$, respectively. Our goal is to use the subsampled profiles $\mathbf{X}$ to predict the corresponding full substitution profiles $\mathbf{Y}$ (i.e. we want to construct a function $F(\mathbf{X})$ such that $F(\mathbf{X}) \approx \mathbf{Y}$). We incorporate information from the public dataset $\mathbf{Z}$ to enhance these predictions. In addition to the subsampled profiles, we use other types of substitution profiles within $F(\mathbf{X})$:

1. Public substitution profiles segmented by the inferred V-subgroup label ($\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$);
2. Public substitution profiles segmented by the inferred V-gene label ($\widehat{\mathbf{X}}_{{\text{vgene}}}$);
3. Inferred naive sequence “substitution profiles” ($\widehat{\mathbf{X}}_{{\text{naiveAA}}}$);
4. Public substitution profiles segmented by the inferred naive sequence ($\widehat{\mathbf{X}}_{{\text{naiveAA-clust}}}$);
5. Public substitution profiles segmented by the original frequency profiles ($\widehat{\mathbf{X}}_{{\text{clust}}}$);
6. Neutral substitution profiles ($\widehat{\mathbf{X}}_{{\text{neut}}}$).

To compute the external profiles in $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$ (resp. $\widehat{\mathbf{X}}_{{\text{vgene}}}$), we cluster the public dataset $\mathbf{Z}$ by averaging its CF-specific substitution profiles according to the [`partis`]{}-inferred [@ralph2016consistency] IMGT-defined [@lefranc2001nomenclature] V-subgroup (resp. V-gene) labels and then assign each row in $\mathbf{X}$ to a V-subgroup (resp. V-gene) cluster profile according to its V-subgroup (resp. V-gene) identity.
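The average-then-assign construction of the V-gene (or V-subgroup) external profiles could be sketched as follows. A minimal NumPy illustration with our own helper names and toy dimensions, not the SPURF code:

```python
import numpy as np

def vgene_cluster_profiles(Z, vgene_labels):
    """Average public CF profiles (Z: n_cf x n_pos x 20) sharing a V-gene label."""
    clusters = {}
    for label in set(vgene_labels):
        mask = np.array([lab == label for lab in vgene_labels])
        clusters[label] = Z[mask].mean(axis=0)
    return clusters

def assign_external_profiles(clusters, query_labels):
    """Build an external profile tensor by looking up each query CF's cluster."""
    return np.stack([clusters[lab] for lab in query_labels])

# Toy public dataset: 3 CFs, 2 positions, 2 "amino acids"
Z = np.array([[[1., 0.], [0., 1.]],
              [[0., 1.], [1., 0.]],
              [[1., 0.], [1., 0.]]])
clusters = vgene_cluster_profiles(Z, ["V1-5", "V1-5", "V2-2"])
X_hat = assign_external_profiles(clusters, ["V2-2", "V1-5"])
```

The same two steps with V-subgroup labels would produce $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$.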
We obtain the second set of profiles $\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ by using the [`partis`]{}-inferred naive sequences as substitution profiles (these profiles contain zeros and ones because they are based on one sequence only); we re-emphasize that these naive sequences are inferred based only on the corresponding subsampled sequences in $\mathbf{X}$. We cluster the public dataset $\mathbf{Z}$ once more by running K-means clustering based on the inferred naive sequences in $\mathbf{Z}$ and obtain our third set of substitution profiles $\widehat{\mathbf{X}}_{{\text{naiveAA-clust}}}$ by assigning each CF in $\mathbf{X}$ to its closest cluster centroid. The additional cluster profiles $\widehat{\mathbf{X}}_{{\text{clust}}}$ are obtained similarly, except that in this case we run K-means clustering based on the original frequency profiles in $\mathbf{Z}$. The K-means clustering procedure is run over a grid of cluster sizes ranging from 2 to 120 using the algorithm described by @hartigan1979algorithm with the standard Euclidean distance metric. Lastly, the tensor $\widehat{\mathbf{X}}_{{\text{neut}}}$ contains the simulated S5F neutral substitution profiles, which are described in the previous subsection. The frequency tensors $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$ and $\widehat{\mathbf{X}}_{{\text{vgene}}}$ are important to include in our analysis because these profiles capture substitution information at the level of the V subgroup (V1, V2, ...) and V gene (V1-5, V2-2, ...), respectively; this is similar to the types of profiles obtained in [@sheng2017gene]. As described in the introduction, most germinal center lineages do not accumulate many mutations relative to the naive sequence, so substitution profiles based solely on the naive sequence (like $\widehat{\mathbf{X}}_{{\text{naiveAA}}}$) may be informative for predicting the mutational patterns at conserved residue positions.
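The closest-centroid assignment used for $\widehat{\mathbf{X}}_{{\text{naiveAA-clust}}}$ and $\widehat{\mathbf{X}}_{{\text{clust}}}$ reduces to a nearest-neighbor lookup once the K-means centroids are fitted. A toy sketch of that lookup (not the Hartigan K-means fit itself, and with our own function name):

```python
import numpy as np

def assign_to_centroids(profiles, centroids):
    """Assign each flattened CF profile to its nearest K-means centroid.

    `profiles` is (n_cf, n_features); `centroids` is (k, n_features).
    Returns the centroid profile each CF maps to and the centroid indices,
    using squared Euclidean distance.
    """
    d2 = ((profiles[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return centroids[nearest], nearest

centroids = np.array([[0., 0.], [1., 1.]])
profiles = np.array([[0.1, 0.0], [0.9, 1.0]])
assigned, idx = assign_to_centroids(profiles, centroids)
```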
In addition, we believe that the $\widehat{\mathbf{X}}_{{\text{naiveAA-clust}}}$ cluster profiles are useful as the naive sequence can greatly influence the pattern of substitutions in a CF due to local sequence context. Unlike the $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$ and $\widehat{\mathbf{X}}_{{\text{vgene}}}$ substitution profiles, which are based on IMGT labeling schemes, the profiles in $\widehat{\mathbf{X}}_{{\text{naiveAA-clust}}}$ (and $\widehat{\mathbf{X}}_{{\text{clust}}}$) are determined by a data-driven clustering procedure, which allows us to group CFs in $\mathbf{Z}$ in a more intricate fashion. The simulated neutral substitution profiles $\widehat{\mathbf{X}}_{{\text{neut}}}$ are able to provide some insight into the CF-specific SHM processes without the corresponding clonal selection effects. To condense our model presentation, we introduce a four-dimensional tensor $\mathbf{X}^*$ that combines as many of the input profiles mentioned previously as we would like, where $p$, the size of the fourth tensor dimension, represents the number of external profiles used. We define $\mathbf{X}^* \equiv \{X^*_{i,j,k,l} \}$ to be the input data tensor that incorporates all the external information we want to use in our substitution profile predictions; note that $i \in \{ 1, ..., N_{CF} \}$ ($N_{CF}$ CFs in the tensors), $j \in \{ 1, ..., 149 \}$ (149 AHo positions), $k \in \{ 1, ..., 20 \}$ (20 amino acids), and $l \in \{ 1, ..., p \}$ ($p$ external profiles). Each element $X^*_{i,j,k,l}$ represents a substitution frequency as described above for $\mathbf{T}_{i,j,k}$; for instance, $X^*_{5,130,1,4}$ represents the substitution frequency of the first amino acid (i.e. alanine) at the 130th AHo position for the 5th CF in the 4th profile in the tensor. In addition, we use the indexing symbol $\bullet$ to extract all elements of a particular array dimension of a tensor (i.e. 
$\mathbf{X}^*_{10, 50, \bullet, 2}$ specifies the full substitution profile of the 20 amino acids at the 50th AHo position for the 10th CF in the 2nd profile in the tensor). This setup allows us to easily include as many external profiles as we would like.

Model Formulation {#model-formulation .unnumbered}
-----------------

Given the subsampled profiles $\mathbf{X}$ and all the external profiles $\mathbf{X}^*$, we compute a weighted average to form an estimator of $\mathbf{Y}$. Our independent-across-sites model $F(\mathbf{X}) = \bigl[ f(\mathbf{X}_{\bullet,1,\bullet}), ..., f(\mathbf{X}_{\bullet,149,\bullet}) \bigr]$ is specified as follows: $$f(\mathbf{X}_{\bullet,j,\bullet}) \equiv f(\mathbf{X}_{\bullet,j,\bullet}; \boldsymbol{\alpha}_{j,\bullet}) = \sum_{l=1}^p \alpha_{j,l} \cdot \mathbf{X}^*_{\bullet,j,\bullet,l} + \Bigl( 1 - \sum_{l=1}^p \alpha_{j,l} \Bigr) \cdot \mathbf{X}_{\bullet,j,\bullet},$$ where $\boldsymbol{\alpha} = \{ \alpha_{j,l} \}$, with $0 \leq \alpha_{j,l} \leq 1$ and $0 \leq \sum_{l=1}^p \alpha_{j,l} \leq 1$, represents the site-specific weights of the different external profiles for $j = 1, ..., 149$ and $l = 1, ..., p$. Although we consider $f$ to be a function of the per-site data $\mathbf{X}_{\bullet,j,\bullet}$, the frequencies $\mathbf{X}^*_{\bullet,j,\bullet,l}$ are computed using sequence-level, site-dependent information. With $149 \times p$ parameter values of $\boldsymbol{\alpha}$, this is a highly parameterized model, so we include regularization terms to prevent overfitting and obtain sparse, interpretable parameter estimates. Specifically, we use standard and spatial (fused) lasso penalties to achieve these goals. Standard lasso penalties shrink individual parameters to zero and are commonly used to obtain sparse solutions in regression problems [@tibshirani1996regression].
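Concretely, the per-site estimator $f$ is a convex combination of the input profile and the external profiles, so a valid prediction remains a frequency distribution. A minimal NumPy sketch of one site's prediction (the function name and toy dimensions are ours, not the SPURF codebase):

```python
import numpy as np

def predict_site(X_j, Xstar_j, alpha_j):
    """Per-site weighted-average prediction.

    X_j:      (n_cf, n_aa)     subsampled profile at AHo position j
    Xstar_j:  (n_cf, n_aa, p)  the p external profiles at position j
    alpha_j:  (p,)             nonnegative weights summing to at most 1
    """
    assert np.all(alpha_j >= 0) and alpha_j.sum() <= 1
    external = np.tensordot(Xstar_j, alpha_j, axes=([2], [0]))
    return external + (1 - alpha_j.sum()) * X_j

# One CF, two "amino acids", one external profile
X_j = np.array([[1.0, 0.0]])
Xstar_j = np.array([[[0.0], [1.0]]])          # external profile is [0, 1]
pred = predict_site(X_j, Xstar_j, np.array([0.25]))
```

Because the weights are constrained to the simplex, each row of `pred` still sums to 1 whenever the inputs do.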
It has been shown that regression models using standard lasso penalties provide more accurate predictions than models using best subset selection penalties when there is a low signal-to-noise ratio [@hastie2017extended], which probably holds true in our problem as well. In addition, standard lasso penalties are convex functions, which is important in a regression problem as it guarantees that a local minimum is indeed a unique global solution [@boyd2004convex]. On the other hand, fused lasso penalties shrink the differences between parameters to zero and are useful in regression problems with spatially-related covariates [@tibshirani2005sparsity]. We believe that the $\boldsymbol{\alpha}$ parameters have a spatial relationship (i.e. adjacent residues are under similar constraints); for instance, given that the mutations in the framework regions are largely related to antibody stability, it makes sense that we would weight external profile information similarly in those regions. The fusion penalty in this setting enforces smoothness of the $\boldsymbol{\alpha}$ trend across the AHo positions. For example, if we penalize first-order differences of the $\boldsymbol{\alpha}$ trend, the fitting procedure will necessarily favor trends that have no slope (i.e. that are piecewise constant). We can obtain more flexible piecewise polynomial $\boldsymbol{\alpha}$ trends by penalizing higher-order successive differences of $\boldsymbol{\alpha}$ [@tibshirani2014adaptive]. In our modeling framework, the standard lasso penalty is represented as $\sum_{j=1}^{149} \sum_{l=1}^p |\alpha_{j,l}| = \bigl|\bigl| \boldsymbol{\alpha} \bigr|\bigr|_1$ and the fused lasso penalty is specified by $\sum_{l=1}^p \bigl|\bigl| \nabla^d (\boldsymbol{\alpha}_{\bullet,l}) \bigr|\bigr|_1$, where $|| \cdot ||_q$ denotes the $L_q$ norm and $\nabla^d ( \cdot )$ represents the $d$th difference operator. 
This $\nabla^d ( \cdot )$ operator accepts a vector $\mathbf{v}$ as input (call its length $n_{\mathbf{v}}$) and outputs a length-$(n_{\mathbf{v}} - d)$ vector that results from successively differencing adjacent elements $d$ times. In the special case when $d = 1$, the fusion penalty becomes $\sum_{l=1}^p \bigl|\bigl| \nabla^1 (\boldsymbol{\alpha}_{\bullet,l}) \bigr|\bigr|_1 = \sum_{j=2}^{149} \sum_{l=1}^p |\alpha_{j,l} - \alpha_{j-1,l}|$; the $|\alpha_{j,l} - \alpha_{j-1,l}|$ terms can be interpreted as first-order discrete derivatives. Our unpenalized objective function can be written as: $$L_2^{\boldsymbol{\alpha}} \equiv L_2^{\boldsymbol{\alpha}}(\mathbf{Y}, F(\mathbf{X})) = \frac{1}{149 \cdot N_{CF}} \sum_{j=1}^{149} \bigl|\bigl| \mathbf{Y}_{\bullet,j,\bullet} - f(\mathbf{X}_{\bullet,j,\bullet}; \boldsymbol{\alpha}_{j,\bullet}) \bigr|\bigr|_2^2,$$ where, as in the last subsection, $N_{CF}$ denotes the number of CFs in $\mathbf{X}$ and $\mathbf{Y}$; we refer to this objective as “$L_2$ Error”. Our penalized estimation problem is defined in the following manner: $$\begin{gathered} \begin{gathered} \widehat{\boldsymbol{\alpha}} = \underset{\boldsymbol{\alpha}}{{\text{argmin}}} \ L_2^{\boldsymbol{\alpha}}(\mathbf{Y}, F(\mathbf{X})) + \lambda_1 \bigl|\bigl| \boldsymbol{\alpha} \bigr|\bigr|_1 + \lambda_2 \sum_{l=1}^p \bigl|\bigl| \nabla^d (\boldsymbol{\alpha}_{\bullet,l}) \bigr|\bigr|_1, \\ \text{s.t. } 0 \leq \alpha_{j,l} \leq 1, \ 0 \leq \sum_{l=1}^p \alpha_{j,l} \leq 1, \ \forall j,l, \end{gathered} \label{eq:min_problem}\end{gathered}$$ where $\lambda_1, \lambda_2 \geq 0$ and $d \in \mathbb{N}$ signify tuning parameters. 
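The penalized objective in Equation  can be evaluated directly with NumPy, where `np.diff(..., n=d, axis=0)` plays the role of $\nabla^d$. A sketch under our own naming (it evaluates the objective only; the box and simplex constraints on $\boldsymbol{\alpha}$ would be handled by the optimizer, not here):

```python
import numpy as np

def penalized_objective(alpha, X, Xstar, Y, lam1, lam2, d=1):
    """L2 error of the per-site weighted-average predictions plus a standard
    lasso penalty on alpha and a d-th order fused lasso penalty along positions.

    alpha: (n_pos, p);  X, Y: (n_cf, n_pos, n_aa);  Xstar: (n_cf, n_pos, n_aa, p).
    """
    n_cf, n_pos = X.shape[0], X.shape[1]
    # f at every site: external mixture plus the remaining weight on X
    pred = (np.einsum("ijkl,jl->ijk", Xstar, alpha)
            + (1 - alpha.sum(axis=1))[None, :, None] * X)
    l2 = ((Y - pred) ** 2).sum() / (n_pos * n_cf)
    lasso = np.abs(alpha).sum()                          # ||alpha||_1
    fused = np.abs(np.diff(alpha, n=d, axis=0)).sum()    # sum_l ||grad^d alpha_l||_1
    return l2 + lam1 * lasso + lam2 * fused

# Tiny example: 1 CF, 2 positions, 3 "amino acids", 1 external profile
X = np.zeros((1, 2, 3)); Y = np.ones((1, 2, 3)); Xstar = np.zeros((1, 2, 3, 1))
obj0 = penalized_objective(np.zeros((2, 1)), X, Xstar, Y, 0.1, 0.1)
obj1 = penalized_objective(np.array([[1.0], [0.0]]), X, Xstar, Y, 0.1, 0.1)
```

In this toy case `obj0` is the pure $L_2$ error, while `obj1` adds one unit each of lasso and fused-lasso penalty.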
The differencing order $d$ is used to specify a given level of smoothness in the spatial $\boldsymbol{\alpha}$ trend estimates because the $\sum_{l=1}^p \bigl|\bigl| \nabla^d (\boldsymbol{\alpha}_{\bullet,l}) \bigr|\bigr|_1$ term in the above minimization problem encourages $\boldsymbol{\alpha}$ trends that have $d$th order discrete derivatives close to 0 (i.e. that are piecewise polynomials of order $d-1$). In addition, careful selection of $\lambda_1$ and $\lambda_2$ is required to obtain an adequate model fit. Unfortunately, this is a constrained optimization problem with a multivariate output, and there is no obvious way to minimize such an objective without resorting to general-purpose optimizers. Therefore, in all our experiments, we use the L-BFGS-B algorithm [@byrd1995limited] to fit the above model.

Jaccard Similarity {#jaccard-similarity .unnumbered}
------------------

While the model described above has computational and statistical appeal, in engineering applications the high-frequency amino acid predictions are often of primary interest; our penalized objective function, however, spreads attention over the complete substitution profiles rather than focusing exclusively on the high-frequency amino acids. To provide a metric more closely aligned with antibody engineering goals, we utilize the Jaccard similarity metric, which can be used to measure differences between predicted and observed high-frequency amino acid sets. Sets of high-frequency amino acids are defined at each position by a minimum frequency cutoff $t$; Jaccard similarities are then computed between the observed and predicted sets and averaged across each CF and AHo position in the dataset. The Jaccard similarity metric [@jaccard1912distribution] measures the similarity between two finite sets. Specifically, for any sets $A$ and $B$, the similarity metric $J(A,B)$ is defined as the ratio of the intersection size $|A \cap B|$ to the union size $|A \cup B|$.
It has these properties: $0 \leq J(A,B) \leq 1$; $J(A,B)=1$ when $A=B$ and $J(A,B)=0$ when $A \cap B = \emptyset$ (empty set). To formally establish our use of Jaccard similarity, we define the following notation. Let $\mathcal{Y}_{i,j} = \{y \in \mathbf{Y}_{i,j,\bullet} \mid y \geq t\}$ represent the set of amino acid frequencies at AHo position $j$ for CF $i$ that has observed frequencies greater than or equal to the cutoff $t$ and denote $\boldsymbol{\mathcal{Y}} \equiv \{ \mathcal{Y}_{i,j} \}$ for $i = 1, ..., N_{CF}$ and $j = 1, ..., 149$. We define $\widehat{\mathcal{F}}^{\mathbf{X}}_{i,j}$ and $\widehat{\boldsymbol{\mathcal{F}}}^{\mathbf{X}} \equiv \{ \widehat{\mathcal{F}}^{\mathbf{X}}_{i,j} \}$ to be the analogous quantities for the predicted amino acid frequencies. If we let $\mathcal{A}(\mathcal{Y}')$ denote a function that accepts as input an amino acid frequency set $\mathcal{Y}'$ (i.e. $\mathcal{Y}_{i,j}$ or $\widehat{\mathcal{F}}^{\mathbf{X}}_{i,j}$) and outputs the corresponding set of amino acid identities, then our Jaccard similarity objective can be written as: $$J_t^{\boldsymbol{\alpha}} \equiv J_t^{\boldsymbol{\alpha}}(\mathbf{Y}, F(\mathbf{X})) = \frac{1}{149 \cdot N_{CF}} \sum_{i=1}^{N_{CF}} \sum_{j=1}^{149} J\bigl(\mathcal{A}(\mathcal{Y}_{i,j}), \mathcal{A}(\widehat{\mathcal{F}}^{\mathbf{X}}_{i,j})\bigr),$$ which is referred to as the “Jaccard Similarity” objective. We can define a penalized Jaccard estimation problem by substituting $-J_t^{\boldsymbol{\alpha}}(\mathbf{Y}, F(\mathbf{X}))$ for $L_2^{\boldsymbol{\alpha}}(\mathbf{Y}, F(\mathbf{X}))$ in Equation . Jaccard similarity optimization is difficult using derivative-based optimization because of its discrete nature, so we use a smooth approximation of the aforementioned metric for model fitting in our experiments (see Supplementary subsection Smoothed Jaccard Similarity). 
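The thresholding and set comparison described above reduce to a few lines. A minimal sketch (the function names and the toy 3-letter alphabet are our own illustration; we adopt the convention $J(\emptyset, \emptyset) = 1$):

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two sets (1.0 if both empty)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def high_frequency_set(profile_column, t=0.2, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Amino acids whose frequency at one AHo position is at least the cutoff t."""
    return {aa for aa, f in zip(alphabet, profile_column) if f >= t}
```

Averaging `jaccard(observed_set, predicted_set)` over all CFs and AHo positions gives the $J_t^{\boldsymbol{\alpha}}$ objective.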
Forward Stepwise Selection {#forward-stepwise-selection .unnumbered}
--------------------------

We devise a forward stepwise selection procedure to help us determine the combination of external profiles that best predicts the outcome of interest, which can be penalized $L_2$ Error or Jaccard Similarity. In this procedure, we initially try all possible external profiles in the model separately and determine the best fit using 5-fold cross-validation. We cache the best model from the initial step and continue fitting models with two external profiles; the first external profile is fixed to be the best profile from the previous round and the second profile can be any possible remaining external profile. We continue this iterative scheme until we reach a prespecified limit on the number of external profiles allowed in $\mathbf{X}^*$. It is important to note that, to ease computation, we perform forward selection using the unpenalized variants of our models. Even though this procedure is greedy and not as thorough as all-subsets selection, we believe this technique provides the best trade-off between accuracy and efficiency. We provide the implementation of our stepwise procedures at <https://github.com/krdav/SPURF>.

Inference Pipeline {#inference-pipeline .unnumbered}
------------------

We apply an 80%/20% training/test split to the model fitting dataset described above. We first run the forward stepwise selection procedure with a maximum profile limit of five to approximately determine the best profile groupings, starting with a single profile and ending with a group of five profiles. Using the profile groupings from the previous step, we fit the penalized version of the model and use 5-fold cross-validation to obtain estimates of the relevant tuning parameters, which consist of the lasso penalty weights $\lambda_1$, $\lambda_2$ and the differencing order $d$; note that we report unpenalized performance estimates when we run cross-validation.
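The greedy loop from the Forward Stepwise Selection subsection, abstracted over an arbitrary cross-validated scorer, might look like the following skeleton. This is a schematic sketch, not the SPURF implementation; `cv_score` is a hypothetical callback returning a cross-validated performance estimate (higher is better) for a given profile grouping:

```python
def forward_stepwise(candidates, cv_score, max_profiles=5):
    """Greedy forward selection of external profiles.

    At each round, the single remaining profile whose addition scores best
    under `cv_score` is appended; the loop stops at `max_profiles` or when
    no candidates remain.  Returns the selection order and per-round scores.
    """
    selected, history = [], []
    for _ in range(max_profiles):
        remaining = [c for c in candidates if c not in selected]
        if not remaining:
            break
        best = max(remaining, key=lambda c: cv_score(tuple(selected) + (c,)))
        selected.append(best)
        history.append(cv_score(tuple(selected)))
    return selected, history

# Toy scorer: each profile contributes a fixed gain
toy_score = lambda sel: sum({"a": 3, "b": 2, "c": 1}[s] for s in sel)
selected, history = forward_stepwise(["a", "b", "c"], toy_score, max_profiles=2)
```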
After we determine the optimal tuning parameters via cross-validation, we fit the penalized model using the entire training portion of the model fitting dataset and the best tuning parameters and cache the resulting parameter estimates of $\boldsymbol{\alpha}$. Once we obtain the estimates of $\boldsymbol{\alpha}$ from the penalized model, we can use them to compute the chosen performance metric on the testing portion of the model fitting dataset and any other validation dataset of interest.

Results {#results .unnumbered}
=======

As described in the methods (the Inference Pipeline subsection), we first need to infer the best profile groupings to use in penalized model fitting. To determine these groupings, we run the forward stepwise selection procedure for both the $L_2$ error function and the smoothed Jaccard objective function with a frequency cutoff $t = 0.2$ (). For both objective functions, the forward selection path is the same until $\mathbf{X}^* = \bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}, \widehat{\mathbf{X}}_{{\text{neut}}}, \widehat{\mathbf{X}}_{{\text{vsubgrp}}}\bigr\}$. For the $L_2$ loss function, model performance is best when $\mathbf{X}^* = \bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}, \widehat{\mathbf{X}}_{{\text{neut}}}, \widehat{\mathbf{X}}_{{\text{vsubgrp}}}\bigr\}$, even though there are diminishing returns for using profiles beyond $\mathbf{X}^* = \bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}\bigr\}$. In a similar fashion, the Jaccard similarity estimates tend to be highest when $\mathbf{X}^* = \bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}\bigr\}$, despite the almost identical model performance from just using $\mathbf{X}^* = \bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}\bigr\}$.
For the subsequent penalized model fitting step, we choose to evaluate the $\bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}, \widehat{\mathbf{X}}_{{\text{neut}}}\bigr\}$ and $\bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}, \widehat{\mathbf{X}}_{{\text{neut}}}, \widehat{\mathbf{X}}_{{\text{vsubgrp}}}\bigr\}$ profile groupings with the $L_2$ objective and $\bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}\bigr\}$ and $\bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}\bigr\}$ with the smoothed Jaccard similarity objective.

| Objective Function | $\varnothing$ | $\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ | $\widehat{\mathbf{X}}_{{\text{vgene}}}$ | $\widehat{\mathbf{X}}_{{\text{neut}}}$ | $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$ | Fifth profile |
|---|---|---|---|---|---|---|
| $L_2$ Error | 0.110 | 0.0542 | 0.0459 | 0.0456 | 0.0455 | 0.0456 ($\widehat{\mathbf{X}}_{{\text{naiveAA-clust}}\text{-}5}$) |
| Jaccard Similarity ($t = 0.2$) | 0.9170 | 0.9322 | 0.9324 | 0.9323 | 0.9319 | 0.9318 ($\widehat{\mathbf{X}}_{{\text{naiveAA-clust}}\text{-}85}$) |

: Results of forward stepwise selection on our $L_2$ and smooth Jaccard objective functions.
The performance estimates shown in the table are obtained using 5-fold cross-validation. Going from left to right, each column represents the best profile addition into $\mathbf{X}^*$ with the associated CV performance estimate. For Jaccard, we fit using the smooth Jaccard objective, but report exact Jaccard similarity estimates, both using frequency cutoff $t = 0.2$. Note that we fix the prespecified limit on the number of external profiles allowed in $\mathbf{X}^*$ to be 5. $\varnothing$ represents the model using only the input sequence. []{data-label="table:stepwise"}

We now use the approximate profile groupings obtained from the forward stepwise selection procedure to fit our regularized models. The penalized estimation problem has additional tuning parameters that must be determined. In our experiments, we cross-validate over the penalty parameters $\lambda_1, \lambda_2 \in \{10^{-7}, 5.05 \times 10^{-6}, 10^{-5}\}$, the differencing order $d \in \{1, 2, 3\}$, and the two profile groupings specified above for both the $L_2$ error and Jaccard similarity objectives. The best regularized $L_2$ model uses $\mathbf{X}^* = \bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}, \widehat{\mathbf{X}}_{{\text{neut}}}, \widehat{\mathbf{X}}_{{\text{vsubgrp}}}\bigr\}$, while the best regularized Jaccard model utilizes $\mathbf{X}^* = \bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}\bigr\}$ (). In summary, using many external profiles is important for predicting the complete substitution profiles, while the inferred naive sequence is the only external profile deemed useful for our model to accurately predict the observed high-frequency amino acids (where high-frequency means comprising at least 20% of the observed amino acids).
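The tuning-parameter search just described amounts to an exhaustive grid search scored by cross-validation; a minimal sketch follows, where `fit_and_cv_score` is a hypothetical stand-in for fitting the penalized model and returning its (unpenalized) 5-fold CV estimate:

```python
import itertools

import numpy as np

# Grid mirroring the values reported in the text.
LAMBDA_GRID = [1e-7, 5.05e-6, 1e-5]
D_GRID = [1, 2, 3]

def cv_grid_search(fit_and_cv_score):
    """Return the (lambda1, lambda2, d) triple minimizing the CV estimate.

    `fit_and_cv_score(lam1, lam2, d)` is assumed to run 5-fold CV for the
    penalized model at the given tuning parameters.
    """
    best, best_score = None, np.inf
    for lam1, lam2, d in itertools.product(LAMBDA_GRID, LAMBDA_GRID, D_GRID):
        score = fit_and_cv_score(lam1, lam2, d)
        if score < best_score:
            best, best_score = (lam1, lam2, d), score
    return best, best_score
```

The grid has only $3 \times 3 \times 3 = 27$ points per profile grouping, so exhaustive evaluation is cheap relative to the model fits themselves.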
In addition to predictive performance, we are also interested in understanding how the estimated parameter weights from our best regularized $L_2$ model vary across the different external profiles in $\mathbf{X}^*$ and antibody regions. For convenience, we aggregate the estimates of $\boldsymbol{\alpha}$ associated with the V gene ($\widehat{\mathbf{X}}_{{\text{vgene}}}$ and $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$) and with the full naive sequence ($\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ and $\widehat{\mathbf{X}}_{{\text{neut}}}$) as these sets of profiles are intuitively similar (); the V-gene and V-subgroup profiles are both derived by averaging over different IMGT V germline gene labeling schemes and the simulated S5F neutral substitution profiles originate from the CF-specific inferred naive sequence. Antibody heavy chain (and light chain) sequences can be partitioned into framework regions (FWKs) and complementarity-determining regions (CDRs) by the AHo definitions [@honegger2001yet]; the BCR binding affinity is largely determined by the CDRs (especially by the heavy chain CDR3), while the FWKs encode the structural constraints of the BCR and thus can be strongly conserved [@tomlinson1995structural]. The $\widehat{\mathbf{X}}_{{\text{vgene}}}$ and $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$ profiles are extremely important for prediction at FWK1-FWK3, which is not surprising as V germline genes extend from the FWK1 to the beginning of the CDR3. 
In contrast, the $\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ and $\widehat{\mathbf{X}}_{{\text{neut}}}$ external profiles are heavily weighted in the CDR3 and FWK4; this result is also intuitive because the CDR3 is highly variable across CFs as it is a strong determinant of antigen-binding specificity, the $\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ and $\widehat{\mathbf{X}}_{{\text{neut}}}$ profiles are our only CF-specific sources of external information, and the V gene specific profiles cannot provide any information beyond the end of the V gene. Furthermore, the FWKs have, on average, more support from the external profiles compared to the CDRs, which is consistent with our understanding of antibody biochemistry as the FWKs are structurally constrained and thus need to be more conserved compared to the more flexible CDRs. We note that the middle of the CDR3 has artificially low estimates of $\boldsymbol{\alpha}$ because most of the AHo positions in the CDR3 have only a few or no defined sequence positions in the dataset (). ![A stacked barplot of the estimated parameter values of $\boldsymbol{\alpha}$ from the best regularized $L_2$ model. For convenience, we aggregate the estimates of $\boldsymbol{\alpha}$ associated with $\widehat{\mathbf{X}}_{{\text{vgene}}}$ and $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$ (blue) and with $\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ and $\widehat{\mathbf{X}}_{{\text{neut}}}$ (red). The black vertical lines represent the boundaries between the different CDRs and FWKs. []{data-label="fig:alpha_profile_plot_collapsed"}](figures/alpha_profile_plot_collapsed.png){width="\textwidth"} While our penalized modeling framework allows for easy interpretation of the parameter estimates, ultimately the quality of the $\boldsymbol{\alpha}$ estimates is determined by their performance on independent test datasets. 
Specifically, we compute the $L_2$ error ($L_2^{\boldsymbol{\alpha}}$) and Jaccard similarity ($J_{0.2}^{\boldsymbol{\alpha}}$) between the predicted and observed profiles associated with both the testing portion of the model fitting dataset and the Briggs validation dataset (); we remind readers that these predictions are made based on the subsampled (i.e. single-sequence) profiles in the aforementioned datasets and compared to the corresponding actual substitution profiles through the $L_2^{\boldsymbol{\alpha}}$ and $J_{0.2}^{\boldsymbol{\alpha}}$ performance metrics (). Our model improves upon the “baseline” prediction performance, where “baseline” refers to predictions made using only the input sequence (i.e. model predictions with all parameter values of $\boldsymbol{\alpha}$ set to 0).

| Objective Function | Model Type | Model fitting: test | Briggs |
|---|---|---|---|
| $L_2$ Error | Best | 0.0492 | 0.0511 |
|  | Baseline | 0.114 | 0.129 |
| Jaccard Similarity ($t = 0.2$) | Best | 0.9289 | 0.9227 |
|  | Baseline | 0.9156 | 0.9053 |

: The model performance results from predicting on independent datasets. We provide results for both the testing portion of the model fitting dataset and the Briggs validation dataset. Note that the term “baseline” refers to predictions made using only the input sequence (i.e. model predictions with all parameter values of $\boldsymbol{\alpha}$ set to 0). []{data-label="table:pred"}

In addition, we also want to know how well our model performs in the different antibody regions (i.e. FWKs/CDRs). To answer this question, we compute the same metrics as shown in for the different FWKs and CDRs (). To provide some insight into the variability of the model performance estimates in the different regions, we calculate bootstrap standard errors, which are expressed as error bars in .
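For concreteness, the two profile-comparison metrics can be sketched as below; the function names, the mean-over-sites aggregation, and the both-sets-empty convention are our assumptions, and the paper's exact normalization may differ:

```python
import numpy as np

def l2_error(pred, true):
    """Mean squared L2 distance between per-site amino acid profiles.

    `pred` and `true` are (n_sites, 20) arrays of frequencies.
    """
    return float(np.mean(np.sum((pred - true) ** 2, axis=1)))

def jaccard_similarity(pred, true, t=0.2):
    """Mean per-site Jaccard similarity between high-frequency sets.

    At each site, an amino acid enters a set when its frequency is >= t.
    """
    sims = []
    for p, a in zip(pred, true):
        A = {i for i, f in enumerate(a) if f >= t}
        B = {i for i, f in enumerate(p) if f >= t}
        if not A and not B:
            sims.append(1.0)  # assumed convention when both sets are empty
        else:
            sims.append(len(A & B) / len(A | B))
    return float(np.mean(sims))
```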
We see that our substitution profile prediction model performs well in the CDRs relative to the baseline model. This is an important finding because antigen binding is largely determined by the sequence segments in the CDRs, and especially CDR3. In fact, our models seem to provide the greatest improvement in performance in the CDR3, which is also the hardest region to predict because it has the highest amount of sequence variability. Another important takeaway is that the prediction performance is better in FWKs than CDRs, which is presumably because FWKs have lower variance and are more conserved compared to CDRs. In summary, our prediction models are able to systematically integrate different data sources to make better predictions of the per-site amino acid compositions in CFs. ![The model performance results across the different antibody regions on the model fitting test dataset and the Briggs validation dataset. In these plots, we compare the performances from our best models to the baseline predictive performances using only the input sequence (i.e. model predictions with all parameter values of $\boldsymbol{\alpha}$ set to 0). The error bars show bootstrap standard errors. []{data-label="fig:pred_fwkcdr"}](figures/pred_fwkcdr.png){width="\textwidth"} Our model also improves the prediction of the highest-frequency amino acid at a given position, referred to here as the mode (). Indeed, the counts in the bottom-left cells (cases where the model is correctly predicting the actual mode given an incorrect input sequence amino acid) are larger than the counts in the top-right cells (vice-versa). In addition, the input sequence amino acids that are not the true modes but correctly predicted by the model to be the actual modes are all germline reversions, which is consistent with the $\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ profile being heavily weighted in our prediction model (). 
In the opposite case, where the input sequence amino acid is correct but the model prediction is wrong, all the counts consist of germline predictions as well. In summary, many of the mode predictions are just germline reversions and, in fact, most of these predictions recover the true modes (i.e. the actual highest-frequency amino acids); however, most of the input sequence amino acids are the true modes already ($\approx$ 99%).

| Is input amino acid the mode? | Model prediction is the mode | Model prediction is not the mode |
|---|---|---|
| Yes | 10,473 $\mid$ 465 | 156 $\mid$ 0 |
| No | 349 $\mid$ 0 | 170 $\mid$ 395 |

| Is input amino acid the mode? | Model prediction is the mode | Model prediction is not the mode |
|---|---|---|
| Yes | 10,541 $\mid$ 376 | 178 $\mid$ 1 |
| No | 474 $\mid$ 0 | 196 $\mid$ 393 |

The in-sample and out-of-sample prediction performances demonstrate that our SPURF inference pipeline is able to obtain accurate and robust estimates of $\boldsymbol{\alpha}$. Specifically, prediction performance is consistently similar but slightly worse when comparing the Briggs dataset to the model fitting test set, which likely reflects two things: 1) the median number of sequences per CF in the Briggs set is lower than in the test set () and 2) the model fitting dataset is sampled from the same donors as the dataset for cross-validation. Regardless, the differences between the test and Briggs datasets are small, which provides evidence in support of our model performance estimates. Subjective assessments of the inferred substitution profiles coincide with our description of the $L_2$ error metric, namely that fine-grained amino acid substitution information is captured by SPURF (). The SPURF model setup produces interpretable and meaningful profile weights (; per-profile decomposition in ).
The input sequence is strongly weighted in the CDRs, indicating that substitutions in these regions are both specific to and conserved within the CF, so the model cannot easily borrow information from other CFs there. The weight on the V gene specific profiles spikes at CDR1 and at the end of FWK3, which is at the heavy and light chain interface. We note that, as expected, the weight on the V gene specific profiles is minimal downstream of FWK3 as this is the end of the V gene and the beginning of the V-D junction region. Notably, nothing in the model explicitly prevents the V gene profiles from having a high weight downstream of FWK3; the framework has chosen these meaningful weights without any manual interference. We ascribe this shrinkage feature of the weights to the standard lasso penalty built into SPURF. The profiles that are derived from the inferred naive sequence ($\widehat{\mathbf{X}}_{{\text{naiveAA}}}$, $\widehat{\mathbf{X}}_{{\text{neut}}}$) take up the missing weight of the V gene profiles and are highly weighted in the CDR3 and FWK4. ![Positional profile weights $\boldsymbol{\alpha}$ mapped to an antibody protein structure (PDB: 5X8L). The antigen (PD-L1) appears as a purple surface at the top of the images, the light chain appears in white cartoon, and the heavy chain is displayed using a blue to red color gradient; the grey dashed lines mark the CDR loops. The color gradient represents the possible values of profile weights in $\boldsymbol{\alpha}$ and goes from blue at a zero weight to red at the maximum weight for the profile. The display in panels **B** and **C** is rotated relative to panel **A** to better show results for CDR1 and CDR3; as a consequence, the CDR2 loop is hidden behind the CDR1.
Panel **A** shows that the input sequence has high weight at the CDR1 and CDR2, panel **B** illustrates that the naive sequence and the neutral substitution profile have high weight at the CDR3 and FWK4, and panel **C** demonstrates that the V gene and V subgroup profiles are highly weighted in parts of the CDR1 but more generally in the FWKs, especially at the heavy and light chain interface. []{data-label="fig:alpha_on_protein"}](figures/alpha_on_protein_i2.png){width="\textwidth"} Discussion {#discussion .unnumbered} ========== In this paper, we present SPURF, a statistical framework for predicting CF-specific amino acid frequency profiles from single input BCR sequences by leveraging multiple sources of external information. We use standard and spatial lasso penalties to prevent our model from overfitting and obtain sparse, interpretable estimates of the profile weights, expressed by an $\boldsymbol{\alpha}$ matrix. The spatial lasso penalizes extreme differences between spatially-adjacent profile weights, while the standard lasso penalties promote simpler models by shrinking parameter values in $\boldsymbol{\alpha}$ to 0 if the associated external profiles are not useful predictors. We show that our method not only performs well on the held-out (test) portion of our model fitting dataset but also provides accurate predictions on the Briggs external validation dataset. Indeed, we did not obtain the Briggs validation dataset until after we ran our model inference pipeline on the model fitting dataset. Our work can be seen as a prediction-based extension of the work of @sheng2017gene and @Kirik2017-bc. This previous work illustrates that amino acid substitution profiles differ between germline genes, a finding supported by the context specificity of somatic hypermutation [@cui2016model]. 
In our work, we provide a prediction algorithm that takes a single BCR sequence from a clonal family as input and outputs a CF-specific substitution profile estimate for the whole VDJ region. We believe that this work will be a useful tool for antibody engineering in situations when it is important to maintain antibody binding affinity to the same epitope. The predicted profiles from SPURF can be used to choose the sites that are most tolerable for mutagenesis and the substitutions that are most likely to maintain binding specificity; as such, this information can be used to engineer antibodies with better biophysical properties. To our knowledge, SPURF is the first prediction algorithm for B cell CF substitution profiles. There are many possible extensions. In our SPURF inference pipeline, we subsample single BCR sequences from CFs to use as model input; unfortunately, this means that our modeling analysis is conditional on a dataset that does not account for the variability associated with the subsampling process. One obvious means of fixing the above problem is to draw multiple subsamples from each CF and treat these multiple “observations” per CF within a dataset as a clustered data or weighted least squares problem. In addition, our model fitting dataset consists of only the largest CFs because we need accurate CF-specific substitution profile estimates to serve as the ground truth. This non-random sampling technique could potentially bias our analysis results; however, this appears unlikely given our model’s performance on the external Briggs validation dataset. Furthermore, our approach models per-site amino acid composition in a CF and accounts for interactions between sites only through the fusion lasso penalties. 
It is well known from other protein studies that spatially-adjacent amino acid residues evolve jointly [@jones2011psicov; @ekeberg2013improved], presumably to maintain structural stability, or in the case of antibodies to stabilize the interface between heavy and light chains [@wang2009interactions]. In the context of antibodies, residues in the FWKs have the potential to co-evolve (e.g. FWK residues flanking the CDRs could co-evolve to stabilize the stem leading to the more flexible CDRs). Thus, figuring out how to incorporate more detailed interaction effects in our model is an important avenue for future research. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Jason A. Vander Heiden and Steven H. Kleinstein for sharing post-processed data (dataset 1-4), Mikhail Shugay for sharing post-processed data (dataset 5), and Uri Hershberg for providing the ImmuneDB data (dataset 6). We would also like to thank Juno Therapeutics, Inc. for providing and preparing the single cell dataset used as our external validation. This research was supported by National Institutes of Health grants R01 GM113246, R01 AI12096, and U19 AI117891. Amrit Dhar was supported by an NSF IGERT DGE-1258485 fellowship. The research of Frederick Matsen was supported in part by a Faculty Scholar grant from the Howard Hughes Medical Institute and the Simons Foundation. Supplementary Materials {#supplementary-materials .unnumbered} ======================= Model Interpretation {#model-interpretation .unnumbered} -------------------- In this subsection, we provide statistical motivation for our penalized regression model, which can be interpreted as specifying an ensemble of multinomial logistic regression models at each AHo position. We use some of the notation mentioned in the methods section and introduce new notation as needed. 
We begin by describing the structure of the multinomial logistic regression models and then discuss how we perform model averaging with these component models to form the estimator $F(\mathbf{X})$ as stated in the methods section. We conclude this subsection by showing that our regularized minimization problem can be characterized as a maximum a posteriori (MAP) inference problem. Suppose we observe $M$ amino acids at the $j$th AHo position for the $i$th CF; for simplicity, we let $y_1, ..., y_M$ denote the observed amino acids. We assume that $y_1, ..., y_M$ are drawn independently from a common multinomial distribution with 20 possible categories and define a logistic regression model for the substitution probabilities that does not include covariates. The standard way to formulate such a model is as follows: $$\text{log}\biggl(\frac{\text{P}(y_m = c)}{\text{P}(y_m = 20)}\biggr) = \beta^{(c)}, \qquad \forall c \in \{ 1, ..., 20 \},$$ where $c$ indexes a particular amino acid, $\beta^{(20)} \equiv 0$, and $m = 1, ..., M$. We can equivalently represent the model as: $$\text{P}(y_m = c) = \frac{\text{exp}(\beta^{(c)})}{\sum_{c'=1}^{20} \text{exp}(\beta^{(c')})}, \qquad \forall c \in \{ 1, ..., 20 \},$$ where $m = 1, ..., M$. If we let $\widehat{p}_c$ denote the observed proportion of amino acid $c$ in the sample, then it is easy to show that maximizing the multinomial likelihood of the $M$ observations with respect to the $\beta^{(c)}$ parameters leads to the following parameter estimates: $$\widehat{\beta}^{(c)} = \text{log}\biggl(\frac{\widehat{p}_c}{\widehat{p}_{20}}\biggr), \qquad \forall c \in \{ 1, ..., 20 \},$$ which implies that: $$\widehat{\text{P}}(y_m = c) = \widehat{p}_c, \qquad \forall c \in \{ 1, ..., 20 \},$$ where $\widehat{\text{P}}(y_m = c)$ represents the logistic regression estimate of $\text{P}(y_m = c)$. Therefore, this multinomial logistic regression model provides simple, intuitive estimates of the substitution probabilities. 
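This closed-form result is easy to verify numerically. The following sketch (with arbitrary toy counts, not data from the paper) plugs the MLE $\widehat{\beta}^{(c)} = \log(\widehat{p}_c / \widehat{p}_{20})$ into the softmax link and recovers the observed proportions:

```python
import numpy as np

# Toy amino acid counts at one AHo position (20 categories, all nonzero).
rng = np.random.default_rng(0)
counts = rng.integers(1, 30, size=20)
p_hat = counts / counts.sum()

# Closed-form MLE of the covariate-free multinomial logit,
# with the reference category fixed so that beta^{(20)} = 0.
beta = np.log(p_hat / p_hat[-1])

# Applying the inverse link (softmax) recovers the observed proportions.
probs = np.exp(beta) / np.exp(beta).sum()
assert np.allclose(probs, p_hat)
```

The identity holds for any positive count vector, since softmax of the log-ratios is just a renormalization of the proportions themselves.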
While these logistic regression estimates may seem trivial, the underlying framework allows for easy integration of CF-specific and site-specific information into our model. The above model considers the substitution probabilities at only one AHo position so one could fit this multinomial logistic regression model at each of the 149 AHo positions in a CF to obtain a complete substitution profile estimate. In this paper, each CF-specific substitution profile estimate is a maximum likelihood estimate (MLE) obtained by fitting 149 multinomial regression models to the observed amino acid data in the CF. Given that our proposed modeling procedure computes a per-site weighted average between the subsampled profiles $\mathbf{X}$ and all the external profile estimates $\mathbf{X}^*$, one can interpret this model $F(\mathbf{X})$ as defining an ensemble of multinomial logistic regression estimates at each AHo position. To specify this relationship more clearly, we denote the likelihood of $\mathbf{Y}_{i,j,\bullet}$ as follows: $$\begin{gathered} \mathbf{Y}_{i,j,\bullet} \sim \text{MVN}\bigl(\boldsymbol{\mu}_{i,j,\bullet}, \sigma^2 \mathbb{I}_{20}\bigr),\\ \boldsymbol{\mu}_{i,j,\bullet} \equiv \sum_{l=1}^p \alpha_{j,l} \cdot \mathbf{X}^*_{i,j,\bullet,l} + \Bigl( 1 - \sum_{l=1}^p \alpha_{j,l} \Bigr) \cdot \mathbf{X}_{i,j,\bullet},\end{gathered}$$ where $\text{MVN}$ means multivariate normal, $\boldsymbol{\mu}_{i,j,\bullet}$ signifies the mean vector of $\mathbf{Y}_{i,j,\bullet}$ and defines our ensemble model, $\sigma^2$ represents an unknown variance parameter, $\mathbb{I}_{20}$ symbolizes the $20 \times 20$ identity matrix, and $p$ represents the number of external profiles in $\mathbf{X}^*$. Note that $\boldsymbol{\mu}_{i,j,\bullet}$ depends on the multinomial logistic regression estimates described previously as both $\mathbf{X}$ and $\mathbf{X}^*$ contain the MLE-based substitution profile estimates. 
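The ensemble mean $\boldsymbol{\mu}_{i,j,\bullet}$ above can be sketched as a small NumPy routine; the function name and the toy array shapes are ours (the paper uses 500 CFs, 149 AHo positions, 20 amino acids, and $p$ external profiles):

```python
import numpy as np

def ensemble_mean(X, X_star, alpha):
    """Per-position convex combination of input and external profiles.

    X:      (n_cf, n_pos, 20)      subsampled input profiles
    X_star: (n_cf, n_pos, 20, p)   external profiles
    alpha:  (n_pos, p)             per-position ensemble weights
    Returns mu with shape (n_cf, n_pos, 20).
    """
    # sum_l alpha_{j,l} * X*_{i,j,c,l}
    external = np.einsum('jl,ijcl->ijc', alpha, X_star)
    # (1 - sum_l alpha_{j,l}) * X_{i,j,c}
    own_weight = 1.0 - alpha.sum(axis=1)
    return external + own_weight[None, :, None] * X
```

With all weights at zero this reduces to the input profile alone, i.e. the "baseline" model discussed in the results.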
In addition, the form of $\boldsymbol{\mu}_{i,j,\bullet}$ relates to our previous definition of $F(\mathbf{X})$ by observing that $\boldsymbol{\mu}_{\bullet,j,\bullet} = f(\mathbf{X}_{\bullet,j,\bullet}; \boldsymbol{\alpha}_{j,\bullet})$. We can also integrate the inequality constraints of the ensemble weights $\alpha_{j,l}$ into the likelihood function by including the indicator term $\mathbbm{1}_{\boldsymbol{\alpha}} \equiv \mathbbm{1}\{ 0 \leq \alpha_{j,l} \leq 1; \ 0 \leq \sum_{l=1}^p \alpha_{j,l} \leq 1; \ \forall j,l \}$. The lasso penalties can be incorporated into our model through the use of sparsity-inducing prior distributions. Specifically, the priors placed on $\boldsymbol{\alpha}$ can be represented as: $$\text{P}(\boldsymbol{\alpha}_{\bullet,l}) \propto \text{exp}\biggl(-\lambda_1 \bigl|\bigl| \boldsymbol{\alpha}_{\bullet,l} \bigr|\bigr|_1 - \lambda_2 \bigl|\bigl| \nabla^d (\boldsymbol{\alpha}_{\bullet,l}) \bigr|\bigr|_1\biggr), \qquad \forall l \in \{ 1, ..., p \},$$ where $\lambda_1, \lambda_2 \geq 0$ and $d \in \mathbb{N}$ are the same tuning parameters specified in the methods section [@park2008bayesian; @kyung2010penalized; @faulkner2017locally]. These Laplace-like prior distributions can be expressed as scale mixtures of normal distributions with independent gamma distributed variances [@kyung2010penalized]; for a more comprehensive discussion on shrinkage priors of this form, we refer readers to [@faulkner2017locally]. 
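The negative log of this prior is exactly the lasso-plus-differencing penalty added to the objective. A minimal sketch for a single profile's weight vector (hypothetical function name, with `np.diff(..., n=d)` standing in for the order-$d$ differencing operator $\nabla^d$):

```python
import numpy as np

def lasso_penalty(alpha_l, lam1, lam2, d):
    """Negative log of the (unnormalized) prior on one profile's weights.

    alpha_l: 1-D array of weights for a single external profile l across
    AHo positions. lam1 drives sparsity; lam2 penalizes differences
    between spatially-adjacent weights (the fused/spatial lasso term).
    """
    sparsity = lam1 * np.abs(alpha_l).sum()
    smoothness = lam2 * np.abs(np.diff(alpha_l, n=d)).sum()
    return sparsity + smoothness
```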
The posterior $\text{P}(\boldsymbol{\alpha} | \mathbf{Y})$ can be presented in the following manner: $$\begin{aligned} \text{P}(\boldsymbol{\alpha} | \mathbf{Y}) &\propto \text{P}(\mathbf{Y} | \boldsymbol{\alpha}) \text{P}(\boldsymbol{\alpha})\\ &\propto \prod_{i=1}^{500} \prod_{j=1}^{149} \text{P}(\mathbf{Y}_{i,j,\bullet} | \boldsymbol{\alpha}_{j,\bullet}) \prod_{l=1}^p \text{P}(\boldsymbol{\alpha}_{\bullet,l}).\end{aligned}$$ Note that we assume the $\mathbf{Y}_{i,j,\bullet}$ vectors are independent conditional on $\boldsymbol{\alpha}_{j,\bullet}$ and the prior distribution on $\boldsymbol{\alpha}$ factorizes across $l \in \{ 1, ..., p \}$. The MAP estimate of the ensemble weights $\boldsymbol{\alpha}$ is obtained by maximizing $\text{P}(\boldsymbol{\alpha} | \mathbf{Y})$ and is equivalent to the estimate that minimizes the regularized objective function shown in the methods section. The latter assertion can be seen as the posterior on $\boldsymbol{\alpha}$ can be monotonically transformed into our penalized minimization problem (up to a constant factor in $\boldsymbol{\alpha}$). Of course, there are limitations to this interpretation of our modeling framework. For instance, $\mathbf{Y}_{i,j,\bullet}$ is a frequency vector, yet we model the likelihood of $\mathbf{Y}_{i,j,\bullet}$ using a multivariate normal distribution, which has support over all real numbers; we could potentially remedy this problem by modeling the likelihood of $\mathbf{Y}_{i,j,\bullet}$ as a Dirichlet distribution as its negative log-likelihood looks similar to a cross-entropy loss function. In addition, we specify priors on $\boldsymbol{\alpha}$ that also have support over the real line, which is not realistic. 
Our assumption that the $\mathbf{Y}_{i,j,\bullet}$ vectors are conditionally independent given $\boldsymbol{\alpha}_{j,\bullet}$ is used solely for presentation purposes and does not hold in practice because the substitution profile data in both $\mathbf{X}$ and $\mathbf{X}^*$ are correlated across AHo positions. Despite these issues with our statistical representation of the penalized regression model, our results demonstrate that the model is useful for predicting CF-specific substitution profiles in data-sparse situations. Smoothed Jaccard Similarity {#smoothed-jaccard-similarity .unnumbered} --------------------------- As we stated in the methods section, optimization on the Jaccard similarity objective function is difficult because this metric is locally flat with respect to our parameter values of $\boldsymbol{\alpha}$. For some small changes in $\boldsymbol{\alpha}$, the averaged Jaccard similarity can remain at the same value because the Jaccard sets continue to hold the same elements. This is a problem because the L-BFGS-B optimization algorithm uses gradient information to determine its search direction for $\boldsymbol{\alpha}$ and the Jaccard similarity gradients are often zero due to the reasoning given above, which results in premature termination of the L-BFGS-B optimizer. We now describe an approach to “smooth” the Jaccard similarity objective function that directly addresses these concerns. For notational simplicity, we let $\{ a_i \}_{i=1:20}$ and $\{ b_i \}_{i=1:20}$ denote the actual and predicted amino acid frequencies, respectively, at a particular AHo position for a given CF. As before, $t$ represents the cutoff separating high versus low frequency amino acids. We also introduce the following indicator function $f(a, t) \equiv \mathbbm{1}\{ a \geq t \}$ for any amino acid frequency $a$. 
If we further let $A = \mathcal{A}(\{a_i \mid a_i \geq t\})$ and $B = \mathcal{A}(\{b_i \mid b_i \geq t\})$ with $\mathcal{A}(\cdot)$ as defined in the methods section, then the Jaccard similarity between sets $A$ and $B$ can be rewritten as: $$J(A,B) \equiv \frac{|A \cap B|}{|A \cup B|} = \frac{\sum_{i=1}^{20} f(a_i, t) f(b_i, t)}{\sum_{i=1}^{20} \min\bigl\{1, f(a_i, t) + f(b_i, t)\bigr\}}.$$ The local flatness of the Jaccard similarity objective is due to the constant regions of $f(a_i, t)$ and the non-smooth curvature of $f(a_i, t)$ at the jump point $t$. It turns out that $f(a_i, t)$ can also be described as the limit of the following function: $$f_{\epsilon}(a_i, t) = \frac{1}{1 + e^{-\epsilon(a_i - t)}},$$ as $\epsilon \rightarrow \infty$. Thus, to obtain a “smooth” transformation of $J(A,B)$, we replace $f(a_i, t)$ with $f_{\epsilon}(a_i, t)$ in the above equation of $J(A,B)$ and choose a finite value for the steepness parameter $\epsilon$: large enough to approximate the indicator, yet small enough to keep the objective smooth. The figure below plots the function $f_{\epsilon}(a_i, 0.2)$ against $a_i \in [0,1]$ for various values of $\epsilon$. ![A plot of the function $f_{\epsilon}(a_i, 0.2)$ against $a_i \in [0,1]$ for various values of $\epsilon$. As $\epsilon$ gets larger, $f_{\epsilon}(a_i, 0.2)$ tends to the indicator function $f(a_i, 0.2)$. []{data-label="fig:smooth_jacc"}](figures/B_0_2_plot.pdf){width="60.00000%"} Fortunately, the use of this “smooth” Jaccard similarity function allows the L-BFGS-B optimization algorithm to converge properly. To use this “smoothed” objective function appropriately, we sought the largest value of $\epsilon$ that still permitted proper L-BFGS-B convergence. We utilized $\epsilon = 23$ throughout all our Jaccard similarity experiments because this value satisfied the selection criterion just described.
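The smoothed similarity is straightforward to implement directly from the formulas above; a minimal sketch (hypothetical function name):

```python
import numpy as np

def smooth_jaccard(a, b, t=0.2, eps=23.0):
    """Differentiable surrogate for the thresholded Jaccard similarity.

    a, b: length-20 frequency vectors (actual and predicted); the sigmoid
    f_eps relaxes the indicator 1{frequency >= t}, recovering the exact
    Jaccard similarity in the limit eps -> infinity.
    """
    fa = 1.0 / (1.0 + np.exp(-eps * (np.asarray(a) - t)))
    fb = 1.0 / (1.0 + np.exp(-eps * (np.asarray(b) - t)))
    return float(np.sum(fa * fb) / np.sum(np.minimum(1.0, fa + fb)))
```

Because every term is a smooth function of the frequencies, gradients with respect to $\boldsymbol{\alpha}$ are nonzero almost everywhere, which is what lets L-BFGS-B make progress.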
Supplementary Figures/Tables {#supplementary-figurestables .unnumbered} ---------------------------- ![A stacked barplot of the estimated parameter values of $\boldsymbol{\alpha}$ from the best regularized $L_2$ model. The black vertical lines represent the boundaries between the different CDRs and FWKs. Due to the AHo antibody numbering used [@honegger2001yet], some positions are assigned to a gap character (an AHo position that does not map to a sequence position). The percentage of CFs that are not assigned to gap characters is shown in the bottom plot for each AHo position. The input sequence is heavily weighted in regions with high gap percentages because of the standard lasso penalty included in our model. The conserved Tryptophan amino acid is observed as a spike in the $\widehat{\mathbf{X}}_{{\text{vgene}}}$ and $\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ profile weights following the end of CDR1 (position 43 in the AHo scheme). The conserved Cysteine amino acid that defines the beginning of CDR3 is not readily observed, presumably because this is invariant in all profiles. Generally, the input sequence has less weight in CDR3 and FWK4, which indicates that there is some conservation during affinity maturation. Beyond CDR3 and FWK4, there is a general trend that the input sequence has higher weight in the CDRs than in the FWKs, which suggests that there is a higher level of conservation in the FWKs than in the CDRs during affinity maturation. A more surprising observation is the spike in the $\widehat{\mathbf{X}}_{{\text{vgene}}}$, $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$, and $\widehat{\mathbf{X}}_{{\text{neut}}}$ weights at AHo position 83 near the beginning of FWK3 (the “outer” loop); this could indicate a conserved position not previously described. 
[]{data-label="fig:alpha_plot"}](figures/alpha_plot.png){width="\textwidth"}

| Objective Function | $\widehat{\mathbf{X}}^*$ | $\widehat{\lambda}_1$ | $\widehat{\lambda}_2$ | $\widehat{d}$ | CV estimate |
|---|---|---|---|---|---|
| $L_2$ Error | $\bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}, \widehat{\mathbf{X}}_{{\text{neut}}}, \widehat{\mathbf{X}}_{{\text{vsubgrp}}}\bigr\}$ | $10^{-7}$ | $10^{-5}$ | $3$ | 0.0453 |
| Jaccard Similarity ($t = 0.2$) | | | | | |

: The results from fitting the regularized models using 5-fold cross-validation. We present the optimal tuning parameters selected from $\lambda_1, \lambda_2 \in \{10^{-7}, 5.05 \times 10^{-6}, 10^{-5}\}$ and $d \in \{1, 2, 3\}$ and show the associated cross-validated performance estimates. Note that the possible choices of $\mathbf{X}^*$ for the $L_2$ error metric include the $\bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}, \widehat{\mathbf{X}}_{{\text{neut}}}\bigr\}$ and $\bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}, \widehat{\mathbf{X}}_{{\text{neut}}}, \widehat{\mathbf{X}}_{{\text{vsubgrp}}}\bigr\}$ groupings, while the $\bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}\bigr\}$ and $\bigl\{\widehat{\mathbf{X}}_{{\text{naiveAA}}}, \widehat{\mathbf{X}}_{{\text{vgene}}}\bigr\}$ groupings are the possible $\mathbf{X}^*$ choices for the smoothed Jaccard similarity objective.
[]{data-label="table:cv"} ![A logo plot displaying the input sequence, predicted profile, and true profile (ordered from top to bottom) for an arbitrary CF in the Briggs dataset. The logos are plotted using AHo numbers (1-149) and AHo positions undefined in the sequence are shown as empty columns. The predicted profile (middle) captures much of the amino acid composition information associated with the full profile (bottom). []{data-label="fig:juno_logo"}](figures/logo_stacked_19.pdf){width="\textwidth"} ![Positional profile weights $\boldsymbol{\alpha}$ mapped to an antibody protein structure (PDB: 5X8L). The antigen (PD-L1) appears as a purple surface at the top of the images, the light chain appears in yellow cartoon, and the heavy chain is displayed using a blue to red color gradient. The color gradient represents the possible values of profile weights in $\boldsymbol{\alpha}$ and goes from blue at a zero weight to red at the maximum weight for the profile. The black dashed lines mark the CDR loops; note that the CDR2 loop is hidden behind the CDR1. The colored balls represent the AHo-defined FWK/CDR boundaries. The black arrows indicate regions of high profile weight. The $\widehat{\mathbf{X}}_{{\text{naiveAA}}}$ profile is heavily weighted in CDR3 and FWK4. The $\widehat{\mathbf{X}}_{{\text{vgene}}}$ profile weighting is fairly even from FWK1 through FWK3; it spikes slightly in CDR1 and completely disappears beyond FWK3, which is expected as the V-D junction region starts past the end of FWK3. The $\widehat{\mathbf{X}}_{{\text{neut}}}$ profile weighting is fairly even across sites but spikes near the beginning of FWK3 (the “outer” loop). The $\widehat{\mathbf{X}}_{{\text{vsubgrp}}}$ profile weighting is distributed similarly to that of the $\widehat{\mathbf{X}}_{{\text{vgene}}}$ profile with the exception of a spike at the end of FWK3 (i.e. at the heavy and light chain interface). 
[]{data-label="fig:alpha_on_protein_all4"}](figures/alpha_on_protein_all4_i2.png){width="90.00000%"}
--- abstract: 'We numerically investigate jet propagation through a rotating, collapsing Wolf-Rayet star with detailed central engine physics constructed on the basis of the neutrino-driven collapsar model. The collapsing star determines the evolution of the mass accretion rate and of the black hole mass and spin, all of which are important ingredients for determining the jet luminosity. We reveal that neutrino-driven jets in rapidly spinning Wolf-Rayet stars are capable of breaking out from the stellar envelope, while those propagating in more slowly rotating progenitors fail to break out due to insufficient kinetic power. For progenitor models with successful jet breakouts, the kinetic energy accumulated in the cocoon can be as large as $\sim 10^{51}$erg and might contribute significantly to the luminosity of the afterglow emission or to the kinetic energy of the accompanying supernova if nickel production takes place. We further analyze the post-breakout phase using a simple analytical prescription and conclude that the relativistic jet component could produce events with an isotropic luminosity $L_{p(iso)}\sim 10^{52}$erg/s and isotropic energy $E_{j(iso)}\sim 10^{54}$erg. Our findings support the idea of rapidly rotating Wolf-Rayet stars as plausible progenitors of GRBs, while slowly rotating ones could be responsible for low-luminosity or failed GRBs.' address: - '$^1$Yukawa Institute for Theoretical Physics, Kyoto University, Oiwake-cho, Kitashirakawa, Sakyo-ku, Kyoto, 606-8502, Japan' - '$^2$Advanced Research Institute for Science & Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555, Japan' author: - 'Hiroki Nagakura$^{1,2}$' title: 'The Propagation of Neutrino-Driven Jets in Wolf-Rayet Stars' --- Introduction ============ Long-duration Gamma-Ray Bursts (GRBs) are thought to originate from the death of massive stars.
It is widely recognized that the study of GRBs provides important insight into the final evolutionary stage in the life of massive stars. Although the nature of GRBs remains elusive, one viable scenario to produce a GRB is the neutrino-driven collapsar model [@1993ApJ...405..273W; @1999ApJ...524..262M]. The gravitational collapse of a rapidly rotating core is believed to create a rapidly rotating Kerr black hole. The copious neutrinos and anti-neutrinos emitted from the hot accretion disk annihilate and create an electron-positron fireball around the rotational axis [@1989Natur.340..126E]. The baryon-starved fireball is believed to give rise to a relativistic collimated outflow, and eventually to produce a GRB after the jet has broken free from the stellar progenitor. Over the years, substantial work has been done towards understanding whether neutrino-driven collapsar jets can produce the required relativistic outflow (see e.g. @1999ApJ...524..262M [@2003ApJ...588L..25F; @2006ApJ...641..961L; @2007ApJ...659..512N; @2008ApJ...673L..43D; @2009ApJ...692..804L; @2010ApJ...720..614H; @2011ApJ...737....6S; @2011MNRAS.410.2385T]). However, there are still many open questions. In terms of the central engine, one of the largest uncertainties is the efficiency of energy deposition by neutrinos. Although numerical studies are a very powerful method to investigate the energy deposition rate by neutrinos and the subsequent evolution of the jet, one needs to solve general relativistic neutrino radiation hydrodynamics with microphysics for several tens of seconds after black hole formation. This is certainly challenging, as the typical lifetime of the central engine is roughly six orders of magnitude longer than the dynamical timescale of the nascent black hole. Computational resources are not yet available to perform such numerical studies.
Because of these difficulties, a number of studies have thus far employed steady-state approximations or performed hydrodynamic (or magnetohydrodynamic) simulations [@2005ApJ...632..421L; @2007ApJ...664.1011J; @2007PThPh.118..257S; @2010ApJ...720..614H] with simplified microphysical treatments. It is interesting to note that @2011MNRAS.410.2302Z recently conducted general relativistic ray-tracing neutrino radiation transfer calculations and found that the energy deposition by neutrinos can be well described by a simple analytic formula. Thanks to these studies, the jet luminosity can be estimated, at least qualitatively, without the need for expensive numerical simulations. It should be noted, however, that even if the neutrino-driven jet is successfully launched, this does not guarantee the production of a GRB. As a minimum requirement, the jet needs to successfully penetrate the stellar envelope; otherwise it would become non-relativistic and thus incapable of producing a GRB. Ever since the neutrino-driven collapsar model was proposed, a number of numerical studies of jet propagation have been carried out (see e.g. [@2000ApJ...531L.119A; @2003ApJ...586..356Z; @2007ApJ...665..569M; @2009ApJ...699.1261M; @2011ApJ...731...80N]). In these studies, however, the jet luminosity was assumed for simplicity to either be constant or to directly follow the mass accretion rate (see e.g. @2001ApJ...550..410M [@2012ApJ...754...85N]). In order to judge whether neutrino-driven jets are capable of successfully breaking out of their progenitors, and to explore the effects of rotation, one needs to take into account how the jet power scales with the neutrino energy deposition generated by the accompanying accretion. In this paper, we present, for the first time, the propagation of neutrino-driven jets employing the accurate neutrino energy deposition rate calculated by @2011MNRAS.410.2302Z.
The evolution of the mass accretion rate and of the black hole mass and spin, all of which are necessary to evaluate the energy deposition by neutrinos, are evaluated here using an inner boundary condition in the simulation, although the current calculation is still not self-consistent in that it does not resolve the accretion disk and assumes no feedback (e.g. a disk wind). The purpose of this study is (1) to clarify whether neutrino-driven jets can successfully break out from the stellar surface and (2) to determine the progenitor rotation rate necessary for successful jet breakout. As we will show in this paper, the final outcome of the explosion depends sensitively on the rotation rate, and differences in rotation rate could be responsible for the observational differences seen between GRBs, low-luminosity GRBs (LLGRBs) and failed GRBs. Methods and Models ================== We perform two-dimensional, relativistic hydrodynamical simulations, assuming axial symmetry (and also equatorial symmetry), of the accretion and subsequent jet propagation. The numerical code employed in this paper is essentially the same as those used in previous papers [@2011ApJ...731...80N; @2012ApJ...754...85N]. The initial stellar density distribution is taken from the 16TI model of @2006ApJ...637..914W. As in previous studies, we excise the inner portion of the star inside a certain radius. The self-gravity of matter in the active numerical region is calculated by solving the Poisson equation, and monopole gravity is added as a point mass representing the inner excised region. The mass accretion rate ($\dot{M}$) is estimated from the mass flow through the inner boundary (see Eq. (1) in @2012ApJ...754...85N). The mass and angular momentum in the excised region are identified with the mass and angular momentum of the black hole. The time evolution of the black hole mass and spin is calculated by integrating the mass and angular momentum fluxes crossing the inner boundary.
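The black hole bookkeeping described above can be sketched as follows; this is an illustrative reconstruction (the function name and cgs constants are ours, not from the production code), with the fluxes through the inner boundary assumed to be supplied by the hydrodynamic solver:

```python
G = 6.674e-8   # gravitational constant (cgs)
C = 2.998e10   # speed of light (cm/s)

def update_black_hole(m_bh, j_bh, mdot, f_a, dt):
    """Advance the black hole mass (g) and angular momentum (g cm^2/s)
    by the mass flux mdot (g/s) and angular momentum flux f_a crossing
    the inner boundary over a timestep dt; the dimensionless Kerr
    parameter then follows as a = c J / (G M^2), capped at unity."""
    m_bh = m_bh + mdot * dt
    j_bh = j_bh + f_a * dt
    a = min(C * j_bh / (G * m_bh**2), 1.0)
    return m_bh, j_bh, a
```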
It should be noted that when the specific angular momentum (SAM) at the location of the inner boundary in the equatorial region becomes larger than the SAM at the innermost stable circular orbit (ISCO) (see cross marks in Figure \[f1\]), we alter our prescription for the angular momentum flux to: $$\begin{aligned} f_a = \dot{M} \times J_{ISCO} \label{eq:anguinteg},\end{aligned}$$ where $f_a$ and $J_{ISCO}$ denote the angular momentum flux and the SAM at the ISCO, respectively. This treatment reflects the fact that the angular momentum of matter in the disk is transported outwards by turbulent viscosity or non-axisymmetric waves, so that the matter finally falls into the black hole with $\sim J_{ISCO}$. According to @2011MNRAS.410.2302Z, the jet luminosity due to the neutrino process is given by: $$\begin{aligned} &L_{j} = 1.1 \times 10^{52} x_{ms}^{-4.8}( \frac{M_{bh}}{3 M_{\odot}} )^{-\frac{3}{2}} \nonumber \\ & \hspace{8mm} \times \begin{cases} 0 & (\dot{M} < \dot{M}_{ign}) \\ \dot{m}^{\frac{9}{4}} & ( \dot{M}_{ign} < \dot{M} < \dot{M}_{trap}) \\ \dot{m}_{trap}^{\frac{9}{4}} & (\dot{M} > \dot{M}_{trap}) \label{eq:neutrinolumi} \end{cases}\end{aligned}$$ in erg/s, where $\dot{m} \equiv \dot{M}/(M_{\odot}/s)$ and $x_{ms} \equiv r_{ms}/(2GM_{bh}/c^2)$ ($r_{ms}$ denotes the radius of the marginally stable orbit). $G$ and $c$ denote the gravitational constant and the speed of light, respectively. The characteristic mass accretion rates $\dot{M}_{ign}$ and $\dot{M}_{trap}$ are given as functions of the viscosity parameter ($\alpha$) and the Kerr parameter ($a$) (see @2011MNRAS.410.2302Z). Throughout this paper, we set $\alpha=0.1$, and the Kerr parameter dependence of these accretion rates is linearly interpolated in $x_{ms}$.
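The piecewise luminosity formula above can be transcribed directly; in this sketch the characteristic rates, which in the paper depend on $\alpha$ and the Kerr parameter, are passed in as precomputed numbers in $M_{\odot}$/s:

```python
def jet_luminosity(mdot, m_bh, x_ms, mdot_ign, mdot_trap):
    """Neutrino-annihilation jet luminosity in erg/s.

    mdot, mdot_ign, mdot_trap : accretion rates in units of M_sun/s
    m_bh  : black hole mass in units of M_sun
    x_ms  : r_ms / (2 G M_bh / c^2)
    """
    prefactor = 1.1e52 * x_ms**(-4.8) * (m_bh / 3.0)**(-1.5)
    if mdot < mdot_ign:
        return 0.0                      # disk too dim to ignite neutrino cooling
    if mdot < mdot_trap:
        return prefactor * mdot**2.25   # neutrino-cooled regime
    return prefactor * mdot_trap**2.25  # neutrinos trapped: luminosity saturates
```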
| Model | Breakout | $t_{i}$ (s) | $t_{br}$ (s) | $L_{p}$ ($10^{50}$ erg/s) | $E_{dg}$ ($10^{51}$erg) | $E_{j}$ ($10^{51}$erg) | $E_{j>L_{50}}$ ($10^{51}$erg) | $E_{j>L_{49.5}}$ ($10^{51}$erg) | $T_{j>L_{50}}$ (s) | $T_{j>L_{49.5}}$ (s) |
|-------|----------|------|------|-----|-----|------|-----|-----|------|------|
| Mref  | yes | 10.9 | 27.8 | 1.9 | 1.4 | 7.4  | 4.4 | 5.6 | 27.8 | 46.2 |
| M150  | yes | 2.2  | 17.6 | 3.2 | 1.6 | 11.7 | 8.8 | 9.8 | 40.0 | 55.0 |
| M70   | yes | 15.5 | 45.5 | 1.0 | 0.9 | 3.6  | -   | 1.7 | -    | 21.7 |
| M50   | no  | 21.6 | -    | 0.6 | -   | -    | -   | -   | -    | -    |

The spherically symmetric density distribution is mapped onto spherical coordinates. The computational domain extends from $10^{8}$ to $4 \times 10^{10}$cm. Note that the location of the inner boundary in the present study is ten times smaller than in previous jet propagation studies (see e.g. [@2009ApJ...700L..47L; @2011ApJ...732...26M; @2011ApJ...731...80N]). The evolution of the mass accretion rate, which is sensitive to the location of the inner boundary (see [@2012ApJ...754...85N]), can thus be better captured by our simulations. However, as a result, these simulations become rather computationally expensive, and we are only able to run them until the jet bow shock reaches the stellar surface or the black hole mass reaches $10 M_{\odot}$. The evolution of the post-breakout phase is then analyzed using a simple analytic formalism (see Eqs. (\[eq:timeextrapo\])-(\[eq:acrateana\])). The results of an extended numerical simulation will nonetheless be compared with the analytical approach for the reference model in order to confirm that the analytical approach qualitatively captures the evolution of the jet dynamics (see Section \[sec:subsecpostbreak\] and Figure \[f4\]). We employ the gamma-law equation of state with $\gamma = 4/3$.
The jet injection parameters such as the Lorentz factor and the specific internal energy are the same as those used in the standard model of @2012ApJ...754...85N, where the initial Lorentz factor and specific internal energy are fixed to $\Gamma=400$ and $\epsilon=0.01$, respectively. It should be noted that for fixed $\Gamma$ and $\epsilon$, the overall jet dynamics depend solely on $\theta_{op}$ (see @2012ApJ...754...85N). In this study, we assume $\theta_{op} = 9^{\circ}$, which agrees well with the opening angles deduced for long GRBs [@2011arXiv1101.2458G]. The dependence of our results on $\theta_{op}$ will be discussed in Section \[sec:results\]. The radial direction is covered by 1000 non-uniform grid points over the whole computational region, while the meridional section is covered by 60 uniform grid points. A three-level adaptive mesh refinement technique, similar to that used in @2011ApJ...731...80N [@2012ApJ...754...85N], is also employed in order to decrease the computational cost. We set up the stellar rotation in a manner similar to that of @2010ApJ...716.1308L. The SAM distribution is separated into radial and polar components as $J(r,\theta) = j(r) \Theta(\theta)$, where $r$ and $\theta$ are the spherical radius and polar angle, respectively. In the reference model (Mref), $j(r)$ is given by the 16TI model. For the M150, M70 and M50 models, $j(r)$ is multiplied by 1.5, 0.7 and 0.5, respectively (see Figure \[f1\] for the SAM distribution of our models). The polar component is assumed to correspond to rigid-body rotation on shells, i.e., $\Theta(\theta) = \sin^2{\theta}$. It is important to highlight that the simulations in this study do not cover the black hole-accretion disk system. Even if the simulations covered the full domain, our numerical calculations could not treat the disk evolution appropriately, since the general relativistic effects and microphysics that are important in determining the disk evolution are not incorporated.
However, the analytical formula proposed by @2011MNRAS.410.2302Z allows us to estimate the neutrino luminosity without resolving the black hole-accretion disk system. Owing to this prescription, we can study the jet penetration phase as determined by the neutrino-driven energy injection. The study of the coupling between the black hole and the accretion disk system is beyond the scope of this paper. We also note that the injection of the jet is delayed in the simulation as a result of the inner core not possessing enough angular momentum to create a disk [@2006ApJ...637..914W; @2006ApJ...641..961L; @2011MNRAS.410.2302Z]. Based on the standard neutrino-driven collapsar model (and the assumptions used in @2007ApJ...657..383C [@2011MNRAS.410.2302Z]), the central engine starts to operate after the accretion disk is formed around the black hole. Therefore, in our simulations, the jet is injected only once the SAM of matter at the inner boundary in the equatorial plane exceeds $J_{ISCO}$. Results {#sec:results} ======= Basic features and jet penetrability {#sec:subsecbasic} ------------------------------------ The overall evolution of the collapse of the progenitor seen in our simulations is, unsurprisingly, similar to that found in @2011ApJ...731...80N [@2012ApJ...754...85N]. The infall of the stellar envelope generates a rarefaction wave, which propagates outwards. The envelope contraction is almost identical among all models and proceeds in a rather spherical manner, since the centrifugal force plays a minor role. Note that we find the density distribution at the inner boundary to be slightly oblate, but this does not affect the subsequent jet propagation, although it might have consequences for the jet production (which is not properly simulated here). During the jet propagation phase, on the other hand, the jet evolution differs markedly among the models (see Table \[tab:model\] and Figure \[f2\]). We summarize our results in Table \[tab:model\].
For model M50, the forward shock wave does not move out and almost stagnates around the inner boundary despite the successful operation of the central engine ($t_{i}$ in Table \[tab:model\] denotes the time of initiation of the central engine). In fact, no collimated feature can be seen for M50 in the lowest panels of Figure \[f2\]. This is attributed to the fact that the jet power does not exceed the ram pressure of the inflowing material, so the forward shock wave stagnates or is advected inwards. For models with successful jet breakout, on the other hand, the jet also cannot move forward quickly right after the initiation of the central engine. However, due to the increase in the Kerr parameter of the black hole over time, the jet power eventually exceeds the ram pressure of the inflowing material (Figure \[f3\]). Once the forward shock wave is able to move out, the jet interacts with the stellar mantle and gives rise to a cocoon. The hot cocoon aids jet confinement and helps to preserve the jet’s strong outgoing momentum and energy flux, eventually leading to a successful breakout. Figure \[f3\] shows the evolution of the hemispheric neutrino luminosity, mass accretion rate, black hole mass, Kerr parameter and conversion efficiency from accretion energy to jet luminosity ($\eta \equiv L_{j}/\dot{M} c^2$) as functions of time from the onset of the collapse. The chief cause of the different jet propagation behavior is the sensitive dependence of the neutrino luminosity on the Kerr parameter (see Eq. (\[eq:neutrinolumi\])). For the rapidly rotating models, the angular momentum of the black hole is very large and increases with time (see the 4th panel in Figure \[f3\]), which produces a powerful jet as a result of the large neutrino energy deposition rates. It is also important to note that the onset time of the central engine ($t_i$) also greatly affects the outcome of the explosion.
As shown in Table \[tab:model\] and illustrated in Figure \[f2\], jet production is significantly delayed for more slowly rotating models. This is because the neutrino luminosity is weaker for both smaller accretion rates and larger black hole masses (see Eq. (\[eq:neutrinolumi\]) for the dependence on $\dot{m}$ and $M_{bh}$). In fact, for M50, although the Kerr parameter reaches $\gtrsim 0.9$ at the end of our simulation, the neutrino luminosity is never large enough to push out the forward shock wave. From these results, we infer that neutrino-driven jets may not penetrate progenitors with extended envelopes, since a significantly larger mass would accrete onto the black hole before jet breakout. The weak neutrino luminosity resulting from a sizable increase in black hole mass could then leave the ejecta non-relativistic. Therefore a compact progenitor is an inevitable requirement for a successful neutrino-driven jet breakout. For progenitors with massive envelopes such as Pop III stars or red (blue) supergiants, a different process might be required to power GRBs (see also discussions in @2011ApJ...726..107S). The most important result of this study is that the jet succeeds in breaking out from the star in all models except M50, which corresponds to the model with the slowest rotation rate. A neutrino-driven jet with $\theta_{op}=9^{\circ}$ produced in a rapidly rotating compact Wolf-Rayet star can therefore potentially give rise to a GRB. We also find that the time-averaged accretion-to-jet conversion efficiency among the successful jet breakout models is roughly $\eta \sim 10^{-3}$, while $\eta$ for M50 never reaches $10^{-3}$ and the jet never breaks out. This result is roughly consistent with our previous work [@2012ApJ...754...85N]. The analytical criterion in [@2012ApJ...754...85N] also exhibits an opening-angle dependence for successful jet breakout: the threshold $\eta$ increases as $(\theta_{op})^2$.
We therefore caution that jets with wider opening angles penetrate the star less easily than in the present results, which indicates that the threshold progenitor rotation rate differs in accordance with the jet opening angle. Post breakout phase: Analytical Formula {#sec:subsecpostbreak} --------------------------------------- As we have already mentioned, our numerical simulations are terminated at the time of jet breakout. However, it is interesting to extend the results of our numerical simulations into the post-breakout stage, which allows us to estimate the expected observational differences among the computed models (see Section \[sec:subsecobser\]). We employ the following analytic approximations in order to follow the evolution after jet breakout: $$\begin{aligned} t &=& t_{(b)} + \int_{r_{(b)}}^{r} \frac{dt_{ff}}{dr} (r) dr \label{eq:timeextrapo}\\ M_{bh} &=& M_{bh(b)} + \int_{r_{(b)}}^{r} \frac{dM_{r}}{dr} (r) dr \label{eq:massextrapo}\end{aligned}$$ where $$\begin{aligned} t_{ff}(r) &=& \beta \sqrt{\frac{r^3}{GM_{r}}} \label{eq:freefallana}, \\ \dot{M}_{ff}(r) &=& \frac{dM_{r}/dr}{dt_{ff}/dr} \nonumber \\ &=& \frac{1}{\beta ^2} \frac{8 \pi G M_{r}^2 t_{ff} \rho } {3 M_{r} - 4 \pi r^3 \rho } \label{eq:acrateana}.\end{aligned}$$ The above analytical estimation is essentially similar to the approach presented in @2012ApJ...754...85N [@2011ApJ...726..107S]. The free-fall time $t_{ff}(r)$ is determined under the assumption of spherically symmetric envelope contraction. Here, $t_{b}$ denotes the time of jet breakout. Note also that the functions $M_{r}(r)$ and $\rho(r)$ are extracted from the 16TI model table. $r_{(b)}$ is determined by the condition $M_{bh} (t_{b}) = M_r (r_{(b)})$. The non-dimensional parameter $\beta$ is chosen to ensure that $\dot{M}(t_b)$ is equal to $\dot{M}_{ff}(r_{(b)})$.
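In practice this extrapolation amounts to differentiating the tabulated progenitor profile. A minimal sketch, assuming the profile arrays start at $r_{(b)}$ and using finite differences in place of the closed-form expression for $\dot{M}_{ff}$:

```python
import numpy as np

G = 6.674e-8  # gravitational constant (cgs)

def post_breakout_tracks(r, m_r, beta, t_b):
    """Extrapolate time and accretion rate past breakout from a
    tabulated progenitor profile (radius r in cm, enclosed mass M_r
    in g), assuming spherical free fall with
    t_ff(r) = beta * sqrt(r^3 / (G M_r)).
    Returns t(r) and Mdot_ff(r) = (dM_r/dr) / (dt_ff/dr)."""
    t_ff = beta * np.sqrt(r**3 / (G * m_r))
    t = t_b + (t_ff - t_ff[0])  # integral of dt_ff/dr outward from r_b
    mdot = np.gradient(m_r, r) / np.gradient(t_ff, r)
    return t, mdot
```

For a linear enclosed-mass profile $M_r = k r$ the free-fall time is linear in $r$ and the predicted accretion rate is the constant $k\sqrt{Gk}/\beta$, which makes a convenient sanity check.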
With this procedure, the neutrino luminosity and mass accretion rate connect smoothly to the results of the numerical simulations. The evolution of the Kerr parameter is determined by Eq. (\[eq:anguinteg\]). Note that, since model M50 does not achieve jet breakout, there is no post-breakout phase for this model. Results of this analytical extension are also shown in Figure \[f3\]. Figure \[f4\] shows the comparison between the results of the extended numerical simulations and the analytical estimation for the reference model. The extended numerical simulations are performed until $t=40$s, which corresponds to about ten seconds after the jet breakout. Note that we do not enlarge the computational region, since the stellar contraction is not affected by the jet dynamics in the outer parts of the star. As shown in this figure, the time evolution of the black hole mass ($M_{bh}$), Kerr parameter ($a$) and conversion efficiency ($\eta$) are almost identical between the extended numerical simulation and the analytical calculation. For the luminosity ($L_{j}$) and mass accretion rate ($\dot{M}$), on the other hand, the analytical calculations are slightly larger than the results of our numerical simulations. This may be attributed to the fact that the analytical approach neglects stellar rotation, which increases the mass accretion rate and consequently overestimates the neutrino luminosity. However, these differences are within ten percent. We therefore confirm that the above analytical approach describes the time evolution of the jet dynamics well at a qualitative level. In the following subsection, we discuss the observational consequences with the aid of the analytical approach in the post-breakout phase.
Observational consequences {#sec:subsecobser} -------------------------- We first divide the energetics of the jet into two parts, the relativistic jet component ($E_j$) and the cocoon component ($E_{dg}$) (see @2011ApJ...726..107S [@2003MNRAS.345..575M]). $E_j$ is calculated under the assumption that all the energy injected after the jet breakout goes into the relativistic component, i.e., $E_j$ is given by $\int_{t_b}^{\infty} (L_{j}/2) dt$. Note that the factor of $1/2$ comes from the assumption of equatorial symmetry. As shown in Table \[tab:model\], $E_j$ increases with increasing stellar rotation. In order to study the outcome of the explosion in more detail, we further divide the energy of the relativistic jet into $E_{j>L_{50}}$ and $E_{j>L_{49.5}}$. $E_{j>L_{50}}$ is calculated in the same manner as $E_j$ except that the integration is carried out only while $L_j/2 > 10^{50}$erg/s, while $E_{j>L_{49.5}}$ is calculated with the condition $L_j/2 > 5 \times 10^{49}$erg/s. In addition, we also list the corresponding durations of each component as $T_{j>L_{50}}$ and $T_{j>L_{49.5}}$ in Table \[tab:model\]. In all cases, these timescales are several tens of seconds, comparable to the typical timescale of the prompt phase of GRBs. It should be noted, however, that according to @2012ApJ...749..110B, the duration of the prompt phase of GRBs may be modified by the duration of jet penetration, i.e., $t_{\gamma} = t_{e} - t_{b}$ (see also Eq. (2) in @2012ApJ...749..110B), where $t_{\gamma}$, $t_{e}$ and $t_{b}$ denote the observed duration of the prompt phase, the central engine operating time and the duration of the jet penetration phase, respectively. Therefore, the actual observed duration will be smaller than $T_{j}$, and it is substantially modified especially for the more slowly rotating models (e.g. M70), since $t_{b}$ is larger for slower rotation.
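The energy and duration diagnostics defined above reduce to integrals over the jet-luminosity history; the following sketch (with a hand-rolled trapezoid rule; the sampling is illustrative, not our analysis output) shows the computation:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def jet_energetics(t, l_j, l_cut=1e50):
    """Hemispheric jet energy E_j = int (L_j/2) dt together with the
    energy and duration of the portion with L_j/2 above l_cut,
    mirroring the E_{j>L} and T_{j>L} columns of the model table."""
    half = l_j / 2.0                    # equatorial-symmetry factor of 1/2
    above = half > l_cut
    e_j = _trapz(half, t)
    e_cut = _trapz(np.where(above, half, 0.0), t)
    t_cut = _trapz(above.astype(float), t)
    return e_j, e_cut, t_cut
```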
It is also interesting to note that model M70, the slowest rotator among the successful jet breakout models, produces a weak explosion and has no $E_{j>L_{50}}$ component due to the low luminosity of the jet. Note also that this luminosity is an upper limit on the observed luminosity, since we neglect the conversion process from hydrodynamical energy to gamma-rays. From these facts, we infer that a neutrino-driven jet from a compact Wolf-Rayet star whose rotation rate lies between those of models M70 and M50 may produce a very low-luminosity burst, possibly an LLGRB. These results are qualitatively consistent with previous studies (see e.g. @2009ApJ...692..804L [@2010ApJ...713..800L; @2012ApJ...744..103M]). It should be noted, however, that some LLGRBs are observed with extremely long durations, a population that cannot be explained by the results of the current study. Therefore, the neutrino-driven jet may contribute only to LLGRBs with a typical prompt-burst duration ($\sim 10$s). On the other hand, the energy of the cocoon component can be estimated as the diagnostic energy at $t=t_b$ (see @2012ApJ...754...85N). It is important to note that, as shown in Table \[tab:model\], $E_{dg}$ is typically $\sim 10^{51}$erg for models with successful jet breakout. This may be attributed to the fact that the jet from a slowly rotating core tends to have weaker power and spends a longer time penetrating the star. Therefore, despite the low jet luminosity, a large fraction of the jet energy is consumed in sweeping aside the stellar mantle and accumulates as cocoon energy, which eventually reaches $\sim 10^{51}$erg. The cocoon material is expected to contribute to the subsequent explosive event after the prompt phase [@2007ApJ...657L..77T; @2012ApJ...750...68L] and to the afterglow phase [@2002MNRAS.337.1349R].
The results of neutrino deposition presented in this paper are not able to discern whether sufficiently large nickel production might take place to explain hypernova explosions (see also discussions of cocoon propagation in @2001ApJ...556L..37M [@2002MNRAS.337.1349R; @2003MNRAS.345..575M]). If nickel is not effectively produced in the jet interaction region, alternative pathways such as a disk wind driven by viscous heating or a magnetically driven wind from the central engine would be required to explain the GRB-hypernova link. We further calculate the isotropic energy ($E_{j(iso)}$) for $E_{j}$, $E_{j>L_{49.5}}$ and $E_{j>L_{50}}$, and also the isotropic peak luminosity $L_{p(iso)}$, which are shown in Figure \[f5\]. In this calculation, the jet opening angle is assumed to be $\theta_{op}= 9^{\circ}$, the same as that of the injected jet at its base in our simulations. Note again that we neglect the conversion efficiency from hydrodynamic energy to radiation, so our results are still at the qualitative level and give only an upper limit on the GRB radiation. In addition, the time evolution of the neutrino luminosity does not capture the rapid time variability of the central engine, since our numerical simulations do not incorporate the black hole-accretion disk system. Owing to these uncertainties, the time evolution of the neutrino luminosity shown in the upper panel of Figure \[f3\] will differ from the observed light curves. It should be noted, however, that our analysis is still meaningful in constraining the luminosity and energy as upper limits. Under the above assumptions, we find that $E_{j(iso)}$ is $\sim 10^{54}$erg and $L_{p(iso)}$ is $\sim 10^{52}$erg/s, which are sufficiently large to explain GRBs.
We would like to point out that, for the jet from the rapidly rotating progenitor (M150), a large fraction of the energy is radiated in the high-luminosity jet ($L_j > 10^{50}$erg/s), while more than half of the jet energy for M70 would be radiated in a low-luminosity jet ($L_j < 5 \times 10^{49}$erg/s). From these results, we suggest that the neutrino-driven jet is capable of producing several types of bursts depending on the progenitor rotation, which may be the origin of observationally different bursts such as GRBs and XRFs. Failed GRBs would also be explained by a neutrino-driven central engine when the progenitor is slowly rotating.

Summary
=======

We present numerical results on neutrino-driven jet propagation in a rotating Wolf-Rayet star. By changing the progenitor rotation rate, we discuss jet penetrability and its observational consequences with the aid of an analytic extrapolation in the post-breakout phase. We show that every model except M50 succeeds in breaking out of the star. In particular, Mref and M150, which correspond to models with sufficiently rapid rotation, have a relativistic outflow component with $L_{p(iso)}\sim 10^{52}$erg/s and $E_{j(iso)}\sim 10^{54}$erg, which are sufficiently large to explain GRBs. On the other hand, the energy in the cocoon component, $E_{dg}$, is $\sim 10^{51}$erg for models with successful jet breakout, although it remains an open question whether the jet or the cocoon expansion could give rise to enough nickel production to explain the GRB-hypernova connection. Another important result of this study is that model M50, the most slowly rotating model, does not achieve jet breakout (a failed GRB). Therefore, there is a threshold SAM distribution between models M70 and M50 for successful jet penetration.
It should be noted, however, that the threshold rotation no doubt depends strongly on the jet opening angle, and our results are adequate only for the canonical jet opening angle, $\theta_{op} = 9^{\circ}$. Although there are some limitations in this study, we suggest that the neutrino-driven jet is capable of producing several types of bursts (including the failed branch, i.e., failed GRBs) depending on the progenitor rotation. Finally, we would like to note that the results presented in this paper are optimistic. As one of the large uncertainties, the actual mass accretion rate would be smaller than obtained in this paper, since some fraction of the mass is expected to escape from the disk rather than being accreted onto the black hole, owing to neutrino winds or viscous heating [@1999ApJ...524..262M]. In addition, the growth rate of the Kerr parameter would be slower than in the current result, since the SAM of the inflowing matter is smaller than that at the ISCO owing to the effect of the pressure gradient. Note also that the disk wind extracts angular momentum from the accreting matter. Since the neutrino deposition rate depends sensitively on the mass accretion rate and the Kerr parameter, the jet dynamics would be affected by these effects. Note also that the neutrino-driven jet cannot explain bursts of extremely long duration, and other populations are necessary to explain these peculiar events. Other factors, such as the viewing angle, may also cause the observational differences among the GRB population (see e.g. @2002ApJ...571L..31Y [@2003ApJ...594L..79Y; @2005ApJ...630.1003G]). More quantitative discussions will be conducted in our forthcoming paper.

H.N. is grateful to Andrew MacFadyen, Andrei Beloborodov, Philipp Podsiadlowski, Kunihito Ioka, Yudai Suwa and Shoichi Yamada for useful comments and discussions. H.N. would also like to thank Eriko Nagakura for proofreading.
This work was supported by Grant-in-Aid for the Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan (24740165) and HPCI Strategic Program of Japanese MEXT. Aloy, M. A., M[ü]{}ller, E., Ib[á]{}[ñ]{}ez, J. M., Mart[í]{}, J. M., & MacFadyen, A. 2000, , 531, L119 Bromberg, O., Nakar, E., Piran, T., & Sari, R. 2012, , 749, 110 Chen, W.-X., & Beloborodov, A. M. 2007, , 657, 383 Dessart, L., Burrows, A., Livne, E., & Ott, C. D. 2008, , 673, L43 Di Matteo, T., Perna, R., & Narayan, R. 2002, , 579, 706 Eichler, D., Livio, M., Piran, T., & Schramm, D. N. 1989, , 340, 126 Fryer, C. L., & M[é]{}sz[á]{}ros, P. 2003, , 588, L25 Goldstein, A., Preece, R. D., Briggs, M. S., et al. 2011, arXiv:1101.2458 Granot, J., Ramirez-Ruiz, E., & Perna, R. 2005, , 630, 1003 Gu, W.-M., Liu, T., & Lu, J.-F. 2006, , 643, L87 Harikae, S., Kotake, K., Takiwaki, T., & Sekiguchi, Y.-i. 2010, , 720, 614 Janiuk, A., Yuan, Y., Perna, R., & Di Matteo, T. 2007, , 664, 1011 Kohri, K., & Mineshige, S. 2002, , 577, 311 Kohri, K., Narayan, R., & Piran, T. 2005, , 629, 341 Lazzati, D., Morsony, B. J., & Begelman, M. C. 2009, , 700, L47 Lazzati, D., Morsony, B. J., Blackwell, C. H., & Begelman, M. C. 2012, , 750, 68 Lee, W. H., Ramirez-Ruiz, E., & Page, D. 2005, , 632, 421 Lee, W. H., & Ramirez-Ruiz, E. 2006, , 641, 961 Lindner, C. C., Milosavljevi[ć]{}, M., Couch, S. M., & Kumar, P. 2010, , 713, 800 Lindner, C. C., Milosavljevi[ć]{}, M., Shen, R., & Kumar, P. 2012, , 750, 163 Liu, T., Gu, W.-M., Dai, Z.-G., & Lu, J.-F. 2010, , 709, 851 Liu, T., Gu, W.-M., Xue, L., & Lu, J.-F. 2012, , 337, 711 Lopez-Camara, D., Lee, W. H., & Ramirez-Ruiz, E. 2009, , 692, 804 L[ó]{}pez-C[á]{}mara, D., Lee, W. H., & Ramirez-Ruiz, E. 2010, , 716, 1308 MacFadyen, A. I., Woosley, S. E., & Heger, A. 2001, , 550, 410 M[é]{}sz[á]{}ros, P., & Rees, M. J. 2001, , 556, L37 Milosavljevi[ć]{}, M., Lindner, C. C., Shen, R., & Kumar, P. 2012, , 744, 103 Mizuta, A., & Aloy, M. A. 
2009, , 699, 1261 Mizuta, A., Nagataki, S., & Aoi, J. 2011, , 732, 26 Morsony, B. J., Lazzati, D., & Begelman, M. C. 2007, , 665, 569 MacFadyen, A. I., & Woosley, S. E. 1999, , 524, 262 Matzner, C. D. 2003, , 345, 575 Narayan, R., & Yi, I. 1994, , 428, L13 Nagakura, H., Ito, H., Kiuchi, K., & Yamada, S. 2011, , 731, 80 Nagakura, H., Suwa, Y., & Ioka, K. 2012, , 754, 85 Nagataki, S., Takahashi, R., Mizuta, A., & Takiwaki, T. 2007, , 659, 512 Popham, R., Woosley, S. E., & Fryer, C. 1999, , 518, 356 Ramirez-Ruiz, E., Celotti, A., & Rees, M. J. 2002, , 337, 1349 Sekiguchi, Y., & Shibata, M. 2011, , 737, 6 Shibata, M., Sekiguchi, Y.-I., & Takahashi, R. 2007, Progress of Theoretical Physics, 118, 257 Suwa, Y., & Ioka, K. 2011, , 726, 107 Taylor, P. A., Miller, J. C., & Podsiadlowski, P. 2011, , 410, 2385 Tominaga, N., Maeda, K., Umeda, H., et al. 2007, , 657, L77 Woosley, S. E., & Bloom, J. S. 2006, , 44, 507 Woosley, S. E. 1993, , 405, 273 Woosley, S. E., & Heger, A. 2006, , 637, 914 Yamazaki, R., Ioka, K., & Nakamura, T. 2002, , 571, L31 Yamazaki, R., Yonetoku, D., & Nakamura, T. 2003, , 594, L79 Zalamea, I., & Beloborodov, A. M. 2011, , 410, 2302 Zhang, W., Woosley, S. E., & MacFadyen, A. I. 2003, , 586, 356
--- abstract: | Transitive Lie algebroids have specific properties that allow one to regard a transitive Lie algebroid as an element of the object set of a homotopy functor. Roughly speaking, each transitive Lie algebroid can be described as a vector bundle over the tangent bundle of the manifold endowed with additional structures. Therefore transitive Lie algebroids admit a construction of inverse image generated by a smooth mapping of smooth manifolds. Due to K. Mackenzie ([@Mck-2005]), the construction can be organized as a homotopy functor $\mathcal{TLA}_{\rg}$ from the category of smooth manifolds to the category of transitive Lie algebroids. The functor $\mathcal{TLA}_{\rg}$ associates with each smooth manifold $M$ the set $\mathcal{TLA}_{\rg}(M)$ of all transitive Lie algebroids with fixed structural finite dimensional Lie algebra $\rg$. Hence one can construct ([@Mi-2010],[@Mi-2011]) a classifying space $\cB_{\rg}$ such that the family of all transitive Lie algebroids with fixed Lie algebra $\rg$ over the manifold $M$ is in one-to-one correspondence with the family of homotopy classes of continuous maps $[M,\cB_{\rg}]$: $ \mathcal{TLA}_{\rg}(M)\approx [M,\cB_{\rg}]. $ This allows one to describe characteristic classes of transitive Lie algebroids from the point of view of natural transformations of functors, similarly to the classical abstract characteristic classes for vector bundles, and to compare them with those derived from the Chern-Weil homomorphism by J. Kubarski ([@Kub-91e]). As a matter of fact, we show that the Chern-Weil homomorphism does not cover all characteristic classes from the categorical point of view.
author: - | Mishchenko, A.S.\ (Harbin Institute of Technology, China,\ Moscow State University, Russia),\ Li Xiaoyu\ (Harbin Institute of Technology, China) title: 'Comparison of categorical characteristic classes of transitive Lie algebroid with Chern-Weil homomorphism' --- Basic definitions and functor $\mathcal{TLA}_{\mathfrak{g}}(\bullet)$ ===================================================================== Definitions ----------- (See [@Mck-2005], Definition 3.1.1) A Lie algebroid $A$ over a smooth manifold $M$ is a vector bundle $p:A\rightarrow M$ together with a Lie algebra structure $\{\bullet\}$ on the space $\Gamma^{\infty}(A;M)$ and a bundle map $a:A\rightarrow TM $ called the anchor, such that 1. the induced map $a:\Gamma(A;M)\rightarrow \Gamma(TM;M)=\rX^{1}(M)$ is a Lie algebra homomorphism 2. for any sections $\alpha, \beta\in\Gamma(A;M)$ and smooth function $f\in C^{\infty}(M)$ we have the Leibniz identity $$\{\alpha,f\cdot\beta\}=f\cdot\{\alpha,\beta\}+ a(\alpha)(f)\cdot\beta$$ We call $A$ a regular Lie algebroid if the rank of $a$ is locally constant, and a transitive Lie algebroid if $a$ is surjective. Lie algebroid homomorphisms and isomorphisms are defined in [@Mck-2005]. We often use the Atiyah exact sequence $ 0\xrightarrow{} L \xrightarrow{j} A \xrightarrow{a}TM \xrightarrow{} 0 $ to denote a transitive Lie algebroid. Here $L=\kernel a$ is called the adjoint bundle. Sometimes we use $(A,M,\{\bullet\},a)$ to denote a Lie algebroid in order to highlight the bracket. All transitive Lie algebroids (up to isomorphism) and homomorphisms between them form a category that is fundamental in our considerations. (See [@Mck-2005]) The following are important examples of transitive Lie algebroids. 1. Let $M$ be a manifold and let $\g$ be a Lie algebra. On $TM\bigoplus(M\times\g)$ define $a:TM\bigoplus(M\times\g)\rightarrow TM $ by $a:(X,\mu)\mapsto X$.
Define a bracket $$\{(X,\mu),(Y,\nu)\}=([X,Y],X(\nu)-Y(\mu)+[\mu,\nu])$$ for $(X,\mu),(Y,\nu)\in\Gamma(TM\bigoplus(M\times\g);M)$. Then $TM\bigoplus(M\times\g)$ is a transitive Lie algebroid on $M$, called the trivial Lie algebroid on $M$ with structural Lie algebra $\g$. 2. Let $L$ be a Lie algebra bundle on a smooth manifold $M$. The Lie algebroid $\D_{Der}(L)$ of covariant derivatives on $\Gamma^{\infty}(L)$ is a transitive Lie algebroid on $M$. 3. The Lie algebroid $\D(E)$ of covariant differential operators on the space of sections of a vector bundle $E$. Since a vector space is a commutative Lie algebra, a vector bundle $E$ is also a commutative Lie algebra bundle. Thus $\D(E)$ and $\D_{Der}(E)$ are identical in this case. In the following part of this article we use $\g$ to denote a Lie algebra and $\h$ to denote a commutative Lie algebra. All the Lie algebras we consider in this article are finite dimensional. Functor $ \mathcal{TLA}_{\g}(\bullet) $ --------------------------------------- In [@Mck-2005], K. Mackenzie defines the pullback of a transitive Lie algebroid along a smooth map $f:M'\rightarrow M$. It means that given a Lie algebra $\g$ there is a functor $\mathcal{TLA}_{\g}(\bullet)$ that assigns to any manifold $M$ the family $\mathcal{TLA}_{\g}(M)$ of all transitive Lie algebroids with structural Lie algebra $\g$. (See [@Mck-2005], page 248)\[LAB\] Let $ 0\xrightarrow{} L \xrightarrow{j} A \xrightarrow{a}TM \xrightarrow{} 0$ be a transitive Lie algebroid on a smooth manifold $M$. Then $L$ is a Lie algebra bundle with respect to the bracket structure on $\Gamma(L;M)$ induced from the bracket on $\Gamma(A;M)$. (See [@Mck-2005], page 100)\[SubLie\] Let $A$ be a transitive Lie algebroid on $M$ and let $U\subset M$ be an open subset. Then the bracket $\{,\}:\Gamma(A;M)\times \Gamma(A;M)\rightarrow \Gamma(A;M)$ restricted to $\Gamma(A_{U};U)\times \Gamma(A_{U};U)\rightarrow \Gamma(A_{U};U)$ makes $A_{U}$ a Lie algebroid on $U$, called the restriction of $A$ to $U$.
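As a consistency check (this verification is not in the original text), the trivial Lie algebroid bracket, with second component $X(\nu)-Y(\mu)+[\mu,\nu]$, satisfies the Leibniz identity: using $[X,fY]=f[X,Y]+X(f)Y$ and $a(X,\mu)=X$, one computes

```latex
\begin{aligned}
\{(X,\mu),\,f\cdot(Y,\nu)\}
 &= \bigl([X,fY],\; X(f\nu)-(fY)(\mu)+[\mu,f\nu]\bigr)\\
 &= \bigl(f[X,Y]+X(f)Y,\; f\bigl(X(\nu)-Y(\mu)+[\mu,\nu]\bigr)+X(f)\nu\bigr)\\
 &= f\cdot\{(X,\mu),(Y,\nu)\}+a\bigl((X,\mu)\bigr)(f)\cdot(Y,\nu).
\end{aligned}
```

Here the extra term $X(f)\nu$ in the second component is exactly the second component of $a\bigl((X,\mu)\bigr)(f)\cdot(Y,\nu)$.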
(See [@Mck-2005], page 317)\[Localtri\] Consider a transitive Lie algebroid $ 0\xrightarrow{} L \xrightarrow{j} A \xrightarrow{a}TM \xrightarrow{} 0$ on $M$ with fixed structural Lie algebra $\g$. Given any open covering $\{U_{\alpha}\}$ of $M$ by contractible sets, for arbitrary $\alpha$ there is a Lie algebroid isomorphism $$S_{\alpha}:TU_{\alpha}\bigoplus(U_{\alpha}\times \g)\rightarrow A_{U_{\alpha}}$$ where $TU_{\alpha}\bigoplus(U_{\alpha}\times \g)$ is the trivial Lie algebroid on $U_{\alpha}$. By using Lemma \[LAB\], Lemma \[SubLie\], Lemma \[Localtri\] and the method used in [@Hatcher-2005], we get the following theorem. Let $M$ and $N$ be smooth manifolds and let $A$ be an arbitrary transitive Lie algebroid on $N$. Let $f,g:M \rightarrow N$ be homotopic smooth maps. Then the pullbacks of $A$ along $f$ and $g$ are isomorphic as Lie algebroids, that is, $f^{!!}A\approx g^{!!}A$. Hence the functor $\mathcal{TLA}_{\g}(\bullet)$ is a homotopy functor for fixed structural Lie algebra $\g$. There exists a classifying space $\cB_{\g}$ such that $\mathcal{TLA}_{\g}(M)$ is in one-to-one correspondence with the family of homotopy classes of continuous maps $[M;\cB_{\g}]$. Here $\cB_{\g}$ is abstract and can be described in a more or less explicit way (see [@Mi-2011]). Obstruction =========== Cohomology ---------- (see [@Mck-2005], page 107) Let $A$ be an arbitrary Lie algebroid on a smooth manifold $M$ and let $E$ be a vector bundle on $M$. Let $\D(E)$ be the Lie algebroid of covariant derivatives on $\Gamma^{\infty}(E)$. A representation of $A$ on $E$ is a Lie algebroid homomorphism $$\rho:A\rightarrow \D(E).$$ The cohomology space $\mathcal{H}^{n}(A,\rho,E)$, $n\geq 0$, can be defined when the representation $\rho$ is given (see [@Mck-2005], page 260). When $A$ is $TM$, we denote the representation by $\nabla:TM\rightarrow \D(E)$. Then there is $\mathcal{H}^{n}(M,\nabla,E)$, $n\geq 0$.
The representation $\nabla:TM\rightarrow \D(E)$ can be regarded as a flat connection on $E$ (see [@Mck-2005], page 109, page 186). Due to Lemma 1.1.6 and Lemma 1.2.2 in [@Kub-91e], the following theorem holds. Let $E$ be a vector bundle on a smooth manifold $M$ and let $\nabla:TM\rightarrow \D(E)$ be a representation of $TM$ on $E$. Let $f:M'\rightarrow M$ be a smooth map between smooth manifolds $M'$ and $M$. Let $E'=f^{*}E$ be the pullback of the vector bundle along $f$. Then 1. the representation $\nabla$ induces a representation of $TM'$ on $E'$, denoted by $\nabla':TM'\rightarrow\D(E')$. 2. the map $f$ induces a homomorphism between cohomologies $$f^{*}:\mathcal{H}^{*}(M,\nabla,E)\rightarrow \mathcal{H}^{*}(M',\nabla',E'),$$ where $$\mathcal{H}^{*}(M,\nabla,E)= \bigoplus\limits_{n=0}^{\infty}\mathcal{H}^{n}(M,\nabla,E), \quad\mathcal{H}^{*}(M',\nabla',E')= \bigoplus\limits_{n=0}^{\infty}\mathcal{H}^{n}(M',\nabla',E').$$ From fundamental differential geometry, the following theorem holds. Let $E$ be a commutative Lie algebra bundle with fiber $\h$ and let $\nabla$ be a flat connection on it. Then $\nabla$ induces a system of transition functions $\{\varphi_{\alpha\beta}\}$ for $E$ that are locally constant. Hence $E$ can be seen as a vector bundle with discrete structural group $\Aut(\h)_{d}$, denoted by $E^{\nabla}\rightarrow M$. Here $\Aut(\h)_{d}$ is the group of all automorphisms of $\h$, that is $\Aut(\h)$, with the discrete topology. Obstruction class ----------------- Let $L$ be a Lie algebra bundle on a smooth manifold $M$ with fiber $\g$. There is a commutative diagram (see [@Mck-2005]) $$\xymatrix{ &0\ar[d] &0\ar[d] \\ &ZL\ar[r]^{=}\ar[d]^{i}&ZL\ar[d]^{i}\\ &L\ar[r]^{=}\ar[d]^{ad} &L\ar[d]^{ad}\\ 0\ar[r]&\Der(L)\ar[r]^{j}\ar[d]^{\natural^{0}}&\cD_{Der}(L)\ar[r]^{a}\ar[d]^{\natural}&TM\ar[r]\ar[d]^{=}&0\\ 0\ar[r]&\Out_{Der}(L)\ar[r]^{\bar j}\ar[d]&\Out\cD_{Der}(L)\ar[r]^{\bar a}\ar[d]&TM\ar[r]\ar[d]&0\\ &0&0&0 }$$ in which both rows and columns are exact.
Consider a coupling $\Xi:TM\rightarrow \Out\D_{Der}(L)$, that is, a map for which the curvature tensor $$R^{\Xi}:\Lambda^{2} (TM)\rightarrow \Out_{Der}(L)$$ defined by $$R^{\Xi}(X,Y)=[\Xi(X),\Xi(Y)]-\Xi([X,Y])$$ for $X,Y\in\rX^{1}(M)$ is zero. There is a lifting $\nabla_{\Xi}:TM\rightarrow \D_{Der}(L)$ of the coupling $\Xi$: $$\xymatrix{ L\ar[d]^{ad}\\ \cD_{Der}(L)\ar[d]^{\natural}&TM\ar[l]_{\nabla_{\Xi}} \ar[d]^{=}\\ \Out\cD_{Der}(L)\ar[d]&TM\ar[d]\ar[l]_{\Xi}\\ 0&0 }$$ in which $\nabla_{\Xi}$ is a vector bundle map. Then for the curvature tensor $R^{\nabla_{\Xi}}:\Lambda^{2} (TM)\rightarrow \Der(L)$ defined by $R^{\nabla_{\Xi}}(X,Y)= [\nabla_{\Xi}(X),\nabla_{\Xi}(Y)]- \nabla_{\Xi}([X,Y])$, the following diagram is commutative: $$\xymatrix{ L\ar[d]^{ad}\\ \Der(L)\ar[d]^{\natural^{0}}&\Lambda^{2}(TM) \ar[l]_{R^{\nabla_{\Xi}}}\ar@/_.5pc/[ld]_{0} \\ \Out_{Der}(L)\ar[d]&\\ 0& }$$ Since the vertical column is exact, there is a lifting of $R^{\nabla_{\Xi}}$, that is, a bundle map $\Omega:\Lambda^{2} (TM)\rightarrow L$ such that the diagram $$\label{lifting} \xymatrix{ L\ar[d]^{ad}\\ \Der(L)\ar[d]^{\natural^{0}}&\Lambda^{2}(TM)\ar[l]_{R^{\nabla_{\Xi}}}\ar@/_.5pc/[ld]_{0} \ar@/_.5pc/[lu]_{\Omega} \\ \Out_{Der}(L)\ar[d]&\\ 0& }$$ is commutative. Define $d^{\nabla}:\Gamma(\Omega^{n}(M,L);M)\rightarrow \Gamma(\Omega^{n+1}(M,L);M)$ by $$\begin{array}{ll} d^{\nabla}f(X_{1},X_{2},...,X_{n+1})=\sum\limits_{i=1}^{n+1}(-1)^{i+1}\nabla(X_{i})(f(X_{1},X_{2},...,\hat{X_{i}},...,X_{n+1}))\\ ~~~~~~~~~~~~~~~~~ +\sum\limits_{i<j}(-1)^{i+j}f([X_{i},X_{j}],X_{1},...,\hat{X_{i}},...,\hat{X_{j}},...,X_{n+1}) \end{array}$$ where $f\in\Gamma(\Omega^{n}(M,L);M)$ and $X_{1},X_{2},...,X_{n+1}\in\rX^{1}(M)$. For $\Omega$ in diagram (\[lifting\]), $d^{\nabla_{\Xi}}\Omega\in\Omega^{3}(M,ZL)$ and $d^{\nabla^{ZL}_{\Xi}}(d^{\nabla_{\Xi}}\Omega)=0$, where $\nabla^{ZL}_{\Xi}$ is induced by $\nabla_{\Xi}$ (see [@Mck-2005]). Then define $Obs(\nabla_{\Xi})=[d^{\nabla_{\Xi}}(\Omega)] \in\mathcal{H}^{3}(M,\nabla^{ZL}_{\Xi},ZL)$.
The connection $\nabla^{ZL}_{\Xi}$ and the cohomology class $Obs(\nabla_{\Xi})$ depend only on $\Xi$ (see [@Mck-2005], page 273 and Theorem 7.2.12). Then the class $Obs(\nabla_{\Xi})$ is called the *obstruction class* of the coupling $\Xi$, and is denoted by $Obs(\Xi)$. (**The functorial property**) Let $L$ be a finite dimensional Lie algebra bundle on a smooth manifold $M$. Let $M'$ be a smooth manifold and let $f:M'\rightarrow M$ be a smooth map. Let $L'=f^{*}L$ be the pullback of the Lie algebra bundle along $f$. Consider a coupling $\Xi:TM\rightarrow \Out\D_{Der}L$. Then $\Xi$ induces a coupling $\Xi':TM'\rightarrow \Out\D_{Der}L'$ and $f$ induces a homomorphism $$f^{*}:\mathcal{H}^{*}(M,\Xi,ZL)\rightarrow \mathcal{H}^{*}(M',\Xi',ZL').$$ Furthermore, the obstruction class $Obs(\Xi')\in\mathcal{H}^{3}(M',\Xi',ZL')$ satisfies the condition $$f^{*}(Obs(\Xi))=Obs(\Xi').$$ An extension of $TM$ by a Lie algebra bundle $L$ is an exact sequence of Lie algebroids over $M$ $$0\xrightarrow{} L \xrightarrow{j} A \xrightarrow{a}TM \xrightarrow{} 0.$$ (see [@Mck-2005], Corollary 7.3.9) Let $L$ be a Lie algebra bundle on $M$ and let $\Xi:TM\rightarrow \Out\D_{Der}(L)$ be a coupling. Then, if $Obs(\Xi)=0$, there is a Lie algebroid extension $$0\xrightarrow{} L \xrightarrow{j} A \xrightarrow{a}TM \xrightarrow{} 0$$ of $TM$ by $L$ inducing the coupling $\Xi$. Let $E$ be a vector bundle over $M$ (that is, a Lie algebra bundle with commutative Lie algebra). There is a Lie algebroid extension $$0\xrightarrow{} E \xrightarrow{j} A \xrightarrow{a}TM \xrightarrow{} 0$$ if and only if the bundle $E$ is flat. Suppose that the extension $$0\xrightarrow{} E \xrightarrow{j} A \xrightarrow{a}TM \xrightarrow{} 0$$ exists. Let $\lambda:TM\rightarrow A$ be a splitting.
Define $$\nabla^{\lambda}:\rX^{1}(M)\times\Gamma^{\infty}(E;M) \rightarrow \Gamma^{\infty}(E;M)$$ by the formula $$\nabla^{\lambda}_{X}(\mu)=\{\lambda(X),\mu\}.$$ Then $$\begin{array}{ll} R^{\nabla^{\lambda}}(X,Y)(\mu)=[\nabla^{\lambda}_{X},\nabla^{\lambda}_{Y}](\mu)-\nabla^{\lambda}_{[X,Y]}(\mu)=\\ ~~~~~~~~~~~~~~~~~~~=\{[\lambda(X),\lambda(Y)]-\lambda([X,Y]),\mu\}=0 \end{array}$$ for arbitrary $X,Y\in\rX^{1}(M),\mu\in\Gamma(E;M)$, since $a([\lambda(X),\lambda(Y)]-\lambda([X,Y]))=0$, that is, $[\lambda(X),\lambda(Y)]-\lambda([X,Y])\in \Gamma(E;M)$, and the structural Lie algebra is commutative. Conversely, if $E$ is flat, there is a flat connection $\nabla$ on $E$ which is also a representation of Lie algebroids $$\nabla:TM\rightarrow \D(E),$$ that is, $R^{\nabla}(X,Y)=0$. By the definition of the obstruction class this means that $Obs(\nabla)=0\in \mathcal{H}^{3}(M,\nabla,E)$. Hence a Lie algebroid extension exists. Characteristic Classes ====================== In this section a system of characteristic classes of transitive Lie algebroids with commutative adjoint bundle will be described. They will then be compared with the characteristic classes derived from the Chern-Weil homomorphism by J. Kubarski ([@Kub-91e]). As a matter of fact, we show that the Chern-Weil homomorphism does not cover all characteristic classes from the categorical point of view. A system of characteristic classes for commutative case ------------------------------------------------------- Let $\h$ be a finite dimensional commutative Lie algebra. Let $\Aut(\h)_{d}$ be the group $\Aut(\h)$ with discrete topology. The functor $Vector_{d}^{\h}(\bullet)$ associates with each paracompact topological space $X$ the set $Vector_{d}^{\h}(X)$ of all vector bundles with structural group $\Aut(\h)_{d}$. Let $E^{\infty}\rightarrow B_{\Aut(\h)_{d}}$ be the universal bundle with group $\Aut(\h)_{d}$ and let $B_{\Aut(\h)_{d}}$ be the classifying space.
(See [@Dale-1993], Definition 11.1, Theorem 11.2, Theorem 12.2)\[lemma 3.1.1\]\[final space\] There is a bijection between $Vector_{d}^{\h}(X)$ and the homotopy classes of continuous maps $[X;B_{\Aut(\h)_{d}}]$. Let $M$ be a smooth manifold and let $$0\xrightarrow{} E \xrightarrow{j} A \xrightarrow{a}TM \xrightarrow{} 0$$ be a transitive Lie algebroid with fixed structural commutative Lie algebra $\h=R^{n}$. Let $\lambda:TM\rightarrow A$ be a splitting. Define $\nabla=\nabla^{\lambda}$ by the formula $\nabla^{\lambda}_{X}(\mu)=\{\lambda(X),\mu\}$. The bundle $E$ possesses a flat structure $E^{\nabla}\in Vector_{d}^{\h}(M)$. Let $f:M'\rightarrow M$ be a smooth map and let $f^{!!}A$ be the pullback of the Lie algebroid $A$ along $f$, that is, $$\label{PullbackTLA} 0\xrightarrow{} f^{*}E \xrightarrow{j'} f^{!!}A \xrightarrow{a'}TM' \xrightarrow{} 0.$$ Let $\lambda':TM'\rightarrow f^{!!}A$ be a splitting. Define $\nabla^{'}=\nabla^{\lambda'}$ on $f^{*}E$; then $f^{*}E$ corresponds to $(f^{*}E)^{\nabla'}$. \[Correctness\] 1. $\nabla$ and $\nabla'$ are independent of the choice of $\lambda$ and $\lambda'$, 2. The bundle $(f^{*}E)^{\nabla'}$ is the pullback of $E^{\nabla}$ over $f:M'\rightarrow M$ in the category of vector bundles with discrete structural group $\Aut(\h)_{d}$. Statement $(i)$ is obvious. $(ii):$ Consider the splitting of the transitive Lie algebroid (\[PullbackTLA\]) $$\lambda':TM'\rightarrow f^{!!}A$$ given by the formula $$\lambda'(X')=(X',\lambda(Tf(X'))),$$ $X'\in TM'$. Let $\sum\limits_{i}h_{i}\cdot(\mu_{i}\circ f)\in\Gamma(f^{*}E;M')$, where $h_{i}\in C^{\infty}(M')$, $\mu_{i}\in\Gamma(E;M)$.
Then $$\nabla^{\lambda'}_{X'}(\sum\limits_{i}h_{i}\cdot(\mu_{i}\circ f))=\sum\limits_{i}X'(h_{i})\cdot(\mu_{i}\circ f)+\sum\limits_{i}h_{i}\cdot(\nabla^{\lambda}_{Tf(X')}(\mu_{i})\circ f).$$ As $\nabla^{\lambda}$ is a flat connection, there exists an atlas of charts $\{\varphi_{\alpha}:E_{U_{\alpha}}\rightarrow U_{\alpha}\times \h \}_{\alpha\in\Delta}$ which satisfies the condition $$\varphi_{\alpha}(\nabla^{\lambda}_{X}(\mu_{\alpha}))= X(\varphi_{\alpha}(\mu_{\alpha}))$$ for arbitrary $\mu_{\alpha}\in\Gamma(E_{U_{\alpha}};U_{\alpha})$, $X\in\rX^{1}(M)$. Consider $\mu\in\Gamma(E_{U_{\alpha}\cap U_{\beta}};U_{\alpha}\cap U_{\beta})$. Then $$X(\varphi_{\beta}\circ\varphi_{\alpha}^{-1}\circ \varphi_{\alpha}(\mu))= X(\varphi_{\beta}(\mu))=\varphi_{\beta}(\nabla^{\lambda}_{X}(\mu))= \varphi_{\beta}\circ\varphi_{\alpha}^{-1}(\varphi_{\alpha}(\nabla^{\lambda}_{X}(\mu))).$$ Hence $$X(\varphi_{\beta}\circ\varphi_{\alpha}^{-1}\circ\varphi_{\alpha}(\mu))= \varphi_{\beta}\circ\varphi_{\alpha}^{-1}(X(\varphi_{\alpha}(\mu))).$$ Thus the transition functions $\{\varphi_{\alpha\beta}\}_{\alpha,\beta\in\Delta}$ are all locally constant. Let $\{V_{\alpha}^{'}=f^{-1}(U_{\alpha})\}_{\alpha\in\Delta}$ be the induced open covering of $M'$. Define the homomorphism of $C^{\infty}(V_{\alpha}')$-modules $$\psi_{\alpha}:\Gamma(f^{*}E|_{V_{\alpha}'};V_{\alpha}')\rightarrow \Gamma(V_{\alpha}'\times \h;V_{\alpha}')$$ by the formula $$\psi_{\alpha}(h_{\alpha,i}\cdot(\mu_{\alpha}^{i}\circ f))= h_{\alpha,i}\cdot\varphi_{\alpha}(\mu_{\alpha}^{i})\circ f$$ for $h_{\alpha,i}\cdot(\mu_{\alpha}^{i}\circ f) \in\Gamma(f^{*}E|_{V_{\alpha}'};V_{\alpha}'),$ where $h_{\alpha,i}\in C^{\infty}(V_{\alpha}')$, $\mu_{\alpha}^{i}\in\Gamma(E_{U_{\alpha}};U_{\alpha})$. As $\varphi_{\alpha}$ is a vector bundle isomorphism, $\psi_{\alpha}$ induces a vector bundle isomorphism. Then $\{V^{'}_{\alpha},\psi_{\alpha}:f^{*}E|_{V_{\alpha}'} \rightarrow V_{\alpha}'\times \h\}_{\alpha\in\Delta}$ is an atlas of charts for $f^{*}E$.
Consider a vector field $X'\in\rX^{1}(M')$. Then $$\begin{array}{ll} \psi_{\alpha}(\nabla^{\lambda'}_{X'}(h_{\alpha,i}\cdot(\mu_{\alpha}^{i}\circ f)))\\ =\psi_{\alpha}(X'(h_{\alpha,i})\cdot (\mu_{\alpha}^{i}\circ f)+h_{\alpha,i}\cdot(\nabla^{\lambda}_{Tf(X')}(\mu_{\alpha}^{i})\circ f))=\\ =X'(h_{\alpha,i})\cdot (\varphi_{\alpha}(\mu_{\alpha}^{i})\circ f)+h_{\alpha,i}\cdot (Tf(X')(\varphi_{\alpha}(\mu_{\alpha}^{i}))\circ f)\\ =X'(h_{\alpha,i}\cdot (\varphi_{\alpha}(\mu_{\alpha}^{i}))\circ f)=X'(\psi_{\alpha}(h_{\alpha,i}\cdot(\mu_{\alpha}^{i}\circ f))) \end{array}$$ The transition functions $$\psi_{\alpha\beta}:V_{\alpha}'\cap V_{\beta}'\rightarrow \Aut(\h)_{d}$$ are defined by $$\psi_{\alpha\beta}(x')=\varphi_{\alpha\beta}(f(x'))$$ for $x'\in V_{\alpha}'\cap V_{\beta}'$. So $(f^{*}E)^{\nabla'}$ is the pullback of $E^{\nabla}$ over $f:M'\rightarrow M$ in the category of vector bundles with discrete structural group $\Aut(\h)_{d}$. Lemma \[Correctness\] shows that the following definition is correct. \[CharClass\] Let $\h$ be a commutative Lie algebra and $M$ be a smooth manifold. Let $A\in \mathcal{TLA}_{\h}(M)$, with splitting $\lambda$. Let $E^{\nabla^{\lambda}}$ be the corresponding Lie algebra bundle with flat structure. Let $\theta:Vector_{d}^{\h}(M)\rightarrow [M;B_{\Aut(\h)_{d}}]$ be the bijection defined in Lemma \[final space\]. Then $\theta(E^{\nabla^{\lambda}})=[f]\in [M;B_{\Aut(\h)_{d}}]$ induces a homomorphism $$f^{*}:H^{*}(B_{\Aut(\h)_{d}};R)\rightarrow H^{*}(M;R).$$ The class $f^{*}(c)\in H^{*}(M;R)$ is a characteristic class of $A$, for arbitrary $c\in H^{*}(B_{\Aut(\h)_{d}};R)$. Chern-Weil homomorphism ----------------------- (see [@Kub-91e], page 17) Given a transitive Lie algebroid $(A,q,M,\{,\},a)$ with adjoint bundle $L$, the adjoint representation of the transitive Lie algebroid $A$ is $$ad:A\rightarrow \cD(L)$$ defined by $$ad(\xi)(\nu)=\{\xi,\nu\}$$ for $\xi\in\Gamma(A;M),\quad \nu\in\Gamma(L;M)$.
Let $L^{*}$ be the dual bundle of $L$ and let $\bigvee^{k}L^{*}$ be the $k$-th symmetric power of $L^{*}$ (see [@Greub-1967], page 191). The adjoint representation $ad$ gives rise to $$\bigvee^{k}ad^{\natural}:A\rightarrow \cD(\bigvee^{k}L^{*})$$ such that $$\begin{array}{l} <\bigvee^{k}ad^{\natural}(\xi)(\varphi), \nu^{1}\vee\nu^{2}\vee...\vee\nu^{k}> = \\ = a(\xi)(<\varphi,\nu^{1}\vee\nu^{2}\vee... \vee\nu^{k}>)-\sum\limits_{i=1}^{k}<\varphi,\nu^{1}\vee... \vee\{\xi,\nu^{i}\}\vee...\vee\nu^{k}> \end{array}$$ for $\xi\in\Gamma(A;M),\varphi\in\Gamma(\bigvee^{k}L^{*};M), \nu^{i}\in\Gamma(L;M)$. Here we consider only the vector bundle structure of $L$, that is, the commutative Lie algebra structure. Hence we use the notation $\D(L)$ and $\D(\bigvee^{k}L^{*})$. (see [@Kub-91e], Definition 2.3.1) Given an arbitrary transitive Lie algebroid $ 0\xrightarrow{} L\xrightarrow{} A\xrightarrow{a} TM\xrightarrow{} 0$, let $L^{*}$ be the dual bundle of $L$. A section $\varphi\in\Gamma(\bigvee^{k}L^{*};M)$ is called $\bigvee^{k}ad^{\natural}$-invariant if $\bigvee^{k}ad^{\natural}(\xi)(\varphi)=0$ for all $\xi\in\Gamma(A;M)$. The space of all $\bigvee^{k}ad^{\natural}$-invariant sections of $\bigvee^{k}L^{*}$ is denoted by $\Gamma^{I}(\bigvee^{k}L^{*};M)$. (see [@Kub-91e], page 29) Given a transitive Lie algebroid $(A,q,M,\{,\},a)$ with adjoint bundle $L$, let $\lambda:TM\rightarrow A$ be a splitting and let $R^{\lambda}\in\Omega^{2}(M;L)$ be the curvature tensor, $R^{\lambda}(X,Y)=\{\lambda(X),\lambda(Y)\}-\lambda([X,Y])$. Define a homomorphism of $C^{\infty}(M)$-modules $$\chi_{(A,\lambda),I}:\Gamma^{I}(\bigvee^{k}L^{*};M)\rightarrow \Omega^{2k}(M)$$ by the formula $$\chi_{(A,\lambda),I}(\varphi)=\frac{1}{k!}<\varphi,R_{\lambda}\vee R_{\lambda}\vee...\vee R_{\lambda}>$$ for $\varphi\in\Gamma^{I}(\bigvee^{k}L^{*};M)$.
Here $$\begin{array}{l} <\varphi,R_{\lambda}\vee...\vee R_{\lambda}>(X_{1},X_{2},...,X_{2k})=\\\\ = <\varphi,\frac{1}{2^{k}} \sum\limits_{\sigma}(-1)^{\sigma} R_{\lambda}(X_{\sigma(1)},X_{\sigma(2)})\vee R_{\lambda}(X_{\sigma(3)},X_{\sigma(4)})\vee...\vee R_{\lambda}(X_{\sigma(2k-1)},X_{\sigma(2k)})> \end{array}$$ The forms in the image of $\chi_{(A,\lambda),I}$ are closed (see [@Kub-91e], Proposition 4.1.2). Then the Chern-Weil homomorphism is defined by the composition $$h_{(A,\lambda)}:\bigoplus\limits_{k\geq 0}\Gamma^{I}(\bigvee^{k}L^{*};M)\xrightarrow{\chi_{(A,\lambda),I}} \kernel d^{\nabla^{\lambda}} \xrightarrow{i} H^{*}_{DRam}(M;R).$$ The Chern-Weil homomorphism has the functorial property and is independent of the choice of splitting (see [@Kub-91e], Theorem 4.2.2, Theorem 4.3.7). Hence $h_{(A,\lambda)}$ can be denoted by $$h_{A}:\bigoplus\limits_{k\geq 0}\Gamma^{I}(\bigvee^{k}L^{*};M) \rightarrow H^{*}_{DRam}(M;R).$$ Example ------- The following example shows that the Chern-Weil homomorphism does not cover all categorical characteristic classes. Consider a flat $1$-dimensional vector bundle $E$ over the torus $T^{2}=S^{1}\times S^{1}$. We consider $E$ as a Lie algebra bundle with commutative Lie algebra $\h\approx \R^{1}$. The structural group of the bundle $E$ is the group $R^{*}=R\setminus\{0\}$ with discrete topology. The flat structure on $E$ is defined by an atlas of charts $\{U_{\alpha}\}$ with a trivialization of the bundle $E$ on each chart $U_{\alpha}$ such that all transition functions are locally constant. The transition functions are fully determined by a representation of the fundamental group $\pi_{1}(T^{2})$ in the structural group $\Aut(\h)_{d}$, $\rho:\pi_{1}(T^{2})\rightarrow \Aut(\h)_{d}$. There is a flat connection $\nabla$ on $E\rightarrow T^{2}$ which corresponds to the flat structure on $E$.
This means that the connection on each chart $U_{\alpha}$ (after trivialization of the bundle $E$) coincides with the usual derivative ($\nabla_{X}=\frac{\partial}{\partial X}$). Construct a Lie algebroid $\mathcal{A}$: $$0\rightarrow E\rightarrow T(T^{2})\bigoplus E\rightarrow T(T^{2})\rightarrow 0$$ with bracket $$\{(X,\mu),(Y,\nu)\}=([X,Y],\nabla_{X}(\nu)-\nabla_{Y}(\mu)+\Omega(X,Y))$$ for $(X,\mu),(Y,\nu)\in\Gamma(T(T^{2})\bigoplus E;T^{2})$. Here $\Omega\in\kernel d^{\nabla}\subset\Omega^{2}(T^{2},E)$. Let $E^{*}$ be the bundle dual to $E$ and let $f\in\Gamma^{I}(E^{*};T^{2})$. Then $$\label{invsection} \begin{array}{l} ad^{\natural}((X,\nu))(f)(\mu)=X(f(\mu))-f(\{X\oplus\nu,0\oplus\mu\})= \\ =X(f(\mu))-f(\nabla_{X}(\mu))=0 \end{array}$$ for arbitrary $\mu\in \Gamma(E;T^{2}),\quad (X,\nu)\in\Gamma(T(T^{2})\oplus E;T^{2})$. Hence locally on each chart $U_{\alpha}$ the function $f$ is constant. This means that in the case of a nontrivial representation the space $\Gamma^{I}(E^{*};T^{2})$ contains only the trivial element. Thus the characteristic classes of $\mathcal{A}$ defined by the Chern-Weil homomorphism of J. Kubarski are trivial. On the other hand, the characteristic classes given by Definition \[CharClass\] are not trivial. Namely, the structural group $\Aut(\h)_{d}$ is isomorphic to $R\setminus \{0\}\approx\mathbb{Z}_{2}\times R$. Hence the classifying space for vector bundles with discrete structural group $\Aut(\h)_{d}$ is $B_{\mathbb{Z}_{2}}\times B_{R}$. We have $B_{\mathbb{Z}_{2}}\sim \mathbb{R}\mathbb{P}^{\infty}$. The group $R$ is a direct sum $R\approx\bigoplus_{\alpha\in A}\mathbb{Q}_{\alpha}$, where each group $\mathbb{Q}_{\alpha}$ is isomorphic to the rational numbers, $\mathbb{Q}_{\alpha}\approx\mathbb{Q}$. The group $\mathbb{Q}$ is isomorphic to the direct limit $\mathbb{Q}=\lim\limits_{\rightarrow}\left(\mathbb{Z}_{n}, \omega_{n}\right)$, where all $\mathbb{Z}_{n}$ are isomorphic to $\mathbb{Z}$, and $\omega_{n}:\mathbb{Z}_{n}\rightarrow \mathbb{Z}_{n+1}$, $\omega_{n}(k)=(n+1)k$.
Thus the classifying space $B_{R}$ can be represented as a direct limit $$B_{R}=\lim\limits_{\stackrel{\rightarrow}{b\subset B}}\mathbb{T}_{b},$$ where each $b\in B$ is a finite collection of indices $$b=\{\alpha_{1},n_{1},\alpha_{2},n_{2},\dots, \alpha_{k}, n_{k}\}, \alpha_{j}\in A, n_{j}\in\mathbb{Z},$$ ordered in the natural way, and $\mathbb{T}_{b}=\prod\limits^{k}_{j=1}S^{1}_{\alpha_{j},n_{j}}\approx\mathbb{T}^{k}.$ The cohomology group $H^{*}(B_{\Aut(\h)_{d}};R)$ can be described in the following way: $$H^{*}(B_{\mathbb{Z}_{2}};R)\approx R;$$ $$H^{*}(B_{R};R)\approx \lim\limits_{\stackrel{\leftarrow}{b\subset B}} H^{*}(\mathbb{T}_{b};R).$$ The representation $\rho:\pi_{1}(T^{2})\rightarrow \Aut(\h)_{d}$ induces a map $$B_{\rho}:T^{2}\rightarrow B_{\mathbb{Z}_{2}}\times B_{R}$$ and a homomorphism in cohomology $$B_{\rho}^{*}:H^{*}(B_{\mathbb{Z}_{2}}\times B_{R};R)\rightarrow H^{*}(T^{2};R).$$ The homomorphism $B_{\rho}^{*}$ is surjective. This example shows that the Chern-Weil homomorphism cannot produce all characteristic classes of a transitive Lie algebroid. It also shows that there is a natural problem: to generalize the Chern-Weil construction to cohomology with coefficients in the nontrivial flat bundle $ZL$, so that the resulting cohomology contains these characteristic classes. This work is partly supported by the scientific program for the Chief International Academic Adviser of the Harbin Institute of Technology (2011-2014) (China) and by the Russian Foundation for Basic Research, grant No. 11-01-00057-a. [ccccc]{} (2005) (2005) , [*The Chern-Weil homomorphism of regular Lie algebroids*]{}, [Publications du Department de Mathematiques, Universite Claude Bernard - Lyon-1]{}, [(1991) pp.4–63]{} , [arXiv:1006.4839v1 \[math.AT\], 2010]{} , [arXiv:1111.6823v1 \[math.AT\]]{}, [2011.]{} , [*Fiber Bundle*]{}, [Springer-Verlag]{}, [(1993).]{} , [*Multilinear Algebra*]{}, [Springer-Verlag Berlin Heidelberg New York]{}, [(1967)]{}
--- author: - | Florin Panaite\ Institute of Mathematics of the Romanian Academy\ P. O. Box 1-764, RO-70700, Bucharest, Romania\ e-mail: fpanaite@stoilow.imar.ro - | Dragoş Ştefan\ Faculty of Mathematics, University of Bucharest\ Str. Academiei 14, RO-70109, Bucharest 1, Romania\ e-mail: dstefan@al.math.unibuc.ro title: 'Deformation cohomology for Yetter-Drinfel’d modules and Hopf (bi)modules ' --- Introduction ============ ${\;\;\;}$If $A$ is a bialgebra over a field $k$, a left-right Yetter-Drinfel’d module over $A$ is a $k$-linear space $M$ which is a left $A$-module, a right $A$-comodule and such that a certain compatibility condition between these two structures holds. Yetter-Drinfel’d modules were introduced by D. Yetter in [@y] under the name of “crossed bimodules” (they are called “quantum Yang-Baxter modules” in [@lr]; the present name is taken from [@rt]). If $A$ is a finite dimensional Hopf algebra then the category of left-right Yetter-Drinfel’d modules is equivalent to the category of left modules over $D(A)$, the Drinfel’d double of $A$ (see [@maj], [@rad]), even as braided tensor categories, and also to the category of Hopf bimodules over $A$ (see [@ad], [@ro], [@sch], [@wo]). An important class of examples occurs as follows: if $M$ is a finite dimensional vector space and $R\in End(M\otimes M)$ is a solution to the quantum Yang-Baxter equation, then the so-called “FRT construction” (see for instance [@frt]) associates to $R$ a certain bialgebra $A(R)$, and $M$ becomes a left-right Yetter-Drinfel’d module over $A(R)$ (see [@lr], [@rad]).\ ${\;\;\;}$In this paper we introduce a cohomology theory for left-right Yetter-Drinfel’d modules. If $A$ is a bialgebra and $M,N$ are left-right Yetter-Drinfel’d modules over $A$, we construct a double complex $\{Y^{n,p}(M,N)\}$ whose total cohomology is the desired cohomology $H^*(M,N)$. For $M=N=k$, this cohomology is just the Gerstenhaber-Schack cohomology of the bialgebra $A$. 
In general, we prove that $H^0(M,N)$ is $Hom(M,N)$ in the category of Yetter-Drinfel’d modules, and $H^1(M,N)$ is isomorphic to the group $Ext^1(M,N)$ of extensions of $M$ by $N$ in the category of Yetter-Drinfel’d modules; in particular, if $A$ is a finite dimensional Hopf algebra, this implies that $H^1(M,N)\simeq Ext^1_{D(A)}(M,N)$, where $D(A)$ is the Drinfel’d double of $A$, and we raise the problem whether $H^n(M,N)\simeq Ext^n_{D(A)}(M,N)$ for any $n\geq 2$.\ ${\;\;\;}$Similarly, we construct a cohomology theory $H^*(M,N)$ for $M,N$ being this time left-right Hopf modules over a bialgebra $A$, as the total cohomology of a certain double complex $\{C^{n,p}(M,N)\}$. We prove that this cohomology vanishes if $M$ and $N$ are of the form $M=V\otimes A$ and $N=W\otimes A$ for some linear spaces $V,W$ (in particular, this cohomology vanishes if the bialgebra $A$ has a skew antipode, since in this case any left-right Hopf module over $A$ is of this form). Finally, motivated by the recent work [@t] of R. Taillefer, we consider the case when $M$ and $N$ are not only left-right Hopf modules, but even Hopf bimodules, and we construct a subbicomplex $\{T^{n,p}(M,N)\}$ of the above $\{C^{n,p}(M,N)\}$, yielding a cohomology theory for Hopf bimodules, similar to the one introduced in [@t]. It is likely that these two cohomologies are isomorphic, at least when $A$ is a finite dimensional Hopf algebra.\ ${\;\;\;}$Let us finally mention that our cohomology theories classify deformations of the corresponding structures, in the sense of Gerstenhaber’s deformation theory (see [@g]).\ ${\;\;\;}$Throughout, $k$ will be a fixed field and all linear spaces, algebras etc. will be over $k$. Unadorned $\otimes $ and $Hom$ are also over $k$. If $V$ is a $k$-linear space and $n$ is a natural number, we denote $V^{\otimes n}$ by $V^n$. 
For bialgebras and Hopf algebras we refer to [@m], [@s]; we shall use Sweedler’s sigma notation $\Delta (a)= \sum a_1\otimes a_2$, $\Delta _2(a)=\sum a_1\otimes a_2\otimes a_3$ etc. Cohomology for Yetter-Drinfel’d modules ======================================= ${\;\;\;}$Let $A$ be a bialgebra with multiplication $\mu $ and comultiplication $\Delta $ and $(M, \omega _M , \rho _M)$ a left-right Yetter-Drinfel’d module over $A$, that is $(M, \omega _M)$ is a left $A$-module (we denote by $\omega _M(a\otimes m)=a\cdot m$ the left $A$-module structure of $M$), $(M, \rho _M)$ is a right $A$-comodule (we denote by $\rho _M:M\rightarrow M\otimes A$, $\rho _M(m)=\sum m_0\otimes m_1$ the comodule structure of $M$), and the following compatibility condition holds: $$\sum (a_2\cdot m)_0\otimes (a_2\cdot m)_1a_1=\sum a_1\cdot m_0\otimes a_2m_1$$ for all $a\in A, m\in M$. Let also $(N, \omega _N, \rho _N)$ be another left-right Yetter-Drinfel’d module, with the same kind of notation.\ ${\;\;\;}$For any natural numbers $n,p\geq 0$, we denote $$Y^{n, p}(M,N)=Hom(A^n\otimes M, N\otimes A^p)$$ If $f\in Y^{n,p}(M,N)$, $a^1, a^2,...,a^n\in A$, $m\in M$, we shall denote $$f(a^1\otimes ...\otimes a^n\otimes m)=\sum f(a^1\otimes ...\otimes a^n \otimes m)^0\otimes$$ $$\otimes f(a^1\otimes ...\otimes a^n\otimes m)^1\otimes ...\otimes f(a^1\otimes ...\otimes a^n\otimes m)^p$$ For any $n,p\geq 0$ and for any $i=0,1,...,n+1$, define $b_i^{n,p}: Y^{n,p}(M,N)\rightarrow Y^{n+1, p}(M,N)$, by:\ $\bullet \;\;b_0^{n,p}(f)(a^1\otimes ...\otimes a^{n+1} \otimes m)=\sum (a^1)_1\cdot f(a^2\otimes ...\otimes a^{n+1}\otimes m)^0\otimes $ $$\otimes (a^1)_2f(a^2\otimes ...\otimes a^{n+1}\otimes m)^1 \otimes ...\otimes (a^1)_{p+1}f(a^2\otimes ...\otimes a^{n+1}\otimes m)^p$$ $\bullet \;\;b_i^{n, p}(f)(a^1\otimes ...\otimes a^{n+1}\otimes m)= f(a^1\otimes ...\otimes a^ia^{i+1}\otimes ...\otimes a^{n+1}\otimes m)\\[2mm]$ for all $1\leq i\leq n$\ $\bullet \;\;b_{n+1}^{n, p}(f)(a^1\otimes ...\otimes 
a^{n+1}\otimes m)=\sum f(a^1\otimes ...\otimes a^n\otimes (a^{n+1})_{p+1}\cdot m)^0\otimes $ $$\otimes f(a^1\otimes ...\otimes a^n\otimes (a^{n+1})_{p+1}\cdot m)^1 (a^{n+1})_1\otimes ...\otimes f(a^1\otimes ...\otimes a^n\otimes (a^{n+1})_{p+1}\cdot m)^p(a^{n+1})_p$$ Define now $$d_m^{n, p}:Y^{n, p}(M,N)\rightarrow Y^{n+1, p}(M,N), \;\;d_m^{n, p}=\sum _{i=0}^{n+1} (-1)^ib_i^{n, p}$$ ${\;\;\;}$Then one can prove, case by case, that for any $0\leq i<j\leq n+2$ the following relation holds: $$b_j^{n+1, p}\circ b_i^{n, p}=b_i^{n+1, p}\circ b_{j-1}^{n, p}$$ and using this relation it follows that $$d_m^{n+1, p}\circ d_m^{n, p}=0$$ for all $n, p\geq 0$.\ ${\;\;\;}$Now, for any $n, p\geq 0$ and for any $i=0,1,...,p+1$, define $c_i^{n, p}:Y^{n, p}(M,N)\rightarrow Y^{n, p+1}(M,N)$, by:\ $\bullet \;\;c_0^{n, p}(f)(a^1\otimes ...\otimes a^n\otimes m)=\sum (f((a^1)_2\otimes ...\otimes (a^n)_2\otimes m)^0)_0\otimes $ $$\otimes (f((a^1)_2\otimes ...\otimes (a^n)_2\otimes m)^0)_1(a^1)_1...(a^n)_1 \otimes$$ $$\otimes f((a^1)_2\otimes ...\otimes (a^n)_2\otimes m)^1\otimes ...\otimes f((a^1)_2\otimes ...\otimes (a^n)_2\otimes m)^p$$ $\bullet \;\;c_i^{n, p}(f)(a^1\otimes ...\otimes a^n\otimes m)=\sum f(a^1\otimes ...\otimes a^n\otimes m)^0\otimes f(a^1\otimes ...\otimes a^n\otimes m)^1 \otimes $ $$\otimes ...\otimes (f(a^1\otimes ...\otimes a^n\otimes m)^i)_1 \otimes (f(a^1\otimes ... 
\otimes a^n\otimes m)^i)_2\otimes ...\otimes$$ $$\otimes f(a^1\otimes ...\otimes a^n\otimes m)^p$$ for all $1\leq i\leq p$\ $\bullet \;\;c_{p+1}^{n, p}(f)(a^1\otimes ...\otimes a^n\otimes m)=\sum f((a^1)_1\otimes ...\otimes (a^n)_1\otimes m_0)\otimes (a^1)_2...(a^n)_2m_1$\ Define $$d_c^{n, p}:Y^{n, p}(M,N)\rightarrow Y^{n, p+1}(M,N), \;\;d_c^{n, p}=\sum _{i=0}^{p+1} (-1)^ic_i^{n, p}$$ ${\;\;\;}$Then one can prove, case by case, that for any $0\leq i<j\leq p+2$ the following relation holds: $$c_j^{n, p+1}\circ c_i^{n, p}=c_i^{n, p+1}\circ c_{j-1}^{n, p}$$ and from this relation it follows that $$d_c^{n, p+1}\circ d_c^{n, p}=0$$ for all $n, p\geq 0$. Also, one can prove, case by case, that for any $0\leq i\leq n+1$ and $0\leq j\leq p+1$, the following relation holds: $$c_j^{n+1, p}\circ b_i^{n, p}=b_i^{n, p+1}\circ c_j^{n, p}$$ Note that for the cases $j=p+1, i=n+1$ and $j=0, i=0$ one has to use the Yetter-Drinfel’d module condition (1), and these are the only two places where this condition is used.\ ${\;\;\;}$Using this relation, it follows immediately that $$d_c^{n+1, p}\circ d_m^{n, p}=d_m^{n, p+1}\circ d_c^{n, p}$$ for all $n, p\geq 0$.\ ${\;\;\;}$In conclusion, $(Y^{n, p}(M,N), d_m^{n, p}, d_c^{n, p})$ is a double complex. We shall denote by $H^n(M, N)$, for $n\geq 0$, the cohomology of the total complex associated to this double complex.\ ${\;\;\;}$It is easy to see that $H^0(M,N)$ is the set of morphisms from $M$ to $N$ in the category of Yetter-Drinfel’d modules. 
Also, it is easy to see that $H^1(M,N)=Z^1(M,N)/B^1(M,N)$, where $Z^1(M,N)$ is the set of all pairs $(\omega ', \rho ')\in Hom (A\otimes M, N)\oplus Hom (M, N\otimes A)$ that satisfy the relations:\ $\bullet \;\;$ $\omega '\circ (id \otimes \omega _M)+ \omega _N\circ (id \otimes \omega ')=\omega '\circ (\mu \otimes id)$\ $\bullet \;\;$ $(\rho '\otimes id)\circ \rho _M+ (\rho _N\otimes id)\circ \rho '= (id\otimes \Delta )\circ \rho '$\ $\bullet \;\;$ $(\omega '\otimes \mu)\circ (id \otimes \tau _M\otimes id)\circ (\Delta \otimes \rho _M)+(\omega _N\otimes \mu)\circ (id\otimes \tau _N \otimes id) \circ (\Delta \otimes \rho ')=(id\otimes \mu )\circ (\rho '\otimes id)\circ \tau _M\circ (id \otimes \omega _M)\circ (\Delta \otimes id)+ (id \otimes \mu )\circ (\rho _N\otimes id)\circ \tau _N\circ (id\otimes \omega ') \circ (\Delta \otimes id)$\ where $\tau _M:A\otimes M\rightarrow M\otimes A$, $\tau _M(a\otimes m)=m\otimes a$ (and similarly for $\tau _N$), and $B^1(M, N)$ is the set of all pairs $(d_m(f), d_c(f))\in Hom (A\otimes M, N)\oplus Hom (M, N\otimes A)$, with $f\in Hom (M, N)$, and\ $\bullet \;\;$ $d_m(f)=\omega _N\circ (id \otimes f)-f\circ \omega _M$\ $\bullet \;\;$ $d_c(f)=\rho _N\circ f-(f\otimes id)\circ \rho _M$\ ${\;\;\;}$Now, the category of Yetter-Drinfel’d modules is abelian, so we can consider the abelian group $Ext^1(M,N)$, which is the set of equivalence classes of extensions of $M$ by $N$ in the category of Yetter-Drinfel’d modules, the group law being the Baer sum (see [@w], pp. 
78-79).\ ${\;\;\;}$If $(\omega ',\rho ')\in Hom(A\otimes M,N)\oplus Hom(M,N\otimes A)$, denote by $N\oplus _{(\omega ',\rho ')}M$ the $k$-linear space $N\oplus M$, endowed with a left multiplication and a right comultiplication, as follows:\ $\bullet \;\;$ $a\cdot (n,m)=(a\cdot n+\omega '(a\otimes m), a\cdot m)$\ $\bullet \;\;$ $\lambda :N\oplus M\rightarrow (N\oplus M)\otimes A\simeq (N\otimes A)\oplus (M\otimes A)$ $$\lambda ((n,m))=\rho _N(n)+\rho '(m)+\rho _M(m)$$ ${\;\;\;}$Then one can check, by a direct computation, that $N\oplus _{(\omega ',\rho ')}M$ with these structures is a left-right Yetter-Drinfel’d module if and only if the pair $(\omega ',\rho ')$ belongs to $Z^1(M,N)$. Moreover, the sequence $$0\rightarrow N\rightarrow N\oplus _{(\omega ',\rho ')}M\rightarrow M \rightarrow 0$$ is an extension of $M$ by $N$ in the category of Yetter-Drinfel’d modules, and any extension of $M$ by $N$ is equivalent to one of this form. If $(\omega ',\rho '), (\omega '',\rho '')\in Z^1(M,N)$, then one can also check that the two extensions determined by these pairs are equivalent if and only if $(\omega ',\rho ')-(\omega '',\rho '')\in B^1(M,N)$. So, we have a bijection $$H^1(M,N)\simeq Ext^1(M,N)$$ and one can prove that this is actually a group isomorphism. In conclusion, we have the following The groups $H^1(M,N)$ and $Ext^1(M,N)$ are isomorphic. ${\;\;\;}$In particular, if $A$ is a finite dimensional Hopf algebra, it is well-known that the category of left-right Yetter-Drinfel’d modules over $A$ is isomorphic to the category of left modules over the Drinfel’d double of $A$ (see [@maj], [@rad]), so in this case we have a group isomorphism $H^1(M,N)\simeq Ext^1_{D(A)}(M,N)$. It is natural to ask the following\ [*Question:*]{} Is it true that $H^n(M,N)\simeq Ext^n_{D(A)}(M,N)$ for any $n\geq 2$?\ [ *Let $M=N=k$ with trivial Yetter-Drinfel’d module structure over the bialgebra $A$. 
In this case the double complex $(Y^{n,p}(M,N), d_m^{n,p}, d_c^{n,p})$ becomes:\ $\bullet \;\;$ $Y^{n,p}(k,k)=Hom (A^n, A^p)$ for all $n,p\geq 0$\ $\bullet \;\;$ $b_0^{n,p}(f)(a^1\otimes ...\otimes a^{n+1})=a^1\cdot f(a^2\otimes ...\otimes a^{n+1})$\ $\bullet \;\;$ $b_i^{n,p}(f)(a^1\otimes ...\otimes a^{n+1})= f(a^1\otimes ...\otimes a^ia^{i+1}\otimes ...\otimes a^{n+1})$\ for all $1\leq i\leq n$\ $\bullet \;\;$ $b_{n+1}^{n,p}(f)(a^1\otimes ...\otimes a^{n+1})= f(a^1\otimes ...\otimes a^n)\cdot a^{n+1}$\ where the dots represent the canonical (diagonal) left and right $A$-module structures on $A^p$\ $\bullet \;\;$ $c_0^{n,p}(f)(a^1\otimes ...\otimes a^n)=\sum (a^1)_1... (a^n)_1\otimes f((a^1)_2\otimes ...\otimes (a^n)_2)$\ $\bullet \;\;$ $c_i^{n,p}(f)(a^1\otimes ...\otimes a^n)=(id\otimes id \otimes...\otimes \Delta \otimes id\otimes...\otimes id)(f(a^1\otimes ... \otimes a^n))$\ for all $1\leq i\leq p$, where $\Delta $ is applied in the $i^{th}$ position\ $\bullet \;\;$ $c_{p+1}^{n,p}(f)(a^1\otimes ...\otimes a^n)=\sum f((a^1)_1\otimes ...\otimes (a^n)_1)\otimes (a^1)_2...(a^n)_2$\ and this is the double complex that gives the Gerstenhaber-Schack cohomology $H^*_b(A,A)$ of the bialgebra $A$ (see [@gs], [@pw]).*]{} [*A positive answer to the above question would imply that for a finite dimensional Hopf algebra $A$ we have $H^*_b(A,A)\simeq Ext^*_{D(A)}(k,k)$, and this would give another proof for the vanishing of the “hat” Gerstenhaber-Schack cohomology of a semisimple cosemisimple Hopf algebra (let us note that the original proof in [@st] also uses the Drinfel’d double of $A$).* ]{} [*If $A$ is finite dimensional and the field $k$ is algebraically closed, there exists a geometric interpretation of $H^1(M,M)$ for any finite dimensional Yetter-Drinfel’d module $M$ (actually, it is this geometric approach that suggested how to define $H^1(M,M)$), similar to the one for the “hat” Gerstenhaber-Schack cohomology given in [@st], that we shall now briefly describe (the 
proofs are similar to the ones in [@st]). Let $M$ be a finite dimensional $k$-linear space, consider the affine algebraic variety $\cal A$$=Hom(A\otimes M, M)\times Hom(M,M\otimes A)$, and define $\cal {YD}$$(M)$ to be the set of all pairs $(\omega , \rho )\in \cal A$ such that $(M,\omega ,\rho )$ is a left-right Yetter-Drinfel’d module over $A$. Since the Yetter-Drinfel’d conditions are polynomial, $\cal {YD}$$(M)$ is a subvariety of $\cal A$. Then, if we take a Yetter-Drinfel’d module $(M,\omega, \rho )$ (that is, a point $(\omega , \rho )\in \cal {YD}$$(M)$), one can prove that the tangent space $T_{(\omega ,\rho )}(\cal {YD}$$(M))$ is contained in $Z^1(M,M)$.\ ${\;\;\;}$Now, the algebraic group $G=GL(M)$ acts on $\cal {YD}$$(M)$ by transport of structures, that is $g\cdot (\omega ,\rho )=(\omega ^g,\rho ^g)$, where $\omega ^g=g\circ \omega \circ (id_A\otimes g^{-1})$ and $\rho ^g= (g\otimes id_A)\circ \rho \circ g^{-1}$ (and obviously there is a bijection between orbits and isomorphism classes of Yetter-Drinfel’d module structures on $M$). If we fix $(\omega , \rho )\in \cal {YD}$$(M)$ and we denote by $\overline {Orb_{(\omega ,\rho )}}$ the closure of the orbit through $(\omega , \rho )$, then one can prove that $B^1(M,M)$ is contained in the tangent space $T_{(\omega , \rho )}(\overline {Orb_{(\omega ,\rho )}})$.*]{} Cohomology for Hopf modules =========================== ${\;\;\;}$Let $A$ be a bialgebra with multiplication $\mu $ and comultiplication $\Delta $ and $(M, \omega _M, \rho _M)$ a left-right Hopf module over $A$, that is $(M, \omega _M)$ is a left $A$-module (with notation $\omega _M(a\otimes m)= a\cdot m$), $(M, \rho _M)$ is a right $A$-comodule (with notation $\rho _M:M\rightarrow M\otimes A$, $\rho _M(m)=\sum m_0\otimes m_1$) and the following compatibility condition holds: $$\sum (a\cdot m)_0\otimes (a\cdot m)_1=\sum a_1\cdot m_0\otimes a_2m_1$$ for all $a\in A, m\in M$. Let $(N, \omega _N,\rho _N)$ be another left-right Hopf module over $A$. 
We denote by $C^{n, p}(M,N)=Hom (A^n\otimes M, N\otimes A^p)$ and for $f\in C^{n,p}(M,N)$ we use the same notation as in the previous section for $f(a^1\otimes ...\otimes a^n\otimes m)$. For any $i=0, 1,...,n+1$, define $b_i^{n, p}:C^{n, p}(M,N) \rightarrow C^{n+1, p}(M,N)$, by:\ $\bullet \;\;b_0^{n, p}(f)(a^1\otimes ...\otimes a^{n+1}\otimes m)=\sum (a^1)_1\cdot f(a^2\otimes ...\otimes a^{n+1}\otimes m)^0\otimes $ $$\otimes (a^1)_2f(a^2\otimes ...\otimes a^{n+1}\otimes m)^1\otimes ... \otimes (a^1)_{p+1}f(a^2\otimes ...\otimes a^{n+1}\otimes m)^p$$ $\bullet \;\;b_i^{n, p}(f)(a^1\otimes ...\otimes a^{n+1}\otimes m)= f(a^1\otimes ...\otimes a^ia^{i+1}\otimes ...\otimes a^{n+1}\otimes m)$\ for all $1\leq i\leq n$\ $\bullet \;\;b_{n+1}^{n, p}(f)(a^1\otimes ...\otimes a^{n+1}\otimes m)= f(a^1\otimes ...\otimes a^n\otimes a^{n+1}\cdot m)$\ ${\;\;\;}$For any $i=0, 1, ..., p+1$, define $c_i^{n, p}: C^{n,p}(M,N)\rightarrow C^{n, p+1}(M,N)$, by:\ $\bullet \;\;c_0^{n, p}(f)(a^1\otimes ...\otimes a^n\otimes m)=\sum (f(a^1\otimes ...\otimes a^n\otimes m)^0)_0\otimes (f(a^1\otimes ...\otimes a^n\otimes m)^0)_1\otimes $ $$\otimes f(a^1\otimes ...\otimes a^n\otimes m)^1\otimes ...\otimes f(a^1\otimes ...\otimes a^n\otimes m)^p$$ $\bullet \;\;c_i^{n, p}(f)(a^1\otimes ...\otimes a^n\otimes m)=\sum f(a^1\otimes ...\otimes a^n\otimes m)^0\otimes f(a^1\otimes ...\otimes a^n \otimes m)^1\otimes $ $$\otimes ...\otimes (f(a^1\otimes ...\otimes a^n\otimes m)^i)_1\otimes (f(a^1\otimes ...\otimes a^n\otimes m)^i)_2\otimes ...\otimes f(a^1\otimes ...\otimes a^n\otimes m)^p$$ for all $1\leq i\leq p$\ $\bullet \;\;c_{p+1}^{n, p}(f)(a^1\otimes ...\otimes a^n\otimes m)=\sum f((a^1)_1\otimes ...\otimes (a^n)_1\otimes m_0)\otimes (a^1)_2(a^2)_2... (a^n)_2m_1$\ ${\;\;\;}$Then one can prove, as in the previous section, that $(C^{n, p}(M,N), d_m^{n, p}, d_c^{n, p})$ is a double complex, where $d_m^{n, p}$ and $d_c^{n, p}$ are defined by the same formulae as in the previous section. 
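For completeness, here is the standard cancellation argument (the same in both sections) deducing $d_m^{n+1,p}\circ d_m^{n,p}=0$ from the relation $b_j^{n+1,p}\circ b_i^{n,p}=b_i^{n+1,p}\circ b_{j-1}^{n,p}$, valid for $0\leq i<j\leq n+2$:

```latex
d_m^{n+1,p}\circ d_m^{n,p}
  =\sum_{j=0}^{n+2}\sum_{i=0}^{n+1}(-1)^{i+j}\,b_j^{n+1,p}\circ b_i^{n,p}
  =\sum_{j\leq i}(-1)^{i+j}\,b_j\circ b_i
   +\sum_{i<j}(-1)^{i+j}\,b_j\circ b_i .
```

In the second sum replace $b_j\circ b_i$ by $b_i\circ b_{j-1}$ and substitute $j'=j-1\geq i$; it becomes $-\sum_{i\leq j'}(-1)^{i+j'}\,b_i\circ b_{j'}$, which cancels the first sum term by term after renaming the indices. The identity $d_c^{n,p+1}\circ d_c^{n,p}=0$ follows in exactly the same way from the corresponding relation for the $c_i$'s.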
We shall denote by $H^n(M, N)$, for $n\geq 0$, the cohomology of the total complex associated to this double complex.\ ${\;\;\;}$It is easy to see that $H^0(M,N)$ is the set of morphisms from $M$ to $N$ in the category of left-right Hopf modules over $A$. Also, it is easy to see that $H^1(M, N)=Z^1(M, N)/B^1(M, N)$, where $Z^1(M, N)$ is the set of all pairs $(\omega ', \rho ')\in Hom(A\otimes M, N) \oplus Hom(M, N\otimes A)$ that satisfy the relations:\ $\bullet \;\;\omega '\circ (id\otimes \omega _M)+\omega _N\circ (id\otimes \omega ')=\omega '\circ (\mu \otimes id)$\ $\bullet \;\;(\rho '\otimes id)\circ \rho _M+(\rho _N\otimes id)\circ \rho '= (id\otimes \Delta )\circ \rho '$\ $\bullet \;\;(\omega '\otimes \mu)\circ (id\otimes \tau _M\otimes id)\circ (\Delta \otimes \rho _M)+(\omega _N\otimes \mu)\circ (id\otimes \tau _N \otimes id) \circ (\Delta \otimes \rho ')=\rho '\circ \omega _M+ \rho _N\circ \omega '$\ where $\tau _M:A\otimes M\rightarrow M\otimes A$, $\tau _M(a\otimes m)= m\otimes a$ (and similarly for $\tau _N$), and $B^1(M, N)$ is the set of all pairs $(d_m(f), d_c(f))\in Hom(A\otimes M, N) \oplus Hom(M, N\otimes A)$, with $f\in Hom (M, N)$ and\ $\bullet \;\;d_m(f)=\omega _N\circ (id\otimes f)-f\circ \omega _M$\ $\bullet \;\;d_c(f)=\rho _N\circ f-(f\otimes id)\circ \rho _M$\ ${\;\;\;}$As in the previous section, one can prove that $H^1(M,N)$ is isomorphic to the group $Ext^1(M,N)$ of extensions of $M$ by $N$ in the category of left-right Hopf modules.\ ${\;\;\;}$Let $A$ be a bialgebra and $V$ a $k$-linear space. Then $M=V\otimes A$ becomes a left-right Hopf module over $A$, with module structure $a\cdot (v\otimes b)=v\otimes ab$ for all $a, b\in A, v\in V$, and with comodule structure $\rho :V\otimes A\rightarrow V\otimes A\otimes A$, $\rho (v\otimes a)=\sum v\otimes a_1\otimes a_2$. Let also $W$ be a $k$-linear space and consider the left-right Hopf module $N=W\otimes A$. 
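That $M=V\otimes A$ with these structures is indeed a left-right Hopf module is a one-line check, using that $\Delta$ is an algebra map (here $(v\otimes b)_0\otimes(v\otimes b)_1=\sum (v\otimes b_1)\otimes b_2$ denotes the comodule structure just defined):

```latex
\rho\bigl(a\cdot(v\otimes b)\bigr)=\rho(v\otimes ab)
  =\sum v\otimes (ab)_1\otimes (ab)_2
  =\sum v\otimes a_1b_1\otimes a_2b_2
  =\sum a_1\cdot(v\otimes b)_0\otimes a_2\,(v\otimes b)_1 ,
```

which is precisely the compatibility condition $\sum (a\cdot m)_0\otimes (a\cdot m)_1=\sum a_1\cdot m_0\otimes a_2m_1$.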
One can check that in this case all the rows and the columns of the double complex corresponding to $M$ and $N$ are acyclic. Indeed, if $g\in Ker (d_m^{n+1, p})$, then $g=d_m^{n, p}(f)$, where $f:A^n\otimes V\otimes A\rightarrow W\otimes A\otimes A^p$, $$f(a^1\otimes ...\otimes a^n\otimes v\otimes a)=(-1)^{n+1} g(a^1\otimes ...\otimes a^n\otimes a\otimes v\otimes 1)$$ and if $g\in Ker (d_c^{n, p+1})$, then $g=d_c^{n, p}(f)$, where $f:A^n\otimes V\otimes A\rightarrow W\otimes A\otimes A^p$, $$f(a^1\otimes ...\otimes a^n\otimes v\otimes a)=(id_W\otimes \varepsilon \otimes id_A^{p+1})(g(a^1\otimes ...\otimes a^n\otimes v\otimes a))$$ ${\;\;\;}$Since the rows of the double complex $\{C^{n,p}(M,N)\}$ are acyclic, if we consider the double complex $\{D^{n,p}\}$ obtained by adding to $\{C^{n,p}(M,N)\}$ one more column consisting of $\{Ker (d_m^{0,p})\}$ for all $p\geq 0$, then by the “Acyclic Assembly Lemma” (see [@w], p. 59) the total complex associated to $\{D^{n,p}\}$ is acyclic. Then, a long exact sequence argument shows that the total cohomology of $\{C^{n,p}(M,N)\}$ may be computed as the cohomology of the added column, that is $$H^{p+1}(V\otimes A, W\otimes A)=Ker (d_m^{0, p+1})\cap Ker (d_c^{0, p+1})/ d_c^{0, p}(Ker (d_m^{0,p}))$$ $H^{p+1}(V\otimes A, W\otimes A)=0$ for all $p\geq 0$. [**Proof:**]{} Let $g\in Ker (d_m^{0, p+1})\cap Ker (d_c^{0, p+1})$. Since $g\in Ker (d_c^{0, p+1})$, there exists $f\in C^{0,p}(V\otimes A,W\otimes A)$ such that $d_c^{0, p}(f)=g$, namely $f=(id_W\otimes \varepsilon \otimes id_A^{p+1})\circ g$. It will be enough to prove that $f\in Ker (d_m^{0, p})$. Since $g\in Ker (d_m^{0, p+1})$, it follows that $$g(v\otimes ab)=\sum g_W(v\otimes b)\otimes a_1g(v\otimes b)^0\otimes a_2 g(v\otimes b)^1\otimes ...\otimes a_{p+2}g(v\otimes b)^{p+1}$$ for all $a, b\in A$ and $v\in V$, where we denoted $$g(v\otimes b)=\sum g_W(v\otimes b)\otimes g(v\otimes b)^0\otimes ... 
\otimes g(v\otimes b)^{p+1}\in W\otimes A\otimes A^{p+1}$$ By applying $id_W\otimes \varepsilon \otimes id_A^{p+1}$ we obtain $$(id_W\otimes \varepsilon \otimes id_A^{p+1})(g(v\otimes ab))=$$ $$\sum \varepsilon (g(v\otimes b)^0)g_W(v\otimes b)\otimes a_1g(v\otimes b)^1\otimes ...\otimes a_{p+1}g(v\otimes b)^{p+1}$$ that is $$f(v\otimes ab)=\sum a_1f(v\otimes b)^0\otimes a_2f(v\otimes b)^1 \otimes ...\otimes a_{p+1}f(v\otimes b)^p$$ which means that $d_m^{0, p}(f)=0$, q.e.d.\ ${\;\;\;}$Suppose that the bialgebra $A$ has a skew antipode. In this case, it is very well-known that any left-right Hopf module over $A$ is of the form $M=V\otimes A$, for some $k$-linear space $V$ (see, for instance, [@m], p.16). Hence we have obtained the following If $A$ is a bialgebra with a skew antipode (for example, a Hopf algebra with bijective antipode) then for any left-right Hopf modules $M$ and $N$ over $A$ and for any natural number $n\geq 1$ we have $H^n(M, N)=0$. ${\;\;\;}$Let us introduce some notation. If $A$ is a bialgebra, we denote by $mod-A$ the category of right $A$-modules, by $A-comod$ the category of left $A$-comodules, by $A_{lr}^r$ the category whose objects are $k$-linear spaces $M$ which are bimodules, left-right Hopf modules and right-right Hopf modules over $A$, by $A_l^{lr}$ the category whose objects are $k$-linear spaces $M$ which are bicomodules, left-left Hopf modules and left-right Hopf modules over $A$, and finally by $A_{lr}^{lr}$ the category of Hopf bimodules (or two-sided two-cosided Hopf modules in the terminology of [@sch]) over $A$. We shall see now how the above double complex $\{C^{n,p}\}$ yields very naturally some cohomology theories for the categories $A_{lr}^r$, $A_l^{lr}$ and $A_{lr}^{lr}$.\ ${\;\;\;}$Let $M,N\in A_{lr}^r$ and $n,p$ some natural numbers. 
Then $A^n\otimes M$ becomes a right $A$-module with structure $$(a^1\otimes ...\otimes a^n\otimes m)\cdot b=a^1\otimes ...\otimes a^n \otimes m\cdot b$$ and $N\otimes A^p$ becomes a right $A$-module with structure $$(x\otimes a^1\otimes ...\otimes a^p)\cdot b=\sum x\cdot b_1\otimes a^1b_2 \otimes ...\otimes a^pb_{p+1}$$ Now, if we define $R^{n,p}(M,N)=Hom_{mod-A}(A^n\otimes M,N\otimes A^p)$, then one can check, by a direct computation, that $d_m^{n,p}(R^{n,p}(M,N))\subseteq R^{n+1,p}(M,N)$ and $d_c^{n,p}(R^{n,p}(M,N))\subseteq R^{n,p+1}(M,N)$, so that $(R^{n,p}(M,N), d_m^{n,p}, d_c^{n,p})$ is a double complex, giving a cohomology theory for objects in $A_{lr}^r$.\ ${\;\;\;}$Similarly, if $M,N\in A_l^{lr}$ and $n,p$ are natural numbers, then $A^n\otimes M$ becomes a left $A$-comodule, with structure $$A^n\otimes M\rightarrow A\otimes A^n\otimes M$$ $$a^1\otimes ...\otimes a^n\otimes m\mapsto \sum (a^1)_1...(a^n)_1m_{(-1)} \otimes (a^1)_2\otimes ...\otimes (a^n)_2\otimes m_{(0)}$$ where we denoted by $m\mapsto \sum m_{(-1)}\otimes m_{(0)}$ the left $A$-comodule structure of $M$, and $N\otimes A^p$ becomes a left $A$-comodule, with structure $$N\otimes A^p\rightarrow A\otimes N\otimes A^p$$ $$x\otimes a^1\otimes ...\otimes a^p\mapsto \sum x_{(-1)}\otimes x_{(0)} \otimes a^1\otimes ...\otimes a^p$$ If we denote by $L^{n,p}(M,N)=Hom^{A-comod}(A^n\otimes M,N\otimes A^p)$, then one can check also by a direct computation that $d_m^{n,p}(L^{n,p}(M,N))\subseteq L^{n+1,p}(M,N)$ and $d_c^{n,p}(L^{n,p}(M,N))\subseteq L^{n,p+1}(M,N)$, so that $(L^{n,p}(M,N), d_m^{n,p}, d_c^{n,p})$ is a double complex, yielding a cohomology theory for objects in $A_l^{lr}$.\ ${\;\;\;}$Finally, if $M,N\in A_{lr}^{lr}$, then on $A^n\otimes M$ and $N\otimes A^p$ we can introduce all the above right $A$-module and left $A$-comodule structures, so if we denote by $$T^{n,p}(M,N)=Hom_{mod-A}^{A-comod}(A^n\otimes M, N\otimes A^p)$$ then $d_m^{n,p}(T^{n,p}(M,N))\subseteq T^{n+1,p}(M,N)$ and 
$d_c^{n,p}(T^{n,p}(M,N))\subseteq T^{n,p+1}(M,N)$, hence $(T^{n,p}(M,N), d_m^{n,p}, d_c^{n,p})$ is a double complex, which yields a cohomology theory for Hopf bimodules. Note that $\{T^{n,p}(M,N)\}$ is a sort of “mirror” version of the double complex for Hopf bimodules introduced by R. Taillefer in [@t]. It is likely that the cohomologies given by these double complexes are isomorphic, at least when $A$ is a finite dimensional Hopf algebra. [99]{} N. Andruskiewitsch and J. Devoto, Extensions of Hopf algebras, St. Petersburg Math. J. [**7**]{} (1), 17-52 (1996). L. Faddeev, N. Reshetikhin and L. Takhtajan, Quantum groups, in “Braid group, knot theory and statistical mechanics”, C. N. Yang and M. L. Ge (eds.), Adv. Series in Math. Phys. Vol. 9, pp. 97-110, World Scientific, Singapore, 1989. M. Gerstenhaber, On the deformations of rings and algebras, Ann. of Math. [**79**]{}, 1-34 (1964). M. Gerstenhaber and S. D. Schack, Bialgebra cohomology, deformations and quantum groups, Proc. Natl. Acad. Sci. USA [**87**]{}, 478-481 (1990). L. A. Lambe and D. E. Radford, Algebraic aspects of the quantum Yang-Baxter equation, J. Algebra [**154**]{}, 228-288 (1992). S. Majid, Doubles of quasitriangular Hopf algebras, Comm. Algebra [**19**]{} (11), 3061-3073 (1991). S. Montgomery, “Hopf algebras and their actions on rings”, CBMS Reg. Conf. Series [**82**]{}, AMS, Providence, 1993. B. Parshall and J.-P. Wang, On bialgebra cohomology, Bull. Soc. Math. Belg., Ser. A [**42**]{} (3), 607-642 (1990). D. E. Radford, Solutions to the quantum Yang-Baxter equation and the Drinfel’d double, J. Algebra [**161**]{}, 20-32 (1993). D. E. Radford and J. Towber, Yetter-Drinfel’d categories associated to an arbitrary bialgebra, J. Pure Appl. Algebra [**87**]{}, 259-279 (1993). M. Rosso, Groupes quantiques et algebres de battages quantiques, C. R. Acad. Sci. Paris [**320**]{}, 145-148 (1995). P. Schauenburg, Hopf modules and Yetter-Drinfel’d modules, J. Algebra [**169**]{}, 874-890 (1994). M. E. 
Sweedler, “Hopf algebras”, Benjamin, New York, 1969. D. Ştefan, The set of types of $n$-dimensional semisimple and cosemisimple Hopf algebras is finite, J. Algebra [**193**]{}, 571-580 (1997). R. Taillefer, Cohomology theories of Hopf bimodules and cup-product, preprint q-alg/0005019. C. Weibel, “An introduction to homological algebra”, Cambridge University Press, 1994. S. L. Woronowicz, Differential calculus on compact matrix pseudogroups (quantum groups), Comm. Math. Phys. [**122**]{}, 125-170 (1989). D. N. Yetter, Quantum groups and representations of monoidal categories, Math. Proc. Cambridge Philos. Soc. [**108**]{}, 261-290 (1990).
--- abstract: 'In this paper we establish global existence and uniqueness of the solution to the three-dimensional Vlasov-Poisson system in the presence of point charges in the case of repulsive interaction. The present analysis extends an analogous two-dimensional result [@CM].' author: - 'Carlo Marchioro, Evelyne Miot and Mario Pulvirenti' title: 'The Cauchy problem for the 3-D Vlasov-Poisson system with point charges' --- Introduction ============ In this paper we study the time evolution of a three-dimensional system constituted by a continuous distribution of electric charges, a plasma, coupled with $N$ charged point particles. All the charges, as well as the plasma, have the same sign, so that the interaction is repulsive. For simplicity we assume that the charges and the masses of the point particles are unitary. If $f=f(x,v,t)$ denotes the mass distribution of the plasma and $\{\xi_\alpha\}_{\alpha=1}^N$ are the positions of the point particles, the dynamics of the system is described by the following system of equations $$\label{eq:system} \begin{cases} {\displaystyle}\partial_t f+v\cdot \nabla_x f+(E+F)\cdot \nabla_v f=0\\ \vspace*{0.5em} {\displaystyle}E(x,t)=\int_{{\mathbb{R}}^3} \frac{x-y}{|x-y|^3}\rho(y,t)\,dy\\\vspace*{0.5em} {\displaystyle}\rho(x,t)=\int_{{\mathbb{R}}^3} f(x,v,t)\,dv\\\vspace*{0.5em} {\displaystyle}F(x,t)=\sum_{\alpha=1}^N \frac{x-\xi_\alpha(t)}{|x-\xi_\alpha(t)|^3}\\\vspace*{0.5em} {\displaystyle}\dot{\xi}_\alpha(t)=\eta_\alpha(t),\quad \alpha=1,\ldots,N\\ {\displaystyle}\dot{\eta}_\alpha(t)=E(\xi_\alpha(t),t)+\sum_{\substack{\beta=1\\\beta\neq \alpha}}^N \frac{\xi_\alpha(t)-\xi_\beta(t)}{|\xi_\alpha(t)-\xi_\beta(t)|^3}. \end{cases}$$ In the absence of the charges, the system reduces to the well-known Vlasov-Poisson equation, which has been widely investigated in recent years. The difficulty of the Cauchy problem associated to the Vlasov-Poisson equation increases with the dimension of the physical space. 
In two dimensions, satisfactory existence and uniqueness results go back to [@OkUk] and [@Horst]. The three-dimensional problem was solved in the nineties in [@Pf], [@Sh], [@Wo], by a careful analysis of the characteristics associated to the Vlasov-Poisson system (Lagrangian point of view), or by estimating the moments of $f$ via a more genuine PDE technique [@LP] (Eulerian point of view). We refer the reader to the monograph [@G] for a complete analysis of this equation and additional references. When point charges enter the game, the situation changes drastically even if complete repulsivity is assumed. The extra singular field could, in principle, produce extremely large velocities of the plasma particles and, in turn, a large spatial density and a blow-up of the electric field in finite time. However, we know that this is not the case in dimension two. Indeed, in [@CM] a global existence and uniqueness result was proved for the two-dimensional Cauchy problem associated to system . The basic ingredient of the analysis in [@CM] was the introduction of the energy of a trajectory of the plasma in the reference frame of a suitable point charge. The advantage of using this energy function is twofold. On the one hand, it controls the motion. On the other hand, its time derivative along the trajectory does not involve the singular part of the electric field. Combining this idea with the well-known fact that, in dimension two, the electric field generated by the plasma is linearly bounded by the maximal velocity of the particles, one can then conclude (see [@CM] for the details). Our purpose here is to study the more complex three-dimensional problem. Our approach relies heavily on the adaptation of the method in [@Pf], [@Sh] and [@Wo] to the present situation, for which the energy associated to a trajectory (see or ) plays an essential role. We explain the main steps of this adaptation in the next section, after some preliminary considerations have been presented.
We mention that a related issue concerning the Cauchy problem for the Vlasov-Poisson equation, namely the instability arising from a singular perturbation of the field, has been recently analyzed in [@instab], see also [@G1] or [@G2]. In that situation the extra singular field is due to reflecting boundary conditions. The plan of the paper is the following. In Section \[sec:one\] we solve the Cauchy problem for one single charge ($N=1$). This result is well suited to be easily extended to the full problem $N\geq 1$. This extension is done in Section \[sec:many\]. Section \[sec:conclusion\] is finally devoted to comments and criticism. Preliminaries and general strategy ================================== We use this section to fix the notations, to recall some well-known estimates (see e.g. [@Wo], [@G] and references quoted therein) and to illustrate the strategy we follow to treat the present problem. Let $f=f(x,v)$, $(x,v)\in {\mathbb{R}}^3\times {\mathbb{R}}^3$, be a probability density (namely $f\geq 0$ and $\int f\,dx\,dv=1$). We denote by $$\label{eq:kinetic} K_0=\frac{1}{2}\int |v|^2 f(x,v)\,dx\,dv$$ the kinetic energy and by $$\rho(x)=\int f(x,v)\,dv$$ the spatial density. Also, $K$ and $K_i,i=1,\ldots,$ will stand for positive constants depending only on the kinetic energy and on $\|f\|_{L^\infty}$, which we assume to be finite. Moreover $C$ will denote any numerical positive constant. For any $M\geq 0$, we have $$\begin{split} \rho(x)&=\int_{|v|< M} f(x,v)\,dv+\int_{|v|\geq M}f(x,v)\,dv\\ &\leq \frac{4}{3} \pi M^3 \|f\|_{L^\infty}+\frac{1}{M^2}\int |v|^2 f(x,v)\,dv.
\end{split}$$ Optimizing in $M$, we find $$\rho(x)\leq C\|f\|_{L^\infty}^{2/5}\left( \int |v|^2 f(x,v)\,dv\right)^{3/5},$$ whence $$\label{ineq:rho} \|\rho\|_{L^{5/3}}\leq K_1.$$ Moreover, defining for $R>0$ $$\rho_R(x)=\int_{|v|<R} f(x,v)\,dv,$$ we have by the Hölder inequality $$\begin{split} \int \frac{\rho_R(x')}{|x-x'|^2}\,dx'&= \int_{|x-x'|< M}\frac{\rho_R(x')}{|x-x'|^2}\,dx' +\int_{|x-x'|\geq M} \frac{\rho_R(x')}{|x-x'|^2}\,dx'\\ &\leq C \|\rho_R\|_{L^\infty}M+\|\rho_R\|_{L^{5/3}}\left( \int_{|x-x'|\geq M} \frac{dx'}{|x-x'|^5}\right)^{2/5}. \end{split}$$ Optimizing in $M$ and using we obtain $$\label{ineq:rho2} \int \frac{\rho_R(x')}{|x-x'|^2}\,dx'\leq K_2 \|\rho_R\|_{L^\infty}^{4/9}.$$ Summarizing the previous considerations, we are led to the \[prop:prelim1\] There exists a constant $K>0$ (depending only on $\int |v|^2 f\,dx\,dv$ and $\|f\|_{L^\infty}$) for which $$\label{ineq:rho3} \int \frac{\rho_R(x')}{|x-x'|^2}\,dx'\leq K R^{4/3}$$ for all $R>0$. Estimate follows from , realizing that $\|\rho_R\|_{\infty}\leq \frac{4}{3} \pi R^3 \|f\|_{L^\infty}$. We now come to Problem and define the class of solutions we wish to deal with. Let $\{(\xi_{\alpha0},\eta_{\alpha0})\}_{\alpha=1}^N$ denote the initial positions and velocities of the point charges. Let $f_0$ be a compactly supported probability density satisfying, for some positive $\delta_0$, $$\label{assumption:compact} \min \left\{ |x-\xi_{\alpha0}|{\:|\:}\: (x,v)\in {\mathrm{supp}}(f_0), \alpha=1,\ldots,N\right\}\geq \delta_0.$$ Let $T>0$.
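The optimization in $M$ is elementary and can be sanity-checked numerically: minimizing $g(M)=aM^3+b/M^2$ over $M>0$ yields a value proportional to $a^{2/5}b^{3/5}$, with $a$ playing the role of $\|f\|_{L^\infty}$ and $b$ of $\int |v|^2 f\,dv$. A minimal sketch (pure Python; the grid search and sample values are illustrative only):

```python
def min_over_M(a, b):
    # numerically minimize g(M) = a*M**3 + b/M**2 over M > 0 on a geometric grid
    best = float("inf")
    M = 1e-3
    while M < 1e3:
        best = min(best, a * M**3 + b / M**2)
        M *= 1.001
    return best

# the exact minimizer M* = (2b/(3a))**(1/5) gives the value c * a**(2/5) * b**(3/5),
# so the ratio below is the same constant c for every choice of (a, b)
c = (2 / 3) ** (3 / 5) + (3 / 2) ** (2 / 5)
for a, b in [(1.0, 1.0), (0.3, 7.0), (5.0, 0.2)]:
    print(round(min_over_M(a, b) / (a ** 0.4 * b ** 0.6), 3))
```

The same scaling argument, applied a second time to $aM+bM^{-4/5}$, produces the exponent $4/9$ in the next estimate.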
We say that $(\xi_1,\ldots,\xi_N,f)$ is a solution of Problem on $[0,T]$ with initial datum $(\xi_{\alpha0},\eta_{\alpha0},\alpha=1,\ldots,N,f_0)$ if $$\label{assumption:space}\xi_\alpha\in C^2([0,T]),\quad f\in L^\infty\left([0,T],L^1\cap L^\infty\right),\quad \rho \in L^\infty\left([0,T],L^\infty \right)$$ and for $t\in [0,T]$ we have $$\label{eq:edo2} \begin{cases} {\displaystyle}\dot{\xi}_\alpha(t)=\eta_\alpha(t)\\ {\displaystyle}\dot{\eta}_\alpha(t)=E(\xi_\alpha(t),t)+\sum_{\beta\neq \alpha} \frac{\xi_\alpha(t)-\xi_\beta(t)}{|\xi_\alpha(t)-\xi_\beta(t)|^3}\\ {\displaystyle}(\xi_\alpha,\eta_\alpha)(0)=(\xi_{\alpha0},\eta_{\alpha0}). \end{cases}$$ Moreover $$\label{eq:transport} f\left(X(x,v,0,t),V(x,v,0,t),t\right)=f_0(x,v),$$ where for all $\tau,t\in [0,T]$ $$(X,V)(\cdot,\cdot,\tau,t):{\mathbb{R}}^3\setminus\cup_\alpha\{\xi_\alpha(\tau)\}\times {\mathbb{R}}^3\to {\mathbb{R}}^3\setminus\cup_\alpha\{\xi_\alpha(t)\}\times {\mathbb{R}}^3$$ is an invertible flow such that $$\label{assumption:lower} |X(x,v,\tau,t)-\xi_\alpha(t)|\geq \delta(T),\quad \forall(x,v)\in {\mathrm{supp}}(f(\tau))$$ for some $\delta(T)>0$. It satisfies for $t\in [0,T]$ $$\label{eq:edo} \begin{cases} {\displaystyle}\frac{d}{dt}X(x,v,\tau,t)=V(x,v,\tau,t) \\ {\displaystyle}\frac{d}{dt}V(x,v,\tau,t)=E\left(X(x,v,\tau,t),t\right)+ \sum_{\alpha=1}^N\frac{X(x,v,\tau,t)-\xi_\alpha(t)}{|X(x,v,\tau,t)-\xi_\alpha(t)|^3}\\ \left(X,V\right)(x,v,\tau,\tau)=(x,v)\in {\mathrm{supp}}(f(\tau)). \end{cases}$$ Given $\rho\in L^\infty\left([0,T],L^1\cap L^\infty({\mathbb{R}}^3)\right)$ it is a well-known fact (see e.g. [@LP]) that the corresponding field $E$ belongs to $L^\infty\left([0,T]\times {\mathbb{R}}^3\right)$. 
Moreover, it is almost-Lipschitz in the sense that for all $(x,y,t)\in {\mathbb{R}}^3\times {\mathbb{R}}^3\times [0,T]$ $$\label{eq:almost-lipschitz} |E(x,t)-E(y,t)|\leq C|x-y|\left(1+|\ln|x-y||\right).$$ In particular, the solutions of the ODEs and are uniquely defined as long as the distance between the plasma particles and the charges remains positive. Thanks to the Hamiltonian structure of the system, the flow $(X,V)(0,t)$ preserves Lebesgue’s measure on ${\mathbb{R}}^3\times {\mathbb{R}}^3$. As a result, all the norms $\|f(t)\|_{L^p}$, $1\leq p\leq +\infty$ are preserved. Finally, it follows from , and from the fact that $E$ is bounded that the density $f(t)$ remains compactly supported for all $t\in[0,T]$. We define the total energy associated to Problem by $$ \begin{split} H(f)&=\frac{1}{2}\int |v|^2f(x,v)\,dx\,dv+\frac{1}{2}\sum_{\alpha=1}^N |\eta_\alpha|^2\\ &+\sum_{\alpha=1}^N \int \frac{\rho(x)}{|x-\xi_\alpha|}\,dx +\frac{1}{2}\iint \frac{\rho(x)\rho(y)}{|x-y|}\,dx\,dy +\frac{1}{2}\sum_{\alpha\neq \beta} \frac{1}{|\xi_\alpha-\xi_\beta|}. \end{split}$$ One easily checks that, for a solution to Problem , $H(f(t))$ is finite and constant on $[0,T]$. Due to the positivity of the interaction, the kinetic energy of the plasma part is bounded by $H(f(t))\equiv H(f(0))$. Therefore we may assume that the constant $K$ appearing in does not depend on time, thanks to the conservation of the energy and of $\|f\|_{L^\infty}$. Let us come back, for the moment, to the Vlasov-Poisson problem ignoring the point charges. Denoting by $$P(t)=\sup\left\{|v|{\:|\:}\:(x,v)\in {\mathrm{supp}}(f(t))\right\}$$ we conclude by that $$\label{ineq:P1} P(t)\leq P(0)+ K\int_0^t P(s)^{4/3}\,ds.$$ Indeed, the electric field computed on a characteristic $X=X(t)$ is bounded by $$\begin{split} |E(X(t),t)|&\leq \int \frac{\rho(x',t)}{|X(t)-x'|^2}\,dx'=\int \frac{\rho_{P(t)}(x',t)}{|X(t)-x'|^2}\,dx', \end{split}$$ hence follows from .
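The inequality $P(t)\leq P(0)+K\int_0^t P(s)^{4/3}\,ds$ only controls $P$ up to the blow-up time of the comparison equation $\dot{P}=KP^{4/3}$, whose explicit solution $P(t)=\big(P(0)^{-1/3}-Kt/3\big)^{-3}$ becomes infinite at $t^*=3/(KP(0)^{1/3})$. A small numerical sketch (sample constants; trapezoid rule) verifying that this closed form saturates the integral inequality:

```python
K, P0 = 1.0, 1.0
t_star = 3.0 / (K * P0 ** (1 / 3))        # blow-up time of dP/dt = K * P**(4/3)

def P(t):
    # closed-form solution of the comparison equation, P(0) = P0
    return (P0 ** (-1 / 3) - K * t / 3.0) ** (-3)

# check P(t) = P0 + K * \int_0^t P(s)**(4/3) ds by the trapezoid rule
t, n = 0.8 * t_star, 200000
h = t / n
integral = sum(0.5 * h * (P(i * h) ** (4 / 3) + P((i + 1) * h) ** (4 / 3))
               for i in range(n))
print(P(t), P0 + K * integral)
```

This is exactly why a refined argument, and not a plain Grönwall iteration, is needed in three dimensions.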
Obviously does not provide an a priori global bound for $P(t)$. A refined estimate allowing one to solve the $3$-D problem has been obtained by considering the *time averaging of the electric field* along a trajectory, see [@Pf], [@Sh], [@Wo] or [@G]. The basic idea consists in partitioning the time interval $[0,T]$ into small pieces $(t_{i-1},t_i)$, $i=1,2,\ldots$, of length $\Delta T=|t_i-t_{i-1}|$. For a given characteristic $(X,V)(t)$, one writes thanks to Liouville’s theorem $$\label{eq:av1} \int_{t_{i-1}}^{t_i}dt\,|E\left(X(t),t\right)|\leq C \int_{t_{i-1}}^{t_i}dt\,\int f(y,w,t_{i-1})\frac{1}{|X(t)-Y(t)|^2}\,dy\,dw$$ where $(Y,W)(t)$ is a characteristic leaving $(y,w)$ at time $t_{i-1}$. Now, when $Y(t)$ is a trajectory of large velocity and large relative velocity at time $t_{i-1}$ (namely $|w|$ and $|V(t_{i-1})-w|$ are $\mathcal{O}(P^{4/3})$), we restrict our attention to the *time integral* $$\label{eq:averaging} \int_{t_{i-1}}^{t_i} \frac{dt}{|X(t)-Y(t)|^2}.$$ Here $\Delta T$ is chosen so small that the relative velocity remains large in that time interval (*stability property*). Then the time integral can be computed almost explicitly, using that $X(t)-Y(t)$ essentially performs a free motion. As a consequence the contribution of to the time integral of the electric field is shown to be smaller than $\mathcal{O}(P\Delta T)$ (see [@Pf], [@Sh], [@Wo] and [@G] or Lemma \[lemma:scat-plasma-plasma\] below). We call this contribution *scattering plasma-plasma*. The other contributions in can then be handled by means of static estimates relying essentially on Proposition \[prop:prelim1\] above. When a single point charge is present, the scenario changes dramatically, since the stability property for the trajectories of the plasma fails. Indeed, the relative velocity of the plasma particles can change extremely fast if one of them collides with (or gets very close to) the point charge.
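The computation behind the *scattering plasma-plasma* contribution is the elementary free-motion integral: if the relative position moves in a straight line, $D(t)=(d,ut)$, then $\int_{{\mathbb{R}}}dt/|D(t)|^2=\pi/(du)$, so a large relative velocity $u$ makes the time integral small. A minimal numerical check (pure Python, sample values chosen only for illustration):

```python
import math

def time_integral(d, u, l, T=1000.0, n=200000):
    # trapezoid rule for \int_{-T}^{T} chi(|D(t)| > l) / |D(t)|**2 dt,
    # with the free relative motion D(t) = (d, u*t)
    h = 2 * T / n
    total = 0.0
    for i in range(n + 1):
        t = -T + i * h
        r2 = d * d + (u * t) ** 2
        if r2 > l * l:
            w = 0.5 if i in (0, n) else 1.0
            total += w * h / r2
    return total

# without the cutoff the integral is exactly pi/(d*u); here d > l, so the
# cutoff is inactive and the numerical value recovers the closed form
d, u, l = 0.5, 1.0, 0.1
print(time_integral(d, u, l), math.pi / (d * u))
```

When the impact parameter $d$ is smaller than the cutoff $l$, the same computation produces the $\mathcal{O}(1/(lu))$ bound used in the body of the proof.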
Nevertheless, the interval of time in which the *scattering charge-plasma* takes place is so small that the contribution to the time integral can again be controlled (see Lemma \[lemma:scatt-plasma-charge\] below). One also has to remark that, instead of using the maximal velocity $P(t)$ as a control quantity, it is more convenient to use, as in [@CM], the energy of a trajectory, the time derivative of which cancels the singular part of the electric field. Having treated the single point-charge problem in Section \[sec:one\], we can turn to the $N$-charges problem in Section \[sec:many\]. Indeed, the choice of $\Delta T$ and of all other parameters ensures that a plasma particle cannot get close to more than one point charge in any time interval $(t_{i-1},t_i)$. As a consequence, we can transfer the single-charge analysis to the $N$-charges problem with minor modifications. The plasma-charge model {#sec:one} ======================= Let $f=f(x,v,t)$ be the probability distribution of the plasma particles, and let $\xi$ and $\eta=\dot{\xi}$ denote the position and the velocity of a single point particle of unitary charge. Problem - reads $$\label{assumption:classe-1} f\in L^\infty\left(L^1\cap L^\infty\right), \quad \rho\in L^\infty\left(L^1\cap L^\infty\right),\quad E=\frac{x}{|x|^3}\ast \rho,$$ with $$\label{eq:transport-one} f\left(X(x,v,0,t),V(x,v,0,t),t\right)=f(x,v,0),\quad (t,x,v)\in {\mathbb{R}}\times {\mathbb{R}}^3\times {\mathbb{R}}^3,$$ where $(X,V)(x,v,0,t)=(X,V)(t)$ satisfies $$\label{eq:syst1} \begin{cases} {\displaystyle}\dot{X}(t)=V(t) \\ {\displaystyle}\dot{V}(t)=E\left(X(t),t\right)+\frac{X(t)-\xi(t)}{|X(t)-\xi(t)|^3}\\ {\displaystyle}(X,V)(0)=(x,v),\quad x\neq \xi(0) \end{cases}$$ and $$\label{eq:syst2} \begin{cases} {\displaystyle}\dot{\xi}(t)=\eta(t)\\ {\displaystyle}\dot{\eta}(t)=E\left(\xi(t),t\right). \end{cases}$$ The main result of this section is summarized in \[thm:main1\] Let $f_0\in L^\infty$ be a compactly supported probability distribution.
Let $(\xi_0,\eta_0)\in {\mathbb{R}}^3\times {\mathbb{R}}^3$. Assume that there exists some $\delta_0>0$ such that $$\mathrm{min}\left\{|x-\xi_0|\:{\:|\:}\:(x,v)\in {\mathrm{supp}}(f_0)\right\}\geq \delta_0.$$ For any time $T>0$ there exists a unique solution $(\xi,f)$ to Problem - on $[0,T]$ with this initial datum. We shall prove that, assuming a solution to Problem - exists up to a fixed but arbitrary time $T>0$, we have $$\label{ineq:H1} \sup\left \{ |V(x,v,0,t)|+\frac{1}{|X(x,v,0,t)-\xi(t)|} {\:|\:}\quad t\in [0,T],(x,v)\in {\mathrm{supp}}(f_0)\right\}\leq C(T)$$ where $C(T)$ is a constant depending only on $f_0$ and $(\xi_0,\eta_0)$. As a consequence of , the fact that a unique solution to Problem - does exist follows by rather standard arguments, presented in Paragraph \[subsection:completed\] at the end of the section. In what follows, $C_i$ and $K_i,i=0,1,\ldots,$ will denote positive constants depending only on $\|f_0\|_{L^\infty}$ and $H$, where $H$ denotes the global energy. In the case of one single point charge, the energy reduces to $$\begin{split} H\equiv\frac{1}{2}\int |v|^2f(x,v,t)&\,dx\,dv+\frac{1}{2}|\eta(t)|^2\\&+ \int \frac{\rho(x,t)}{|x-\xi(t)|}\,dx +\frac{1}{2}\iint \frac{\rho(x,t)\rho(y,t)}{|x-y|}\,dx\,dy. \end{split}$$ In particular, we have $$\label{ineq:eta} |\eta(t)|\leq \sqrt{2H}.$$ Following [@CM] we also introduce the pointwise energy of a plasma particle $$\label{def:en} h(x,v,t)=\frac{|v-\eta(t)|^2}{2}+\frac{1}{|x-\xi(t)|}+K_1,$$ where $K_1$ is a large constant. A possible choice is $$K_1\geq\max(8H,1).$$ In particular, in view of , this choice ensures that for all $(x,v,t)$ $$\label{ineq:H2} |V(x,v,0,t)|\leq 2\sqrt{ h}\left(X(x,v,0,t),V(x,v,0,t),t\right).$$ As already mentioned, the energy turns out to be a relevant quantity to control the motion, since it controls both the velocity and the distance from $\xi$ of the characteristic under consideration. We remark that $h$ is uniformly bounded on ${\mathrm{supp}}(f_0)$ at time $0$.
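The bound $|V|\leq 2\sqrt{h}$ follows from $|v|\leq |v-\eta|+|\eta|\leq \sqrt{2h}+\sqrt{2H}$ together with $h\geq K_1\geq 8H$, which makes $\sqrt{2H}\leq \sqrt{h}/2$. A quick randomized sanity check (the value of $H$ and all sampled numbers are illustrative only):

```python
import math, random

random.seed(0)
H = 2.0                                   # sample value of the conserved energy
K1 = max(8 * H, 1.0)                      # the choice K1 >= max(8H, 1)

def h(v, eta, dist):
    # pointwise energy: |v - eta|**2 / 2 + 1/|x - xi| + K1
    return sum((a - b) ** 2 for a, b in zip(v, eta)) / 2 + 1.0 / dist + K1

worst = 0.0
for _ in range(10000):
    v = [random.uniform(-50, 50) for _ in range(3)]
    eta_dir = [random.gauss(0, 1) for _ in range(3)]
    norm = math.sqrt(sum(e * e for e in eta_dir)) or 1.0
    r = random.uniform(0, math.sqrt(2 * H))   # |eta| <= sqrt(2H) by conservation
    eta = [r * e / norm for e in eta_dir]
    dist = random.uniform(1e-3, 10.0)
    speed = math.sqrt(sum(x * x for x in v))
    worst = max(worst, speed / (2 * math.sqrt(h(v, eta, dist))))
print(worst)   # stays <= 1, i.e. |V| <= 2*sqrt(h) on every sample
```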
In the following we shall use the short-hand notation $(X(t),V(t))=\left(X(x,v,0,t),V(x,v,0,t)\right)$ when the initial condition $(x,v)$ is clear from the context. Differentiating $h$ along the characteristics of the plasma particles and using - we find $$\dot{h}\left(X(t),V(t),t\right)=\left( V(t)-\eta(t)\right)\cdot \left( E(X(t),t)-E(\xi(t),t)\right)$$ from which $$\label{ineq:h1} \Big|\frac{d}{dt} \sqrt{h}\left(X(t),V(t),t\right)\Big|\leq |E(\xi(t),t)|+|E(X(t),t)|.$$ Note that the variation of $h$ is controlled by the smooth part of the electric field and this is, of course, crucial. We introduce the quantity $$Q=Q_T=\sup\left\{\sqrt{h}\left(X(t),V(t),t\right) {\:|\:}\quad t\in [0,T],(x,v)\in {\mathrm{supp}}(f_0)\right\}.$$ The remainder of this section is devoted to the proof of the following estimate, from which follows immediately. \[prop:mainQ\] We have $$Q_T\leq (Q_0+C_1)\exp(C_1 (1+T))\quad \forall T>0.$$ As explained earlier, the method relies on a suitable splitting of $[0,T]$ into small intervals. More precisely, we set $$\Delta T=\frac{1}{K_2 Q_T},$$ where $K_2$ denotes a suitable large constant satisfying $$K_2\geq 16\quad \text{and}\quad \frac{8K}{K_2}<\frac{1}{8},$$ where $K$ (depending only on $\|f_0\|_{\infty}$ and $H$) is the constant appearing in Proposition \[prop:prelim1\]. Next, if $\Delta T< T$ we set $$n=\left[\frac{T}{\Delta T}\right],\quad t_0=0, \quad t_n=T,\quad t_i=i\Delta T \quad \text{for}\quad i=0,\ldots,n-1,$$ so that $$[0,T]=\bigcup_{i=1}^n [t_{i-1},t_i]\quad \text{with}\quad |t_i-t_{i-1}|\leq \Delta T.$$ For $i=1,\ldots,n$ we define $$Q_i=\sup\left\{\sqrt{h}\left( X(t),V(t),t\right) {\:|\:}\quad t\in (t_{i-1},t_i),(x,v)\in {\mathrm{supp}}(f(t_{i-1}))\right\}$$ where $\left(X(t),V(t)\right)=\left( X(x,v,t_{i-1},t),V(x,v,t_{i-1},t)\right)$ are the trajectories at time $t\geq t_{i-1}$, leaving $(x,v)$ at time $t_{i-1}$.
Finally, we set $$Q_0=\sup\left \{\sqrt{h}(x,v,0) {\:|\:}\quad (x,v)\in {\mathrm{supp}}(f_0)\right\}.$$ In order to show Proposition \[prop:mainQ\] we will first establish the following basic inequality \[prop:mainQi\]Let $T>0$ be such that $\Delta T< T$. We have $$Q_i\leq Q_{i-1}+C_2 Q_T \Delta T,\quad i=1,\ldots,n.$$ We claim that Proposition \[prop:mainQ\] follows from Proposition \[prop:mainQi\]. Indeed, let us set $T_0=1/(4C_2)$. There are two cases. If $\Delta T_0=1/(K_2Q_{T_0})<T_0$ then Proposition \[prop:mainQi\] for $T_0$ implies that for all $i=1,\ldots,n$ $$\begin{split} Q_i\leq Q_0+C_2iQ_{T_0}\Delta T_0\leq Q_0+2C_2 T_0 Q_{T_0}, \end{split}$$ hence $$Q_{T_0}\leq \frac{Q_0}{1-2C_2 T_0}=2 Q_0.$$ Otherwise we have $\Delta T_0\geq T_0$, which means that $$\begin{split} Q_{T_0}\leq \frac{1}{T_0K_2}= 4C_2 K_2^{-1}. \end{split}$$ In both cases we obtain $$\label{ineq:star} Q_{T_0}\leq 2Q_0+4C_2 K_2^{-1},$$ thus Proposition \[prop:mainQ\] holds up to time $T_0$. Let now $T>T_0$ and $k=[\frac{T}{T_0}]$. Since $T_0$ depends only on conserved quantities, we can iterate the previous arguments $k+1$ times to get $$\begin{split} Q_{T}\leq Q_{(k+1)T_0}\leq 2^{k+1}Q_0+4C_2 K_2^{-1}\sum_{j=0}^k 2^j\leq 2^{T/T_0+1}(Q_0+2C_2K_2^{-1}) \end{split}$$ and the conclusion follows. We now come to the main ingredients for proving Proposition \[prop:mainQi\]. We observe preliminarily that, without loss of generality, we may assume $$\label{ineq:assumption} Q_i\geq K_3\geq 1$$ where $K_3$ is a constant depending only on $K_1$ and $K$ (therefore only on $\|f\|_{\infty}$ and $H$) which will be specified in the course of the proof of Lemma \[lemma:scatt-plasma-charge\] below. Indeed, otherwise we have $$Q_i\leq K_3\leq K_3K_2\frac{1}{K_2}\leq Q_{i-1}+C_2Q\Delta T$$ provided that $C_2\geq K_3 K_2$, and Proposition \[prop:mainQi\] follows.
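The bookkeeping in the last chain of inequalities is a plain geometric iteration: applying the bound $Q_{(k+1)T_0}\leq 2Q_{kT_0}+c$, with $c=4C_2K_2^{-1}$, repeatedly gives $Q_{(k+1)T_0}\leq 2^{k+1}Q_0+c(2^{k+1}-1)$. A one-line check of the closed form (arbitrary sample values):

```python
def iterate(q0, c, n):
    q = q0
    for _ in range(n):
        q = 2 * q + c          # one application of Q_{(k+1)T0} <= 2*Q_{kT0} + c
    return q

# closed form of the iteration: 2**n * q0 + c * (2**n - 1)
q0, c, n = 3.0, 0.5, 7
print(iterate(q0, c, n), 2 ** n * q0 + c * (2 ** n - 1))
```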
Next, we notice that, by virtue of , and we have $$\label{ineq:H3} \sqrt{h}\left(X(t),V(t),t\right)\leq \sqrt{h}(x,v,t_{i-1})+8K Q_i^{4/3}\Delta T$$ for $t\in [t_{i-1},t_i]$ and $(x,v)\in {\mathrm{supp}}(f(t_{i-1}))$. Now, consider a trajectory $\left(X(t),V(t)\right)$ satisfying $$\sqrt{h}\left(X(\overline{t}),V(\overline{t}),\overline{t}\right)=Q_i$$ for some $\overline{t}\in[t_{i-1},t_{i}]$. We then have by definition of $\Delta T$ $$\label{ineq:che} \sqrt{h}(x,v,t_{i-1})\geq Q_i-\frac{8K}{K_2}Q_i^{1/3}\geq \frac{Q_i}{2}.$$ Therefore, to control $Q_i$ it suffices to control the energy of those trajectories for which holds. In order to bound the time integral on the right-hand side of we have to evaluate the integrals $$\label{eq:int} \int_{t_{i-1}}^{t_i} \frac{dt}{|X(t)-Y(t)|^2}\quad \text{and} \quad \int_{t_{i-1}}^{t_i} \frac{dt}{|\xi(t)-Y(t)|^2}$$ for high-energy trajectories $X(t)$, $Y(t)$ which could possibly make the denominators in very small. There are various situations: 1. Both $X$ and $Y$ are far from $\xi$ (Lemma \[lemma:scat-plasma-plasma\]), 2. $X$ is close to $\xi$ (Lemmas \[lemma:scatt-plasma-charge\] and \[lemma:scatt-plasma-charge-bis\]), 3. $X$ is far from $\xi$ but $Y$ is close to $\xi$, hence $X$ and $Y$ are far from each other (Lemma \[lemma:est2\]). We shall handle each of these situations separately, thereby completing the dynamical part of the proof. The remainder of the proof relies on rather straightforward estimates in phase-space. Preliminary estimates --------------------- We start by establishing a lemma concerning the plasma-charge scattering.
\[lemma:fac\] For any $(y,w)\in {\mathrm{supp}}(f(t_{i-1}))$ we have, with $(Y,W)(t)=(Y, W)(y,w,t_{i-1},t)$ $$\int_{t_{i-1}}^{t_i} \frac{dt}{|Y(t)-\xi(t)|^2}\leq (2\sqrt{2}+1)Q_i.$$ Setting $\ell(t)=|Y(t)-\xi(t)|$, we differentiate $$\label{eq:l'} \dot{\ell}=\frac{(Y-\xi)}{|Y-\xi|}\cdot(W-\eta),$$ then $$\begin{split} \ddot{\ell}=\frac{|W-\eta|^2}{|Y-\xi|}+\frac{1}{\ell^2}+ \frac{(Y-\xi)}{|Y-\xi|}\cdot\left(E(Y)-E(\xi)\right) -\frac{\left[(Y-\xi)\cdot(W-\eta)\right]^2}{|Y-\xi|^3}. \end{split}$$ Proposition \[prop:prelim1\] and the Cauchy-Schwarz inequality yield $$\ddot{\ell}\geq \frac{1}{\ell^2}-8K Q_i^{4/3}.$$ Therefore $$\begin{split} \int_{t_{i-1}}^{t_i} dt\,\frac{1}{\ell^2(t)}&\leq \dot{\ell}(t_i)-\dot{\ell}(t_{i-1})+8K \Delta T Q_i^{4/3}\\ &\leq |W-\eta|(t_i)+|W-\eta|(t_{i-1})+\frac{8K}{K_2} Q_i^{1/3}. \end{split}$$ By definition of $K_2$, and since the first two terms on the right-hand side of the previous inequality are bounded by $\sqrt{2}Q_i$, we conclude the proof. We now introduce the quantities $$\label{def:1} R_i=Q_i^{3/4}\quad \text{and}\quad \delta_i=Q_i^{-7/8}.$$ Note that $R_i$ corresponds to the maximal radius in the velocity space for which actually works, namely yields a linear estimate in $Q_i$. As already mentioned, the choice of the parameters ensures stability for the quantity $\sqrt{h}$. \[lemma:stabh\]Let $(y,w)\in {\mathrm{supp}}(f(t_{i-1}))$ and $(Y,W)(t)=\left(Y,W\right)(y,w,t_{i-1},t)$. If $\sqrt{h}(y,w,t_{i-1})\geq R_i$ then $$\label{ineq:stabh1} \sqrt{h}\left(Y(t),W(t),t\right)\geq \frac{1}{2}R_i,\quad \forall t\in [t_{i-1},t_i].$$ If $\sqrt{h}(y,w,t_{i-1})\leq R_i$ then $$\label{ineq:stabh2} \sqrt{h}\left(Y(t),W(t),t\right)\leq \frac{3}{2}R_i,\quad \forall t\in [t_{i-1},t_i].$$ This is a straightforward consequence of , observing that by choice of $\Delta T$ we have $8KQ_i^{4/3}\Delta T\leq R_i/2$.
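The mechanism of Lemma \[lemma:fac\] is an integration in time of $\ddot{\ell}\geq 1/\ell^2$. On the model equation $\ddot{\ell}=1/\ell^2$ (pure repulsion, no background field) the estimate is an identity, $\int_0^T \ell^{-2}\,dt=\dot{\ell}(T)-\dot{\ell}(0)$, which the following sketch verifies numerically (leapfrog scheme, sample initial data chosen only for illustration):

```python
def simulate(l0, v0, T, n):
    # leapfrog (velocity Verlet) integration of the model equation l'' = 1/l**2,
    # accumulating the trapezoid approximation of \int_0^T dt / l(t)**2
    h = T / n
    l, v = l0, v0
    a = 1.0 / l ** 2
    acc = 0.0
    for _ in range(n):
        acc += 0.5 * h * a
        v_half = v + 0.5 * h * a
        l += h * v_half
        a = 1.0 / l ** 2
        v = v_half + 0.5 * h * a
        acc += 0.5 * h * a
    return acc, v - v0

acc, dv = simulate(l0=0.2, v0=0.0, T=2.0, n=200000)
print(acc, dv)   # the two agree: \int 1/l**2 dt = l'(T) - l'(0)
```

In the lemma the background field only contributes the additional error term $8K\Delta T Q_i^{4/3}$, which the choice of $\Delta T$ keeps of lower order.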
The quantity $\delta_i$ is the radius of a protection sphere around the charge $\xi$, outside which the singular field created by $\xi$ is relatively moderate, namely $\mathcal{O}(Q_i^{7/4})$. The next lemma deals with the plasma-plasma scattering when the influence of the charge is not too large. The control of the corresponding time integral is the same as in [@Pf], [@Sh], [@Wo] and [@G]. We briefly repeat it for completeness and also because, for the present problem, we need a different choice of the parameters. \[lemma:scat-plasma-plasma\]Let $l_i>0$. Assume that there exists a time interval $$J=(\overline{t}_{i-1},\overline{t}_i)\subset (t_{i-1},t_i)$$ such that, for all $t\in J$ we have, with $\left(X,V\right)(t)=\left(X,V\right)(x,v,t_{i-1},t)$ and $\left(Y,W\right)(t)=\left(Y,W\right)(y,w,t_{i-1},t)$, where $(x,v)$ and $(y,w)\in {\mathrm{supp}}(f(t_{i-1}))$ $$\label{ineq:scat1} \inf_{t\in J}\min \left\{ |X(t)-\xi(t)|,|Y(t)-\xi(t)|\right\}> \delta_i$$ and $$\label{ineq:scat3} |V(\overline{t}_{i-1})-W(\overline{t}_{i-1})|>R_i.$$ Then $$\label{ineq:lemma1} \int_{\overline{t}_{i-1}}^{\overline{t}_i} dt\frac{\chi(|X(t)-Y(t)|>l_i)}{|X(t)-Y(t)|^2}\leq \frac{C_3}{l_i R_i}.$$ We shall use this estimate with the choice $l_i=Q_i^{-2}$. Let $t_0\in [\overline{t}_{i-1},\overline{t}_i]$ be the minimizer of $|X(t)-Y(t)|$. Suppose for the moment that $t_0\in (\overline{t}_{i-1},\overline{t}_i)$. Setting $D(t)=X(t)-Y(t)$, we have for $t\in (t_0,\overline{t}_i)$ $$\label{eq:D} D(t)=D(t_0)+\dot{D}(t_0)(t-t_0)+\rho(t),$$ where $$\begin{split} \label{eq:reste} \rho(t)=\int_{t_0}^t ds\,(t-s)\Big( \frac{X(s)-\xi(s)}{|X(s)-\xi(s)|^3}&- \frac{Y(s)-\xi(s)}{|Y(s)-\xi(s)|^3}\\ &+E\left(X(s),s\right)-E\left(Y(s),s\right)\Big).
\end{split}$$ By virtue of , Proposition \[prop:prelim1\] and the definition of $R_i$ and $\delta_i$ we have $$\begin{split} |\rho(t)|&\leq \frac{1}{2}(t-t_0)\left( 2 Q_i^{7/4}+8K Q_i^{4/3}\right)\Delta T\\ &\leq \frac{1}{2} (t-t_0)\left(\frac{2+8K}{K_2}\right) Q_i^{3/4}\\ &\leq \frac{1}{8}(t-t_0)Q_i^{3/4}, \end{split}$$ hence $$\label{ineq:reste} |\rho(t)|\leq \frac{1}{8}(t-t_0)R_i.$$ On the other hand, by and it holds $$\begin{split} |\dot{D}(t_0)|&\geq |\dot{D}(\overline{t}_{i-1})|-\int_{\overline{t}_{i-1}}^{\overline{t}_{i}}|\ddot{D}(s)|\,ds\\ &\geq R_i-(2Q_i^{7/4}+8K Q_i^{4/3})\Delta T\\ &\geq R_i-\left(\frac{2+8K}{K_2}\right) Q_i^{3/4}, \end{split}$$ hence $$\label{ineq:D} |\dot{D}(t_0)|\geq \frac{R_i}{2}.$$ We remark that the parameter $\delta_i$ has precisely been chosen so as to ensure the above stability property, and that we have used the fact that $K_2$ is sufficiently large with respect to $K$. We have $$|D(t)|\geq |D(t_0)+\dot{D}(t_0)(t-t_0)|-|\rho(t)|.$$ Since $D(t_0)$ and $\dot{D}(t_0)$ are orthogonal, $$\label{ineq:orth} |D(t_0)+\dot{D}(t_0)(t-t_0)|^2\geq |D(t_0)|^2+|\dot{D}(t_0)|^2(t-t_0)^2\geq |\dot{D}(t_0)|^2(t-t_0)^2.$$ Hence we have by and $$\begin{split} |D(t_0)+\dot{D}(t_0)(t-t_0)|&\geq |\dot{D}(t_0)|(t-t_0)\geq \frac{1}{2} R_i (t-t_0)\geq 4|\rho(t)|, \end{split}$$ so that $$\label{ineq:D3} |D(t)|^2\geq \frac{9}{16}|\dot{D}(t_0)|^2(t-t_0)^2\geq \frac{9}{64} R_i^2(t-t_0)^2.$$ Finally, $$\begin{split} \int_{t_0}^{\overline{t}_i}dt\;\frac{\chi(|D(t)|>l_i)}{|D(t)|^2}& \leq \int_{t_0}^{\infty}dt\;\frac{\chi(|D(t)|>l_i)}{|D(t)|^2}\\ &\leq \int_{t_0\leq t\leq t_0+ \frac{8l_i}{3R_i}}\frac{dt}{l_i^2} +\int_{t\geq t_0+ \frac{8l_i}{3R_i}}\frac{64}{9R_i^2}\frac{dt}{(t-t_0)^2}\\ &\leq \frac{C_4}{l_i R_i}. \end{split}$$ For the integral on the time interval $(\overline{t}_{i-1},t_0)$ we use the time reversal. When $t_0=\overline{t}_{i-1}$ we use the same method, observing that $D(t_0)\cdot \dot{D}(t_0)\geq 0$ so that still holds. 
Finally, if $t_0=\overline{t}_i$ we use the time reversal. Next, we need to control the charge-plasma scattering. Basically, our aim is to show that the time spent by a trajectory in the protection sphere $B(\xi(t),\delta_i)$ is very small. To prove this, we apply a virial-type argument, introducing $$\label{def:I} I(t)=\frac{1}{2} |Y(t)-\xi(t)|^2.$$ Differentiating we get $$\label{ineq:I1} \dot{I}=(Y-\xi)\cdot (W-\eta)$$ and $$\label{ineq:I2} \ddot{I}=|W-\eta|^2+\frac{1}{|Y-\xi|}+(Y-\xi)\cdot \left(E(Y)-E(\xi)\right).$$ \[lemma:scatt-plasma-charge\] For $(y,w)\in {\mathrm{supp}}(f(t_{i-1}))$, suppose that $$\sqrt{h}(y,w,t_{i-1})> R_i.$$ Consider $(Y,W)(t)=(Y,W)(y,w,t_{i-1},t)$. Then the set $$J=\{t\in (t_{i-1},t_i){\:|\:}\quad |Y(t)-\xi(t)|<\delta_i\}$$ is connected. Moreover, $$\label{est:m1} \mathrm{meas}(J)\leq C_5 Q_i^{-13/8}.$$ Let $t_0\in \overline{J}$ be a minimizer for $I(t)$. By , $$|\dot{I}(t)|\leq \sqrt{2I(t)}|W(t)-\eta(t)|,$$ hence $$\Big|\frac{d}{dt}\sqrt{I(t)}\Big|\leq Q_i.$$ For $t\in [t_0,t_i)$ we obtain $$\begin{split} \sqrt{I(t)}&\leq \sqrt{I(t_0)}+Q_i\Delta T\leq \frac{1}{\sqrt{2}}Q_i^{-7/8}+\frac{1}{K_2}, \end{split}$$ therefore $$\label{ineq:I3} |I(t)|\leq 1.$$ Moreover, by Lemma \[lemma:stabh\] we have for $t\in [t_0,t_i)$ $$\label{ineq:stab1} \sqrt{h}\left(Y(t),W(t),t\right)\geq \frac{R_i}{2}.$$ Then by and we have for $t\in [t_0,t_i)$ $$\begin{split} \ddot{I}(t)&\geq h\left(Y(t),W(t),t\right)-K_1-|Y(t)-\xi(t)||E(Y(t),t)-E(\xi(t),t)|\\ &\geq \frac{1}{4}R_i^2-K_1-8K\sqrt{2}Q_i^{4/3}\\ &= \frac{1}{4}Q_i^{3/2}-K_1-8K\sqrt{2} Q_i^{4/3}. \end{split}$$ Now, we observe that for $K_3>0$ sufficiently large (depending only on $K_1$ and $K$) we have by assumption $$\label{ineq:I4} \ddot{I}(t)\geq \frac{1}{8}R_i^2.$$ Consider now $(t_-,t_+)\subset J$ a maximal connected component containing $t_0$. If $t_0\in [t_-,t_+)$, $\dot{I}(t_0)\geq 0$ (if $t_0=t_+$ we use the same argument via the time reversal).
Then $$\dot{I}(t)\geq \dot{I}(t_0)+\frac{1}{8}R_i^2(t-t_0)\geq 0,\quad \forall t\in [t_0,t_i).$$ Since $I$ is increasing from $t_0$ up to $t_i$, the trajectory cannot reenter the protection sphere once it has escaped. Therefore $J=(t_-,t_+)$ is connected. Next, integrating twice in time and using that $\dot{I}(t_0)\geq 0$ we get $$\frac{1}{2}\delta_i^2\geq I(t)\geq I(t_0)+\frac{1}{16}R_i^2(t-t_0)^2,\quad \forall t\in J,$$ so that $$\label{ineq:m1} (t-t_0)^2\leq 8 Q_i^{-13/4},\quad \forall t\in J,$$ and is proved. We finally obtain the following variant of Lemma \[lemma:scatt-plasma-charge\] \[lemma:scatt-plasma-charge-bis\] For $(y,w)\in {\mathrm{supp}}(f(t_{i-1}))$, suppose that $$\sqrt{h}(y,w,t_{i-1})> \frac{Q_i}{2}.$$ Consider $(Y,W)(t)=(Y,W)(y,w,t_{i-1},t)$. Then the set $$J=\{t\in (t_{i-1},t_i)|\:|Y(t)-\xi(t)|<2\delta_i\}$$ is connected. Moreover, $$\label{est:m2-bis} \mathrm{meas}(J)\leq C_6 Q_i^{-15/8}.$$ It suffices to follow the proof of Lemma \[lemma:scatt-plasma-charge\] step by step, observing that estimate can even be improved if $h(y,w,t_{i-1})\geq Q_i^2/4$. Indeed, we have in this case $\ddot{I}(t)\geq Q_i^2/8$ for $t\in (t_{i-1},t_i)$ and everything goes exactly as before, leading to . This concludes the dynamical preparation. We now come to the proof of Proposition \[prop:mainQi\].
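As a quick consistency check of the exponents produced by the virial argument: with $\delta_i=Q_i^{-7/8}$ and $R_i=Q_i^{3/4}$, the bound $(t-t_0)^2\leq 8\delta_i^2/R_i^2$ gives $|t-t_0|=\mathcal{O}(Q_i^{-13/8})$, matching Lemma \[lemma:scatt-plasma-charge\]. The exponent arithmetic can be verified with exact rationals:

```python
from fractions import Fraction as F

# exponents of Q_i in the choices delta_i = Q_i**(-7/8) and R_i = Q_i**(3/4)
delta_exp, R_exp = F(-7, 8), F(3, 4)

# (t - t0)**2 <= 8 * delta_i**2 / R_i**2, so |t - t0| scales like
# Q_i**(delta_exp - R_exp)
t_exp = delta_exp - R_exp
print(t_exp)   # the exponent in meas(J) <= C_5 * Q_i**(-13/8)
```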
Proof of Proposition \[prop:mainQi\] ------------------------------------ In view of , in order to show Proposition \[prop:mainQi\] we need to control the integrals $$\mathcal{J}_1=\int_{t_{i-1}}^{t_i}dt\, |E\left(\xi(t),t\right)|\quad \text{and}\quad \mathcal{J}_2=\int_{t_{i-1}}^{t_i}dt\,|E\left(X(t),t\right)|.$$ \[lemma:est1\]We have $$\mathcal{J}_1\leq C_7 Q \Delta T.$$ Setting $(Y,W)(t)=(Y,W)(y,w,t_{i-1},t)$ we have $$\label{ineq:J1} \begin{split} \mathcal{J}_1&\leq \int_{t_{i-1}}^{t_i}\,dt \int_{(y,w)| \sqrt{h}(y,w,t_{i-1})\leq R_i}\,dy\,dw f(y,w,t_{i-1})\frac{1}{|Y(t)-\xi(t)|^2}\\ &+\int_{(y,w)| \sqrt{h}(y,w,t_{i-1})\geq R_i}\,dy\,dwf(y,w,t_{i-1})\int_{t_{i-1}}^{t_i}dt\,\frac{1}{|Y(t)-\xi(t)|^2}. \end{split}$$ Note that, by virtue of the stability property for $\sqrt{h}$ (see Lemma \[lemma:stabh\]) and by $$|W(t)|\leq 3R_i,\quad \forall t\in (t_{i-1},t_i)$$ when $\sqrt{h}(y,w,t_{i-1})\leq R_i$. Therefore, thanks to Liouville’s theorem the first term in the right-hand side of can be controlled by $$\int_{t_{i-1}}^{t_i}\,dt \int_{|\overline{w}|\leq 3 R_i}d\overline{y} \,d\overline{w}\,f(\overline{y},\overline{w},t)\frac{1}{|\overline{y}-\xi(t)|^2}.$$ By virtue of Proposition \[prop:prelim1\] with $R$ replaced by $3R_i$ this latter is bounded by $$\label{ineq:est2} C_8 R_i^{4/3}\Delta T=C_8Q_i \Delta T.$$ We now turn to the second integral in . By virtue of Lemma \[lemma:fac\] we can bound it by $$\begin{split} 5Q_i\int_{(y,w)|\:h(y,w,t_{i-1})> R_i^2} &dy\,dw\,f(y,w,t_{i-1})\\ &\leq 5 Q_iQ_i^{-3/2}\int dy\,dw\,h(y,w,t_{i-1})f(y,w,t_{i-1})\\ &\leq C_{9} Q_i^{-1/2}(H+1)\\ &\leq C_{10} Q\Delta T, \end{split}$$ where we have used that $Q_i\geq K_3$ in the last inequality. The conclusion follows. \[lemma:est2\]Let $(x,v)\in {\mathrm{supp}}(f(t_{i-1}))$ and $(X,V)(t)=(X,V)(x,v,t_{i-1},t)$ be a trajectory such that $h(x,v,t_{i-1})>Q_i^2/4$. 
Then $$\mathcal{J}_2\leq C_{11} Q \Delta T.$$ Let $(t_-,t_+)$ be the connected interval (if any) in which we have $|X(t)-\xi(t)|<2\delta_i$. By virtue of Proposition \[prop:prelim1\] and Lemma \[lemma:scatt-plasma-charge-bis\] (estimate ) $$\int_{t_-}^{t_+}dt\,|E\left(X(t),t\right)|\leq 4 K Q_i^{4/3} C_6 Q_i^{-15/8}\leq C_{12} Q \Delta T.$$ It remains to control the integrals $$\int_{t_{i-1}}^{t_-}dt\,|E\left(X(t),t\right)|\quad \text{and}\quad \int_{t_+}^{t_i}dt\,|E\left(X(t),t\right)|,$$ which can be handled (using again the time reversal) in the same way. We write $$\begin{split} \int_{t_{i-1}}^{t_-}&dt \int dy\,dw\, f(y,w,t_{i-1})\frac{1}{|X(t)-Y(t)|^2}\\ &=\sum_{j=1}^4 \int_{S_j} dt\,dy\,dw\, f(y,w,t_{i-1})\frac{1}{|X(t)-Y(t)|^2}\\ &=\sum_{j=1}^4 \tilde{\mathcal{J}}_j, \end{split}$$ where $$\begin{split} S_1&=\{(y,w,t)|\quad \sqrt{h}(y,w,t_{i-1})\leq R_i\},\\ S_2&=\{(y,w,t)|\quad \sqrt{h}(y,w,t_{i-1})>R_i\quad \text{and}\quad |X(t)-Y(t)|\leq l_i\},\\ S_3&=\{(y,w,t)|\quad \sqrt{h}(y,w,t_{i-1})>R_i, \quad |X(t)-Y(t)|> l_i\quad \text{and}\quad |Y(t)-\xi(t)|\leq \delta_i\}\\ S_4&=\{(y,w,t)|\quad \sqrt{h}(y,w,t_{i-1})>R_i, \quad |X(t)-Y(t)|> l_i\quad \text{and}\quad |Y(t)-\xi(t)|> \delta_i\}, \end{split}$$ and where $$l_i=Q_i^{-2}.$$ Using stability for $\sqrt{h}$ (see Lemma \[lemma:stabh\]) as well as static estimates, the first integral $\tilde{\mathcal{J}}_1$ can be estimated as before in Lemma \[lemma:est1\] (see ), so that $$\tilde{\mathcal{J}}_1\leq C_{13}Q_i\Delta T.$$ For the integral on $S_2$, following [@Pf], [@Sh], [@Wo] or [@G] we get $$\tilde{\mathcal{J}}_2\leq \int_{t_{i-1}}^{t_-}dt \int_{|X(t)-\overline{y}|<l_i} d\overline{y}\,\frac{\rho(\overline{y},t)}{|\overline{y}-X(t)|^2}\leq C_{14}Q_i^3l_i\Delta T \leq C_{14}Q_i\Delta T .$$ Next, for the integral $\tilde{\mathcal{J}}_3$ we have $$\tilde{\mathcal{J}}_3\leq \int_{h(y,w,t_{i-1})>R_i^2} dy\,dw\, f(y,w,t_{i-1}) \int_{t_{i-1}}^{t_-}dt\, \frac{\chi(|Y(t)-\xi(t)|\leq\delta_i)}{|Y(t)-X(t)|^2}.$$ Since 
$|X(t)-\xi(t)|>2\delta_i$ in $(t_{i-1},t_-)$ we have in $S_3$ $$ |Y(t)-X(t)|\geq |X(t)-\xi(t)|-|Y(t)-\xi(t)|\geq \delta_i.$$ Hence, in view of the conservation of energy and of Lemma \[lemma:scatt-plasma-charge\] (estimate ) we have $$\begin{split} \tilde{\mathcal{J}}_3& \leq \frac{1}{\delta_i^2}\int_{h(y,w,t_{i-1})>R_i^2} dy\,dw\, f(y,w,t_{i-1})\int_{t_{i-1}}^{t_-}dt\,\chi(|Y(t)-\xi(t)|\leq \delta_i)\\ &\leq C_{15}Q_i^{7/4} Q_i^{-3/2}Q_i^{-13/8}\\ &\leq C_{16}Q\Delta T. \end{split}$$ We finally estimate the last integral $\tilde{\mathcal{J}}_4$. By virtue of Lemma \[lemma:scatt-plasma-charge\], for each $(y,w)$ such that $\sqrt{h}(y,w,t_{i-1})> R_i$ we may split the set $$\{t\in (t_{i-1},t_-)|\:|Y(t)-\xi(t)|\geq \delta_i\}$$ into at most two intervals $J^1(y,w)$ and $J^2(y,w)$ for which $$\inf_{t\in J^k(y,w)}|Y(t)-\xi(t)|\geq \delta_i,\quad k=1,2.$$ Note that at least one of the extremal points of $J^1$ or $J^2$ has to coincide with one of the endpoints $t_{i-1}$ or $t_i$. Hence we have $$\tilde{\mathcal{J}}_4\leq \sum_{k=1}^2 \int_{h(y,w,t_{i-1})>R_i^2} dy\,dw\, f(y,w,t_{i-1}) \int_{J^k(y,w)}dt\, \frac{\chi(|X(t)-Y(t)|>l_i)}{|Y(t)-X(t)|^2}.$$ It suffices to control the integral on $J^1(y,w)$ because the integral on $J^2(y,w)$ can be handled in the same way. We set $J^1(y,w)=(\overline{t}_-,\overline{t}_+)$. Then we further split the integration domain as follows $$\begin{split} \{(y,w){\:|\:}\quad \sqrt{h}(y,w,t_{i-1})> R_i\}=S_4^{(1)}\cup S_4^{(2)}, \end{split}$$ where $$\begin{split} S_4^{(1)}&= \{(y,w)|\quad \sqrt{h}(y,w,t_{i-1})> R_i\quad \text{and}\quad |W(\overline{t}_{-})-V(\overline{t}_{-})|\leq R_i\},\\ S_4^{(2)}&= \{(y,w)|\quad \sqrt{h}(y,w,t_{i-1})> R_i\quad\text{and}\quad |W(\overline{t}_{-})-V(\overline{t}_{-})|> R_i\}. \end{split}$$ We recall that $W(t)$ denotes the velocity of the plasma particle leaving $(y,w)$ at time $t_{i-1}$. 
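The power counting in the $\tilde{\mathcal{J}}_3$ chain can be checked mechanically. The following sketch (ours, not part of the proof) verifies that the three displayed powers of $Q_i$ combine to a negative exponent; this is what makes the final bound possible, since $Q_i\geq K_3\geq 1$ and $Q\Delta T=1/K_2$ is a fixed constant (using the definition of $\Delta T$ recalled in the many-charge section below).

```python
from fractions import Fraction

# Exponents of Q_i appearing in the J~_3 chain above.
total = Fraction(7, 4) - Fraction(3, 2) - Fraction(13, 8)

# The product C_15 * Q_i^(7/4) * Q_i^(-3/2) * Q_i^(-13/8) is C_15 * Q_i^(-11/8).
assert total == Fraction(-11, 8)

# Since Q_i >= 1, Q_i^(-11/8) <= 1, so the chain is bounded by a constant,
# which is itself of the form C_16 * Q * DeltaT because Q * DeltaT = 1/K_2.
print(total)  # -11/8
```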
First, for $(y,w)\in S_4^{(1)}$ we have $$|W(t)-V(t)|\leq\frac{3}{2} R_i,\quad \forall t\in J^1(y,w).$$ Indeed, both $X(t)$ and $Y(t)$ remain at distance at least $\delta_i$ from the charge on $J^1(y,w)$, therefore $$\left| \frac{d}{dt}|W(t)-V(t)|\right|\leq 2 Q_i^{7/4}+8 K Q_i^{4/3},$$ so our choice of $\Delta T$ ensures stability for the velocities (see the proof of Lemma \[lemma:scat-plasma-plasma\]). Hence, applying Liouville’s theorem we obtain $$\begin{split} \int_{S_4^{(1)}} dy\,dw\, f(&y,w,t_{i-1}) \int_{J^1(y,w)}dt\, \frac{\chi(|X(t)-Y(t)|>l_i)}{|Y(t)-X(t)|^2} \\ &\leq \int dy\,dw\, f(y,w,t_{i-1}) \int_{t_{i-1}}^{t_i}dt\; \frac{\chi(|V(t)-W(t)|<3R_i/2)}{|Y(t)-X(t)|^2}\\ &\leq \int_{t_{i-1}}^{t_i}dt\;\int_{|\overline{w}-V(t)|\leq 3R_i/2} d\overline{y}\,d\overline{w}\, f(\overline{y},\overline{w},t) \frac{1}{|\overline{y}-X(t)|^2}\\ &\leq C_{17}Q_i\Delta T, \end{split}$$ where we have used Proposition \[prop:prelim1\] in the last inequality. Finally, let $(y,w)\in S_4^{(2)}$. Since $|X(t)-\xi(t)|\geq 2\delta _i >\delta_i$ on $J^1(y,w)$, we may now apply Lemma \[lemma:scat-plasma-plasma\] to obtain $$\begin{split} \int_{ J^1(y,w)}dt\,\frac{\chi(|X(t)-Y(t)|>l_i)}{|X(t)-Y(t)|^2} &\leq \frac{C_{18}}{R_il_i}. \end{split}$$ Since $l_i=Q_i^{-2}$ this yields $$\begin{split} \int_{S_4^{(2)}} dy\,dw\, f(y,w,t_{i-1}) &\int_{J^1(y,w)}dt\, \frac{\chi(|X(t)-Y(t)|>l_i)}{|Y(t)-X(t)|^2} \\&\leq \frac{C_{19}}{R_i^2}\cdot \frac{1}{R_i l_i}\leq C_{19}Q_i^{-9/4}Q_i^2\leq C_{20} Q\Delta T. \end{split}$$ This concludes the proof of the lemma. Combining Lemmas \[lemma:est1\] and \[lemma:est2\] we may finally turn to the **Proof of Proposition \[prop:mainQi\] completed**. 
Let $(X,V)(t)=\left(X,V\right)(x,v,t_{i-1},t)$ be a trajectory such that $$\sqrt{h}\left(X(\overline{t}),V(\overline{t}),\overline{t}\right)=Q_i\quad \text{for some}\quad \overline{t}\in[t_{i-1},t_i].$$ By virtue of we have $$\sqrt{h}(x,v,t_{i-1})> \frac{ Q_i}{2}.$$ On the other hand, first integrating on $[t_{i-1},\overline{t}]$ and then applying Lemmas \[lemma:est1\] and \[lemma:est2\] to the high-energy trajectory $(X,V)$ we obtain $$\begin{split} \sqrt{h}\left(X(\overline{t}),V(\overline{t}),\overline{t}\right) &\leq \sqrt{h}(x,v,t_{i-1})+\int_{t_{i-1}}^{t_i}dt\,\left(|E\left(X(t),t\right)|+|E\left(\xi(t),t\right)|\right)\\ &\leq Q_{i-1}+C_{20}Q \Delta T, \end{split}$$ whence $$Q_i\leq Q_{i-1}+C_{20}Q\Delta T$$ and the conclusion follows. Proof of Theorem \[thm:main1\] completed {#subsection:completed} ---------------------------------------- We finally complete the proof of Theorem \[thm:main1\] in the following way. We first regularize the Coulomb kernel $1/|x|$ at a small distance from the origin, say ${\varepsilon}$, obtaining a solution $(\xi^{\varepsilon}(t),\eta^{\varepsilon}(t),f^{\varepsilon}(t), X^{\varepsilon}(t), V^{\varepsilon}(t))$ of the corresponding regularized version of Problem -. The fact that the corresponding global energy $H^{\varepsilon}$ is uniformly bounded by $H$ provides uniform bounds for the kinetic energy of $f^{\varepsilon}$ and for $|\eta^{\varepsilon}|$ (see ). We choose ${\varepsilon}$ smaller than $1/C(T)$, where $C(T)$ is the a priori bound appearing in . Next, we take $T_{\varepsilon}$ maximal such that $$\min\left\{|X^{\varepsilon}(x,v,0,t)-\xi^{\varepsilon}(t)|\:{\:|\:}\:(x,v)\in {\mathrm{supp}}(f_0)\right\}>{\varepsilon},\quad t\in[0,T_{\varepsilon}).$$ On $[0,T_{\varepsilon})$, the extra field created by the charge coincides with that of Problem for all trajectories starting from ${\mathrm{supp}}(f_0)$. Therefore the previous analysis applies, yielding . 
We conclude that $T_{\varepsilon}=T$, so that provides uniform $L^\infty$ bounds for $\rho_{\varepsilon}$ and $E_{\varepsilon}$ on $[0,T]$. It follows that $(\xi^{\varepsilon},\eta^{\varepsilon})$ is uniformly bounded and equicontinuous on $[0,T]$ and that $(X^{\varepsilon},V^{\varepsilon})$ is uniformly bounded and equicontinuous on ${\mathrm{supp}}(f_0)\times[0,T]$. Hence one easily passes to the limit ${\varepsilon}\to 0$ to get existence of a global solution $(\xi,f)$ satisfying the desired conditions. Finally, uniqueness is achieved by means of a Gronwall inequality, using almost-Lipschitz regularity for $E$ (see ) and the lower bound together with the assumption on the support of $f_0$. We omit the details. The case of many point charges {#sec:many} ============================== The purpose of the present section is to extend the existence and uniqueness result of the previous section to the case of many point charges. We will establish the following \[thm:main-m\] Let $N\geq 1$. Let $f_0\in L^\infty$ be a compactly supported distribution and let $\{\xi_{\alpha0},\eta_{\alpha0}\}_{\alpha=1}^N$ be such that $\xi_{\alpha0}\neq \xi_{\beta0}$ for $\alpha\neq \beta$. Assume that there exists some $\delta_0>0$ such that $$\min\left\{ |x-\xi_{\alpha0}|\:{\:|\:}(x,v)\in {\mathrm{supp}}(f_0)\right\}\geq \delta_0.$$ Then for all $T>0$, there exists a unique solution to Problem - on $[0,T]$ with these initial data. We proceed as in the proof of Theorem \[thm:main1\], considering a solution on $[0,T]$ obtained by regularizing the singular interaction kernel in Problem and establishing estimates depending only on $\|f_0\|_{\infty}$ and $H$. Here again, $C$, $C_i$, or $K_i$, $i=1,\ldots$, denote positive constants depending only on these quantities, with $C$ possibly changing from line to line. First, we infer from the conservation of the total energy $H(f)$ that $$\label{est:many1} |\xi_\alpha(t)-\xi_\beta(t)|\geq \lambda ,\quad \forall t\in [0,T],$$ where $\lambda=1/(2H)$. 
Also, $$\label{est:many2} |\eta_\alpha(t)|\leq \sqrt{2H},\quad \forall t\in [0,T].$$ For $\alpha=1,\ldots,N$ we introduce the energy related to the $\alpha$-th charge $$\label{def-m:en} h_\alpha(x,v,t)=\frac{1}{2}|v-\eta_\alpha(t)|^2+\frac{1}{|x-\xi_\alpha(t)|}+K_1,$$ where $K_1$ has already been defined in the previous section. We set $$Q=\max_{\alpha=1,\ldots,N} \sup\left\{ \sqrt{h_\alpha}\left(X(x,v,0,t),V(x,v,0,t),t\right){\:|\:}\quad t\in[0,T],(x,v)\in {\mathrm{supp}}(f_0)\right\}.$$ As before, we split the interval $[0,T]$ into small intervals $[t_{i-1},t_{i}]$ of length smaller than $\Delta T$, where $$\Delta T=\frac{1}{K_2 Q}.$$ Here $K_2$, depending only on $H$ and $\|f_0\|_{\infty}$, is chosen in a similar way as in the previous section, and we assume moreover that $$K_2\geq \sqrt{2}\frac{48}{\lambda}.$$ Therefore, if $(X,V)(t)=(X,V)(x,v,t_{i-1},t)$ denotes a plasma particle leaving $(x,v)$ at time $t_{i-1}$ we have thanks to $$\label{ineq-m:stab} \left|\frac{d}{dt}|X(t)-\xi_\alpha(t)|\right|\leq\sqrt{2}Q\leq \frac{\lambda}{48}\frac{1}{\Delta T},$$ so that for all $\alpha=1,\ldots,N$ $$\label{eq:isolated} x\in B\left(\xi_\alpha(t_{i-1}),\frac{\lambda}{8}\right)\Rightarrow X(t)\in B\left(\xi_\alpha(t),\frac{7\lambda}{48}\right) \quad\forall t\in [t_{i-1}, t_i]$$ and $$\label{eq:isolated2} x\in B\left(\xi_\alpha(t_{i-1}),\frac{\lambda}{8}\right)^c\Rightarrow X(t)\in B\left(\xi_\alpha(t),\frac{5\lambda}{48}\right)^c\quad\forall t\in [t_{i-1}, t_i].$$ In view of , equation means that a plasma particle starting close to the $\alpha$-th charge at time $t_{i-1}$ cannot approach the other charges on $[t_{i-1},t_i]$. This property will enable us to isolate each charge and to apply the previous analysis to the present case. For $i=1,\ldots,n$ we define $$Q_i=\max_{\alpha=1,\ldots,N} \sup\left\{\sqrt{h_\alpha}\left( X(t),V(t),t\right){\:|\:}\: t\in (t_{i-1},t_i),(x,v)\in {\mathrm{supp}}(f(t_{i-1}))\right\}$$ where $(X,V)(t)=\left( X,V\right)(x,v,t_{i-1},t)$. 
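The two containment statements above reduce to exact fraction arithmetic: by the displayed derivative bound, the drift of $|X(t)-\xi_\alpha(t)|$ over one sub-interval is at most $(\lambda/48)(1/\Delta T)\cdot\Delta T=\lambda/48$. A small sanity check of the constants (our addition, working in units of $\lambda$):

```python
from fractions import Fraction as F

lam = F(1)        # measure distances in units of lambda; only ratios matter
drift = lam / 48  # max change of |X(t) - xi_alpha(t)| over one sub-interval

# Starting inside B(xi_alpha(t_{i-1}), lambda/8): worst case ends at lambda/8 + lambda/48.
assert F(1, 8) * lam + drift == F(7, 48) * lam

# Starting outside the same ball: worst case ends at lambda/8 - lambda/48.
assert F(1, 8) * lam - drift == F(5, 48) * lam
```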
Finally we set $$Q_0=\max_{\alpha=1,\ldots,N}\sup\left\{\sqrt{h}(x,v,0){\:|\:}\: (x,v)\in {\mathrm{supp}}(f_0)\right\}$$ so that $$Q=\max_{i=0,\ldots,n}Q_i.$$ As explained in the previous section, Theorem \[thm:main-m\] is a consequence of the following variant of Proposition \[prop:mainQi\] for the present case \[prop:mainQi-m\] Let $T>0$ such that $\Delta T<T$. For all $i=1,\ldots,n$ we have $$Q_i\leq Q_{i-1}+C_1Q \Delta T.$$ We will only present the proof of Proposition \[prop:mainQi-m\]. Again we may assume that $$Q_i\geq K_3\geq1.$$ We introduce the security balls $$B_\beta=B\left(\xi_\beta(t_{i-1}), \frac{\lambda}{8}\right),\quad \beta=1,\ldots,N$$ and the complementary set $$\Omega={\mathbb{R}}^3\setminus \bigcup_{\beta=1}^N B_\beta.$$ Next, we set for $t\in[t_{i-1},t_i]$ $$\label{def:part} \begin{split} f_\beta(x,v,t)=\chi_{B_\beta}f(x,v,t) \quad \text{and}\quad \tilde{f}(x,v,t)&=\chi_{\Omega}f(x,v,t), \end{split}$$ so that $$f=\sum_{\beta=1}^Nf_\beta + \tilde{f},$$ and we denote by $E_\beta$ and $\tilde{E}$ the corresponding electric fields, so that $$E=\sum_{\beta=1}^N E_\beta+\tilde{E}.$$ \[prop:alpha\] Let $\alpha\in \{1,\ldots,N\}.$ Let $(x,v)\in {\mathrm{supp}}(f(t_{i-1}))$ such that $$x\in B_\alpha \quad \mathrm{and}\quad h_\alpha(x,v,t_{i-1})\geq \frac{1}{4}Q_i^2.$$ Then $$\sqrt{h_\alpha}\left(X(t),V(t),t\right)\leq \sqrt{h_\alpha}(x,v,t_{i-1})+C_2Q\Delta T,\quad t\in [t_{i-1},t_i].$$ Introducing the field $$F_\alpha(z,t)=\sum_{\beta\neq \alpha}\frac{z-\xi_\beta(t)}{|z-\xi_\beta(t)|^3},\quad z\neq \xi_\beta(t),$$ we compute $$\label{est-m:der} \begin{split} \left|\frac{d}{dt} \sqrt{h_\alpha}\left(X(t),V(t),t\right)\right|&\leq |E\left(X(t),t\right)|+|E\left(\xi_\alpha(t),t\right)| \\&+|F_\alpha\left(X(t),t\right)|+|F_\alpha\left(\xi_\alpha(t),t\right)|. \end{split}$$ Thanks to and , we observe that $X(t)$ remains far from the charges $\xi_\beta(t)$ for $\beta\neq \alpha$. 
Hence the fields $F_\alpha(\xi_\alpha)$ and $F_\alpha(X)$ are bounded on $[t_{i-1},t_i]$ and we have $$\label{est-m:0} \int_{t_{i-1}}^{t_i}dt\,\left(|F_\alpha\left(\xi_\alpha(t),t\right)|+| F_\alpha\left(X(t),t\right)|\right) \leq C\Delta T\leq CQ\Delta T.$$ Moreover, since $\tilde{E}+\sum_{\beta\neq \alpha}E_\beta+F_\alpha$ is bounded by $\mathcal{O}(Q_i^{4/3})$ away from the $\alpha$-th charge, we may follow step by step the dynamical preparation performed in the previous section, simply replacing the quantities $\ell(t)$, $\sqrt{h}(t)$ and $I(t)$ by $\ell_\alpha(t)=|X(t)-\xi_\alpha(t)|$, $\sqrt{h_\alpha}(t)$ and $I_\alpha(t)=|X(t)-\xi_\alpha(t)|^2/2$. In particular, adapting the proofs of Lemmas \[lemma:est1\] and \[lemma:est2\] we find $$\label{est-m:1} \int_{t_{i-1}}^{t_i} \,dt\left( |E_\alpha\left(\xi_\alpha(t),t\right)|+|E_\alpha\left(X(t),t\right)|\right)\leq C Q \Delta T.$$ It remains to estimate the contributions of the other fields. Let $y\in B_\beta$. Thanks to and we have $$|X(t)-Y(t)|\geq |\xi_\alpha(t)-\xi_\beta(t)|-|X(t)-\xi_\alpha(t)|-|Y(t)-\xi_\beta(t)| \geq \frac{17\lambda}{24},$$ hence $E_\beta(X)$ is bounded on $[t_{i-1},t_i]$. By and $E_\beta(\xi_\alpha)$ is also bounded. Therefore $$\label{est-m:3} \sum_{\beta\neq \alpha} \int_{t_{i-1}}^{t_i}dt\, \left(|E_\beta\left(\xi_\alpha(t),t\right)|+|E_\beta\left(X(t),t\right)|\right)\leq CQ\Delta T.$$ We finally estimate the contribution of $\tilde{E}$. By and $\tilde{E}(\xi_\beta)$ is bounded, thus $$\label{est-m:5} \int_{t_{i-1}}^{t_i}dt\, |\tilde{E}\left(\xi_\alpha(t),t\right)|\leq C\Delta T\leq CQ \Delta T.$$ In order to estimate $\tilde{E}\left(X(t),t\right)$ we distinguish between two cases. We assume first that $x\in B(\xi_\alpha(t_{i-1}),\lambda/16)$. For $y\in \Omega$ we then have $|X(t)-Y(t)|\geq \lambda/48$ by and . 
Hence $\tilde{E}(X)$ is bounded and we obtain $$\label{est-m:6} \int_{t_{i-1}}^{t_i}dt\, |\tilde{E}\left(X(t),t\right)|\leq CQ \Delta T\quad \text{if}\quad x\in B\left(\xi_\alpha(t_{i-1}),\frac{\lambda}{16}\right).$$ Otherwise, we have $x\in B_\alpha\setminus B(\xi_\alpha(t_{i-1}),\lambda/16)$. Let $y\in \Omega$. In view of the particles $X(t)$ and $Y(t)$ both remain in ${\mathbb{R}}^3\setminus \cup_{\beta} B(\xi_\beta(t),\lambda/24)$ on $[t_{i-1},t_i]$. One may therefore neglect the plasma-charge interaction and only take into account the plasma-plasma interaction. Following the arguments in the case without charges (see [@Wo]), or adapting Lemmas \[lemma:scat-plasma-plasma\] and \[lemma:est2\] we obtain $$\label{est-m:7} \int_{t_{i-1}}^{t_i} dt\, |\tilde{E}\left(X(t),t\right)|\leq CQ\Delta T\quad \text{if}\quad x\in B_\alpha\setminus B\left(\xi_\alpha(t_{i-1}),\frac{\lambda}{16}\right).$$ Gathering estimates to and using we are led to the conclusion of Proposition \[prop:alpha\]. Our next result concerns the variation of the energies $h_\alpha$ away from the charges. \[prop:away\] Let $(x,v)\in {\mathrm{supp}}(f(t_{i-1}))$ such that $x\in \Omega$. For all $\alpha\in \{1,\ldots,N\}$ we have $$\sqrt{h_\alpha}\left(X(t),V(t),t\right)\leq \sqrt{h_\alpha}(x,v,t_{i-1})+C_{3}Q\Delta T,\quad t\in (t_{i-1},t_i).$$ We have $X(t)\in{\mathbb{R}}^3\setminus \cup_{\alpha} B(\xi_\alpha(t), 5\lambda/48)$. Imitating the proof of Lemma \[prop:alpha\], it only remains to estimate $$\int_{t_{i-1}}^{t_i}dt\, |E_\alpha\left(X(t),t\right)|$$ for all $\alpha$. We proceed as before, writing $$B_\alpha=B\left(\xi_\alpha(t_{i-1}),\frac{\lambda}{16}\right) \cup \left( B_\alpha\setminus B\left(\xi_\alpha(t_{i-1}),\frac{\lambda}{16}\right)\right),$$ noticing that $|X(t)-Y(t)|\geq \lambda/48$ when $y$ belongs to the first set and ignoring the effect of the charges in the second set. The conclusion follows. 
Finally, we can compare the energies as follows \[lemma:comp-energ\] There exists a constant $C_4$ such that for $t\in [0,T]$ we have for all $\alpha,\beta$ $$\sqrt{h_\beta}(X,V,t)\leq \sqrt{h_\alpha}(X,V,t)+C_4,\quad \forall X\in B\left(\xi_\alpha(t),\frac{7\lambda}{48}\right),\quad\forall V\in{\mathbb{R}}^3.$$ Our aim is to show that $$\frac{2}{|X-\xi_\beta|}+|\eta_\beta|^2-2V\cdot \eta_\beta\leq \frac{2}{|X-\xi_\alpha|}+|\eta_\alpha|^2-2V\cdot \eta_\alpha +2C_4^2+\sqrt{2h_\alpha}C_4,$$ which holds if $$|V|\leq \frac{1}{2|\eta_\alpha-\eta_\beta|}\left(|\eta_\alpha|^2-|\eta_\beta|^2+\frac{2}{|X-\xi_\alpha|}-\frac{2}{|X-\xi_\beta|} +2C_4^2+\sqrt{2h_\alpha}C_4\right).$$ We have $|X-\xi_\beta|>|X-\xi_\alpha|$ by virtue of . Using and we find that the right-hand side in the above inequality is larger than $$\frac{1}{4\sqrt{2H}}\left( -2 H+2C_4^2+2|V|C_4\right),$$ which is larger than $|V|$ provided that $C_4\geq 2\sqrt{2H}$. We may finally turn to the **Proof of Proposition \[prop:mainQi-m\].** Let $\overline{t}\in [t_{i-1},t_i]$ and $\beta\in \{1,\ldots,N\}$ such that $$Q_i^2=h_\beta\left(X(\overline{t}),V(\overline{t}),\overline{t}\right).$$ There are three possibilities. If $x\in B_\beta$ then $X(t)\in B(\xi_\beta(t),7\lambda/48)$ on $[t_{i-1},t_i]$. In view of , stability for $\sqrt{h_\beta}$ holds on $[t_{i-1},t_i]$. In particular, we may choose $K_2$ sufficiently large so that $\sqrt{h_\beta}(x,v,t_{i-1}) \geq Q_i/2$. Hence Lemma \[prop:alpha\] yields $$\label{est-m:8} Q_i\leq Q_{i-1}+C_2Q\Delta T.$$ If $x\in \Omega$ we use Lemma \[prop:away\] for $\sqrt{h}_\beta$ to obtain $$\label{est-m:9} Q_i\leq Q_{i-1}+C_3Q\Delta T.$$ Finally, if $x\in B_\alpha$ for some $\alpha\neq\beta$ then stability for $\sqrt{h_\alpha}$ holds on $[t_{i-1},t_i]$ by . 
Since by Lemma \[lemma:comp-energ\] at time $\overline{t}$ we have $$\sqrt{h_\alpha}(X(\overline{t}),V(\overline{t}),\overline{t})\geq \sqrt{h_\beta}(X(\overline{t}),V(\overline{t}),\overline{t})-C_4\geq \frac{3Q_i}{4}$$ provided that $Q_i\geq K_3\geq 4C_4$, we may choose $K_2$ such that $\sqrt{h_\alpha}(x,v,t_{i-1})\geq Q_i/2$ at time $t_{i-1}$. Hence we may apply Lemma \[prop:alpha\] to $\sqrt{h_\alpha}$. Relying once more on Lemma \[lemma:comp-energ\] at time $\overline{t}$, we get $$\label{est-m:10} Q_i\leq Q_{i-1}+C_4+C_2 Q \Delta T= Q_{i-1}+(C_4K_2+C_2) Q \Delta T.$$ Setting $C_1=\max(C_2,C_3,C_4K_2+C_2)$, the conclusion follows from , and . Final remarks and comments ========================== The first comment concerns the regularity of the solution we have constructed in Theorems \[thm:main1\] and \[thm:main-m\]. The fact that $t \mapsto f(t)$ propagates the $C^k$ regularity of the initial condition is standard; this is a consequence of the almost-Lipschitz regularity of the electric field $E$ (see ). In particular, if $f_0 \in C^1$, the solution we have constructed is a classical solution to Problem . The second comment regards our hypotheses. We assumed that the charges are surrounded by an island of vacuum and that $f_0$ is compactly supported in $v$. Of course the total energy can be finite also in the absence of these hypotheses. However, when the trajectories can come arbitrarily close to the charges, or when they have arbitrarily large velocities, there is no uniform upper bound $Q_0$ for the initial energy of the trajectories. Therefore our method requires new technical ideas to take these large tails into account. Finally, we mention that the case of a plasma-charge interaction of opposite signs eludes our techniques: new ideas and estimates are needed. \[sec:conclusion\] **Acknowledgments.** Work performed under the auspices of GNFM-INDAM and the Italian Ministry of the University (MIUR). 
The second author has been partially supported by the Fondation des Sciences Mathématiques de Paris. [99]{} S. Caprino and C. Marchioro, *On the plasma-charge model*, to appear in Kinetic and Related Models (2010). T. Glassey, *The Cauchy problem in kinetic theory*, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 1996. Y. Guo, *Regularity for the Vlasov equations in a half-space*, Indiana Univ. Math. J. **43** (1994), no. 1, 255-320. Y. Guo, *Singular solutions of the Vlasov-Maxwell system on a half line*, Arch. Rational Mech. Anal. **131** (1995), no. 3, 241-304. E. Horst, *On the asymptotic growth of the solutions of the Vlasov-Poisson system*, Math. Methods Appl. Sci. **16** (1993), no. 2, 75-86. H. J. Hwang and J. J. L. Velázquez, *Global Existence for the Vlasov-Poisson System in Bounded Domains*, Arch. Rational Mech. Anal. **195** (2010), no. 3, 763-796. P.-L. Lions and B. Perthame, *Propagation of moments and regularity for the $3$-dimensional Vlasov-Poisson system*, Invent. Math. **105** (1991), no. 2, 415-430. S. Okabe and T. Ukai, *On classical solutions in the large in time of the two-dimensional Vlasov equation*, Osaka J. Math. **15** (1978), 245-261. K. Pfaffelmoser, *Global classical solutions of the Vlasov-Poisson system in three dimensions for general initial data*, Jour. Diff. Eq. **95** (1992), 281-303. J. Schaeffer, *Global existence of smooth solutions to the Vlasov-Poisson system in three dimensions*, Comm. Partial Differential Equations **16** (1991), no. 8-9, 1313-1335. S. Wollman, *Global in time solution to the three-dimensional Vlasov-Poisson system*, Jour. Math. Anal. Appl. **176 (1)** (1996), 76-81. [(C. Marchioro)]{} [Dipartimento di Matematica G. Castelnuovo, Università di Roma La Sapienza, Italy]{}. *E-mail address*: marchior@mat.uniroma1.it.\ (E. Miot) [Dipartimento di Matematica G. Castelnuovo, Università di Roma La Sapienza, Italy]{}. *E-mail address*: miot@ann.jussieu.fr.\ (M. 
Pulvirenti) [Dipartimento di Matematica G. Castelnuovo, Università di Roma La Sapienza, Italy]{}. *E-mail address*: pulviren@mat.uniroma1.it.\
--- abstract: 'The many-multiplet method applied to high redshift quasar absorption spectra has indicated a possible time variation of the fine structure constant. Alternatively, a constant value of $\alpha$ is consistent with the observational analysis if a non-solar isotopic ratio of $^{24,25,26}$Mg occurs at high redshift. In particular, higher abundances of the heavier isotopes $^{25,26}$Mg are required to explain the observed multiplet splitting. We show that the synthesis of $^{25,26}$Mg at the base of the convective envelope in low-metallicity asymptotic giant branch stars, combined with a simple model of galactic chemical evolution, can produce the required isotopic ratios and is supported by recent observations of high abundances of the neutron-rich Mg isotopes in metal-poor globular-cluster stars. We conclude that the present data based on high redshift quasar absorption spectra may be providing interesting information on the nucleosynthetic history of such systems, rather than a time variation of fundamental constants.' author: - 'T. Ashenfelter' - 'Grant J. Mathews' - 'Keith A. Olive' title: | The Chemical Evolution of Mg Isotopes vs.\ the Time Variation of the Fine Structure Constant --- 0.2in Over the last several years, there has been considerable excitement over the prospect that a time variation in the fine structure constant may have been observed in quasar absorption systems [@webb] - [@murphy3]. While many observations have led to interesting limits on the temporal variation of $\alpha$ (see [@uzan] for a recent review), only the many-multiplet method [@mm] has led to a positive result, namely that ${\delta \alpha \over \alpha} = (0.54 \pm 0.12) \times 10^{-5}$ over a redshift range of $0.5 < z < 3$, where $\delta \alpha$ is defined as the present value minus the past one. The sources of systematic errors in this method have been well documented [@murphy2; @murphy3] (see also [@bss]). 
Here, we would like to focus on one of these sources of systematic error for which there is recent evidence of a new interpretation, namely the isotopic abundances of Mg assumed in the analysis. The analyses in [@webb] - [@murphy3] have assumed terrestrial ratios for the three Mg isotopes. They have also shown that had they neglected the presence of the neutron rich Mg isotopes, the case for a varying $\alpha$ would only be strengthened. They further argued, based upon the galactic chemical evolution studies available at that time, that the ratio of $^{25,26}$Mg/$^{24}$Mg is expected to decrease at low metallicity, making their result a robust and conservative one. In this paper, we will show that it is in fact quite plausible that the $^{25,26}$Mg/$^{24}$Mg ratio was sufficiently [*higher*]{} at low metallicity to account for the apparent variation in $\alpha$. As such, we would argue that while the many-multiplet method of analysis does not unambiguously indicate a time variation in the fine structure constant, it can lead to important new insights with regard to the nucleosynthetic history of quasar absorption systems. Regarding the observations of Mg isotopes, Gay and Lambert [@gl] determined the Mg isotopic ratios in 20 stars in the metallicity range $-1.8 <$ \[Fe/H\] $< 0.0$ with the aim of testing theoretical predictions [@tww]. (The notation \[A/B\] refers to the log of the abundance ratio A/B [*relative*]{} to the solar ratio.) Their results confirmed that the $^{25,26}$Mg abundances relative to $^{24}$Mg appear to decrease at low metallicity for normal stars. Although many stars were found to have abundance ratios somewhat higher than predicted, even the ‘peculiar’ stars which show enrichments in $^{25,26}$Mg do not have abundance ratios substantially above solar. Recently, however, a new study of Mg isotopic abundance in stars in the globular cluster NGC 6752 has been performed [@yong]. 
This study looked at 20 bright red giants, all at a relatively low metallicity, adopted to be \[Fe/H\] = -1.62. These observations show a considerable spread in the Mg isotopic ratios which range from $^{24}$Mg:$^{25}$Mg:$^{26}$Mg = 84:8:8 (slightly poor in the heavies) to 53:9:39 (greatly enriched in $^{26}$Mg). The terrestrial value is $^{24}$Mg:$^{25}$Mg:$^{26}$Mg = 79:10:11 [@rt]. Of the 20 stars observed, 15 of them show $^{24}$Mg fractions of 78% or less (that is, below solar), and 7 of them show fractions of 70% or less, with 4 of them in the range 53-67%. This latter range is low enough to have a substantial effect on a determination of $\alpha$ in quasar absorption systems if the same ratios were to be found there. A previous study [@shetrone] also found unusually high abundances of the heavy Mg isotopes in M13 globular-cluster giants. Ratios of $^{24}$Mg:$^{25,26}$Mg were found as low as 50:50 and even 44:56. Similar results were very recently found in [@yli]. According to [@murphy3], raising the heavy isotope concentration to $^{24}$Mg:$^{25,26}$Mg = 63:37 would sufficiently shift the multiplet wavelengths to eliminate the need for a varying fine structure constant. While dispersion in the data could be a symptom of systematic errors often occurring when data from several samples are combined, real dispersion is a signal that the observed abundances have been affected by local events. Available calculations of Type II supernova yields, used as input to chemical evolution models [@tww], as well as observations of the Mg isotopic abundances in relatively low metallicity stars, support the idea that the heavy Mg isotopes were rarer in the past. Nevertheless, this conclusion is very sensitive to the star formation history in the object under consideration. 
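To compare these isotope fractions with chemical-evolution predictions quoted in bracket notation, one can convert directly. The short script below is our own illustration; it uses the terrestrial ratio 79:10:11 quoted above as the solar reference:

```python
import math

def mg_bracket(f24, f25, f26, solar=(79, 10, 11)):
    """[25,26Mg/24Mg]: log10 of (25Mg+26Mg)/24Mg relative to the solar ratio."""
    s24, s25, s26 = solar
    return math.log10(((f25 + f26) / f24) / ((s25 + s26) / s24))

print(round(mg_bracket(84, 8, 8), 2))    # -0.14, the heavy-poor extreme
print(round(mg_bracket(53, 9, 39), 2))   # 0.53, the 26Mg-rich extreme
print(round(mg_bracket(63, 19, 18), 2))  # 0.34, the 63:37 ratio of [@murphy3]
```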
Intermediate mass stars in their giant phase are expected to be efficient producers of $^{25,26}$Mg [@fc; @sll; @lattanzio], and the recent data from a low-metallicity globular cluster [@yong; @yli] indicate substantial variation in the $^{25,26}$Mg/$^{24}$Mg ratios as well as a number of stars highly polluted in the neutron rich Mg isotopes. These observations show isotopic ratios considerably higher than the ratios predicted in zero metallicity supernovae, and the authors conclude that asymptotic giant branch (AGB) stars may be responsible for this contamination. Mg is produced in both Type I and Type II supernovae. In Type II supernovae, it is produced [@ww94] in the carbon and neon burning shells with an abundance somewhat less than 10% of the oxygen abundance produced in massive stars. However, not much $^{25,26}$Mg is produced in conventional stellar evolution at low metallicity. This is because the isotopes $^{25,26}$Mg are produced primarily in the outer carbon layer by the reactions $^{22}$Ne($\alpha$,n)$^{25}$Mg and $^{25}$Mg(n,$\gamma$)$^{26}$Mg. According to Woosley and Weaver [@ww94], solar metallicity models produce final Mg isotope ratios reasonably close to solar values, while more massive stars tend to be slightly enhanced in the heavy isotopes (e.g., the 25 M$_\odot$ model gives a ratio of 65:15:20). Furthermore, the abundance of $^{25,26}$Mg scales linearly with metallicity in the carbon shell. Hence, it would naively be expected that the ejecta from the first generation of supernovae would show a severe paucity of $^{25,26}$Mg. Much of the solar abundance of Mg is produced in Type Ia supernovae, but with Mg to Fe ratios below solar. For example, the models of Thielemann, Nomoto and Yokoi [@tny] give \[Mg/Fe\] $\simeq -1.2$. Due to the absence of free neutrons, essentially no $^{25,26}$Mg are produced in Type Ia supernovae. The results of chemical evolution models tracing the Mg isotopes were presented in [@tww]. 
These models clearly show the effect of the low yields of $^{25,26}$Mg at low metallicity and predict \[$^{25,26}$Mg/$^{24}$Mg\] $< -0.8$ at \[Fe/H\] $< -1$. The result of [@tww] is essentially reproduced in the dashed curve shown in Figure 1 and discussed below. Based on the conventional theory of Mg production (e.g. [@ww94]), models of galactic chemical evolution [@tww], and previously available data on the isotopic abundance of Mg [@gl], the adoption of solar isotopic Mg ratios by [@webb] - [@murphy3] in the many-multiplet analysis would appear to be a safe and conservative one. These models indicate that the $^{24}$Mg:$^{25,26}$Mg ratio was higher than 79:21 in the past, and it was shown [@murphy3] that a higher ratio only strengthens the case for a varying $\alpha$. These models, however, do not include contributions from intermediate mass stars, as we now describe. Recall that increasing the abundances of the heavier Mg isotopes would yield a larger value for $\alpha$, and a ratio of $^{24}$Mg:$^{25,26}$Mg = 63:37 is sufficient to obviate the need for a varying fine structure constant. Recently, it has been appreciated [@fc; @sll; @lattanzio] that intermediate mass stars of low metallicity can also be efficient producers of the heavy Mg isotopes during the thermal-pulsing AGB phase. Heavy magnesium isotopes (and to some extent silicon isotopes as well) are synthesized via two mechanisms, both of which are particularly robust in 2.5-6 M$_\odot$ stars with low metallicity. Such low-metallicity objects are indeed precisely the kinds of objects which ought to produce the abundances observed in QSO narrow-line absorption systems at high redshift. One process is that of hot-bottom burning. This process is also believed to be a copious source of lithium in low metallicity stars (cf. [@iwamoto]). During the AGB phase, stars develop an extended outer convective envelope. Material in this convective envelope is mixed downward to regions of high temperature at the base. 
Of particular interest for this paper is that the base of the envelope is more compact and of higher temperature in low metallicity stars than in stars of solar composition. This can be traced to the decreased opacity of these objects. Furthermore, these stars would also have a shorter lifetime because they are hotter. Low to intermediate mass stars would therefore contribute to the enrichment of the interstellar medium considerably sooner than their higher metallicity counterparts. Because these stars become sufficiently hot ($T \ge 7 \times 10^7$ K), proton capture processes in the Mg-Al cycle become effective. Proton capture on $^{24}$Mg then leads to the production of $^{25}$Mg (from the decay of $^{25}$Al) and to $^{26}$Al (which decays to $^{26}$Mg). A second contributing process occurs deeper in the star during thermal pulses of the helium-burning shell. The helium shell experiences periodic thermonuclear runaways when the ignition of the triple-alpha reaction occurs under electron-degenerate conditions. Due to electron degeneracy, the star is unable to expand and cool. Hence, the temperature rapidly rises until the onset of convection to transport the energy away. During these thermal pulses, $^{22}$Ne is produced by $\alpha$ captures on $^{14}$N, which itself is left over from the CNO cycle. Heavy magnesium isotopes are then produced via the $^{22}$Ne($\alpha$,n)$^{25}$Mg and $^{22}$Ne($\alpha$,$\gamma$)$^{26}$Mg reactions. It was argued recently [@sll] that in intermediate mass stars which experience a 3rd dredge-up, significantly greater amounts of $^{25,26}$Mg are produced. A key point is that even though seed material is less plentiful in low metallicity stars, the reactions are very temperature sensitive. Hence, the increased temperature in the interior of low-metallicity stars more than compensates for the depleted seed material, leading to significant production of the heavy Mg isotopes. 
It has even been argued that these processes may also be net destroyers of $^{24}$Mg [@fc; @lattanzio] due to the extreme temperatures attained. To illustrate the effects of producing enhanced abundances of $^{25,26}$Mg in intermediate mass stars, we show the results of a simple model of galactic chemical evolution which traces the Mg isotopic abundances with and without the AGB source. When combined with the recent data showing such enhancements, our results suggest a plausible alternative for the interpretation of the quasar absorption system data based on the many-multiplet method. For our purposes a simple recalculation of the results of Timmes et al. [@tww] with and without the contribution from intermediate-mass AGB stars is sufficient. This allows us to make a direct comparison with the conclusions of the previous authors. The galactic chemical evolution model of Timmes et al. [@tww] is based upon exponential infall and a Schmidt star formation rate. We utilize a slightly modified model with updated yields [@up1] which nevertheless reproduces the results of [@tww] in the appropriate limit. Hence, we write the evolution of the surface density $\sigma_i$ of an isotope $i$ as, $$\begin{aligned} {d \sigma_i \over dt} &= \int_{0.8}^{40} B(t - \tau(m)) \Psi(m) X_i^S(t-\tau(m))\, dm \nonumber \\ &\quad + \int_{2.5}^{9.0} B(t - \tau(m)) \Psi(m) X_i^{AGB}(t-\tau(m))\, dm \nonumber \\ &\quad + m_{CO} X_i^{Ia}R_{Ia} - B(t) {\sigma_i \over \sigma_{gas}} + \dot \sigma_{i,gas}, \label{sigdot}\end{aligned}$$ where $B(t)$ is the stellar birthrate at time $t$, $\Psi(m)$ is the initial mass function (IMF), $X_i^S$ is the mass fraction of isotope $i$ ejected from single star evolution and Type II supernovae, and $\tau(m)$ is the lifetime of a star of mass $m$. The second term in Eq. \[sigdot\] is the new contribution from AGB stars. For this purpose we adopt the AGB yields of [@lattanzio]. The third term in Eq. 
\[sigdot\] is the contribution from type Ia supernovae, where $m_{CO}$ is the mass of the exploding carbon-oxygen white dwarf, and $R_{Ia}$ is the SNIa supernova rate taken from [@kobayashi]. Finally, the last two terms represent the trapping of elements in new stars, and the galactic infall rate (presumed to be of primordial material). As noted by a number of authors, the yields of heavy magnesium isotopes in AGB stars are extremely temperature sensitive, and hence rather sensitive to the detailed physics of the stellar models. Moreover, there are reasons to expect that the initial mass function at low metallicity could be biased toward intermediate-mass stars. One argument for this is simply that with fewer metals, the cooling is less efficient in the protostellar cloud, so that a more massive cloud is required to form a star. To account for these possibilities, we introduce a modest enhancement of the IMF for intermediate mass stars. Such an enhancement has often been proposed and is motivated by models for star formation at low metallicity. For example, it has been invoked [@wd] to account for a large population of white dwarfs as microlensing objects in the Galactic Halo. In fact, such a population of intermediate mass stars was recently proposed [@foscv] to explain the dispersion of D/H observed in quasar absorption systems [@kirk]. Hence for the creation function, $B\Psi$, we write: $$\begin{aligned} \label{imf} B(t)\Psi(m) &= B_1(t) \Psi_1(m) + B_2(t) \Psi_2(m) \\ &= B_1 m^{-2.35} + (B_2/m) \exp\left(-\log^2(m/5.0)/(2 \sigma^2)\right). \nonumber\end{aligned}$$ The IMF in Eq. (\[imf\]) accounts for a standard Salpeter distribution of stellar masses, $\Psi_1(m)$, with the addition of a lognormal component of stars peaked at 5 M$_\odot$, $\Psi_2$. The dimensionless width, $\sigma$, is taken to be 0.07. This IMF is similar to a Gaussian with a width of $5\sigma = 0.35$ M$_\odot$. 
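As a schematic illustration only, the two-component IMF of Eq. (\[imf\]) can be coded up in a few lines. This sketch is ours, not the authors' code: the function names are invented, the components are left unnormalized, and the time-dependent prefactors $B_1(t)$, $B_2(t)$ are simply passed in as numbers.

```python
import math

SIGMA = 0.07  # dimensionless lognormal width quoted in the text


def salpeter(m):
    """Salpeter component Psi_1(m) ~ m^-2.35 (unnormalized)."""
    return m ** -2.35


def lognormal_burst(m, m_peak=5.0, sigma=SIGMA):
    """Lognormal component Psi_2(m) peaked near m_peak (solar masses)."""
    return (1.0 / m) * math.exp(-math.log(m / m_peak) ** 2 / (2.0 * sigma ** 2))


def creation_function(m, b1_norm, b2_norm):
    """Two-component creation function B(t)Psi(m) of Eq. (imf); the
    time-dependent normalizations B_1(t), B_2(t) enter as b1_norm, b2_norm."""
    return b1_norm * salpeter(m) + b2_norm * lognormal_burst(m)
```

With $\sigma = 0.07$ the lognormal component is extremely narrow, so the burst effectively injects stars only in a thin band around 5 M$_\odot$, as the text describes.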
For the normal stellar component we take the time dependence as $$B_1(t) = (1.0-e^{-t/0.5\,{\rm Gyr}})\sigma_{tot}(t) [{\sigma_{gas} / \sigma_{tot}(t)}]^2~~,$$ while for the intermediate mass component we take $$B_2(t) = 5.5 e^{-t/0.2\,{\rm Gyr}}\sigma_{tot}(t) [{\sigma_{gas} / \sigma_{tot}(t)}]^2~~.$$ This is very similar to the model used in [@foscv]. It contains an early burst of intermediate mass stars peaked at 5 M$_\odot$ which is exponentially suppressed after 0.2 Gyr. The second component describes standard quiescent star formation with a smooth transition from the burst. Figure 1 shows a comparison of our calculated magnesium isotope ratio vs. iron abundance. The solid curve shows the result of the model described above including the AGB contribution. The QSO absorption-line systems in question have metallicities \[Fe/H\] in the range from 0 to $-2.5$, with a typical iron abundance of \[Fe/H\]$ \sim -1.5$. The mean isotopic ratio needed to account for the data of [@webb]-[@murphy3] is $^{25,26}$Mg/$^{24}$Mg = 0.58 (shown by the solid horizontal line) with a $1\sigma$ lower limit of 0.47 (dashed horizontal line). This figure clearly demonstrates that a plausible model is possible in which a sufficient abundance of heavy Mg isotopes can be produced to explain both the observed globular-cluster data and the apparent trends in the many-multiplet data of QSO absorption-line systems at high redshift. The behavior in the evolution of the heavy isotopes can be explained as follows: Initially, the production of $^{25,26}$Mg in the ejecta of intermediate mass stars is delayed by their relatively long lifetimes (compared to very massive stars). Initial contributions to the chemical enrichment of the interstellar medium come from the most massive and shortest lived stars. In this model, the burst of intermediate mass stars begins to produce $^{25,26}$Mg at \[Fe/H\] $\ga -2.5$. 
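The two birthrate components above translate directly into code. Again this is only a sketch under our own naming conventions, with $t$ in Gyr and the surface densities $\sigma_{gas}$, $\sigma_{tot}$ passed as plain arguments:

```python
import math


def b1(t, sigma_gas, sigma_tot):
    """Quiescent birthrate B_1(t) = (1 - e^{-t/0.5 Gyr}) sigma_tot
    (sigma_gas/sigma_tot)^2, with t in Gyr."""
    return (1.0 - math.exp(-t / 0.5)) * sigma_tot * (sigma_gas / sigma_tot) ** 2


def b2(t, sigma_gas, sigma_tot):
    """Burst birthrate B_2(t) = 5.5 e^{-t/0.2 Gyr} sigma_tot
    (sigma_gas/sigma_tot)^2, exponentially suppressed after ~0.2 Gyr."""
    return 5.5 * math.exp(-t / 0.2) * sigma_tot * (sigma_gas / sigma_tot) ** 2
```

The burst term starts at full strength and decays on a 0.2 Gyr timescale, while the quiescent term rises smoothly from zero on a 0.5 Gyr timescale, giving the smooth transition from burst to quiescent star formation described in the text.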
At this stage, during the intermediate mass burst, $^{25}$Mg and $^{26}$Mg are copiously produced relative to $^{24}$Mg, as per the yields of [@lattanzio]. At higher metallicity, the ejecta from the standard population of (massive) stars, which is poor in $^{25,26}$Mg, begins to dilute the ratio relative to $^{24}$Mg, thereby producing the noticeable bump in $^{25,26}$Mg/$^{24}$Mg around \[Fe/H\] $\sim -1.5$. At late times, the effect of the early generation of intermediate mass stars is largely washed away. The dashed curve excludes the AGB yields and the intermediate mass component. It gives a result similar to that of [@tww]. We note that recently the AGB contribution was included in a chemical evolution model [@fenner] for a normal stellar distribution. While the results showed significantly higher abundances of $^{25,26}$Mg relative to $^{24}$Mg than those given by the dashed curve, they were not high enough to account for the claimed variability in $\alpha$. An enhanced early population of intermediate mass stars is therefore necessary. Nevertheless, this seems more plausible than a time varying fundamental constant. We have argued that previous models for the apparent broadening of the Mg multiplet in QSO absorption-line systems may have left out the important possible contribution from the production of heavy magnesium isotopes during the AGB phase of low-metallicity intermediate-mass stars. We have shown that a simple, plausible galactic chemical evolution model can be constructed which explains both the large abundances of heavy Mg isotopes observed in globular clusters and the large abundance necessary to explain the many-multiplet data. We note that the hot bottom burning process in AGB stars is also likely to have altered the Si isotopes by proton captures on aluminum and silicon at the base of the convective envelope. Hence, it is possible that the supporting data from Si isotopes can be explained by this paradigm as well. 
Obviously more detailed work is warranted to clarify the ability of this mechanism to account for the data. Nevertheless, the model presented here is based upon plausible expectations of stellar and galactic evolution and should be taken seriously before demanding an alteration of any fundamental constant at high redshift. We thank C. Cardall for helpful conversations. The work of K.A.O. was partially supported by DOE grant DE–FG02–94ER–40823. Work at the University of Notre Dame was supported by the U.S. Department of Energy under Nuclear Theory Grant DE-FG02-95-ER40934. [0]{} J. K. Webb, V. V. Flambaum, C. W. Churchill, M. J. Drinkwater and J. D. Barrow, Phys. Rev. Lett. [**82**]{} (1999) 884 \[arXiv:astro-ph/9803165\]. M. T. Murphy [*et al.*]{}, Mon. Not. Roy. Astron. Soc. [**327**]{} (2001) 1208 \[arXiv:astro-ph/0012419\]. J. K. Webb [*et al.*]{}, Phys. Rev. Lett. [**87**]{} (2001) 091301 \[arXiv:astro-ph/0012539\]. M. T. Murphy, J. K. Webb, V. V. Flambaum, C. W. Churchill and J. X. Prochaska, Mon. Not. Roy. Astron. Soc. [**327**]{} (2001) 1223 \[arXiv:astro-ph/0012420\]. M. T. Murphy, J. K. Webb and V. V. Flambaum, arXiv:astro-ph/0306483. J. P. Uzan, Rev. Mod. Phys. [**75**]{} (2003) 403 \[arXiv:hep-ph/0205340\]. V. A. Dzuba, V. V. Flambaum, and J. K. Webb, Phys. Rev. A [**59**]{} (1999) 230; Phys. Rev. Lett. [**82**]{} (1999) 888. J. N. Bahcall, C. L. Steinhardt and D. Schlegel, arXiv:astro-ph/0301507. P. Gay and D. L. Lambert, Astrophys. J. [**533**]{} (2000) 260 \[arXiv:astro-ph/9911217\]. F. X. Timmes, S. E. Woosley and T. A. Weaver, Astrophys. J. Suppl. [**98**]{} (1995) 617 \[arXiv:astro-ph/9411003\]. D. Yong, F. Grundahl, D. L. Lambert, P. E. Nissen and M. Shetrone, Astron. Astrophys. [**402**]{} (2003) 985 \[arXiv:astro-ph/0303057\]. K. J. R. Rosman and P. D. P. Taylor, J. Phys. Chem. Ref. Data [**27**]{} (1998) 1275. M. D. Shetrone, Astron. J. [**112**]{} (1996) 2639. D. Yong, D. L. Lambert, and I. I. Ivans, arXiv:astro-ph/0309079. M. Forestini and C. 
Charbonnel, Astron. Astrophys. Supp. [**123**]{} (1996) 241. L. Siess, M. Livio, and J. Lattanzio, Astrophys. J. [**570**]{} (2002) 329 \[arXiv:astro-ph/0201284\]. A. I. Karakas and J. C. Lattanzio, PASA [**20**]{} (2003) 279 and arXiv:astro-ph/0305011. S. E. Woosley and T. A. Weaver, Astrophys. J. Suppl. [**101**]{} (1995) 181. F.-K. Thielemann, K. Nomoto, and K. Yokoi, Astron. Astrophys. [**158**]{} (1986) 17. N. Iwamoto et al., Nucl. Phys. A [**719**]{} (2003) 571. P. Marigo, Astron. Astrophys. [**370**]{} (2001) 194; L. Portinari, C. Chiosi, and A. Bressan, Astron. Astrophys. [**334**]{} (1998) 505. C. Kobayashi, T. Tsujimoto, and K. Nomoto, Ap. J. [**539**]{} (2000) 26. D. S. Ryu, K. A. Olive and J. Silk, Astrophys. J. [**353**]{} (1990) 81; B. D. Fields, G. J. Mathews and D. N. Schramm, Astrophys. J. [**483**]{} (1997) 625 \[arXiv:astro-ph/9604095\]. B. D. Fields, K. A. Olive, J. Silk, M. Casse and E. Vangioni-Flam, Ap. J. [**563**]{} (2001) 653 \[arXiv:astro-ph/0107389\]. D. Kirkman, D. Tytler, N. Suzuki, J. M. O’Meara and D. Lubin, arXiv:astro-ph/0302006. Y. Fenner et al., arXiv:astro-ph/0307445.
--- abstract: 'In this paper, we tackle, in parallel, the accurate and consistent Structure from Motion (SfM) problem, in particular camera registration, at scales far exceeding the memory of a single computer. Different from previous methods, which drastically simplify the parameters of SfM and sacrifice the accuracy of the final reconstruction, we try to preserve the connectivities among cameras by proposing a camera clustering algorithm to divide a large SfM problem into smaller sub-problems in terms of overlapping camera clusters. We then exploit a hybrid formulation that applies the relative poses from local incremental SfM in a global motion averaging framework and produces accurate and consistent global camera poses. Our scalable formulation in terms of camera clusters is highly applicable to the whole SfM pipeline including track generation, local SfM, 3D point triangulation and bundle adjustment. We are even able to reconstruct the camera poses of a city-scale data-set containing more than one million high-resolution images with superior accuracy and robustness evaluated on benchmark, Internet, and sequential data-sets.' author: - Siyu Zhu - Tianwei Shen - Lei Zhou - Runze Zhang - Jinglu Wang - Tian Fang - | Long Quan\ The Hong Kong University of Science and Technology\ [{szhu,tshenaa,lzhouai,rzhangaj,jwangae,tianft,quan}@cse.ust.hk]{} bibliography: - 'egbib.bib' title: Parallel Structure from Motion from Local Increment to Global Averaging --- ![Our scalable and parallel SfM system recovers accurate and consistent 1.21 million camera poses (marked by blue dots) and 1.68 billion sparse 3D points of a typical medium-sized city from 50 megapixel high-resolution images. The figures from left to right zoom successively closer to the final representative buildings. 
[]{data-label="fig:teaser"}](teaser.pdf){width="1.0\linewidth"} ![image](pipeline.pdf){width="1.0\linewidth"} Introduction ============ The well-established large-scale SfM methods [@agarwal2011; @frahm2010; @heinly2015; @klingner2013; @schonberger2015; @snavely2008] have already provided ingenious designs in feature extraction [@lowe2004; @siftgpu07wu], overlapping image detection [@agarwal2011; @frahm2010; @heinly2015; @nister2006scalable], feature matching and verification [@wu2013], and bundle adjustment [@eriksson2016; @ni2007; @wu2011]. However, the large-scale accurate and consistent camera registration problem has not been completely solved, not to mention in a parallel fashion. To fit a whole camera registration problem into a single computer, previous works [@agarwal2011; @frahm2010; @heinly2015; @schonberger2015; @snavely2008] generally drastically discard the connectivities among cameras and tracks by first building a skeletal geometry of iconic images [@li2008] and registering the remaining cameras with respect to the skeletal reconstruction. The other approaches [@havlena2010; @moulon2013; @resch2015; @sweeney2017; @toldo2015] generate exclusive camera clusters for partial reconstruction and finally merge them together. Such losses of camera-to-camera connectivities markedly decrease the accuracy and consistency of the final reconstruction. Instead, this work tries to preserve the camera-to-camera connectivities and their corresponding tracks for a highly accurate and consistent reconstruction. We propose an iterative camera clustering algorithm that splits the original SfM problem into several smaller sub-problems in terms of overlapping clusters of cameras. We then exploit this scalable framework to solve the whole SfM problem, including track generation, local SfM, 3D point triangulation and bundle adjustment far exceeding the memory of a single computer in a parallel scheme. 
To obtain the global camera poses from partial sparse reconstructions, the hybrid SfM methods [@bhowmick2014; @sweeney2017] directly use similarity transformations to roughly merge clusters of cameras together and possibly lead to inconsistent camera poses across clusters. Others [@farenzena2009; @havlena2010; @lhuillier2005; @resch2015; @toldo2015] hierarchically merge camera pairs and triplets and are sensitive to the order of the merging process. Given that the camera-to-camera connectivities are preserved by our clustering algorithm as far as possible, we instead apply the accurate and robust relative poses from incremental SfM [@agarwal2011; @pollefeys2004; @schoenberger2016sfm; @snavely2006; @wu2013] to the global motion averaging framework [@arie-nachimson2012; @brand2004; @carlone2015; @chatterjee2013; @cuibmvc2015; @cui2015; @goldstein2016; @govindu2001; @govindu2004; @hartley2013; @martinec2007; @ozyesil2015; @sinha2010], and obtain the global camera poses. The contributions of our approach are three-fold. First, we introduce a highly scalable framework to handle SfM problems exceeding the memory of a single computer. Second, a camera clustering algorithm is proposed to guarantee that sufficient camera-to-camera connectivities and corresponding tracks are preserved in camera registration. Finally, we present a hybrid SfM method that uses relative motions from incremental SfM to globally average the camera poses and achieve the state-of-the-art accuracy evaluated on benchmark data-sets [@strecha2008benchmarking]. To the best of our knowledge, ours is the first pipeline able to reconstruct highly accurate and consistent camera poses from more than one million high-resolution images in a parallel manner. 
Related Works ============= Based on an initial camera pair, the well-known incremental SfM method [@snavely2006] and its derivations [@agarwal2011; @pollefeys2004; @schoenberger2016sfm; @wu2013] progressively recover the pose of the “next-best-view” by carrying out perspective-three-point (P3P) [@kneip2011] combined with RANSAC [@fischler1981] and non-linear bundle adjustment [@triggs1999] to effectively remove outlier epipolar geometry and feature correspondences. However, frequent intermediate bundle adjustment leads to considerable time consumption and drifting of the optimization, especially on large-scale data-sets. In contrast, the global SfM methods [@arie-nachimson2012; @brand2004; @carlone2015; @chatterjee2013; @cuibmvc2015; @cui2015; @goldstein2016; @govindu2001; @govindu2004; @hartley2013; @martinec2007; @ozyesil2015; @sinha2010] solve all the camera poses simultaneously from the available relative poses, the computation of which is highly parallel, and can effectively avoid drifting errors. Compared with incremental SfM methods, global SfM methods are, however, more sensitive to possible erroneous epipolar geometry despite the various delicate designs of epipolar geometry filters [@cui2015; @govindu2006; @heinly2014; @jiang2013; @moulon2013; @roberts2011; @wilson2013; @wilson2014; @zach2008; @zach2010]. In this paper, we embrace the advantages of both incremental and global SfM methods and exploit a hybrid SfM formulation. The previous hybrid methods [@farenzena2009; @havlena2010; @lhuillier2005; @resch2015; @toldo2015] are limited to small-scale or sequential data-sets. Havlena et al. [@havlena2010] form the final 3D model by merging atomic 3D models from camera triples together, but the merging process, which depends solely on common 3D points, is not robust. Bhowmick et al. [@bhowmick2014] directly estimate the similarity transformations to combine camera clusters but produce possibly inconsistent camera poses across clusters. 
The work in [@sweeney2017] incrementally merges multiple cameras while suffering from severe drifting errors. In contrast, we apply the robust relative poses from partial reconstruction by local incremental SfM to the global motion averaging framework and provide highly consistent and accurate camera poses. The work in [@sweeney2017] optimizes the relative poses by solving a single global optimization problem rather than multiple local problems, and suffers from scalability issues on very large-scale data-sets. To tackle the scalability problem of large-scale SfM, previous works generally exploit a skeletal [@snavely2008] or simplified graph [@agarwal2011; @frahm2010; @heinly2015; @schonberger2015] of iconic images [@li2008]. Although millions of densely sampled Internet images can be roughly registered, numerous geometry connectivities are discarded. Therefore, such approaches can hardly guarantee a highly accurate and consistent reconstruction in our scenario consisting of uniformly captured high-resolution images. The hybrid SfM pipelines [@bhowmick2014; @havlena2010] employing exclusive clusters of cameras lose a large number of connectivities among cameras and tracks during the cluster partition as well. Instead, our proposed camera clustering algorithm produces overlapping clusters of cameras, guaranteeing that sufficient camera-to-camera connectivities and corresponding tracks are validated and preserved in camera registration, and consequently achieves superior reconstruction accuracy and consistency. 
Scalable Formulation ==================== Preliminary ----------- We start with a given set of images $\mathcal{I}=\{I_i\}$, their corresponding SIFT [@lowe2004] features $\mathcal{F} = \{\mathcal{F}_i\}$ and matching correspondences $\mathcal{M}=\{\mathcal{M}_{ij}\,|\,\mathcal{M}_{ij}\subset\mathcal{F}_i\times\mathcal{F}_j,\, i \neq j\}$ where $\mathcal{M}_{ij}$ is a set of inlier feature correspondences verified by epipolar geometry [@hartley2003multiple] between two images $I_i$ and $I_j$. Each image $I_i$ is associated with a camera $C_i\in\mathcal{C}$. The target of this paper is then to compute the global camera poses of all the cameras $\mathcal{C}=\{C_i\}$ with projection matrices denoted by $\{\mathbf{P}_i|\mathbf{P}_i=\mathbf{K}_i[\mathbf{R}_i|-\mathbf{R}_i\mathbf{c}_i]\}$. Camera Clustering {#sec:camera_clustering} ----------------- As the problem of SfM, in particular camera registration, scales up, the following two problems emerge. First, the problem size gradually exceeds the memory of a single computer. Second, the high degree of parallelism of our distributed computing system can hardly be fully utilized. We therefore introduce a camera clustering algorithm to split the original SfM problem into several smaller manageable sub-problems in terms of clusters of cameras and associated images. Specifically, our goal of camera clustering is to find camera clusters such that all the SfM operations of each cluster can be fitted into a single computer for efficient processing (**size** constraint) and that all the clusters have sufficient overlapping cameras with adjacent clusters to guarantee a complete reconstruction when their corresponding partial reconstructions are merged together in motion averaging (**completeness** constraint). 
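To make the notation concrete, the projection matrix $\mathbf{P}_i=\mathbf{K}_i[\mathbf{R}_i|-\mathbf{R}_i\mathbf{c}_i]$ can be assembled and applied as follows. This is a generic NumPy sketch with invented function names, not part of the authors' pipeline:

```python
import numpy as np


def projection_matrix(K, R, c):
    """Build P = K [R | -Rc] from intrinsics K (3x3), world-to-camera
    rotation R (3x3), and camera center c (3,) in world coordinates."""
    Rt = np.hstack([R, -R @ c.reshape(3, 1)])  # 3x4 extrinsics [R | -Rc]
    return K @ Rt


def project(P, X):
    """Project a 3D point X (3,) to pixel coordinates via homogeneous
    coordinates, dividing out the depth."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

For example, with $\mathbf{K}=\mathrm{diag}(f,f,1)$, $\mathbf{R}=\mathbf{I}$ and $\mathbf{c}=\mathbf{0}$, a point $(X,Y,Z)$ projects to $(fX/Z,\,fY/Z)$, the usual pinhole model.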
### Clustering Formulation In order to encode the relationships between all the cameras and associated tracks, we introduce a camera graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$, in which each node $V_i\in\mathcal{V}$ represents a camera $C_i\in\mathcal{C}$, and each edge $e_{ij}\in\mathcal{E}$ with weight $w(e_{ij})$ connects two different cameras $C_i$ and $C_j$. In the subsequent scalable SfM, both local incremental SfM and bundle adjustment [@eriksson2016] encourage cameras with great numbers of common features to be grouped together for a robust geometry estimation. We therefore define the edge weight $w(e_{ij})$ as the number of feature correspondences, namely $w(e_{ij}) = |\mathcal{M}_{ij}|$. Our target is then to partition all the cameras denoted by a graph $\mathcal{G}=\{\mathcal{V},\mathcal{E}\}$ into a set of camera clusters denoted by $\{\mathcal{G}_k|\mathcal{G}_k=\{\mathcal{V}_k,\mathcal{E}_k\}\}$ while satisfying the following **size** and **completeness** constraints. ![ The average relative rotation and translation errors compared with the ground-truth data for different choices of the number of cameras in a cluster for the four Internet data-sets [@wilson2014]. []{data-label="fig:cluster_size_motion_error"}](cluster_size_motion_error.pdf){width="1.0\linewidth"} #### Size constraint We encourage the number of cameras of each camera cluster to be small and of similar size. First, each camera cluster should be small enough to be fit into a single computer for efficient local SfM operations. Particularly for local incremental SfM, a comparatively small-scale problem can effectively avoid redundant time-consuming intermediate bundle adjustment [@triggs1999] and possible drifting. Second, a balanced problem partition encourages full utilization of the distributed computing system. 
The size constraint is therefore defined as $$\begin{aligned} &\forall{\mathcal{G}_i\in\{\mathcal{G}_k\}},\; |\mathcal{V}_i| \leq \Delta_{\text{up}}\\ &\forall{\mathcal{G}_i,\mathcal{G}_j}\in \{\mathcal{G}_k\},\; |\mathcal{V}_i| \simeq |\mathcal{V}_j| \end{aligned}$$ where $\Delta_{\text{up}}$ is the upper bound of the number of cameras of a cluster. We can see from Figure \[fig:cluster\_size\_motion\_error\] that both the average relative rotation and translation errors computed from local incremental SfM in a cluster first decrease markedly and then stabilize as the number of cameras in a cluster increases. The acceptable number of cameras in a cluster therefore lies in a large range, and we choose $\Delta_{\text{up}}=100$ for the trade-off between accuracy and efficiency. #### Completeness constraint The completeness constraint is introduced to preserve camera-to-camera connectivities, which provide relative poses for motion averaging to generate global camera poses. However, completely preserving camera-to-camera connectivities would introduce many repeated cameras in different clusters, and the size constraint could hardly be satisfied [@bourse2014]. We therefore define the completeness ratio of a camera cluster $\mathcal{G}_i$ as $\delta(\mathcal{G}_i) = \frac{\sum_{{i \neq j}}{|\mathcal{V}_i \cap \mathcal{V}_j}|}{|\mathcal{V}_i|}$ which quantifies the degree to which cameras covered in one camera cluster $\mathcal{G}_i$ are also covered by other camera clusters. It limits the number of repeated cameras and guarantees that all the clusters have sufficient overlapping cameras with adjacent clusters for a complete reconstruction. Then, we have $$\forall{\mathcal{G}_i} \in \{\mathcal{G}_k\},\;\;\delta(\mathcal{G}_i)\geq\delta_c.$$ As shown in Figure \[fig:upper\_bound\_completeness\_ratio\], a large completeness ratio $\delta_c$ encourages less loss of camera-to-camera connectivities but results in more duplicated cameras in different clusters. 
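Once clusters are represented as sets of camera ids, the completeness ratio $\delta(\mathcal{G}_i)$ is straightforward to compute. A minimal sketch of our own (not the authors' code), with clusters as Python sets:

```python
def completeness_ratio(clusters, i):
    """Completeness ratio of cluster i: the summed overlap of its camera
    set with every other cluster, normalized by its own size, i.e.
    delta(G_i) = sum_{j != i} |V_i & V_j| / |V_i|."""
    vi = clusters[i]
    overlap = sum(len(vi & vj) for j, vj in enumerate(clusters) if j != i)
    return overlap / len(vi)
```

Note that a camera shared by several other clusters is counted once per sharing cluster, so $\delta(\mathcal{G}_i)$ can exceed 1; fully exclusive clusters give $\delta(\mathcal{G}_i)=0$, matching the $\delta_c=0$ case discussed below.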
Balancing the trade-off between accuracy and efficiency, we choose $\delta_c=0.7$. Here, less than $5\%$ of camera-to-camera connectivities are discarded and approximately 1.8 times the original number of cameras are reconstructed in local SfM. In contrast, exclusive camera clustering ($\delta_c=0$) leads to a loss of $40\%$ of camera-to-camera connectivities. ![Left: the ratio of the total number of cameras of all the clusters to the original number of cameras given different upper bounds of cluster camera numbers $\Delta_\text{up}$ and completeness ratio $\delta_{c}$. Right: the ratio of discarded edges given different $\Delta_\text{up}$ and $\delta_{c}$. The plot is based on the statistics of the city-scale data-sets. []{data-label="fig:upper_bound_completeness_ratio"}](total_camera_num.pdf){width="1.0\linewidth"} ### Clustering Algorithm We propose a two-step algorithm to solve the camera clustering problem. A sample output of this algorithm is illustrated in Figure \[fig:camera\_clustering\]. #### 1. Graph division We guarantee the size constraint by recursively splitting any camera cluster violating the size constraint into smaller components. Starting with the camera graph $\mathcal{G}$, we iteratively apply the normalized-cut algorithm [@dhillon2007], which guarantees an unbiased vertex partition, to divide any sub-graph $\mathcal{G}_i$ not satisfying the size constraint into two balanced sub-graphs $\mathcal{G}_{i_1}$ and $\mathcal{G}_{i_2}$, until no sub-graph violates the size constraint. Intuitively, camera pairs with great numbers of common features have high edge weights and are less likely to be cut. #### 2. Graph expansion We enforce the completeness constraint by introducing sufficient overlapping cameras between adjacent camera clusters. 
More specifically, we first sort the edges $\mathcal{E}_\text{dis}$ discarded in graph division by edge weight $w(e_{ij})$ in descending order, and iteratively add each edge $e_{ij}$ and its associated vertices $V_i$ and $V_j$ randomly to one of its connected sub-graphs $\mathcal{G}(V_i)$ and $\mathcal{G}(V_j)$ if the completeness ratio of the sub-graph is smaller than $\delta_c$. Here, $\mathcal{G}(V_i)$ denotes the sub-graph containing vertex $V_i$. This process is iterated until no additional edges can be added to any of the sub-graphs. It is noteworthy that the completeness constraint is not difficult to satisfy after adding a small subset of discarded edges and associated vertices. The size constraint may be violated after graph expansion, and we iterate between graph division and graph expansion until both constraints are satisfied. ![ The visual results of our camera clustering algorithm before graph expansion on the City-B data-set. []{data-label="fig:camera_clustering"}](camera_clustering.pdf){width="1.0\linewidth"} Camera Cluster Categorization ----------------------------- The camera clusters from the clustering algorithm are divided into two categories, namely **independent** and **interdependent** camera clusters. We define the final camera clusters from our clustering algorithm as interdependent camera clusters since they share overlapping cameras with adjacent clusters. Such interdependent clusters are used in subsequent parallel local incremental SfM. Accordingly, we define all the fully exclusive camera clusters before graph expansion as independent camera clusters, which are used in the following parallel 3D point triangulation and parallel bundle adjustment. We also leverage the independent camera clusters to build a hierarchical camera cluster tree $\mathcal{T}_c$, in which each leaf node corresponds to an independent camera cluster and each non-leaf node is associated with an intermediate camera cluster during the recursive binary graph division. 
The hierarchical camera cluster tree is an important structure in the subsequent parallel track generation. Next, we build on the camera clusters from our clustering algorithm to implement a scalable SfM pipeline. **Input:** $\mathcal{G}=\{\mathcal{E},\mathcal{V}\}$ **Output:** $\mathbb{G}_\text{out}=\{\mathcal{G}_k|\mathcal{G}_k=\{\mathcal{E}_k,\mathcal{V}_k\}\}$ $\mathbb{G}_\text{in} \gets \{\mathcal{G}\}, \mathbb{G}_\text{out} \gets \emptyset$ $\mathbb{G}_\text{size} \gets \emptyset$ Choose $\mathcal{G}_i=\{\mathcal{V}_i,\mathcal{E}_i\}$ from $\mathbb{G}_\text{in}$ $\mathbb{G}_\text{in}\gets\mathbb{G}_\text{in}-\{\mathcal{G}_i\}$ $\mathbb{G}_\text{size}\gets\mathbb{G}_\text{size}+\{\mathcal{G}_i\}$ Divide $\mathcal{G}_i$ into $\mathcal{G}_{i_1}$ and $\mathcal{G}_{i_2}$ by normalized-cut [@dhillon2007] $\mathbb{G}_\text{in}\gets\mathbb{G}_\text{in}+\{\mathcal{G}_{i_1}\}+\{\mathcal{G}_{i_2}\}$ $\mathcal{E}_\text{dis}\gets$ edges discarded in graph division Select one from $\mathcal{G}(V_i),\;\mathcal{G}(V_j)\in\mathbb{G}_\text{size}$ such that $\delta(\mathcal{G}(V_i))<\delta_c$, $\delta(\mathcal{G}(V_j))\!\!<\!\!\delta_c$ uniformly at random, where $\mathcal{G}(V_i)$ is the sub-graph containing $V_i$ and $\delta(\mathcal{G})$ measures the completeness ratio of $\mathcal{G}$ Add $e_{ij}$ and $V_j$ to $\mathcal{G}(V_i)$ Add $e_{ij}$ and $V_i$ to $\mathcal{G}(V_j)$ $\mathbb{G}_\text{out}\gets\mathbb{G}_\text{out}+\{\mathcal{G}_i\}$ $\mathbb{G}_\text{in}\gets\mathbb{G}_\text{in}+\{\mathcal{G}_i\}$ Scalable Implementation ======================= Track Generation {#sec:track_generation} ---------------- The first step of scalable SfM is to use the pair-wise feature correspondences to generate globally consistent tracks across all the images, and the problem is solved by a standard Union-Find [@moulon2012] algorithm. 
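A minimal in-memory version of such Union-Find track generation might look as follows. This is a generic sketch with invented names, not the paper's implementation, and it deliberately ignores the out-of-core and hierarchical aspects addressed by the camera cluster tree:

```python
class UnionFind:
    """Union-Find with path compression, as used to merge pairwise
    feature matches into globally consistent tracks."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra


def build_tracks(matches):
    """matches: iterable of ((img_i, feat_a), (img_j, feat_b)) inlier
    correspondences. Returns tracks as lists of (image, feature) pairs."""
    uf = UnionFind()
    for fa, fb in matches:
        uf.union(fa, fb)
    tracks = {}
    for f in list(uf.parent):
        tracks.setdefault(uf.find(f), []).append(f)
    return list(tracks.values())
```

Each connected component of the match graph becomes one track, i.e. one putative 3D point observed in several images; a real pipeline would additionally reject inconsistent tracks that contain two features from the same image.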
However, as the size of the input images scales up, it gradually becomes impossible to concurrently load all the feature and associated match files into the memory of a single computer for track generation. We therefore use the hierarchical camera cluster tree $\mathcal{T}_c$ to perform track generation and avoid caching all the features and correspondences in memory at once. In detail, we define $N_i^k$ as a node in the $k$th level of $\mathcal{T}_c$, and $N_{i_1}^{k+1}$ and $N_{i_2}^{k+1}$ are respectively the left and right child of $N_i^k$. For the track generation sub-problem associated with sibling leaf nodes ${N}^{k+1}_{i_1}$ and ${N}^{k+1}_{i_2}$, we load all their features and correspondences into memory, generate the tracks corresponding to ${N}_i^k$, release the memory of features and correspondences, and save the tracks associated with ${N}^{k}_{i}$ into storage. As for two sibling non-leaf nodes $N^{l+1}_{j_1}$ and $N^{l+1}_{j_2}$, we only load the correspondences and tracks associated with both nodes, merge them, and save the tracks corresponding to $N^{l}_{j}$ into storage. Such processes are iteratively performed from the bottom up until the globally consistent tracks with respect to the root node of $\mathcal{T}_c$ are obtained. All the track generation processes associated with each level of $\mathcal{T}_c$ are handled in parallel under a standard MapReduce [@dean2008] framework. ![ The comparison of the accuracy of the generated relative rotations and relative translations between the motion averaging methods [@moulon2013; @theia-manual] and our approach. The cumulative distribution functions (CDF) are based on the Internet data-sets [@wilson2014]. 
[]{data-label="fig:relative_error"}](relative_error.pdf){width="1.0\linewidth"}

Local Incremental SfM {#sec:local_sfm}
---------------------

For the cameras and corresponding tracks of every interdependent camera cluster $\mathcal{C}^k$, denoted by the sub-graph $\mathcal{G}_k\!=\!\{\mathcal{E}_k,\mathcal{V}_k\}$, we perform local incremental SfM in parallel. Local incremental SfM is vital to the subsequent motion averaging in two respects. First, RANSAC-based filters [@fischler1981] and repeated partial bundle adjustment [@triggs1999] remove erroneous epipolar geometry and feature correspondences. Second, incremental SfM performs robust $N$-view ($N\geq3$) pose estimation [@kneip2011; @nister2004efficient] and produces more accurate and robust relative rotations and translations than the generally adopted essential matrix based [@arie-nachimson2012; @brand2004; @govindu2001; @ozyesil2015] and trifocal tensor based [@jiang2013; @moulon2013] methods, even for camera pairs with weak association, large differences in viewing angle, and great scale variation. Figure \[fig:relative\_error\] and the statistics of the benchmark data-sets [@strecha2008benchmarking] ($\delta \bar{R}$ and $\delta \bar{t}$) in Table \[tab:benchmark\] confirm this statement.

Motion Averaging {#sec:motion_averaging}
----------------

Now, all the relative motions of camera pairs with feature correspondences from local incremental SfM are used to compute the global camera poses. The work in [@chatterjee2013] is first adopted for efficient and robust global rotation averaging.

### Translation Averaging {#sec:translation_averaging}

Translation averaging is challenging for two reasons. First, it is difficult to discard erroneous epipolar geometry resulting from noisy feature correspondences. Second, an essential matrix can only encode the direction of a relative translation [@ozyesil2015].
Thanks to local incremental SfM, the majority of erroneous epipolar geometry is filtered out, and the only remaining problem is the scale ambiguity. The work in [@cui2015] first globally averages the scales of all the relative translations and then performs a convex $\ell1$ optimization to solve scale-aware translation averaging. [Ö]{}zyesil et al. [@ozyesil2015] obtain a convex “least unsquared deviations" formulation by introducing a complicated quadratic constraint. Given that all the relative translations $\{\mathbf{t}_{ij}^k\}$ from one camera cluster $\mathcal{C}_k$ are up to the same scale factor $\alpha_k$, we instead formulate our translation averaging as a convex $\ell1$ problem that solves for the camera positions and cluster scales simultaneously. The scale factors computed per cluster are clearly more robust than the pair-wise scales [@cui2015; @ozyesil2015] computed per relative pose, especially for camera pairs with weak association. ![The comparison of the camera position errors with the state-of-the-art translation averaging methods [@cui2015; @ozyesil2015; @theia-manual; @wilson2014] on both benchmark [@strecha2008benchmarking] (left) and Internet [@wilson2014] (right) data-sets given the same input global rotations from [@chatterjee2013] and relative translations from local incremental SfM.[]{data-label="fig:position_error"}](position_error.pdf){width="1.0\linewidth"} With the global rotations $\{\mathbf{R}_i\}$ computed from [@chatterjee2013] fixed, a linear equation in the camera positions can be obtained as: $$\label{eq:translation_averaging} \alpha_k\mathbf{t}_{ij}^k = \mathbf{R}_j(\mathbf{c}_i-\mathbf{c}_j), \vspace{-1mm}$$ where $\mathbf{t}^k_{ij}$ is the relative translation between two cameras $C_i$ and $C_j$ estimated in the $k$th cluster and associated with a scale $\alpha_k$. Equation \[eq:translation\_averaging\] can be rewritten as: $\alpha_k\mathbf{R}_j^T\mathbf{t}_{ij}^k = \mathbf{c}_i-\mathbf{c}_j$.
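A minimal numerical sketch of this scale-aware formulation follows. It stacks the per-pair constraints over hypothetical toy observations, fixes the gauge by setting the first scale to $1$ and the first camera to the origin, and approximates the convex $\ell1$ objective by iteratively reweighted least squares; the paper itself uses a dedicated sparse $\ell1$ solver, so this dense version is only an illustration.

```python
import numpy as np

def translation_averaging_l1(obs, M, N, iters=50, delta=1e-6):
    """Scale-aware translation averaging sketch.
    obs: list of (k, i, j, p) with p = R_j^T t_ij^k, so that
         alpha_k * p = c_i - c_j should hold.
    M clusters, N cameras.  Gauge: alpha_0 = 1, c_0 = 0."""
    nv = (M - 1) + 3 * (N - 1)          # unknowns: alpha_1..M-1, c_1..N-1
    rows, rhs = [], []
    for k, i, j, p in obs:
        p = np.asarray(p, dtype=float)
        for d in range(3):              # one scalar residual per coordinate
            g = np.zeros(nv)
            b = 0.0
            if k > 0:
                g[k - 1] = p[d]         # unknown scale alpha_k
            else:
                b -= p[d]               # fixed alpha_0 = 1 moves p to the rhs
            if i > 0:
                g[(M - 1) + 3 * (i - 1) + d] -= 1.0   # -c_i
            if j > 0:
                g[(M - 1) + 3 * (j - 1) + d] += 1.0   # +c_j
            rows.append(g)
            rhs.append(b)
    G, h = np.array(rows), np.array(rhs)
    z = np.linalg.lstsq(G, h, rcond=None)[0]          # l2 initialization
    for _ in range(iters):                            # IRLS for the l1 norm
        w = np.sqrt(1.0 / np.maximum(np.abs(G @ z - h), delta))
        z = np.linalg.lstsq(w[:, None] * G, w * h, rcond=None)[0]
    scales = np.concatenate([[1.0], z[:M - 1]])
    positions = np.vstack([np.zeros(3), z[M - 1:].reshape(N - 1, 3)])
    return scales, positions
```

On noise-free synthetic observations generated from known positions and cluster scales, the routine recovers both exactly up to the fixed gauge.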
Then we collect all the cluster scales and camera positions into the vectors $\mathbf{x}_{s}=[\alpha_1,\cdots,\alpha_M]^T$ and $\mathbf{y}_{c} = [\mathbf{c}_1,\cdots,\mathbf{c}_N]^T$ respectively, and we have: $$\underbrace{[\;\cdots\; \mathbf{p} \;\cdots\;]}_{\mathbf{A}^{k}_{ij}}\mathbf{x}_{s} = \underbrace{[\;\cdots\; \mathbf{I} \;\cdots\; -\mathbf{I} \;\cdots\;]}_{\mathbf{B}_{ij}}\mathbf{y}_{c}. \vspace{-1mm}$$ Here, $\mathbf{A}^{k}_{ij}$ is a $3 \times M$ matrix whose $k$th column is $\mathbf{p}=\mathbf{R}_j^T\mathbf{t}_{ij}^k$ and whose other columns are $\mathbf{0}_{3 \times 1}$. $\mathbf{B}_{ij}$ is a $3\times 3N$ matrix whose $i$th and $j$th $3\times3$ blocks are $\mathbf{I}_{3\times3}$ and $-\mathbf{I}_{3\times3}$ respectively, and whose other blocks are $\mathbf{0}_{3\times3}$. Then, we collect all such linear equations from the available camera-to-camera connectivities into the following single linear equation system: $$\label{eq:translation_linear_equation} \mathbf{A}\mathbf{x}_s=\mathbf{B}\mathbf{y}_c, \vspace{-1mm}$$ where $\mathbf{A}$ and $\mathbf{B}$ are sparse matrices obtained by stacking all the matrices $\mathbf{A}^{k}_{ij}$ and $\mathbf{B}_{ij}$ respectively. [0.33]{} ![The visual comparison between the approaches of exclusive camera clusters [@bhowmick2014; @sweeney2017] and our approach using interdependent camera clusters on the sequential data-sets with close-loop [@cui2015]. []{data-label="fig:closeloop"}](closeloop0.pdf "fig:"){width="1.0\linewidth"} [0.31]{} ![The visual comparison between the approaches of exclusive camera clusters [@bhowmick2014; @sweeney2017] and our approach using interdependent camera clusters on the sequential data-sets with close-loop [@cui2015].
[]{data-label="fig:closeloop"}](closeloop1.pdf "fig:"){width="1.0\linewidth"} [0.30]{} ![The visual comparison between the approaches of exclusive camera clusters [@bhowmick2014; @sweeney2017] and our approach using interdependent camera clusters on the sequential data-sets with close-loop [@cui2015]. []{data-label="fig:closeloop"}](closeloop2.pdf "fig:"){width="1.0\linewidth"}  

After removing the gauge freedom by setting $\mathbf{c}_1=\mathbf{0}_{3\times 1}$ and $\alpha_1=1$, we can obtain the positions of all the cameras by solving the following convex $\ell1$ optimization problem, which is more robust to outliers than $\ell2$ methods and converges rapidly to a global optimum: $$\label{eq:translation_averaging_sum} \arg\min_{\mathbf{x}_s,\mathbf{y}_c}{||\mathbf{A}\mathbf{x}_s-\mathbf{B}\mathbf{y}_c||_{1}}. \vspace{-1mm}$$ Since the baseline length is encoded by the changes of cluster scales, our translation averaging algorithm can effectively handle the scale ambiguity, especially for collinear camera motion, and is much better-posed than the essential matrix based approaches [@brand2004; @govindu2001; @ozyesil2015; @wilson2014], which only consider the directions of relative translations and are limited to parallel rigid graphs [@ozyesil2015].

Bundle Adjustment {#sec:bundle_adjustment}
-----------------

For each independent camera cluster, we triangulate [@hartley2003multiple] the corresponding 3D points with sufficiently many visible cameras ($\geq 3$) from the feature correspondences validated by local incremental SfM, based on the averaged global camera geometry. Then, we follow the state-of-the-art algorithm proposed by Eriksson et al. [@eriksson2016] for distributed bundle adjustment. Since this work [@eriksson2016] places no restriction on the partition of cameras, we take the independent camera clusters with their associated cameras, tracks and projections as the sub-problems of the objective function of bundle adjustment.
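The triangulation step can be sketched with the standard linear (DLT) method from [@hartley2003multiple]: each visible view contributes two rows to a homogeneous system whose null vector is the 3D point. The version below assumes $3\times4$ projection matrices and pixel observations; it is an illustration, not the paper's exact implementation.

```python
import numpy as np

def triangulate_dlt(Ps, xs):
    """Linear (DLT) triangulation of one 3D point.
    Ps: list of 3x4 projection matrices; xs: list of (u, v) observations.
    Returns the inhomogeneous 3D point as a 3-vector."""
    A = []
    for P, (u, v) in zip(Ps, xs):
        A.append(u * P[2] - P[0])   # each view contributes two rows
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                      # null vector of the stacked system
    return X[:3] / X[3]
```

With three or more noise-free views, the recovered point matches the ground truth up to numerical precision.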
Discussion {#sec:motion_averaging_discussion}
----------

Given the same global camera rotations from [@chatterjee2013] and relative translations from local SfM, Figure \[fig:position\_error\] verifies that our translation averaging algorithm recovers more accurate camera positions than the state-of-the-art translation averaging methods [@cui2015; @moulon2013; @ozyesil2015; @theia-manual; @wilson2014]. Although our clustering algorithm can hardly achieve an optimal solution with no loss of relative motions compared with the original camera graph, the statistical comparison shown in Table \[tab:benchmark\] still demonstrates the superior accuracy of camera poses from our pipeline over the state-of-the-art SfM approaches [@cui2015; @moulon2013; @theia-manual; @wu2013] on the benchmark data-set [@strecha2008benchmarking]. Figure \[fig:closeloop\] shows the comparison with the hybrid SfM methods [@bhowmick2014; @sweeney2017] using exclusive camera clusters on the data-sets [@cui2015] consisting of sequential images with close-loop. We take our independent camera clusters as the clusters adopted in [@bhowmick2014; @sweeney2017]. We can see that our global method with interdependent camera clusters successfully closes the loop while those [@bhowmick2014; @sweeney2017] with exclusive camera clusters fail. The statistical comparison with the hybrid SfM methods [@bhowmick2014; @sweeney2017] is shown in Table \[tab:close\_loop\]. To measure the consistency of camera poses, we use three metrics: the epipolar error, i.e. the median distance between the features and the corresponding epipolar lines computed from the feature correspondences of all the camera pairs; the number of camera pairs connected by 3D points; and the number of final 3D points.
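The epipolar-error metric can be computed as in the following sketch, assuming a known fundamental matrix per camera pair; symmetrizing the point-to-line distance over both images is one reasonable reading of the metric, and is an assumption here.

```python
import numpy as np

def epipolar_error(F, pts1, pts2):
    """Median symmetric distance between points and their epipolar lines.
    F: 3x3 fundamental matrix with x2^T F x1 = 0;
    pts1, pts2: (n, 2) arrays of matched pixel coordinates."""
    n = len(pts1)
    x1 = np.hstack([pts1, np.ones((n, 1))])
    x2 = np.hstack([pts2, np.ones((n, 1))])
    l2 = x1 @ F.T   # epipolar lines in image 2: l2 = F x1
    l1 = x2 @ F     # epipolar lines in image 1: l1 = F^T x2
    d2 = np.abs(np.sum(x2 * l2, axis=1)) / np.linalg.norm(l2[:, :2], axis=1)
    d1 = np.abs(np.sum(x1 * l1, axis=1)) / np.linalg.norm(l1[:, :2], axis=1)
    return float(np.median(0.5 * (d1 + d2)))
```

For a pure-translation pair the epipolar lines are horizontal, so a perfectly matched point yields zero error and any vertical offset is measured directly.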
Since our clustering algorithm introduces sufficient camera connectivities for a fully constrained global motion averaging rather than directly merging exclusive camera clusters [@bhowmick2014; @sweeney2017], the epipolar error of our approach is only $10\%\!-\!20\%$ of that of [@bhowmick2014; @sweeney2017], the number of connected camera pairs is $1.8\!-\!4.5$ times that of [@bhowmick2014; @sweeney2017], and we generate $1.3\!-\!3.0$ times more 3D points than [@bhowmick2014; @sweeney2017]. Table \[tab:close\_loop\] also provides the results of our approach with different completeness ratios. We can see that a larger completeness ratio, namely more camera-to-camera connectivities, guarantees a more accurate and complete sparse reconstruction.

Experiments
===========

#### Implementation

We implement our approach in C++ and perform all the experiments on a distributed computing system consisting of 10 computers, each of which has a 6-core (12-thread) Intel 3.40 GHz processor and 128 GB memory. All the computers are deployed on a scalable network file system similar to the Hadoop File System. We implement a multicore bundle adjustment solver similar to PBA [@wu2011] to solve all the non-linear optimization problems, and an $\ell1$ solver similar to [@emmanuel2005] to solve Equation \[eq:translation\_averaging\_sum\]. We also utilize Graclus [@dhillon2007] to handle the normalized-cut problem.

#### Benchmark data-sets

The statistics of the comparisons on the benchmark data-sets [@strecha2008benchmarking] with absolute measurements of camera poses between the state-of-the-art methods [@cui2015; @moulon2013; @theia-manual; @wu2013] and our proposed method are shown in Table \[tab:benchmark\]. Since the number of cameras of the largest benchmark data-set CastleP30 is only 30, we set $\Delta_\text{up}=7$ rather than the $\Delta_\text{up}=100$ adopted by our pipeline to ensure that valid camera clusters can be generated.
Specifically, we can see that the average errors of relative rotations ($\delta\bar{R}$), relative translations ($\delta\bar{t}$), and corresponding camera positions ($\bar{x}$) from our algorithm are all clearly smaller than those of [@cui2015; @moulon2013; @theia-manual; @wu2013]. ![ The SfM visual results of the Internet data-sets [@wilson2014]. []{data-label="fig:internet"}](internet_data.pdf){width="1.0\linewidth"}

#### Internet data-sets

Table \[tab:internet\] shows the statistical comparisons with the state-of-the-art SfM pipelines [@cui2015; @schoenberger2016sfm; @sweeney2016; @theia-manual; @wilson2014] on the Internet data-sets. We can see that our approach achieves the best accuracy, measured by the median camera position errors (in meters) after bundle adjustment, in 8 out of 13 data-sets. Moreover, we register the most cameras in 4 out of 13 data-sets. Among these methods [@cui2015; @schoenberger2016sfm; @sweeney2016; @theia-manual; @wilson2014], Theia SfM [@theia-manual] is the most efficient. We can therefore conclude that our SfM pipeline achieves slightly better accuracy, with efficiency comparable to the state-of-the-art methods [@cui2015; @schoenberger2016sfm; @sweeney2016; @theia-manual; @wilson2014], on the data-sets captured in the wild. ![image](city_scale_results.pdf){width="1.0\linewidth"}

#### City-scale data-sets

The statistics of the input city-scale data-sets are shown in Table \[tab:cityscale\]. The image resolution ranges from $24$ to $50$ megapixels and the average number of detected features per image ranges from 73.0K to 170.1K. The estimated peak memory for the largest City-A data-set is 2.9TB, 39.81GB, and 10.2TB in track generation, motion averaging, and bundle adjustment respectively if handled by the standard SfM pipeline [@moulon2013] on a single computer, which obviously exceeds the 128GB memory of our servers.
The same goes for the other standard SfM pipelines [@schoenberger2016sfm; @theia-manual; @wu2013]. In contrast, our pipeline recovers 1.21 million accurate and consistent camera poses and 1.68 billion sparse 3D points of the largest City-A data-set, and the corresponding peak memory drops dramatically to 34.62GB and 0.53GB in track generation and bundle adjustment respectively. In Figure \[fig:cityscale\], we further provide the visual results of the city-scale data-sets, containing both mesh and textured models with delicate details, to qualitatively demonstrate the high accuracy of the finally recovered camera poses. As shown in Table \[tab:cityscale\_downsample\], we fit the whole City-D data-set into the standard SfM pipelines [@moulon2013; @schoenberger2016sfm; @theia-manual; @wu2013] by resizing the images. We can see that down-sampling the images leads to a clearly smaller number of registered cameras.

#### Running time

We test the Internet data-sets [@wilson2014] on a single computer to make a fair comparison of running time, and Table \[tab:internet\] shows that our efficiency is comparable to that of [@jiang2013; @ozyesil2015; @theia-manual; @wilson2014]. As for the city-scale data-sets, we note in Table \[tab:cityscale\] that the running time of track generation and local incremental SfM grows linearly as the number of images increases, while the running time of bundle adjustment, whose complexity is $\mathcal{O}((m+n)^3)$ for $m$ cameras and $n$ 3D points even in a distributed manner, and of motion averaging, which can only be handled on a single computer, gradually dominates as the number of images increases drastically. Even for the City-B data-set, our parallel computing system composed of 10 computers can successfully reconstruct 138 thousand cameras and 100 million sparse 3D points within one day.
Notably, thanks to the concise design of our clustering algorithm, its running time on the city-scale data-sets ranges from 3.57 to 11.71 minutes, which is extremely efficient compared with the time cost of the whole SfM pipeline.

#### Limitations

Thanks to the fully scalable formulation of our SfM pipeline in terms of camera clusters, the peak memory of track generation of our pipeline is only 2.1%-8.7% of that of the standard pipelines [@cui2015; @moulon2013; @snavely2006; @theia-manual; @wu2013], and the peak memory of bundle adjustment of our approach is even only 0.1%-3.8% of that of the standard pipelines. However, since our motion averaging formulation (Section \[sec:motion\_averaging\]) still solves all the camera poses over the available relative motions at once, it is limited by the memory of a single computer. We are therefore interested in exploiting our scalable formulation to solve large-scale motion averaging problems in a scalable and parallel manner, and leave this for future study.

Conclusions
===========

In this paper, we propose a parallel pipeline able to handle accurate and consistent SfM problems far exceeding the memory of a single computer. A graph-based camera clustering algorithm is first introduced to divide the original problem into sub-problems while preserving sufficient connectivities among cameras for a highly accurate and consistent reconstruction. A hybrid SfM method embracing the advantages of both incremental and global SfM methods is subsequently proposed to merge partial reconstructions into a globally consistent reconstruction. Our pipeline is able to handle city-scale SfM problems, including one data-set with 1.21 million high-resolution images that runs out of memory with existing approaches, in a highly scalable and parallel manner, with superior accuracy and consistency over the state-of-the-art methods.
--- abstract: | We give a classification of (co)torsion pairs in finite $2$-Calabi-Yau triangulated categories with maximal rigid objects which are not cluster tilting. These finite $2$-Calabi-Yau triangulated categories are divided into two main classes: one, denoted by $\A_{n,t}$, called type $A$, and the other, denoted by $D_{n,t}$, called type $D$ [@bpr]. By using the geometric models of torsion pairs in cluster categories of type $A$ or type $D$ in [@hjr1; @hjr3], we give a geometric description of torsion pairs in $\A_{n,t}$ or $D_{n,t}$ respectively, by defining periodic Ptolemy diagrams. This allows us to count the number of (co)torsion pairs in these categories. Finally, we determine the hearts of (co)torsion pairs in all finite $2$-Calabi-Yau triangulated categories with maximal rigid objects which are not cluster tilting via quivers and relations.\ **Key words:** Finite $2$-Calabi-Yau triangulated category; Periodic Ptolemy diagram; Torsion pair; Heart of torsion pair.\ **2010 Mathematics Subject Classification:** 16E99; 18E99; 18D90 author: - Huimin Chang and Bin Zhu date: | Department of Mathematical Sciences\ Tsinghua University\ 100084 Beijing, P. R. China\ chm14@mails.tsinghua.edu.cn (Chang); bzhu@math.tsinghua.edu.cn (Zhu) --- \[section\] \[theorem\][Lemma]{} \[theorem\][Corollary]{} \[theorem\][Proposition]{} \[theorem\][Definition]{} \[theorem\][Question]{} \[theorem\][Remark]{} \[\][Remark]{} \[theorem\][Example]{} \[\][Example]{} \[theorem\][Construction]{} \[\][Construction]{} \[theorem\][Assumption]{} \[\][Assumption]{}

Introduction
============

The notion of torsion pairs in abelian categories was first introduced by Dickson [@d], and the triangulated version goes back to Iyama and Yoshino [@iy]. It plays an important role in the study of the algebraic and geometric structure of triangulated categories, and unifies the notions of t-structures, co-t-structures, cluster tilting subcategories and maximal rigid subcategories.
Torsion pairs are used to construct certain abelian structures inside triangulated categories (as the hearts of torsion pairs) after Nakaoka’s work [@n1]. For a given triangulated category, one may ask the question: how many abelian subquotient categories can be constructed from the triangulated category? In general, this is a difficult question; one may think of the hearts of $t$-structures in a given triangulated category in the sense of Beilinson-Bernstein-Deligne [@bbd]. To attack this question, one approach is to classify the torsion pairs in the given triangulated category, and then to determine the hearts of torsion pairs. The aim of this paper is to give a classification of torsion pairs and to determine their hearts for certain finite $2$-Calabi-Yau triangulated categories. The classification of torsion pairs (equivalently, cotorsion pairs) has been studied by many people recently. Ng gave a classification of torsion pairs in the cluster category of type $A_{\infty}$ by Ptolemy diagrams of an $\infty$-gon [@ng]. Holm, Jørgensen and Rubey gave a classification of torsion pairs in the cluster category of type $A_{n}$ via Ptolemy diagrams of a regular $(n+3)$-gon [@hjr1 Theorem A]; they did the same for the cluster category of type $D_{n}$ via Ptolemy diagrams of a regular $2n$-gon [@hjr3 Theorem 1.1] and for cluster tubes [@hjr2 Theorem 1.1]. Zhang, Zhou and Zhu gave a classification of torsion pairs in the cluster category of a marked surface [@zzz Theorem 4.5]. Zhou and Zhu gave a construction and a classification of torsion pairs in any $2$-Calabi-Yau triangulated category with cluster tilting objects [@zz2 Theorem 4.4]. Cluster categories associated with finite dimensional hereditary algebras [@bmrrt] (see also [@ccs] for type $A$) and the stable categories of the preprojective algebras $\Lambda$ of Dynkin quivers [@gls] have been used for the categorification of cluster algebras.
These categories are $2$-Calabi-Yau triangulated categories with an important class of objects called cluster-tilting objects, which are the analogues of clusters in cluster algebras. The cluster-tilting objects are closely related to a class of objects called maximal rigid objects. Indeed, cluster-tilting objects are maximal rigid objects, but the converse is not true in general [@bikr; @bmv; @kz]. For a $2$-Calabi-Yau triangulated category, either all maximal rigid objects are cluster tilting, or none of them are [@zz1 Theorem 2.6]. Triangulated categories with finitely many indecomposable objects (which we call finite triangulated categories) are a special class of locally finite triangulated categories. By Amiot [@a] and Burban-Iyama-Keller-Reiten [@bikr] (see also [@bpr]), finite $2$-Calabi-Yau triangulated categories with non-zero maximal rigid objects have a classification which depends on whether the maximal rigid objects are cluster tilting or not. Standard finite $2$-Calabi-Yau triangulated categories with non-zero maximal rigid objects which are not cluster tilting are exactly the following orbit categories:

- *(Type A)* ${\mathcal}A_{n,t}=D^{b}(\mathbb{K}A_{(2t+1)(n+1)-3})/\tau^{t(n+1)-1}[1]$, where $n\geq 1$ and $t>1$;
- *(Type D)* ${\mathcal}D_{n,t}=D^{b}(\mathbb{K}D_{2t(n+1)})/\tau^{(n+1)}\varphi^{n}$, where $n, t\geq 1$, and where $\varphi$ is induced by an automorphism of $D_{2t(n+1)}$ of order *2*;
- *(Type E)* $D^{b}(\mathbb{K}E_{7})/\tau^{2}$ and $D^{b}(\mathbb{K}E_{7})/\tau^{5}$.

Recently, Buan-Palu-Reiten classified the algebras arising from these triangulated categories as the endomorphism algebras of maximal rigid objects via mutations of quivers with relations [@bpr Table 1, Table 2]. In this paper, we use the geometric models of torsion pairs in cluster categories of type $A_n$ or type $D_n$ in [@hjr1] or [@hjr3] respectively to define a notion of periodic Ptolemy diagrams.
This allows us to give a complete classification of torsion pairs in the categories ${\mathcal}A_{n,t}$ and ${\mathcal}D_{n,t}$. From this classification, we count the number of torsion pairs in these categories. We also determine the hearts of these torsion pairs. These results, combined with the results in [@zz2], give a complete picture of torsion pairs and their hearts in finite $2$-Calabi-Yau triangulated categories with non-zero maximal rigid objects. The paper is organized as follows: In Section $2$, some basic definitions and related results are recalled, and some conclusions on torsion pairs are obtained. In Section $3$, we give a geometric description of torsion pairs in $\A_{n,t}$, where $n\geq 1$ and $t>1$, in the first subsection. In the second subsection, we count the number of torsion pairs in these categories. In the final subsection, we use the same approach to count the number of torsion pairs in $\A_{n,1}$, which is a finite $2$-Calabi-Yau triangulated category of type $A$ with cluster tilting objects. In Section $4$, we give a geometric description of torsion pairs in ${\mathcal}D_{n,t}$, and count the number of torsion pairs in these categories. In the last section, we determine the hearts of torsion pairs in finite $2$-Calabi-Yau triangulated categories with maximal rigid objects which are not cluster tilting. **Notation.** Unless stated otherwise, $\mathbb{K}$ will be an algebraically closed field of characteristic zero. Our categories will be assumed to be $\mathbb{K}$-linear, Hom-finite, Krull-Remak-Schmidt additive categories. $\add\,T$ denotes the additive closure of $T$. Any subcategory is assumed to be closed under finite direct sums and direct summands. Let $\X$ and $\Y$ be subcategories of a triangulated category ${\mathcal}C$. $\X\ast\Y$ denotes the extension subcategory of $\X$ by $\Y$, whose objects are by definition the objects $M$ that fit into a triangle $X\to M\to Y\to X[1]$ with $X\in\X$ and $Y\in\Y$.
We say Hom$_{{\mathcal}C}(\X,\Y)=0$ if Hom$_{{\mathcal}C}(X,Y)=0$ for all $X\in \X$, $Y\in \Y$. A subcategory $\X$ is called an extension closed subcategory provided that $\X\ast\X \subseteq\X.$ For a subcategory ${\mathcal}D$ of ${\mathcal}C$, we denote by ${\mathcal}D^\perp$ (resp. $^\perp{\mathcal}D$) the subcategory whose objects are the $M\in{\mathcal}C$ satisfying Hom$_{{\mathcal}C}(\D,M)=0$ (resp. Hom$_{{\mathcal}C}(M,\D)=0$). For the sake of convenience, we write $[1]$ for the shift functor in any triangulated category unless otherwise stated, and Ext$_{{\mathcal}C}^1(X, Y)=$Hom$_{{\mathcal}C}(X, Y[1])$.

Preliminaries
=============

Firstly, we recall some basic notions based on [@bmrrt; @iy; @n2]. \[b0\] Let $\X$ and $\Y$ be subcategories of a triangulated category ${\mathcal}C$.

- The pair $(\X,\Y)$ is a $torsion$ $pair$ if $$\mathrm{Hom}_{{\mathcal}C}(\X,\Y)=0\text{ and }{\mathcal}C=\X\ast\Y\text{.}$$ The subcategory ${\mathcal}I=\X\cap\Y[-1]$ is called the $core$ of the torsion pair.
- The pair $(\X,\Y)$ is a $cotorsion$ $pair$ if $$\Ext^1_{{\mathcal}C}(\X,\Y)=0\text{ and }{\mathcal}C=\X\ast\Y[1]\text{.}$$ Moreover, we call the subcategory ${\mathcal}I=\X\cap \Y$ the $core$ of the cotorsion pair.
- A $t$-$structure$ $(\X, \Y)$ in ${\mathcal}C$ is a torsion pair such that $\X$ is closed under $[1]$ (equivalently, $\Y$ is closed under $[-1]$).
- A subcategory ${\mathcal}T$ is called $rigid$ if $\Ext^1_{{\mathcal}C}({\mathcal}T,{\mathcal}T)=0$. ${\mathcal}T$ is called $maximal$ $rigid$ if ${\mathcal}T$ is maximal with respect to this property, i.e., if $\Ext^1_{{\mathcal}C}({\mathcal}T\oplus \add M, {\mathcal}T\oplus \add M)=0$, then $M\in\add\, T$. An object $T$ is called a $rigid$ $object$ if $\add T$ is rigid, and $T$ is $maximal$ $rigid$ if $\add T$ is maximal rigid.
- A functorially finite subcategory ${\mathcal}T$ is called $cluster$ $tilting$ if ${\mathcal}T=\{X\in{\mathcal}C|\Ext^1_{{\mathcal}C}(X,{\mathcal}T)=0\}=\{X\in{\mathcal}C|\Ext^1_{{\mathcal}C}({\mathcal}T, X)=0\}$. An object $T$ is a $cluster$ $tilting$ object if $\add T$ is a cluster tilting subcategory.

By Definition \[b0\], we know that a pair $(\X,\Y)$ is a cotorsion pair if and only if $(\X,\Y[1])$ is a torsion pair. For a cotorsion pair $(\X,\Y)$ with core ${\mathcal}I=\X\cap \Y$, it is easy to see that $(\X,\Y)$ is a $t$-structure if and only if ${\mathcal}I=\{0\}$ [@zz2], and $\X$ is a cluster tilting subcategory if and only if ${\mathcal}I=\X= \Y$. The following result can be found in [@iy Proposition 2.3].

- Let $\X$ be a contravariantly finite and extension closed subcategory of a triangulated category ${\mathcal}C$. Then $(\X,\X^\perp)$ is a torsion pair.
- Let $\X$ be a covariantly finite and extension closed subcategory of a triangulated category ${\mathcal}C$. Then $({^\perp\X},\X)$ is a torsion pair.

<!-- -->

- A triangulated category ${\mathcal}C$ is called $2$-$Calabi$-$Yau$ (shortly $2$-CY) provided there is a functorial isomorphism Hom$_{{\mathcal}C}(X, Y)\simeq D$Hom$_{{\mathcal}C}(Y, X[2])$ for all $X, Y\in {\mathcal}C$, where $D=$Hom$_{\mathbb{K}}(-,\mathbb{K})$.
- [@xz] A triangulated category ${\mathcal}C$ is $locally$ $finite$ if for any indecomposable object $X$, there exist only finitely many isomorphism classes of indecomposable objects $Y$ such that Hom$_{{\mathcal}C}(X,Y)\neq0$. ${\mathcal}C$ is called a $finite$ $triangulated$ $category$ if it contains only finitely many indecomposable objects up to isomorphism.

Any finite $2$-CY triangulated category ${\mathcal}C$ contains a maximal rigid object (which may be zero). If the maximal rigid objects of a connected finite $2$-CY triangulated category ${\mathcal}C$ are zero, then any torsion pair $(\X, \Y)$ is a $t$-structure.
It follows that $\X$, $\Y$ are triangulated subcategories and ${\mathcal}C=\X\oplus \Y$. This implies that $(\X,\Y)=({\mathcal}C, 0)$ or $(0,{\mathcal}C)$ (see Proposition \[p4\]). So the finite $2$-CY triangulated category ${\mathcal}C$ which we consider in this paper is assumed to contain a non-zero maximal rigid object. If ${\mathcal}C$ contains a cluster tilting object, then any maximal rigid object is cluster tilting [@zz1]. Finite $2$-CY triangulated categories with non-zero maximal rigid objects are divided into two classes: those with cluster tilting objects and those without cluster tilting objects. For the first class, we can apply the results in [@zz2] to obtain a classification of torsion pairs. So in this paper we are interested in the triangulated categories in the second class. For these triangulated categories, Amiot gave a classification (see [@a; @bpr]). *[@bpr Proposition 2.2]* The standard, finite $2$-Calabi-Yau triangulated categories with non-zero maximal rigid objects which are not cluster tilting are exactly the orbit categories:

- *(Type A)* $\A_{n,t}=D^{b}(\mathbb{K}\vec{A}_{(2t+1)(n+1)-3})/\tau^{t(n+1)-1}[1]$, where $n\geq 1$ and $t>1$;
- *(Type D)* $\D_{n,t}=D^{b}(\mathbb{K}\vec{D}_{2t(n+1)})/\tau^{(n+1)}\varphi^{n}$, where $n, t\geq 1$, and where $\varphi$ is induced by an automorphism of $D_{2t(n+1)}$ of order *2*;
- *(Type E)* $D^{b}(\mathbb{K}\vec{E}_{7})/\tau^{2}$ and $D^{b}(\mathbb{K}\vec{E}_{7})/\tau^{5}$.

These categories depend on the parameters $n,t$. We note that when $t=1$, $\A_{n,1}$ is also a finite $2$-CY triangulated category; it has cluster tilting objects [@bpr], and we are also interested in it. In the following, we establish some conclusions for torsion pairs in orbit triangulated categories. Firstly, we recall the definition of orbit categories [@k; @g]. Let ${\mathcal}D$ be a triangulated category and $F\colon {\mathcal}D\rightarrow {\mathcal}D$ be an autoequivalence.
The $orbit$ $category$ $\mathcal O_{F}:= {\mathcal}D/F$ has the same objects as $\D$, and its morphisms from $X$ to $Y$ are in bijection with $\bigoplus_{i\in \mathbb{Z}} \mathrm{Hom}_{\D}(X,F^{i}Y)$. \[a1\] Let ${\mathcal}D$ be a locally finite triangulated category and $F\colon {\mathcal}D\rightarrow {\mathcal}D$ be an autoequivalence such that the orbit category $\mathcal O_{F}= {\mathcal}D/F$ is a triangulated category and the projection functor $\pi: {\mathcal}D\rightarrow \mathcal O_{F}$ is a triangle functor. If $(\X, \Y)$ is a torsion pair in $\mathcal O_{F}$, then $( \pi^{-1}(\X), \pi^{-1}(\Y))$ is a torsion pair in $\mathcal D$. We first show that $\pi^{-1}(\X)$ is closed under extensions. For any $Z\in \pi^{-1}(\X)\ast\pi^{-1}(\X)$, there exists a triangle $X_{1}\rightarrow Z\rightarrow X_{2}\rightarrow X_1[1]$ with $X_{1},X_2\in \pi^{-1}(\X)$ in $\mathcal D$. Since $\pi$ is a triangle functor, $\pi(X_{1})\rightarrow \pi(Z)\rightarrow \pi(X_{2})\rightarrow \pi(X_{1}[1])$ is a triangle in $\mathcal O_{F}$. Thus $\pi(Z)\in \X\ast\X\subseteq \X$, that is, $Z\in \pi^{-1}(\X)$. Since ${\mathcal}D$ is locally finite, any subcategory of ${\mathcal}D$ is functorially finite. It follows that $\pi^{-1}(\X)$ is functorially finite in $\D$. Since $(\X, \Y)$ is a torsion pair in $\mathcal O_{F}$, we have $\pi^{-1}(\Y)=\pi^{-1}(\X^{\perp})=\pi^{-1}(\X)^{\perp}.$ \[a2\] Let $\X$ and $\Y$ be subcategories of a triangulated category ${\mathcal}C$, and $F\colon {\mathcal}C\rightarrow {\mathcal}C$ be an autoequivalence. The pair $(\X,\Y)$ is called an *$F$-periodic* torsion pair if $(\X,\Y)$ is a torsion pair and $\X$ is $F$-periodic, i.e., $F\X=\X$ (equivalently, $\Y$ is $F$-periodic).
\[a3\] Let ${\mathcal}D$ be a locally finite triangulated category and $F\colon {\mathcal}D\rightarrow {\mathcal}D$ be an autoequivalence such that $\mathcal O_{F}= {\mathcal}D/F$ is a triangulated category and the projection functor $\pi: {\mathcal}D\rightarrow \mathcal O_{F}$ is a triangle functor. If $(\X, \Y)$ is an $F$-periodic torsion pair in $\mathcal D$, then $(\pi(\X), \pi(\Y))$ is a torsion pair in $\mathcal O_{F}$. Since $\X$ is $F$-periodic and $(\X, \Y)$ is a torsion pair, we have $$\mathrm{Hom}_{\mathcal O_{F}}(\pi(\X), \pi(\Y))=\bigoplus \limits_{i\in\mathbb{Z}} \mathrm{Hom}_{\mathcal D}(F^{i}\X,\Y)=0.$$ For any object $Z\in\mathcal O_{F}$, let $Z'$ be an object in its preimage in $\mathcal D$, that is, $Z'\in{\mathcal}D=\X\ast\Y$. Then there exists a triangle $X\rightarrow Z'\rightarrow Y\rightarrow X[1]$ in $\mathcal D$ with $X\in\X $ and $Y\in\Y$. Since $\pi$ is a triangle functor, $\pi(X)\rightarrow Z\rightarrow \pi(Y)\rightarrow \pi(X[1])$ is a triangle in $\mathcal O_{F}$, i.e., $Z\in \pi(\X)\ast\pi(\Y)$. This proves that $(\pi(\X), \pi(\Y))$ is a torsion pair in $\mathcal O_{F}$. The following theorem gives a one-to-one correspondence between $F$-periodic torsion pairs in ${\mathcal}D$ and torsion pairs in $\mathcal O_{F}$. \[a4\] Let ${\mathcal}D$ be a locally finite triangulated category and $F\colon {\mathcal}D\rightarrow {\mathcal}D$ be an autoequivalence such that $\mathcal O_{F}:= {\mathcal}D/F$ is a triangulated category and the projection functor $\pi: {\mathcal}D\rightarrow \mathcal O_{F}$ is a triangle functor. Then there is a bijection between the following sets:

1. The set of $F$-periodic torsion pairs in $\mathcal D$;

2. The set of torsion pairs in $\mathcal O_{F}$.

This follows directly from Lemma \[a1\] and Lemma \[a3\]. \[p4\] Let ${\mathcal}C$ be a connected finite $2$-CY triangulated category. Then the $t$-structures of ${\mathcal}C$ are trivial, i.e. equal to $({\mathcal}C,0)$ or $(0,{\mathcal}C)$.
Suppose $(\X,\Y)$ is a $t$-structure in $\C$. By definition we have $\X[1]\subseteq \X$ and $\Y[-1]\subseteq \Y$. Since ${\mathcal}C$ has only finitely many indecomposable objects, the shift functor permutes them, so these inclusions force $\X[1]=\X$ and $\Y[1]=\Y$, i.e., $\X,\Y$ are triangulated subcategories of ${\mathcal}C$, and $\C=\X\oplus \Y$. Since ${\mathcal}C$ is connected, we conclude that $\X={\mathcal}C$ and $\Y=0$, or $\X=0$ and $\Y={\mathcal}C$.

Classification of torsion pairs in $\A_{n,t}$
=============================================

In this section, we give a classification of torsion pairs in finite $2$-CY triangulated categories of type $A$ with non-zero maximal rigid objects. These categories are denoted by ${\mathcal}A_{n,t}$ [@bpr]. When $t=1$, the categories ${\mathcal}A_{n,1}$ have cluster tilting objects; when $t>1$, the categories ${\mathcal}A_{n,t}$ have non-zero maximal rigid objects which are not cluster tilting.

A geometric description of torsion pairs in $\A_{n,t}$
-------------------------------------------------------

Let $\mathcal C_{A_{N-3}}$ be the cluster category of type $A_{N-3}$, where $N=(2t+1)(n+1)$. By the universal property of orbit categories [@k], and by the proof of Lemma 2.4 in [@bpr], there exists a covering functor $\pi\colon\mathcal C_{A_{N-3}}\rightarrow{\mathcal}A_{n,t}$, which is a triangle functor. Write $F=\tau^{t(n+1)}$; then $F: \mathcal C_{A_{N-3}}\rightarrow\mathcal C_{A_{N-3}}$ is an autoequivalence. Since $\tau^{N-2}=[-2]$ in $D^{b}(\mathbb{K}\vec{A}_{N-3})$ by [@k] and $\tau=[1]$ in $\mathcal C_{A_{N-3}}$, $\tau$ is of order $N$ and $\tau^{n+1}$ is of order $2t+1$ in $\mathcal C_{A_{N-3}}$. Moreover, $\gcd(t, 2t+1)=1$ implies that the order of $F=\tau^{t(n+1)}$ is $2t+1$, and the groups generated by $F$ and by $\tau^{n+1}$ coincide, i.e. $\langle F\rangle=\langle\tau^{n+1}\rangle$. Therefore $\A_{n,t}$ can be seen as the orbit category $\mathcal C_{A_{N-3}}/\tau^{n+1}$, and $\pi$ is a $(2t+1)$-covering functor (see [@bpr]). By Theorem \[a4\], we have the following consequence.
There is a bijection between the set of $\tau^{n+1}$-periodic torsion pairs in $\mathcal C_{A_{N-3}}$ and the set of torsion pairs in ${\mathcal}A_{n,t}$. In the following, we recall the description of Ptolemy diagrams based on [@hjr1], and give a correspondence between subcategories of $\A_{n,t}$ and collections of diagonals of the $N$-gon. Let $P_{n}$ be an $n$-gon, where $n\geq 4$ is a positive integer; we label the vertices of $P_{n}$ clockwise by $1,2,\ldots, n$ consecutively. A *diagonal* is a set of two non-neighbouring vertices $\{\alpha, \beta\}$. Two diagonals $\{\alpha_{1}, \alpha_{2}\}$ and $\{\beta_{1}, \beta_{2}\}$ *cross* if their end points are all distinct and come in the order $\alpha_{1}$, $\beta_{1}$, $\alpha_{2}$, $\beta_{2}$ when moving around the polygon in one direction or the other. Let $\mathfrak U$ be a set of diagonals in the $n$-gon $P_{n}$.

1. [@hjr1] $\mathfrak U$ is called a *Ptolemy diagram* if for any two crossing diagonals $\alpha=\left\{\alpha_{1}, \alpha_{2}\right\}$ and $\beta=\left\{\beta_{1}, \beta_{2}\right\}$ in $\mathfrak U$, those of $\left\{\alpha_{1}, \beta_{1}\right\}$, $\left\{\alpha_{1}, \beta_{2}\right\}$, $\left\{\alpha_{2}, \beta_{1}\right\}$, $\left\{\alpha_{2}, \beta_{2}\right\}$ which are diagonals are in $\mathfrak U$ (see figure \[1\] for an example).

2. Fix a positive integer $k$ with $k\mid n$, say $n=k\ell$ for some integer $\ell$. $\mathfrak U$ is called a *$k$-periodic* collection of diagonals of $P_{n}$ if for each diagonal $(i, j)\in\mathfrak U$, all diagonals $(i+kr, j+kr)$ *(*modulo $n$*)* for $1\leq r\leq \ell$ are in $\mathfrak U$ (see figure \[9\] for an example).

3. $\mathfrak U$ is a *$k$-periodic Ptolemy diagram* if it is a Ptolemy diagram and is $k$-periodic.

There is a bijection between indecomposable objects of the cluster category $\C_{A_{N-3}}$ and diagonals of the $N$-gon $P_{N}$ [@ccs]. In the following, we do not distinguish between indecomposable objects and diagonals.
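The crossing, Ptolemy and periodicity conditions above are purely combinatorial and easy to test by machine. The following sketch is ours, purely for illustration (vertex labels $1,\ldots,n$ clockwise, as above; the function names are not from the paper):

```python
from itertools import combinations

def normalize(v, n):
    """Bring a vertex label into the range 1..n (labels are taken modulo n)."""
    return (v - 1) % n + 1

def is_diagonal(p, q, n):
    """{p, q} is a diagonal iff p, q are distinct, non-neighbouring vertices."""
    return abs(p - q) not in (0, 1, n - 1)

def crosses(a, b):
    """Two diagonals cross iff their endpoints alternate around the polygon."""
    (a1, a2), (b1, b2) = sorted(a), sorted(b)
    if len({a1, a2, b1, b2}) < 4:
        return False
    return a1 < b1 < a2 < b2 or b1 < a1 < b2 < a2

def is_ptolemy(U, n):
    """Check the Ptolemy condition for a set U of diagonals (frozensets)."""
    for a, b in combinations(U, 2):
        if crosses(a, b):
            for p in a:
                for q in b:
                    if is_diagonal(p, q, n) and frozenset({p, q}) not in U:
                        return False
    return True

def is_k_periodic(U, n, k):
    """Check closure of U under rotation by k vertices (k divides n)."""
    rotate = lambda d: frozenset(normalize(v + k, n) for v in d)
    return all(rotate(d) in U for d in U)
```

For instance, in a square the two crossing diagonals $\{1,3\}$ and $\{2,4\}$ form a $2$-periodic Ptolemy diagram, since all four induced pairs are edges rather than diagonals; in a pentagon the same two diagonals fail the Ptolemy condition because the induced diagonal $\{1,4\}$ is missing.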
The Auslander-Reiten translation $\tau$ acts on diagonals as rotation by one vertex in the counterclockwise direction. $$\text{dim\ Ext}_{\C_{A_{N-3}}}^{1}(a,b)=\left\{ \begin{array}{cc} 1 &\text{if\ $a$\ and\ $b$\ cross},\\ 0 &\text{otherwise}. \end{array} \right.$$ Since $\C_{A_{N-3}}$ has only finitely many indecomposable objects, any subcategory of $\C_{A_{N-3}}$ closed under direct sums and direct summands is completely determined by the set of indecomposable objects it contains. Hence the bijection between indecomposable objects of $\C_{A_{N-3}}$ and diagonals of $P_{N}$ extends to a bijection between subcategories of $\C_{A_{N-3}}$ and sets of diagonals of $P_{N}$.

(Figure \[1\]: an example of a Ptolemy diagram in the $8$-gon.)

For any subcategory $\X$ in $\A_{n,t}$, the preimage under the covering functor $\pi$ is a $\tau ^{n+1}$-periodic subcategory $\widetilde{\X}=\pi^{-1}(\X)$ in $\C_{A_{N-3}}$. Moreover, the subcategory $\widetilde{\X}$ corresponds to a set of diagonals of the $N$-gon $P_{N}$ by the discussion above; we still denote the corresponding set of diagonals by $\widetilde{\X}$. The corresponding set $\widetilde{\X}$ of diagonals is $(n+1)$-periodic. In the rest of this section, we always use $(i,j)$ to represent an indecomposable object of $\C_{A_{N-3}}$ or a diagonal of the $N$-gon $P_{N}$ without confusion, and $[(i,j)]$ to represent its image under the functor $\pi$. As a consequence, we have the following result.
(Figure \[9\]: an example of a periodic collection of diagonals.)

There is a bijection between the following sets:

1. Subcategories $\X$ of ${\mathcal}A_{n,t}$;

2. Collections of diagonals $\widetilde{\X}$ of the $N$-gon $P_{N}$ which are $(n+1)$-periodic.

\[a8\]

*[@hjr1 Theorem A]* There is a bijection between Ptolemy diagrams of the $(n+3)$-gon and torsion pairs in the cluster category of type $A_{n}$. \[a9\]

The following lemma gives an equivalent description of torsion pairs in ${\mathcal}A_{n,t}$. Let $\X$ be a subcategory of ${\mathcal}A_{n,t}$, and $\widetilde{\X}$ be the corresponding $(n+1)$-periodic collection of diagonals of the $N$-gon $P_{N}$. Then the following statements are equivalent:

1. $(\X, \X^{\bot})$ is a torsion pair in ${\mathcal}A_{n,t}$;

2. $\X={}^{\bot}(\X^{\bot})$;

3. $\widetilde{\X}$ is an $(n+1)$-periodic Ptolemy diagram of the $N$-gon.

$(1)\Longleftrightarrow (2)$ is clear. $(1)\Rightarrow (3)$: If $(\X, \X^{\bot})$ is a torsion pair in ${\mathcal}A_{n,t}$, then $\X$ corresponds to an $(n+1)$-periodic collection of diagonals $\widetilde{\X}$ of the $N$-gon $P_{N}$ by Lemma \[a8\]. Moreover, $(\pi^{-1}(\X),\pi^{-1}(\X^{\bot}))$ is a torsion pair in $\C_{A_{N-3}}$ by Lemma \[a1\], and $\pi^{-1}(\X)$ corresponds to a Ptolemy diagram of the $N$-gon, so $\widetilde{\X}$ is an $(n+1)$-periodic Ptolemy diagram of the $N$-gon. $(3)\Rightarrow (1)$: If $\widetilde{\X}$ is an $(n+1)$-periodic Ptolemy diagram of the $N$-gon, then it corresponds to a torsion pair in $\C_{A_{N-3}}$ by Lemma \[a9\]. Moreover, $\widetilde{\X}$ is $(n+1)$-periodic, so the corresponding pair $(\X, \X^{\bot})$ is a torsion pair in $\A_{n,t}$ by Lemma \[a3\].
We will frequently use in this section the coordinate system in the AR-quiver of $\A_{n,t}$; see [@bpr] for more details. For a coordinate $(i,j)$ (modulo $N$) with $j>i$ corresponding to an indecomposable object in $\mathcal C_{A_{N-3}}$, we call $j-i-1$ the *level* of the vertex, and $j-i$ the *length* of the vertex. We denote a vertex in the AR-quiver of ${\mathcal}A_{n,t}$ by $[(i,j)]$, where all $(i+r(n+1),j+r(n+1))$ for $1\leq r\leq 2t+1$ are identified. We also call $j-i-1$ the *level* of the vertex $[(i,j)]$, and $j-i$ its *length*. Buan, Palu and Reiten determined all the indecomposable rigid objects in ${\mathcal}A_{n,t}$: an indecomposable object $[(i,j)]$ in $\A_{n,t}$ with level $j-i-1$ is rigid if and only if $j-i-1\leq n$ [@bpr Lemma 2.4].

Torsion pairs in $\A_{n,t}$ with $t>1$
--------------------------------------

\[a13\] Let $(\X, \X^{\bot})$ be a torsion pair in ${\mathcal}A_{n,t}$, $n\geq 1$, $t>1$, and $\widetilde{\X}$ be the corresponding $(n+1)$-periodic collection of diagonals of the $N$-gon $P_{N}$. Then precisely one of the following situations occurs:

1. All indecomposable objects in $\X$ have level $\leq n$.

2. All indecomposable objects in $\X^{\bot}$ have level $\leq n$.

Note that we can always choose a representative $(i,j)\in[(i,j)]$ such that $1\leq i\leq n+1$, $3\leq j\leq (t+1)(n+1)$ (see Fig.2 in [@bpr]). For example, we have $[(1,(t+1)(n+1)+1)]=[(1,1+t(n+1))]$.

1. If all indecomposable objects of $\X$ have level $\leq n$, we claim that $\X^{\bot}$ must contain an element of level $>n$. Indeed, since $\X$ contains only (finitely many) indecomposable rigid objects, we pick an indecomposable object from $\X$ with maximal length. Suppose that its coordinate is $[(1,\ell)]$. Since $(1,\ell)$ corresponds to a rigid object, $3\leq \ell\leq n+2$. If we can show $[(1,\ell+n+1)]\in\X^{\bot}[-1]$, then $\X^{\bot}[-1]$ contains an element of level $>n$, and so does $\X^{\bot}$.
Since $t>1$, $\ell+n+1\leq n+2+n+1=2n+3<(t+1)(n+1)$, i.e., $[(1,\ell+n+1)]$ represents a different element from $[(1,\ell)]$ in the AR-quiver of ${\mathcal}A_{n,t}$, and the level of $[(1,\ell+n+1)]$ is $\ell+n-1\geq 3+n-1=n+2$. This will complete the proof of our claim. Now we prove that $[(1,\ell+n+1)]\in\X^{\bot}[-1]$. Since $[(1,\ell)]$ is in $\X$ with maximal length, there is no diagonal $(i,j)\in\widetilde{\X}$ with $1<i<\ell<j$; otherwise $(1,\ell)$ would cross $(i,j)$, and the Ptolemy condition would yield a diagonal $(1,j)$ whose length is greater than that of $(1,\ell)$, a contradiction. If $(1,\ell+n+1)$ crossed a diagonal $(a,b)$ in $\widetilde{\X}$, then $b-a>\ell -1$, a contradiction. This means $(1,\ell+n+1)$ does not cross any diagonal from $\widetilde{\X}$. Similarly, one can prove that $(n+2,\ell+2n+2),(2n+3,\ell+3n+3),\ldots$ do not cross any diagonal from $\widetilde{\X}$, i.e., $[(1,\ell+n+1)]\in\X^{\bot}[-1]$.

2. If $\X$ contains an element of level $>n$, we claim that $\X^{\bot}$ contains only rigid indecomposable objects. Indeed, without loss of generality, we suppose $[(1,\ell)]\in\X$ with level $\ell-2\geq n+1$, that is, $(t+1)(n+1)\geq \ell\geq n+3$. We consider $\ell$ in the different intervals $[n+3,2n+3]$, $[2n+3,3n+4],\ldots,[2t(n+1)+1,(2t+1)(n+1)+1]$.

1. If $\ell\leq 2n+3$, then the corresponding diagonals in $\widetilde{\X}$ are shown in figure \[2\].

2. If $2n+3\leq\ell\leq3n+4$, then the corresponding diagonals in $\widetilde{\X}$ are shown in figure \[3\].

3. The other cases are similar.

This shows that all indecomposable objects in $[(1,\ell)]^{\perp}$ have level $\leq n$, hence so do those in $\X^{\perp}$, since $\X^{\perp}\subseteq[(1,\ell)]^{\perp}$. As a consequence, for a torsion pair $(\X, \X^{\bot})$ in ${\mathcal}A_{n,t}$, precisely one of $(1)$ and $(2)$ occurs.
(Figure \[2\]: the diagonals in $\widetilde{\X}$ for $\ell\leq 2n+3$. Figure \[3\]: the diagonals in $\widetilde{\X}$ for $2n+3\leq\ell\leq 3n+4$.)

This proposition immediately yields the following important conclusion.

*[@bpr]* ${\mathcal}A_{n,t}$ does not contain any cluster tilting object.

By Proposition \[a13\], the classification of torsion pairs $(\X, \X^{\bot})$ in ${\mathcal}A_{n,t}$ reduces to the classification of the possible halves $\X$ (or $\X^{\bot}$) of a torsion pair, all of whose indecomposable objects lie strictly below level $n+1$ in the AR-quiver of ${\mathcal}A_{n,t}$. Let $(i,j)$ be a diagonal of the $N$-gon $P_{N}$. The *wing* $W(i,j)$ of $(i,j)$ consists of all diagonals $(r,s)$ of the $N$-gon such that $i\leq r\leq s\leq j$, that is, all diagonals which are overarched by $(i,j)$. If $[(i,j)]$ represents a vertex in the AR-quiver of ${\mathcal}A_{n,t}$, the corresponding wing is denoted by $W[(i,j)]$.
There are bijections between the following sets:

1. Torsion pairs $(\X, \X^{\bot})$ in ${\mathcal}A_{n,t}$ such that all indecomposable objects in $\X$ have level $\leq n$;

2. $(n+1)$-periodic Ptolemy diagrams $\widetilde{\X}$ of the $N$-gon $P_{N}$ such that all diagonals in $\widetilde{\X}$ have length at most $n+1$;

3. Collections $\left\{([(i_{1},j_{1})], [W_{1}]), \ldots, ([(i_{r},j_{r})], [W_{r}])\right\}$ of pairs consisting of vertices $[(i_{\ell},j_{\ell})]$ of level $\leq n$ in the AR-quiver of $A_{n,t}$ and subsets $[W_{\ell}]\subset W[(i_{\ell},j_{\ell})]$ of their wings such that for any distinct $k, \ell\in \left\{1,2,\ldots,r\right\}$, we have $$W[(i_{k},j_{k})][1]\cap W[(i_{\ell},j_{\ell})]=\emptyset,$$ and the $(n+1)$-periodic collection $W_{\ell}$ corresponding to $[W_{\ell}]$ is a Ptolemy diagram. \[b\]

The proof is similar to the one in the case of cluster tubes [@hjr2 Theorem 4.4]. Since the number of indecomposable rigid objects in ${\mathcal}A_{n,t}$ is independent of $t$, we have the following result.

The number of torsion pairs in ${\mathcal}A_{n,t}$ with $n\geq 1$, $t>1$ is independent of $t$.

Therefore counting the number of torsion pairs in ${\mathcal}A_{n,t}$ reduces to counting the possible sets of pairs $\left\{([(i_{1},j_{1})], [W_{1}]), \ldots, ([(i_{r},j_{r})], [W_{r}])\right\}$ in the AR-quiver of ${\mathcal}A_{n,t}$. This is the same as the process of counting torsion pairs in the cluster tube of rank $n+1$; see [@hjr2] for details.

The number of torsion pairs in ${\mathcal}A_{n,t}$ with $n\geq 1$, $t>1$ is the same as that of the cluster tube of rank $n+1$, that is $$T_{n+1}=\sum\limits_{\ell\geq 0}2^{\ell+1}\binom{n+\ell}{\ell}\binom{2n+1}{n-2\ell},$$ where $T_{n+1}$ represents the number of torsion pairs in the cluster tube of rank $n+1$. \[a\]

\[c8\] When $n=2$, $t=2$, ${\mathcal}A_{2,2}=D^b(\mathbb{K}\vec{A}_{12})/\tau^5[1]$ is $2$-CY with non-zero maximal rigid objects, whose Auslander-Reiten quiver is shown in figure \[4\].
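The closed formula for $T_{n+1}$ is a finite sum, since the binomial coefficient $\binom{2n+1}{n-2\ell}$ vanishes once $2\ell>n$, so it can be evaluated directly. A minimal sketch (ours, for illustration; the function name is an assumption):

```python
from math import comb

def torsion_pair_count(n):
    """T_{n+1}: the number of torsion pairs in A_{n,t} for t > 1,
    equivalently in the cluster tube of rank n+1, via the formula above."""
    # terms with n - 2l < 0 vanish, so we may stop at l = n // 2
    return sum(2**(l + 1) * comb(n + l, l) * comb(2 * n + 1, n - 2 * l)
               for l in range(n // 2 + 1))
```

For $n=2$ the two surviving terms are $2\binom{2}{0}\binom{5}{2}=20$ and $4\binom{3}{1}\binom{5}{0}=12$, giving $T_{3}=32$, in agreement with Example \[c8\].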
From Theorem \[a\], we have $T_{2+1}=32$. Now we construct all torsion pairs by Theorem \[b\]. $$\begin{aligned} \X_{1}=\{[(0)] \}&\quad\quad& \X^\perp_{1}[-1]={\mathcal}A_{2,2}\nonumber \\ \X_{2}=\{[(13)] \}&\quad\quad& \X^\perp_{2}[-1]=\{[(13)],[(14)],[(16)],[(17)],[(19)],[(36)],[(37)],[(39)]\}\nonumber \\ \X_{3}=\{[(24)] \}&\quad\quad&\X^\perp_{3}[-1]=\{[(14)],[(15)],[(17)],[(18)],[(24)],[(25)],[(27)],[(28)]\}\nonumber \\ \X_{4}=\{[(35)] \}&\quad\quad&\X^\perp_{4}[-1]=\{[(25)],[(26)],[(28)],[(29)],[(35)],[(36)],[(38)],[(39)]\}\nonumber \\ \X_{5}=\{[(14)]\}&\quad\quad& \X^\perp_{5}[-1]=\{[(13)],[(14)],[(17)],[(24)]\}\nonumber \\ \X_{6}=\{[(25)]\}&\quad\quad& \X^\perp_{6}[-1]=\{[(24)],[(25)],[(28)],[(35)]\}\nonumber \\ \X_{7}=\{[(36)]\}&\quad\quad& \X^\perp_{7}[-1]=\{[(13)],[(35)],[(36)],[(39)]\}\nonumber \\ \X_{8}=\{[(13)],[(14)] \}&\quad\quad& \X^\perp_{8}[-1]=\{[(13)],[(14)],[(17)]\}\nonumber \\ \X_{9}=\{[(24)],[(25)] \}&\quad\quad& \X^\perp_{9}[-1]=\{[(24)],[(25)],[(28)]\}\nonumber \\ \X_{10}=\{[(35)],[(36)] \}&\quad\quad& \X^\perp_{10}[-1]=\{[(35)],[(36)],[(39)]\}\nonumber \\ \X_{11}=\{[(14)],[(24)]\}&\quad\quad& \X^\perp_{11}[-1]=\{[(14)],[(17)],[(24)]\}\nonumber \\ \X_{12}=\{[(25)],[(35)] \}&\quad\quad& \X^\perp_{12}[-1]=\{[(25)],[(28)],[(35)]\}\nonumber \\ \X_{13}=\{[(13)],[(36)] \}&\quad\quad& \X^\perp_{13}[-1]=\{[(13)],[(36)],[(39)]\}\nonumber \\ \X_{14}=\{[(13)],[(14)],[(24)] \}&\quad\quad& \X^\perp_{14}[-1]=\{[(14)],[(17)]\}\nonumber \\ \X_{15}=\{[(24)],[(25)],[(35)]\}&\quad\quad& \X^\perp_{15}[-1]=\{[(25)],[(28)]\}\nonumber \\ \X_{16}=\{[(13)],[(35)],[(36)] \}&\quad\quad& \X^\perp_{16}[-1]=\{[(36)],[(39)]\}\nonumber \\\end{aligned}$$ In this example, the collection of pairs in Theorem \[b\] $(3)$ has one element, i.e., $r=1$.
The pair $\{([(i_{1},j_{1})], [W_{1}])\}$ with $[W_{1}]$ containing only the zero object is the subcategory $\X_{1}$. The pairs $\{([(i_{1},j_{1})], [W_{1}])\}$ with $[W_{1}]$ containing one object are the following subcategories: $\X_{2}$, which corresponds to $([(1,3)],\{[(1,3)]\})$; $\X_{3}$, which corresponds to $([(2,4)],\{[(2,4)]\})$; $\X_{4}$, which corresponds to $([(3,5)],\{[(3,5)]\})$; $\X_{5}$, which corresponds to $([(1,4)],\{[(1,4)]\})$; $\X_{6}$, which corresponds to $([(2,5)],\{[(2,5)]\})$; $\X_{7}$, which corresponds to $([(3,6)],\{[(3,6)]\})$. The pairs $\{([(i_{1},j_{1})], [W_{1}])\}$ with $[W_{1}]$ containing two objects are the subcategories: $\X_{8}$, which corresponds to $([(1,4)],\{[(1,3)],[(1,4)]\})$; $\X_{9}$, which corresponds to $([(2,5)],\{[(2,4)],[(2,5)]\})$; $\X_{10}$, which corresponds to $([(3,6)],\{[(3,5)],[(3,6)]\})$; $\X_{11}$, which corresponds to $([(1,4)],\{[(2,4)],[(1,4)]\})$; $\X_{12}$, which corresponds to $([(2,5)],\{[(2,5)],[(3,5)]\})$; $\X_{13}$, which corresponds to $([(3,6)],\{[(1,3)],[(3,6)]\})$. The pairs $\{([(i_{1},j_{1})], [W_{1}])\}$ with $[W_{1}]$ containing three objects are the subcategories: $\X_{14}$, which corresponds to $([(1,4)],\{[(1,3)],[(1,4)],[(2,4)]\})$; $\X_{15}$, which corresponds to $([(2,5)],\{[(2,5)],[(3,5)],[(2,4)]\})$; $\X_{16}$, which corresponds to $([(3,6)],\{[(1,3)],[(3,5)],[(3,6)]\})$. Then $(\X_i, \X^\perp_{i}[-1])$ is a cotorsion pair in ${\mathcal}A_{2,2}$ for $i=1,2,\ldots,16$. It follows that $(\X_i, \X^\perp_{i})$ is a torsion pair in ${\mathcal}A_{2,2}$ for any $i$. Similarly, $({^\perp\X_{i}},\X_i)$ is a torsion pair in ${\mathcal}A_{2,2}$ for $i=1,2,\ldots,16$.
(Figure \[4\]: the Auslander-Reiten quiver of ${\mathcal}A_{2,2}$.)

Torsion pairs in $\A_{n,1}$
---------------------------

By Amiot [@a] and Burban-Iyama-Keller-Reiten [@bikr], the standard, finite $2$-CY triangulated categories of type $A$ with cluster tilting objects are the cluster categories of type $A$ and the orbit categories $D^{b}(\mathbb{K}\vec{A}_{3n})/\tau^{n}[1]$ with $n\geq 1$, which are the categories $\A_{n,1}$ (see Proposition 2.1 in [@bpr]). There is also a covering functor from the cluster category $\mathcal C_{A_{3n}}$ to ${\mathcal}A_{n,1}$. We have the following result.

The number of torsion pairs in ${\mathcal}A_{n,1}$ is $$N_{n,1}=T_{n+1}-t_{n,1}=\sum\limits_{\ell\geq 0}2^{\ell+1}\binom{n+\ell}{\ell}\binom{2n+1}{n-2\ell}-\sum\limits_{\ell\geq 0}2^{\ell}\binom{n+\ell}{\ell}\binom{2n}{n-2\ell},$$ where $T_{n+1}$ represents the number of torsion pairs in the cluster tube of rank $n+1$, and $t_{n,1}=(n+1)s_{n+2}$, where $s_{n+2}$ represents the number of torsion pairs in the cluster category of type $A_{n-1}$.
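Both sums in the formula for $N_{n,1}$ are finitely supported, so $N_{n,1}$ can be evaluated directly. A self-contained sketch (ours, for illustration; the function names are assumptions):

```python
from math import comb

def T(n):
    """T_{n+1}: torsion pairs in the cluster tube of rank n+1 (first sum)."""
    return sum(2**(l + 1) * comb(n + l, l) * comb(2 * n + 1, n - 2 * l)
               for l in range(n // 2 + 1))

def t_n1(n):
    """t_{n,1} = (n+1) * s_{n+2}, i.e. the second sum in the theorem."""
    return sum(2**l * comb(n + l, l) * comb(2 * n, n - 2 * l)
               for l in range(n // 2 + 1))

def N_n1(n):
    """N_{n,1}: the number of torsion pairs in A_{n,1}."""
    return T(n) - t_n1(n)
```

For $n=2$ this gives $T_{3}=32$ and $t_{2,1}=12$, hence $N_{2,1}=20$, which agrees with the example worked out after the proof of Theorem \[d\].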
\[d\] Recall that an object $[(i,j)]$ in ${\mathcal}A_{n,1}$ is rigid if and only if its length is $\leq n+1$ [@bpr]. For a torsion pair $(\X, \Y)$ in ${\mathcal}A_{n,1}$, if $\X$ (resp. $\Y$) contains a diagonal whose length is greater than $n+1$, i.e. a non-rigid object, then $\Y$ (resp. $\X$) contains only indecomposable rigid objects; the proof is the same as part (2) of the proof of Proposition \[a13\]. Thus, we have two subclasses of torsion pairs in ${\mathcal}A_{n,1}$:

- (I) Torsion pairs $(\X, \Y)$ such that $\X$ contains only indecomposable rigid objects;

- (II) Torsion pairs $(\X, \Y)$ such that $\Y$ contains only indecomposable rigid objects.

The intersection of class (I) and class (II) is the subclass of torsion pairs $(\X, \Y)$ in ${\mathcal}A_{n,1}$ such that both $\X$ and $\Y$ contain only indecomposable rigid objects. We now consider this case. If $\X$ is a cluster tilting subcategory in ${\mathcal}A_{n,1}$, then ${\mathcal}A_{n,1}$ has a cotorsion pair $(\X, \X)$. Besides these, there are some other torsion pairs $(\X, \Y)$ with both $\X$ and $\Y$ containing only indecomposable rigid objects. We give a characterization of them below.

Claim: for a torsion pair $(\X, \Y)$, both $\X$ and $\Y$ contain only indecomposable rigid objects if and only if each of $\X$ and $\Y$ contains one of the rigid objects $[(i,(i+n+1))]$ with $i\in\left\{1,2,\ldots,n+1\right\}$; these are the indecomposable rigid objects of maximal length in ${\mathcal}A_{n,1}$.

We now prove the claim. Let $\widetilde{\X}$ be the corresponding $(n+1)$-periodic collection of diagonals of the $N$-gon with $N=(2t+1)(n+1)=3(n+1)$ as before. Suppose first that $\X$ contains a rigid object $[(i,(i+n+1))]$ of maximal length, where $i\in\left\{1,2,\ldots,n+1\right\}$. Without loss of generality, we assume $[(1,n+2)]\in\X$ (up to shifting).
Then $(1,n+2)$, $(n+2,2n+3)$ and $(2n+3,3n+4)=(2n+3,1)$ are in $\widetilde{\X}$, so the maximal length of diagonals that do not cross diagonals in $\widetilde{\X}$ is $n+1$. Thus $\Y[1]$ contains only indecomposable rigid objects, and so does $\Y$. Similarly, one can prove that $\X$ contains only rigid indecomposable objects. Conversely, if $(\X, \Y)$ is a torsion pair with both $\X$ and $\Y$ containing only indecomposable rigid objects, we pick a diagonal from $\X$ with maximal length. Suppose its coordinate is $[(1,\ell)]$. It follows from $[(1,\ell)]$ corresponding to an indecomposable rigid object that $3\leq \ell\leq n+2$. If $\ell<n+2$, then $[(1,\ell+n+1)]$ is in $\X^{\bot}[-1]$, by an argument similar to part (1) of the proof of Proposition \[a13\]; i.e., $\Y$ contains an indecomposable object of length $>n+1$, which is non-rigid, a contradiction. Thus $\ell=n+2$, and $\X$ contains the diagonal $[(1,n+2)]$. The same argument shows that $\Y$ contains a rigid object $[(i,(i+n+1))]$, where $i\in\left\{1,2,\ldots,n+1\right\}$. This completes the proof of the claim. It is easy to prove that for a torsion pair $(\X,\Y)$ with $\X$ and $\Y$ containing only rigid indecomposable objects, both $\X$ and $\Y$ contain precisely one indecomposable rigid object of maximal length in ${\mathcal}A_{n,1}$: for any two such rigid objects in $\X$, the corresponding diagonals would cross and produce a diagonal of length greater than $n+1$, which is non-rigid, a contradiction. Thus the number of torsion pairs in $\A_{n,1}$ is the number of torsion pairs in class (I), plus the number of those in class (II), minus the number of torsion pairs in the intersection of class (I) and class (II). The sum of the numbers of torsion pairs in class (I) and in class (II) is the same as the number of torsion pairs in $\A_{n,t}$ with $t>1$ (compare to Theorem 3.12). It remains to count the number of torsion pairs in the intersection.
The intersection of class (I) and class (II) is the set of torsion pairs $(\X,\Y)$ with $\X$ and $\Y$ containing only indecomposable rigid objects, each of which contains exactly one of the objects $[(i,(i+n+1))]$, where $i\in\left\{1,2,\ldots,n+1\right\}$. For the part $\X$ of such a torsion pair $(\X,\Y)$, the corresponding $(n+1)$-periodic set $\widetilde{\X}$ of diagonals of the $N$-gon contains one diagonal of maximal length. Without loss of generality, we assume that it contains $[(1,n+2)]$. Then the diagonals in $\widetilde{\X}$ can be written as $[(i,j)]$ with $1\leq i<j\leq n+2$, and they do not cross $[(1,n+2)]$. Then we have a Ptolemy diagram of the $(n+2)$-gon consisting of the diagonals $(i,j)$ such that $[(i,j)] \in \widetilde{\X}\setminus \{[(1,n+2)]\}$ and $1\leq i<j\leq n+2$. Conversely, any Ptolemy diagram of the $(n+2)$-gon gives an $(n+1)$-periodic Ptolemy diagram $\widetilde{\X}$ of the $N$-gon by adding the diagonal $[(1,n+2)]$. Thus, the number of torsion pairs in the intersection of class (I) and class (II) is $t_{n,1}=(n+1)s_{n+2}$, where $s_{n+2}$ is the number of Ptolemy diagrams of the $(n+2)$-gon, and $s_{n+2}=\frac{1}{n+1}\sum\limits_{\ell\geq 0}2^{\ell}\binom{n+\ell}{\ell}\binom{2n}{n-2\ell}$ by [@hjr1]. This proves the theorem.

When $n=2$, $t=1$, ${\mathcal}A_{2,1}=D^b(\mathbb{K}\vec{A}_{6})/\tau^2[1]$ is $2$-CY with cluster tilting objects, whose Auslander-Reiten quiver is shown in figure \[5\]. By Theorem \[d\], the number of torsion pairs in ${\mathcal}A_{2,1}$ is $N_{2,1}=20$.
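For small $n$, the count $s_{n+2}$ of Ptolemy diagrams of the $(n+2)$-gon can be checked against the closed formula by brute-force enumeration. The following self-contained sketch is ours and purely illustrative (vertices labelled $1,\ldots,m$ clockwise): it enumerates all sets of diagonals of an $m$-gon and keeps those satisfying the Ptolemy condition.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def ptolemy_count(m):
    """Brute-force count of Ptolemy diagrams of the m-gon."""
    verts = range(1, m + 1)
    diag = lambda p, q: abs(p - q) not in (0, 1, m - 1)
    diagonals = [frozenset(d) for d in combinations(verts, 2) if diag(*d)]

    def crosses(a, b):
        # diagonals cross iff their endpoints alternate around the polygon
        (a1, a2), (b1, b2) = sorted(a), sorted(b)
        return len({a1, a2, b1, b2}) == 4 and \
            (a1 < b1 < a2 < b2 or b1 < a1 < b2 < a2)

    def is_ptolemy(U):
        return all(not crosses(a, b)
                   or all(frozenset({p, q}) in U
                          for p in a for q in b if diag(p, q))
                   for a, b in combinations(U, 2))

    return sum(1 for k in range(len(diagonals) + 1)
               for U in combinations(diagonals, k)
               if is_ptolemy(set(U)))

def s_formula(n):
    """s_{n+2} = (1/(n+1)) * sum_l 2^l C(n+l, l) C(2n, n-2l), as in [@hjr1]."""
    total = sum(2**l * comb(n + l, l) * comb(2 * n, n - 2 * l)
                for l in range(n // 2 + 1))
    return Fraction(total, n + 1)
```

For $n=2$ (the square) both computations give $4$, so $t_{2,1}=3\cdot 4=12$; for $n=3$ (the pentagon) both give $17$.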
We can construct them as follows: $$\begin{aligned} \X_{1}=\{[(0)]\} &\quad\quad& \X^\perp_{1}[-1]={\mathcal}A_{2,1}\nonumber \\ \X_{2}=\{[(13)] \}&\quad\quad& \X^\perp_{2}[-1]=\{[(13)],[(14)],[(16)],[(36)]\}\nonumber \\ \X_{3}=\{[(24)] \}&\quad\quad&\X^\perp_{3}[-1]=\{[(14)],[(15)],[(24)],[(25)]\}\nonumber \\ \X_{4}=\{[(35)] \}&\quad\quad&\X^\perp_{4}[-1]=\{[(25)],[(26)],[(35)],[(36)]\}\nonumber \\ \X_{5}=\{[(14)] \}&\quad\quad& \X^\perp_{5}[-1]=\{[(13)],[(14)],[(24)]\}\nonumber \\ \X_{6}=\{[(25)] \}&\quad\quad& \X^\perp_{6}[-1]=\{[(24)],[(25)],[(35)]\}\nonumber \\ \X_{7}=\{[(36)] \}&\quad\quad& \X^\perp_{7}[-1]=\{[(13)],[(35)],[(36)]\}\nonumber \\ \Y_{1}=\{[(13)],[(14)] \}&\quad\quad& \Y^\perp_{1}[-1]=\{[(13)],[(14)]\}=\Y_{1}\nonumber \\ \Y_{2}=\{[(24)],[(25)] \}&\quad\quad& \Y^\perp_{2}[-1]=\{[(24)],[(25)]\}=\Y_{2}\nonumber \\ \Y_{3}=\{[(35)],[(36)] \}&\quad\quad& \Y^\perp_{3}[-1]=\{[(35)],[(36)]\}=\Y_{3}\nonumber \\ \Y_{4}=\{[(14)],[(24)]\}&\quad\quad& \Y^\perp_{4}[-1]=\{[(14)],[(24)]\}=\Y_{4}\nonumber \\ \Y_{5}=\{[(25)],[(35)] \}&\quad\quad& \Y^\perp_{5}[-1]=\{[(25)],[(35)]\}=\Y_{5}\nonumber \\ \Y_{6}=\{[(13)],[(36)]\}&\quad\quad& \Y^\perp_{6}[-1]=\{[(13)],[(36)]\}=\Y_{6}\end{aligned}$$ By Example \[c8\], we know that the number of torsion pairs in ${\mathcal}A_{2,2}$ is $T_{2+1}=32$. Next, we count the extension-closed subcategories $\X$ which consist of indecomposable rigid objects and contain exactly one of the indecomposable rigid objects of maximal length. The rigid indecomposable objects of maximal length in ${\mathcal}A_{2,1}$ are $[(1,4)], [(2,5)], [(3,6)]$. The subcategories containing only indecomposable rigid objects and the rigid object $[(1,4)]$ are the following: $\X_{5}=\{[(14)]\}$, $\X^\perp_{5}[-1]=\{[(13)],[(14)],[(24)]\}$, $\Y_{1}=\{[(13)],[(14)]\}$, and $\Y_{4}=\{[(14)],[(24)]\}$.
By shifting all the subcategories above, we obtain all the subcategories containing only indecomposable rigid objects together with one of the rigid objects $[(i,i+n+1)]$ of maximal length, where $i\in\{1,2,3\}$. The number of such subcategories is $t_{2,1}=3\times 4=12=(2+1)s_{2+2}$. Then the number of torsion pairs in ${\mathcal}A_{2,1}$ is $T_{2+1}-12=20$. It follows that $(\X_i, \X^\perp_{i})$ is a torsion pair in ${\mathcal}A_{2,1}$ for $i=1,2,\ldots,7$. Similarly, $({^\perp\X_{i}},\X_i)$ is a torsion pair in ${\mathcal}A_{2,1}$ for $i=1,2,\ldots,7$. Moreover, $(\Y_j, \Y^\perp_{j}[-1])=(\Y_j,\Y_j)$ is a cotorsion pair in ${\mathcal}A_{2,1}$ for $j=1,2,\ldots,6$. Note that $^\perp\Y_j[1]=\Y^\perp_j[-1]$. It follows that $(\Y_j, \Y_{j}[1])$ is a torsion pair in ${\mathcal}A_{2,1}$ for $j=1,2,\ldots,6$.

(Figure \[5\]: the Auslander-Reiten quiver of ${\mathcal}A_{2,1}$.)

Classification of torsion pairs in $\D_{n,t}$
=============================================

In this section, we give a classification of torsion pairs in $\D_{n,t}$
and count their number. Let $u=2t(n+1)$ and $\mathcal C_{D_u}$ be the cluster category of type $D_{u}$. Then ${\mathcal}D_{n,t}=D^{b}(\mathbb{K}\vec{D}_{u})/\tau^{n+1}\varphi^{n}$, where $\varphi$ is induced by an automorphism of $D_u$ of order $2$, and $n\geq1$, $t\geq1$. It follows from Lemma 2.9 in [@bpr] that there exists a covering functor $\pi\colon\mathcal C_{D_{u}}\rightarrow{\mathcal}D_{n,t}$, which is a triangle functor. Write $F=\tau^{(n+1)}\varphi^n$; then $F: \mathcal C_{D_u}\rightarrow\mathcal C_{D_u}$ is an autoequivalence and $\D_{n,t}$ is the orbit category $\C_{D_u}/F$ (compare Lemma 2.9 in [@bpr]). Since $\tau^{-u+1}=[1]$ in $D^{b}(\mathbb{K}\vec{D}_{u})$ by [@s], we have $\tau^{-2t(n+1)}=1=\tau^{2t(n+1)}$ in ${\mathcal}C_{D_u}$, and $\pi$ is a $2t$-covering functor. By Theorem \[a4\], we have the following.

There is a bijection between the set of $F$-periodic torsion pairs in $\mathcal C_{D_{u}}$ and the set of torsion pairs in ${\mathcal}D_{n,t}$.

Let us recall the definition of Ptolemy diagrams of type $D$ and their relation to torsion pairs in the cluster categories of type $D$, based on [@hjr3]. For any $n\geq 1$ we consider a regular $2n$-gon $Q_n$, and we label the vertices of $Q_n$ clockwise by $1,2,\ldots, 2n$ consecutively. In our arguments below vertices will also be numbered by some $r\in\mathbb{N}$ which might not be in the range $1\le r\le 2n$; in this case the numbering of vertices always has to be taken modulo $2n$. An [*arc*]{} is a set $\{i,j\}$ of vertices of $Q_n$ with $j\not\in \{i-1,i,i+1\}$, i.e. $i$ and $j$ are distinct and non-neighbouring vertices. The arcs connecting two opposite vertices $i$ and $i+n$ are called [*diameters*]{}. We need two different copies of each of these diameters and denote them by $\{i,i+n\}_g$ and $\{i,i+n\}_r$, where $1\le i\le 2n$. The indices indicate that these diameters are coloured green and red, which is a convenient way to think about and to visualize the diameters.
By a slight abuse of notation, we sometimes omit the indices and just write $\{i,i+n\}$ for diameters, to avoid cumbersome definitions or statements. Any arc in $Q_n$ which is not a diameter is of the form $\{i,j\}$ where $j\in [i+2,i+n-1]$; here $[i+2,i+n-1]$ stands for the set of vertices of the $2n$-gon $Q_n$ which are met when going clockwise from $i+2$ to $i+n-1$ on the boundary of $Q_n$. See figure \[11\] for an example; for better visibility we draw the red diameters in a wavelike form and the green ones as straight lines.

[Figure 11: arcs of the hexagon $Q_3$ with vertices $1,\ldots,6$, including a red (wavelike) and a green (straight) diameter.]

Such an arc has a partner arc $\{i+n,j+n\}$ which is obtained from $\{i,j\}$ by a rotation by 180 degrees. We denote the pair of arcs $\{\{i,j\},\{i+n,j+n\}\}$ by $\overline{\{i,j\}}$ throughout this section of the paper. The indecomposable objects in $\C_{{D}_n}$ are in bijection with the union of the set of pairs $\overline{\{i,j\}}$ of non-diameter arcs and the set of diameters $\{i,i+n\}_g$ and $\{i,i+n\}_r$ in the two colours. This bijection extends to a bijection between subcategories of $\C_{{D}_n}$ closed under direct sums and direct summands and collections of arcs of the $2n$-gon $Q_n$.
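The arc model just described is easy to enumerate; the following Python sketch (the helper name `arc_model` is ours) lists the pairs of non-diameter arcs and the coloured diameters of $Q_n$, with vertices taken modulo $2n$. Their total is $n(n-2)+2n=n^2$, which via the bijection is the number of indecomposable objects of $\C_{D_n}$.

```python
def arc_model(n):
    """Enumerate the arc model of the 2n-gon Q_n: pairs of non-diameter
    arcs {{i,j},{i+n,j+n}} and green/red diameters (vertices mod 2n)."""
    m = 2 * n
    pairs = set()
    for i in range(m):
        for d in range(2, n):            # j = i+2, ..., i+n-1
            arc = frozenset({i, (i + d) % m})
            partner = frozenset({(i + n) % m, (i + d + n) % m})
            pairs.add(frozenset({arc, partner}))
    # n distinct diameters {i, i+n}, each in two colours
    diameters = [(i, c) for i in range(n) for c in ('g', 'r')]
    return pairs, diameters

pairs, diams = arc_model(4)
print(len(pairs), len(diams), len(pairs) + len(diams))  # 8 8 16 = 4^2
```

Each pair of non-diameter arcs is produced twice (once from each member) and deduplicated by the set, giving $n(n-2)$ pairs and $2n$ coloured diameters.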
For the pair of non-diameter arcs $\overline{\{i,j\}}$ the corresponding indecomposable object has coordinates $(i,j)$; note that the coordinates are only determined modulo $n$, so both arcs $\{i,j\}$ and $\{i+n,j+n\}$ in the pair $\overline{\{i,j\}}$ yield the same coordinate in the Auslander-Reiten quiver of $\C_{D_{n}}$. The action of $\tau$ on non-diameter arcs is rotation by one vertex; the action of $\tau$ on diameters is rotation by one vertex combined with a change of colour [@hjr3; @s]. We note that $\varphi$ acts on diameters by changing their colour and $\varphi=id$ on non-diameter arcs [@bpr]. Back to our consideration, for a coordinate $(i,j)$ corresponding to an indecomposable object of $\C_{D_{u}}$, we denote by $[(i,j)]$ its image under the covering functor $\pi$; then $[(i,j)]$ determines an indecomposable object of $\D_{n,t}$. For any subcategory $\X$ of $\D_{n,t}$, the preimage under the covering functor $\pi$ is an $F$-periodic subcategory $\widetilde{\X}=\pi^{-1}(\X)$ of $\C_{D_{u}}$. Moreover, the subcategory $\widetilde{\X}$ corresponds to a set of arcs of the $2u$-gon $Q_u$ by the discussion above; we still denote the corresponding set of arcs by $\widetilde{\X}$. Fix $F=\tau^{n+1}\varphi^{n}$. We say that the set of arcs $\widetilde{\X}$ is $F$-$periodic$ if the following conditions are satisfied: $(1)$ For each non-diameter arc $(i, j)\in\widetilde{\X}$, also all arcs $(i+(n+1)r, j+(n+1)r)$ *(*modulo $u$*)* for $1\leq r\leq 2t$ are in $\widetilde{\X}$. $(2)$ For each diameter $(i,i+u)\in\widetilde{\X}$, all diameters $(i+(n+1)r, i+u+(n+1)r)$ *(*modulo $u$*)* with the same colour as $(i,i+u)$, where $1\leq r\leq 2t$ and $r$ is even, are in $\widetilde{\X}$, and all diameters $(i+(n+1)r, i+u+(n+1)r)$ *(*modulo $u$*)* with the opposite colour to $(i,i+u)$, where $1\leq r\leq 2t$ and $r$ is odd, are in $\widetilde{\X}$.
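Condition (1) can be checked mechanically. Here is a small Python sketch (the function name is ours) computing the required $F$-orbit of a non-diameter arc on the vertices of $Q_u$, working modulo $2u$ and identifying each arc with its 180-degree partner:

```python
def f_orbit(i, j, n, t):
    """F-orbit of the non-diameter arc {i, j} of Q_u, u = 2t(n+1):
    all rotations by (n+1)r for 1 <= r <= 2t, identifying each arc
    with its partner (the rotation by u, i.e. 180 degrees)."""
    u = 2 * t * (n + 1)
    m = 2 * u
    def as_pair(a, b):
        arc = frozenset({a % m, b % m})
        partner = frozenset({(a + u) % m, (b + u) % m})
        return frozenset({arc, partner})
    return {as_pair(i + (n + 1) * r, j + (n + 1) * r)
            for r in range(1, 2 * t + 1)}

# n = 1, t = 2 (so u = 8): the orbit of the arc {1,3} consists of 4 pairs
print(len(f_orbit(1, 3, 1, 2)))  # 4
```

A non-diameter arc generically has $2t$ pairs of arcs in its $F$-orbit, in line with $\pi$ being a $2t$-covering functor.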
In the rest of this section, we use $(i,j)$ to represent an indecomposable object in $\C_{D_{u}}$ or an arc of the $2u$-gon $Q_{u}$, as no confusion can arise. There is a bijection between the following sets: 1. Subcategories $\X$ of ${\mathcal}D_{n,t}$; 2. Collections of arcs $\widetilde{\X}$ of the $2u$-gon $Q_u$ which are $F$-periodic. We recall the definition of Ptolemy diagrams of type $D$ for a $2n$-gon $Q_n$ from [@hjr3]. \[c1\] 1. We say that two non-diameter arcs $\{i,j\}$ and $\{k,\ell\}$ *cross* precisely if the elements $i,j,k,\ell$ are all distinct and come in the order $i, k, j, \ell$ when moving around the $2n$-gon $Q_n$ in one direction or the other (i.e. counterclockwise or clockwise). In particular, the two arcs in $\overline{\{i,j\}}$ do not cross. Similarly, in the case $j=i+n$, the above condition defines when a diameter $\{i,i+n\}_g$ (or $\{i,i+n\}_r$) crosses the non-diameter arc $\{k,\ell\}$. 2. We say that two pairs $\overline{\{i,j\}}$ and $\overline{\{k,\ell\}}$ of non-diameter arcs [*cross*]{} if there exist two arcs in these two pairs which cross in the sense of part (a). (Note that then necessarily the other two rotated arcs also cross.) Similarly, the diameter $\{i,i+n\}_g$ (or $\{i,i+n\}_r$) crosses the pair $\overline{\{k,\ell\}}$ of non-diameter arcs if it crosses one of the arcs in $\overline{\{k,\ell\}}$. (Note that it then necessarily crosses both arcs in $\overline{\{k,\ell\}}$.) 3. Two diameters $\{i,i+n\}_g$ and $\{j,j+n\}_r$ of different colour [*cross*]{} if $j\not\in \{i,i+n\}$, i.e. if they have different endpoints. But $\{i,i+n\}_g$ and $\{i,i+n\}_r$ do not cross. Moreover, two diameters of the same colour never cross. \[c2\] Let $\X$ be a collection of arcs of the $2n$-gon $Q_n$, $n> 1$, which is invariant under rotation by $180$ degrees. Then $\X$ is called a *Ptolemy diagram of type $D$* if it satisfies the following conditions.
Let $\alpha = \{i,j \}$ and $\beta = \{k,\ell \}$ be crossing arcs in $\X$ (in the sense of Definition \[c1\]). 1. If $\alpha$ and $\beta$ are not diameters, then those of $\{ i,k\}$, $\{i,\ell \}$, $\{ j,k \}$, $\{ j,\ell \}$ which are arcs of $Q_n$ are also in $\X$. In particular, if two of the vertices $i,j,k,\ell$ are opposite vertices (i.e. one of $k$ and $\ell$ is equal to $i+n$ or $j+n$), then both the green and the red diameter connecting them are also in $\X$. 2. If both $\alpha$ and $\beta$ are diameters (necessarily of different colour by Definition \[c1\](c)), then those of $\{ i,k\}$, $\{ i,k+n \}$, $\{ i+n,k \}$, $\{ i+n,k+n \}$ which are arcs of $Q_n$ are also in $\X$. 3. If $\alpha$ is a diameter while $\beta$ is not a diameter, then those of $\{ i,k\}$, $\{ i,\ell \}$, $\{ j,k \}$, $\{ j,\ell \}$ which are arcs and do not cross the arc $\{k+n,\ell +n \}$ are also in $\X$. Additionally, the diameters $\{ k, k+n \}$ and $\{ \ell, \ell +n \}$ of the same colour as $\alpha$ are also in $\X$. These conditions are illustrated in figure \[10\].

[Figure 10: the first, second and third Ptolemy conditions in type $D$, drawn on the $2n$-gon $Q_n$.]

Now we define the $F$-$periodic$ $Ptolemy$ $diagram$ of type $D$ for the $2u$-gon $Q_u$, which we will use to give a classification of torsion pairs in ${\mathcal}D_{n,t}$. Let $\widetilde{\X}$ be a collection of arcs in the $2u$-gon $Q_{u}$, and $F=\tau^{n+1}\varphi^{n}$. $\widetilde{\X}$ is called an $F$-$periodic$ $Ptolemy$ $diagram$ of type $D$ if $\widetilde{\X}$ is a Ptolemy diagram of type $D$ and is $F$-periodic.
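The crossing condition of Definition \[c1\](a) is a pure alternation test on vertex labels. A minimal Python sketch (function names are ours; the colour rule for diameters in part (c) is deliberately not modelled):

```python
def crosses(arc1, arc2, m):
    """Arcs {i,j} and {k,l} of an m-gon cross iff their four endpoints
    are distinct and alternate around the polygon (part (a) of the
    definition); the colour convention for diameters is ignored here."""
    i, j = arc1
    k, l = arc2
    i, j, k, l = (x % m for x in (i, j, k, l))
    if len({i, j, k, l}) < 4:
        return False
    def strictly_between(x, s, t):
        # x lies strictly between s and t when going clockwise from s to t
        return 0 < (x - s) % m < (t - s) % m
    # the endpoints alternate iff exactly one of k, l separates i from j
    return strictly_between(k, i, j) != strictly_between(l, i, j)

# hexagon Q_3 (m = 6): {1,3} crosses {2,5} but not {4,6}
print(crosses((1, 3), (2, 5), 6), crosses((1, 3), (4, 6), 6))  # True False
```

Applied to two diameters, the same test returns `True` exactly when they have different endpoints, consistent with Definition \[c1\](c) for diameters of different colours.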
\[c3\] For a coordinate $(i,j)$ (modulo $2u$) with $j>i$ corresponding to an indecomposable object in $\mathcal C_{D_{u}}$, we call $j-i-1$ the $level$ of the vertex and $j-i$ the $length$ of the vertex. For an indecomposable object $[(i,j)]$ in $\D_{n,t}$, we also call $j-i-1$ the $level$ of the vertex, and the $length$ of the vertex $[(i,j)]$ is defined to be $j-i$. \[c4\] Let $(\X,\X^{\perp})$ be a torsion pair in ${\mathcal}D_{n,t}$ and $\widetilde{\X}$ be the corresponding $F$-periodic collection of arcs of the $2u$-gon $Q_u$, where $u=2t(n+1)$. If $t>1$, then precisely one of the following situations occurs: 1. Every indecomposable object of $\X$ has level $\leq n$; 2. Every indecomposable object of $\X^{\perp}$ has level $\leq n$. Recall that the elements of level $\leq n$ in ${\mathcal}D_{n,t}$ are exactly the rigid indecomposable objects [@bpr Lemma 2.9]. 1. If every indecomposable object of $\X$ has level $\leq n$, we claim that $\X^{\bot}$ must contain a non-diameter element of level $>n$. Indeed, since $\X$ contains only (finitely many) indecomposable rigid objects, we pick an arc from $\X$ of maximal length. We may suppose that its coordinate is $[(1,\ell)]$ with $3\leq \ell\leq n+2$, since $[(1,\ell)]$ corresponds to a rigid indecomposable object. If we can show $[(1,\ell+n+1)]\in\X^{\bot}[-1]$, then $\X^{\bot}$ contains an element of level $>n$. Firstly, since $t>1$, $\ell+n+1\leq n+2+n+1=2n+3<u+1$, so $[(1,\ell+n+1)]$ represents a non-diameter arc, and the level of $[(1,\ell+n+1)]$ is $\ell+n-1\geq 3+n-1=n+2$. Secondly, since $(1,\ell)$ is in $\widetilde{\X}$ of maximal length, there is no arc $(i,j)\in\widetilde{\X}$ with $1<i<\ell<j$: otherwise $(1,\ell)$ would cross $(i,j)$, and the Ptolemy condition would yield an arc longer than $(1,\ell)$, a contradiction. This means $(1,\ell+n+1)$ cannot cross any arc from $\widetilde{\X}$.
Similarly, none of the arcs $(n+2,\ell+2n+2),(2n+3,\ell+3n+3),\ldots$ crosses any arc from $\widetilde{\X}$, and hence $[(1,\ell+n+1)]\in\X^{\bot}[-1]$. 2. Suppose $\X$ contains an element of level $>n$. Such an indecomposable object of ${\mathcal}D_{n,t}$ may be a diameter or a non-diameter; we claim that $\X^{\perp}$ contains only elements of level $\leq n$. 3. If $\X$ contains a non-diameter arc of level $>n$, we may suppose without loss of generality that $[(1,\ell)]$ is such an element. Note that we can choose $n+3\leq\ell\leq u$ [@bpr Fig. 6]. We consider $\ell$ in the intervals $[n+3,2n+3]$, $[2n+3,3n+4]$, $[3n+4,4n+5],\ldots, [(2t-1)(n+1)+1,2t(n+1)+1]$. If $n+3\leq\ell\leq 2n+3$, then the corresponding arcs in $\widetilde{\X}$ are shown in figure \[6\].

[Figure 6: the arcs $(1,\ell),(n+2,\ell+n+1),(2n+3,\ell+2n+2),(3n+4,\ell+3n+3),\ldots$ on $Q_u$ for $n+3\leq\ell\leq 2n+3$.]

If $2n+3\leq\ell\leq 3n+4$, then the corresponding arcs in $\widetilde{\X}$ are shown in figure \[7\].
[Figure 7: the arcs $(1,\ell),(n+2,\ell+n+1),(2n+3,\ell+2n+2),\ldots$ on $Q_u$ for $2n+3\leq\ell\leq 3n+4$.]

The other cases are similar. This shows that every indecomposable object in $[(1,\ell)]^{\perp}[-1]$ has level $\leq n$. Therefore every indecomposable object in $\X^{\perp}$ has level $\leq n$, since $\X^{\perp}[-1]\subseteq[(1,\ell)]^{\perp}[-1]$. 4. If $\X$ contains a diameter $[(1,u+1)]$ (green or red), we may suppose without loss of generality that $[(1,u+1)]_g\in\X$. Then $(1,u+1)_g,(n+2,u+n+2)_r,\ldots\in\widetilde{\X}$, and the first two objects cross. Then $\mathrm{(Pt2)}$ implies that $(n+2,u+1)\in\widetilde{\X}$, which is not a diameter, so the non-diameter $[(n+2,u+1)]\in\X$; its level is $u+1-(n+2)-1=u-(n+2)=(2t-1)(n+1)-1\geq 3n+2>n$, since $t>1$. Thus $\X^{\perp}$ contains only indecomposable rigid objects by the previous case. As a consequence, for a torsion pair $(\X, \X^{\bot})$ in ${\mathcal}D_{n,t}$, precisely one of $(1)$ and $(2)$ occurs. This completes the proof. Let $(i,j)$ be a non-diameter arc of the $2u$-gon $Q_{u}$.
The $wing$ $W(i,j)$ of $(i,j)$ consists of all arcs $(r,s)$ of the $2u$-gon with $i\leq r\leq s\leq j$, that is, all arcs which are overarched by $(i,j)$. When $[(i,j)]$ represents a vertex in the AR-quiver of ${\mathcal}D_{n,t}$, the corresponding wing is denoted by $W[(i,j)]$. Combining this with Lemma \[c4\], we obtain the classification of torsion pairs in ${\mathcal}D_{n,t}$, $t>1$; the proof is the same as that of Theorem \[b\] (compare [@hjr2]). There are bijections between the following sets for $t>1$: 1. Torsion pairs $(\X, \X^{\bot})$ in ${\mathcal}D_{n,t}$ such that every indecomposable object in $\X$ has level $\leq n$; 2. $F$-periodic Ptolemy diagrams $\widetilde{\X}$ of type $D$ of the $2u$-gon $Q_{u}$ such that all arcs in $\widetilde{\X}$ have length at most $n+1$; 3. Collections $\left\{([(i_{1},j_{1})], [W_{1}]), \ldots, ([(i_{r},j_{r})], [W_{r}])\right\}$ of pairs consisting of vertices $[(i_{\ell},j_{\ell})]$ of level $\leq n$ in the AR-quiver of $\D_{n,t}$ and subsets $[W_{\ell}]\subset W[(i_{\ell},j_{\ell})]$ of their wings such that for any distinct $k, \ell\in \left\{1,2,\ldots,r\right\}$ we have $$W[(i_{k},j_{k})][1]\cap W[(i_{\ell},j_{\ell})]=\emptyset,$$ and the $F$-periodic collection $W_{\ell}$ corresponding to $[W_{\ell}]$ is a Ptolemy diagram of type $D$. \[f\] Next, we describe torsion pairs in $\D_{n,1}$. Recall that two diameters are called $paired$ if they connect the same two vertices (and thus have different colours). For a torsion pair $(\X, \Y)$ in $\D_{n,1}$, precisely one of the following situations occurs: - $\X$ (resp. $\Y$) contains a pair of paired diameters and $\Y$ (resp. $\X$) contains only indecomposable rigid objects. - Both $\X$ and $\Y$ contain exactly one non-paired diameter together with some indecomposable rigid objects. Let $(\X,\Y)$ be a torsion pair in ${\mathcal}D_{n,1}$ and $\widetilde{\X}$, $\widetilde{\Y}$ be the corresponding $F$-periodic collections of arcs of the $2u$-gon $Q_{u}$, where $u=2(n+1)$.
We note that $(\X,\Y)$ is a torsion pair if and only if $(\Y, \X[2])$ is, since ${\mathcal}D_{n,1}$ is $2$-CY. - If $\widetilde{\X}$ contains a non-diameter arc of length greater than $n+1$, then $\Y$ contains only indecomposable rigid objects. The proof is the same as part (II) of the proof of Lemma \[c4\]. - If $\widetilde{\X}$ contains a pair of paired diameters, then the indecomposable objects in $\Y$ are all rigid. Suppose $[(1,u+1)]$ (green and red) is such a pair in $\X$. Because $\widetilde{\X}$ is $F$-periodic, $(1,u+1)_{r,g}\in\widetilde{\X}$ and $(n+2,3n+4)_{r,g}\in\widetilde{\X}$, and they cross. So $(1,n+2)$ and $(n+2,2n+3)$ are in $\widetilde{\X}$ by $\mathrm{(Pt2)}$ of Definition \[c2\]. Thus the maximal length of arcs that do not cross any arc in $\widetilde{\X}$ is $n+1$, and every diameter crosses one of the diameters $(1,u+1)$ in $\widetilde{\X}$; that is, $\X^{\perp}[-1]$ contains only indecomposable rigid objects, and so does $\Y$. - If $\X$ contains only indecomposable rigid objects, then $\widetilde{\Y}$ contains a pair of paired diameters. Suppose the arc in $\widetilde{\X}$ of maximal length is $(1,\ell)$ (up to shifting); then $3\leq \ell\leq n+2$, since $(1,\ell)$ corresponds to a rigid object. We claim that the paired diameters $(2,u+2)$ are in $\widetilde{\Y}$. Otherwise, there is an arc in $\widetilde{\X}$ which crosses the paired diameters $(1,u+1)$, and then there exists an arc of length greater than that of $(1,\ell)$, as in the proof of Lemma \[c4\], a contradiction. So the paired diameters $(1,u+1)$ are in $\Y[-1]$; that is, $(2,u+2)$ (red and green) are in $\Y$. - If $\X$ contains diameters, but no paired diameters, then $\widetilde{\X}$ contains no paired diameters. We first claim that $\X$ contains only one diameter (red or green). In fact, if $\X$ contains two non-paired diameters $[(i,i+u)],[(j,j+u)]$ (red or green) ($j\neq i$, $j\neq i+n+1$), then $\widetilde{\X}$ contains four non-paired diameters of different colours; see figure \[8\].
Note that $(i,i+u)$ and $(i+n+1,i+3n+3)$ are diameters of different colours and they cross, so $(i,i+n+1)\in\widetilde{\X}$ by (Pt2). Moreover, the arc $(i,i+n+1)$ crosses $(j,j+u)$, so the diameters $(i,i+u)$ and $(i+n+1,i+3n+3)$ of the same colour as $(j,j+u)$ are in $\widetilde{\X}$. But $(i,i+u)$ and $(i+n+1,i+3n+3)$ already lie in $\widetilde{\X}$ with different colours; this implies that the diameters $[(i,i+u)]$ in $\X$ are paired, a contradiction. Moreover, if $\X$ contains only one diameter, we may assume without loss of generality that its coordinate is $[(1,1+u)]$. Then $(1,1+u)$ and $(n+2,u+n+2)$ are in $\widetilde{\X}$ and they have different colours, so they cross. The Ptolemy condition implies that $(1,n+2)$ and $(n+2,2n+3)$ are in $\widetilde{\X}$, so the maximal length of arcs that do not cross any arc in $\widetilde{\X}$ is $n+1$, and the diameter $[(1,1+u)]$ of the colour different from the one in $\X$ does not cross any arc in $\widetilde{\X}$ either. This means that $\X^{\perp}[-1]$ contains only indecomposable rigid objects and one diameter, and so does $\Y$.

[Figure 8: four non-paired diameters of different colours on $Q_u$, with endpoints $i$, $j$, $i+n+1$, $j+n+1$, $i+u=i+2n+2$, $j+u$, $i+3n+3$, $j+u+n+1$.]

As a consequence, if $\X$ (resp. $\Y$) contains only indecomposable rigid objects, then (3) ensures that $\Y$ (resp. $\X$) contains a pair of paired diameters, so case 1 occurs. Suppose $\X$ (resp. $\Y$) contains a non-rigid object. If the non-rigid object is a non-diameter, then (1) ensures that $\Y$ (resp. $\X$) contains only indecomposable rigid objects, and case 1 occurs.
If the non-rigid object is a diameter, then (2) implies that case 1 occurs if the diameters are paired, and (4) implies that case 2 occurs if the diameter is non-paired. Obviously, cases 1 and 2 cannot occur simultaneously. Let $D_{n,t}$ be the number of torsion pairs in ${\mathcal}D_{n,t}$. - If $t>1$, then $D_{n,t}=T_{n+1}$, the number of torsion pairs in ${\mathcal}A_{n,t}$. - If $t=1$, then $D_{n,1}=T_{n+1}+2t_{n,1}=\sum\limits_{\ell\geq 0}2^{\ell+1}\binom{n+\ell}{\ell}\bigg[\binom{2n+1}{n-2\ell}+ \binom{2n}{n-2\ell}\bigg]$. \[h2\] For $t>1$, we only need to consider the subcategories $\X$ of ${\mathcal}D_{n,t}$ whose indecomposable objects have level $\leq n$. Because such subcategories $\X$ cannot contain any diameter and $\mathrm{(Pt1)}$ in Definition \[c2\] coincides with the Ptolemy condition in type $A$, a Ptolemy diagram of type $D$ is in this case the same as a Ptolemy diagram of type $A$. Moreover, because $\X$ contains only indecomposable rigid objects, the $F$-periodicity of the corresponding collection of non-diameter arcs amounts to $(n+1)$-periodicity. By Theorem 4.9 and Theorem \[b\], the number of torsion pairs in ${\mathcal}D_{n,t}$ equals the number of torsion pairs in ${\mathcal}A_{n,t}$, which equals the number of torsion pairs in the cluster tube of rank $n+1$. For $t=1$, the torsion pairs in ${\mathcal}D_{n,1}$ fall into two subclasses. The first consists of the torsion pairs $(\X,\Y)$ in ${\mathcal}D_{n,1}$ with $\X$ or $\Y$ (but not both) containing a pair of paired diameters; counting this case reduces to counting the possible halves $\X$ or $\Y$ of a torsion pair all of whose indecomposable objects are rigid, by Theorem 4.10. The number of such torsion pairs in ${\mathcal}D_{n,1}$ equals the number of torsion pairs in $\A_{n,t}$ with $t>1$. The second consists of the torsion pairs $(\X,\Y)$ in ${\mathcal}D_{n,1}$ with both $\X$ and $\Y$ containing one non-paired diameter and some indecomposable rigid objects. The number of such torsion pairs is $2t_{n,1}$.
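The closed formula for $t=1$ is straightforward to evaluate; here is a small Python sketch (the function name is ours; the sum is finite since the binomial coefficients vanish once $n-2\ell<0$):

```python
from math import comb

def num_torsion_pairs_D_n1(n):
    """Evaluate sum_{l>=0} 2^(l+1) C(n+l, l) [C(2n+1, n-2l) + C(2n, n-2l)],
    stopping once n - 2l < 0 (all further terms vanish)."""
    total = 0
    for l in range(n // 2 + 1):
        total += 2 ** (l + 1) * comb(n + l, l) * (
            comb(2 * n + 1, n - 2 * l) + comb(2 * n, n - 2 * l))
    return total

print(num_torsion_pairs_D_n1(1))  # 10
```

For $n=1$ only the $\ell=0$ term contributes, giving $2\cdot\big[\binom{3}{1}+\binom{2}{1}\big]=10$, which matches $T_{2}+2t_{1,1}=10$ computed for ${\mathcal}D_{1,1}$ below.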
When $n=1,t=2$, ${\mathcal}D_{1,2}=D^b(\mathbb{K}D_{8})/\tau^2\varphi$ is a $2$-Calabi-Yau category with maximal rigid objects, whose Auslander-Reiten quiver is shown in figure \[13\]. By Theorem \[h2\], the number of torsion pairs in ${\mathcal}D_{1,2}$ is $T_{1+1}=6$. We list the torsion pairs according to Theorem \[f\]:

[Figure 13: the Auslander-Reiten quiver of ${\mathcal}D_{1,2}$.]

$$\begin{aligned} \X_{1}=\{[(0)] \}&\quad\quad& \X^\perp_{1}[-1]={\mathcal}D_{1,2}\nonumber \\ \X_{2}=\{[(13)] \}&\quad\quad& \X^\perp_{2}[-1]=\{[(13)],[(15)],[(17)],[( 19^{+})],[(19^{-})]\}\nonumber \\ \X_{3}=\{[(24)] \}&\quad\quad&\X^\perp_{3}[-1]=\{[(24)],[(26)],[(28)],[(2 10^{+})],[(2 10^{-})]\}\end{aligned}$$ We only need to consider the subcategories containing only indecomposable rigid objects in ${\mathcal}D_{1,2}$. By Theorem \[f\], all the torsion pairs $(\X_{i}, \X^\perp_{i})$ and $( ^\perp\X_{i}, \X_{i})$ for $i=1,2,3$ are listed above.
When $n=1,t=1$, ${\mathcal}D_{1,1}=D^b(\mathbb{K}D_{4})/\tau^2\varphi$ is a $2$-Calabi-Yau category with maximal rigid objects, whose Auslander-Reiten quiver is shown in figure \[12\]. By Theorem \[h2\], the number of torsion pairs in ${\mathcal}D_{1,1}$ is $T_{1+1}+2t_{1,1}=10$. We list the torsion pairs according to Theorem 4.10:

[Figure 12: the Auslander-Reiten quiver of ${\mathcal}D_{1,1}$.]

$$\begin{aligned} \X_{1}=\{[(0)] \}&\quad\quad& \X^\perp_{1}[-1]={\mathcal}D_{1,1}\nonumber \\ \X_{2}=\{[(13)]\}&\quad\quad& \X^\perp_{2}[-1]=\{[(13)],[(15^{+})],[(15^{-})]\}\nonumber \\ \X_{3}=\{[(24)] \}&\quad\quad&\X^\perp_{3}[-1]=\{[(24)],[(26^{+})],[(26^{-})]\}\nonumber \\ \X_{4}=\{[(13)],[(15^{+})] \}&\quad\quad&\X^\perp_{4}[-1]=\{[(13)],[( 15^{-})]\}\nonumber \\ \X_{5}=\{[(24)],[(26^{+})] \}&\quad\quad& \X^\perp_{5}[-1]=\{[(24)],[( 26^{-})]\}\nonumber \\\end{aligned}$$ Note that $\X_{4}=\{[(13)],[(15^{+})]\}$, $\X^\perp_{4}[-1]=\{[(13)],[( 15^{-})]\}$, $\X_{5}=\{[(24)],[(26^{+})] \}$, and $\X^\perp_{5}[-1]=\{[(24)],[( 26^{-})]\}$ are all
the subcategories containing one diameter and some indecomposable rigid objects; their number is $4=2t_{1,1}$. All the torsion pairs $(\X_{i}, \X^\perp_{i})$ and $( ^\perp\X_{i}, \X_{i})$ for $i=1,2,\ldots,5$ are listed above.

Hearts of torsion pairs
=======================

In this section, we determine the hearts of torsion pairs in finite $2$-CY triangulated categories with maximal rigid objects. Hearts of cotorsion pairs in an arbitrary triangulated category were introduced by Nakaoka [@n1]; they unify the construction of hearts of t-structures [@bbd] and the construction of abelian quotient categories by cluster tilting subcategories [@bmrrt; @kr; @kz]. For two subcategories $\X,\Y$ of a triangulated category ${\mathcal}C$, the pair $(\X,\Y)$ is a torsion pair in ${\mathcal}C$ if and only if $(\X,\Y[-1])$ is a cotorsion pair in ${\mathcal}C$. The heart of a torsion pair $(\X,\Y)$ is by definition the heart of the cotorsion pair $(\X,\Y[-1])$. We will use the notation of cotorsion pairs in this section. We recall the construction of hearts of cotorsion pairs from [@n1]: Let ${\mathcal}C$ be a triangulated category and $(\X,\Y)$ a cotorsion pair with core ${\mathcal}I$ in ${\mathcal}C$. Denote by ${\mathcal}H$ the subcategory $({\mathcal}X[-1]\ast {\mathcal}I)\cap ({\mathcal}I\ast{\mathcal}Y[1])$. The heart of the cotorsion pair $({\mathcal}X,{\mathcal}Y)$ is defined as the quotient category ${\mathcal}H/{\mathcal}I$, denoted by $\underline{{\mathcal}H}$. It was proved in [@n1] that $\underline{{\mathcal}H}$ is an abelian category. There is a cohomological functor $H=h \pi$ from ${\mathcal}C$ to $\underline{{\mathcal}H}$, where $\pi$ is the quotient functor from ${\mathcal}C$ to $\underline{{\mathcal}C}={\mathcal}C/{\mathcal}I$ and $h$ is a functor from $\underline{\C}$ to $\underline{\H}$. See [@an; @n1] for the details of the constructions. We now give the main result of this section.
Let $\C$ be a finite $2$-Calabi-Yau triangulated category, and let $(\X, \Y)$ be a cotorsion pair in $\C$ with core $\I=add\, I$, where $I$ is a rigid object. Then we have an equivalence of abelian categories $$\underline{{\mathcal}H}\simeq mod \ End I.$$ \[g1\] For any cotorsion pair $(\X,\Y)$ in $\C$ with core $\I$, the quotient $({}^\bot\I[1])/\I$ is also a finite $2$-CY triangulated category with shift functor $\langle 1\rangle$ [@iy], and $(\X/\I,\Y/\I)$ is a t-structure by [@zz2]. It follows from Proposition \[p4\] that $({}^\bot\I[1])/\I=\X/\I\bigoplus \Y/\I$. On the other hand, $(\I,{}^\bot\I[1] )$ is a cotorsion pair with the same core $\I$, and the heart $\underline{H}_1$ of $(\I,{}^\bot\I[1] )$ is equivalent to the module category $mod\ End I$ by [@iy], i.e., $\underline{H}_1\simeq mod\ End I$. By the same proof as that of Theorem 6.4 in [@zz2], the heart $\underline{H}$ of $(\X,\Y)$ is equivalent to $\underline{H}_1$. Thus $\underline{H}\simeq mod\ End I$. Now we have the following conclusions about the hearts of cotorsion pairs in finite $2$-CY triangulated categories ${\mathcal}C$: 1. If ${\mathcal}C$ contains cluster tilting objects, then the hearts have been determined in [@zz2]. 2. If ${\mathcal}C$ has only the zero maximal rigid object, then any cotorsion pair $(\X,\Y)$ is a t-structure, and $\X[1]=\X, \Y[1]=\Y$. Then the heart $\H=\X[-1]\bigcap \Y[1]=0$. 3. If ${\mathcal}C$ has non-zero maximal rigid objects which are not cluster tilting, then the hearts are determined by the following result combined with Proposition \[g1\]. - The heart of any cotorsion pair in $\A_{n,t}$ is a module category over one of the algebras given by the following quivers with relations: - $\xymatrix@C=1.5em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & \ar@{..}[r]& k-1\ar@{->}[r]&k}$ with $1\leq k\leq n$. - $\xymatrix@C=1.5em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & \ar@{..}[r]& k-1 \ar@{->}[r] & k \ar@(ur,dr)^{\alpha} }$ with relation $\alpha^{2}$, $1\leq k\leq n$.
- Mutations of the quivers occurring in $(1)$ or $(2)$ above. - The heart of any cotorsion pair in $\D_{n,t}$ is the module category of an algebra given by one of the following quivers with relations: - $\xymatrix@C=1.5em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & \ar@{..}[r]& k-1\ar@{->}[r]&k}$ with $1\leq k\leq n$. - $\xymatrix@C=1.5em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & \ar@{..}[r]& k-1 \ar@{->}[r] & k \ar@(ur,dr)^{\alpha} }$ with relation $\alpha^{2}$, $1\leq k\leq n$. - Mutations of the quivers occurring in $(1)$ or $(2)$ above. - The heart of any cotorsion pair in $\D^{b}(\mathbb{K}E_{7})/\tau^{2}$ is the module category of the algebra given by the following quiver with relation: $$\xymatrix@C=1.5em@R=1em{\cdot \ar@(ur,dr)^{\alpha} }$$ with relation $\alpha^{3}$. - The heart of any cotorsion pair in $\D^{b}(\mathbb{K}E_{7})/\tau^{5}$ is the module category of an algebra given by one of the following quivers with relations: - $\xymatrix@C=1.5em@R=1em{\cdot \ar@(ur,dr)^{\alpha} }$ with relation $\alpha^{2}$. - $\xymatrix@C=1.5em@R=1em{\cdot \ar@(ul,dl)_{\alpha} \ar@{->}[r]^{\beta} & \cdot\ar@(ur,dr)^{\gamma} }$ with relations $\beta\alpha-\gamma\beta$, $\alpha^{2}$, $\gamma^{2}$. By Proposition \[g1\], for any cotorsion pair $(\X,\Y)$ with core $\I$ in a finite $2$-CY triangulated category, the heart satisfies $\underline{H}\simeq \mathrm{mod}\ \mathrm{End}\, I$, where $I$ is a rigid object in ${\mathcal}C$. So the heart is determined by the endomorphism algebra of a rigid object, which is a subalgebra of the endomorphism algebra of a maximal rigid object. 
Since $[(1,3)]\oplus[(1,4)]\oplus\ldots\oplus[(1,n+2)]$ is a maximal rigid object in $\A_{n,t}$, and its endomorphism algebra is given by $\xymatrix@C=1.5em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & 3 \ar@{..}[r]& n-1 \ar@{->}[r] & n \ar@(ur,dr)^{\alpha} }$ with relation $\alpha^{2}$ by [@bpr], we can obtain the endomorphism algebra of any maximal rigid object through a sequence of mutations; as a result, the endomorphism algebra of any rigid object is obtained. This proves the assertion in $1$. The assertion in $2$ can be proved similarly. $\D^{b}(\mathbb{K}E_{7})/\tau^{2}$ has only two indecomposable rigid objects, each of which is maximal; their endomorphism algebra is given in Proposition $2.14$ of [@bpr], so the result in $3$ is clear. For $4$, $\D^{b}(\mathbb{K}E_{7})/\tau^{5}$ has five indecomposable rigid objects, and the endomorphism algebra of any maximal rigid object is given in Proposition $2.12$ of [@bpr]. $\bold {Acknowledgement}$: The first author expresses her great thankfulness to Panyue Zhou for generous discussions on the topic of this paper. Both authors would like to thank Yu Zhou for his comments on an earlier version of the paper. [99]{} C. Amiot. On the structure of triangulated categories with finitely many indecomposables. Bull. Soc. Math. France, 135(3), 435-474, 2007. N. Abe, H. Nakaoka. General heart construction on a triangulated category (II): Associated homological functor. Appl. Categ. Structures, 20(2), 161-174, 2012. A. A. Beilinson, J. Bernstein, P. Deligne. Faisceaux pervers. Astérisque, Soc. Math. France, Paris, 100, 1982. I. Burban, O. Iyama, B. Keller, I. Reiten. Cluster tilting for one-dimensional hypersurface singularities. Adv. Math., 217, 2443-2484, 2008. A. B. Buan, R. Marsh, M. Reineke, I. Reiten, G. Todorov. Tilting theory and cluster combinatorics. Adv. Math., 204, 572-618, 2006. A. B. Buan, R. Marsh, D. Vatne. Cluster structures from $2$-Calabi-Yau categories with loops. Math. Z., 265, 951-970, 2010. A. B. Buan, Y. Palu, I. Reiten. 
Algebras of finite representation type arising from maximal rigid objects. J. Algebra, 446, 426-449, 2016. P. Caldero, F. Chapoton, R. Schiffler. Quivers with relations arising from clusters ($A_n$ case). Trans. Amer. Math. Soc., 358(3), 1347-1364, 2006. S. E. Dickson. A torsion theory for Abelian categories. Trans. Amer. Math. Soc., 121, 223-235, 1966. B. Grimeland. Periodicity of cluster tilting objects. Preprint, arXiv:1601.00314v1, 2016. C. Gei[ß]{}, B. Leclerc, J. Schröer. Rigid modules over preprojective algebras. Invent. Math., 165(3), 589-632, 2006. T. Holm, P. J[ø]{}rgensen, M. Rubey. Ptolemy diagrams and torsion pairs in the cluster category of Dynkin type $A_{n}$. J. Algebraic Combin., 34(3), 507-523, 2011. T. Holm, P. J[ø]{}rgensen, M. Rubey. Torsion pairs in the cluster tubes. J. Algebraic Combin., 39(3), 587-605, 2014. T. Holm, P. J[ø]{}rgensen, M. Rubey. Ptolemy diagrams and torsion pairs in the cluster categories of Dynkin type $D$. Adv. Appl. Math., 51, 583-605, 2013. O. Iyama, Y. Yoshino. Mutations in triangulated categories and rigid Cohen-Macaulay modules. Invent. Math., 172(1), 117-168, 2008. B. Keller. On triangulated orbit categories. Doc. Math., 10, 551-581, 2005. B. Keller, I. Reiten. Cluster-tilted algebras are Gorenstein and stably Calabi-Yau. Adv. Math., 211(1), 123-151, 2007. S. Koenig, B. Zhu. From triangulated categories to abelian categories: cluster tilting in a general framework. Math. Z., 258(1), 143-160, 2008. H. Nakaoka. General heart construction on a triangulated category (I): unifying t-structures and cluster tilting subcategories. Appl. Categ. Structures, 19(6), 879-899, 2011. H. Nakaoka. General heart construction for twin torsion pairs on triangulated categories. J. Algebra, 374, 195-215, 2013. P. Ng. A characterization of torsion theories in the cluster category of type $A_\infty$. Preprint, arXiv:1005.4364, 2010. R. Schiffler. A geometric model for cluster categories of type $D_{n}$. J. Algebraic Combin., 27, 1-21, 2008. 
J. Xiao, B. Zhu. Locally finite triangulated categories. J. Algebra, 290, 473-490, 2005. Y. Zhou, B. Zhu. Maximal rigid subcategories in $2$-Calabi-Yau triangulated categories. J. Algebra, 348, 49-60, 2011. Y. Zhou, B. Zhu. $T$-structures and torsion pairs in a $2$-Calabi-Yau triangulated category. J. Lond. Math. Soc., 89(1), 213-234, 2014. Y. Zhou, B. Zhu. Mutation of torsion pairs in triangulated categories and its geometric realization. Preprint, arXiv:1105.3521, 2011. J. Zhang, Y. Zhou, B. Zhu. Cotorsion pairs in the cluster category of a marked surface. J. Algebra, 391, 209-226, 2013.
--- author: - 'C. Trundle, A. Pastorello, S. Benetti, R. Kotak, S. Valenti, I. Agnoletto, F. Bufano, M. Dolci, N. Elias-Rosa, T. Greiner, D. Hunter, F.P. Keenan, V. Lorenzi, K. Maguire, S. Taubenberger' bibliography: - 'sn07rt.bib' title: 'Possible Evidence of Asymmetry in SN 2007rt, a Type IIn Supernova' --- Introduction ============ Stars with masses greater than 7-8 M$_{\odot}$ are thought to end their lives as core-collapse supernovae (CCSNe). During their lifetimes massive stars undergo significant mass-loss, particularly when massive enough to pass through a luminous blue variable (LBV) or Wolf-Rayet (WR) phase. Not surprisingly, this mass-loss leaves behind circumstellar material (CSM) surrounding the star. Evidence for the presence of surrounding CSM has been detected in a number of hydrogen-rich SNe. The spectra of these objects do not show the broad P-Cygni profiles of the prototypical Type II supernovae. Instead, they are distinguished by their narrow H$_{\alpha}$ emission ($<$ 1000 km s$^{-1}$) on top of a broader emission profile. This narrow feature is a signature of circumstellar matter previously shed by the progenitor star. Such objects led @sch90 to identify a sub-class of objects amongst the hydrogen-rich core-collapse supernovae, namely Type IIn. It is now generally believed that if a star undergoes significant mass-loss in its lifetime and subsequently evolves into a supernova, the fast-moving ejecta from the explosion interacts with the previously expelled wind material. This interaction is believed to cause a fast shock wave in the CSM and a reverse shock in the ejecta, with the shocked regions emitting high-energy radiation [@chev94; @chug94]. The intensity of this interaction and its effect on the supernova spectrum and light-curve is dependent on the density, composition and geometrical configuration of the CSM, and can provide an excellent trace of the mass-loss in the pre-explosion lifetime of the progenitor star. 
Type IIn SNe constitute a very heterogeneous group of objects showing a wide variation in the strengths of their emission lines and the behaviour of their light curves. Whilst the prototypical IIn, SN 1988Z [@stat91; @tur93; @art99], is less luminous than Type Ia SNe, some of the most luminous SNe belong to this class [viz. 2006gy, 2006tf; see @ofek07; @smith07; @smith08; @ag09]. In addition to the characteristic narrow H features, Type IIn spectra generally have strong blue continua, to which a single blackbody cannot provide an adequate fit whilst simultaneously fitting the red part of the spectrum. @smith08b suggested that at late times the blue continuum present in the Type IIn SN 2005ip was a result of the presence of a forest of high-ionisation forbidden emission lines, which they refer to as a pseudo-continuum. This verified earlier speculation by @stat91 and @tur93 that the source of the blue spectral region in SN 1988Z was a strong interaction with the surrounding circumstellar material. The presence of CSM provides a unique insight into the mass-loss history of the progenitor prior to core-collapse. However, there remains an element of ambiguity over the progenitors of Type IIn supernovae. Recent work has indicated that some of these objects may be connected to luminous blue variables (LBVs) or at least have undergone LBV-like behaviour shortly before core-collapse [see @k06; @GY07; @GY08; @smith07; @tru08; @smith08; @ag09]. Another group, the hybrid Type Ia/IIn objects, are thought to be Type Ia SNe disguised as Type IIn, due to the strong narrow H$_{\alpha}$ emission in their spectra and the possible presence of the S [ii]{} and Si [ii]{} features typical of Type Ia [viz. SN 2002ic; see @h03; @ald06; @k04]. However, this is largely under debate within the community [@b06; @tru08]. 
There are also a number of so-called ‘transitional’ objects, where the presence of varying degrees of narrow hydrogen and helium lines in their spectra place them in a classification scheme between Type IIn and Type Ib/c objects [such as SN 2005la and the Type Ibn, SN 2006jc; @pas07; @fol07; @pas08a; @pas08b; @smith08a]. The ambiguities surrounding Type IIn progenitors lead us to tread carefully whilst discussing this group of supernovae, and justify an in-depth analysis of those in the class which are dissimilar to the group's prototype, SN 1988Z [@stat91; @tur93; @art99]. In this paper we will discuss the photometric and spectroscopic evolution of SN 2007rt for more than 400 days post-discovery. SN 2007rt was discovered by @li07 in UGC 6109, from unfiltered KAIT images, on the 24$^{\rm th}$ November 2007. @blond07, as part of the CfA Supernova Survey, classified SN 2007rt as a Type IIn supernova, 2-3 months past maximum, and claim the two best comparison spectra for this object are those of the Type IIn SNe 1998S and 1996L. However, from follow-up spectra we identified a broad helium feature, which is not detected in SN 1998S or many other Type IIn SNe, and hence warrants further investigation. Observations {#obs} ============ Photometric and spectroscopic data of SN 2007rt were collected from November 2007 to March 2009. The details of these observations are logged in Table \[obslog2\] and are outlined below. Photometry {#obsphot} ---------- Our collaboration obtained optical photometry of SN 2007rt with the Telescopio Nazionale Galileo (TNG) and Nordic Optical Telescope (NOT) in La Palma (Canary Islands, Spain), the 1.82m Copernico telescope of the Asiago Observatory (Italy), the 1.52m telescope of the Loiano Observatory (Italy), and the 2.2m Calar Alto telescope (Spain). In addition, four data points provided by amateur astronomers were used. In total this gives a coverage from 4 to 481 days after discovery (see Fig. \[phot1\]). 
The images were trimmed, de-biased and flatfielded. Since template images of the host galaxy were not available, the SN magnitudes were measured using a point spread function (PSF) fitting technique in the Image Reduction and Analysis Facility (IRAF)[^1]. Zero-points were defined making use of standard Landolt fields observed on the same night as the SN. The magnitudes of SN 2007rt were then calibrated relative to the average magnitudes of a sequence of stars in the SN field obtained during three photometric nights (highlighted by (ph) in column 7 of Table \[obslog2\]). Unfiltered magnitudes obtained from amateur images were rescaled to the V- or R-band magnitudes depending on the wavelength position of the maximum of the quantum-efficiency curve of the detectors used [see also the discussion in @pas08c]. These unfiltered magnitudes are denoted by C in Table \[obslog2\]. The calibrated SN magnitudes are presented in Table \[photval\] and Fig. \[phot1\]. Spectroscopic Data {#obsspec} ------------------ An intermediate resolution ($\sim$30 km s$^{-1}$) optical spectrum of SN 2007rt was obtained using the William Herschel Telescope (WHT) on La Palma, as well as low resolution ($\sim$200 km s$^{-1}$) spectra from the telescopes listed above for photometry. These provided spectral coverage over a 426-day period from discovery on 24$^{\rm th}$ November 2007. Details of the epoch, wavelength coverage and resolution of these spectra are presented in Table \[obslog2\]. The spectra were reduced using standard spectral reduction procedures in IRAF. They were wavelength and flux calibrated using arc lamps and spectrophotometric standards observed on the same night. The wavelength calibrations were verified using the narrow sky lines. Absolute flux calibrations were then made with photometry taken on the same night. 
Prior to flux calibration, it was necessary to apply a second-order correction to the ALFOSC/NOT spectra, due to contamination in the red part of the spectra from blue light of the second order. This was accomplished using the procedure outlined by [@stan07]. The spectroscopic data have been corrected for the redshift of the host galaxy, UGC 06109 ($z=0.022365 \pm 0.00080$), as published in the updated Zwicky catalog [@fal99]. The blue continuum and the absence of Na [i]{} D lines in the spectra of SN 2007rt suggest that there is a negligible effect due to extinction on the observed spectral energy distribution. Thus we have only corrected the spectra by the small Galactic extinction contribution as suggested by @sch98. Along the line of sight of SN 2007rt the Galactic extinction is E(B-V) = 0.02 mag. Light Curve evolution {#photanalysis} ===================== Fig. \[phot1\] shows the UBVRI light curves of SN 2007rt. Over the first 130 days after discovery, the light curve of SN 2007rt evolves slowly (at a rate of 0.003 mag d$^{-1}$), and it is clear the supernova has not been caught at maximum (see Sect. \[specevol\]). Following this there is a gradual decline in the light curve, which steepens at late phases (0.01 mag d$^{-1}$ from 458-562 days post-explosion). It was reported by @li07 that on a KAIT image taken almost seven months prior to discovery, on 8$^{\rm th}$ May 2007, nothing was detected at the position of the supernova. Since this supernova was discovered quite late in its evolution, as suggested by the non-detection of a maximum in the light curve, it is difficult to determine the explosion epoch. However, we obtained an estimate of the explosion date of SN 2007rt indirectly using the spectra and light-curve of an interacting Type IIn SN, SN 2005ip. Age of SN 2007rt {#age} ---------------- The light-curve of SN 2005ip declined rapidly after discovery, suggesting SN 2005ip was discovered close to maximum [@smith08b]. 
Many core-collapse SNe, for which the rise time to peak was observed, have rise times of 3 weeks or less. Few Type IIn SNe have been detected during this rise phase, and those that have give rise times of 20-50 days (see SN 2006gy and SN 2005gj in Fig. \[phot2\]). The fact that Type IIn SNe are rarely detected before maximum suggests that the rise time is quite fast. Based on this, it is reasonable to assume that SN 2005ip was detected within a couple of weeks of explosion. Hence we adopt an explosion date for SN 2005ip of JD=2453673, which is 7 days prior to its detection (Fig. \[phot2\]). This is consistent with the findings of @smith08b; however, we cannot ignore that there is a large degree of uncertainty in this. The assumption of a short rise time for SN 2005ip is also supported by the rapid change in the spectra after discovery. The first spectrum shows an almost featureless and very blue spectrum [@smith08b], and over the following days the broad features and continuum evolve rapidly (from unpublished spectra, Trundle et al. in prep). At 95 days after its discovery, the spectrum of SN 2005ip best matches the broad features and slope of our first SN 2007rt spectrum (see Sect. \[ccsne\]). Assuming the similarity of the spectra of SN 2005ip and SN 2007rt indicates that these supernovae are of similar age, the age estimate of the first spectral epoch of SN 2007rt can be refined to approximately 102 $\pm$ 40 days after explosion, and thus the explosion date is set to JD=2454349 $\pm$ 40. Before adopting this age for the following discussions, we will attempt to justify our assumption that these two objects are of a similar age at these epochs. The spectra of SN 2007rt at discovery and of SN 2005ip at 95 days are heavily dominated by the CSM interaction, so although there is evidence for spectral similarity, the properties of the CSM may mask the true epoch. However, a very young age can be ruled out for SN 2007rt. 
Firstly, the first spectrum was obtained 21 days post-discovery and hence SN 2007rt must be at least 21 days old. Secondly, in a Type IIn at early epochs ($<$40 days), a strong blue continuum would be expected, which evolves rapidly along with the broad features (viz. SN 2005ip, SN 2005gj; [@smith08b; @ald06]). However, the blue continuum and broad features in SN 2007rt do not evolve rapidly over the 60-day period between our first and fifth spectra (see further discussion of the continuum in Sect. \[specevol\]). Similarly, a large amount of CSM, prolonging the interaction duration of SN 2007rt, could lead to an underestimate of its age. In spite of this, uncertainties of a few weeks in our estimates will not substantially alter our findings. From this point forward we adopt the explosion date of JD=2454349 $\pm$ 40, implying the observations of SN 2007rt were taken between 85 and 562 days post-explosion. [^2] The light curve of SN 2007rt is compared to those of a range of SN types in Fig. \[phot2\]. At approximately 100 days past maximum, SN 2007rt is more luminous than the fast-declining Type IIn SN 1999el, but fainter than some of the most luminous Type IIn objects, SN 2006gy and SN 2005gj. Despite the similarity in the spectra of SN 2007rt and SN 2005ip at early times, there is a significant difference in the absolute luminosity of their light curves. This may be caused by differing densities and distributions of circumstellar matter surrounding the two SNe. The early-time light-curve evolution of SN 2007rt is very slow, declining by 0.003 mag d$^{-1}$. This is considerably slower than that of SN 2005ip, which has a decline rate of 0.015 mag d$^{-1}$ over a similar period [@smith08b]. Spectral evolution {#specevol} ================== The spectra of SN 2007rt, displayed in Fig. \[optspec\], have been corrected for a Galactic reddening of E(B-V) = 0.02 mag as discussed in Sect. \[obsspec\]. 
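The rest-frame and reddening corrections applied to the spectra can be sketched as follows. This is a minimal illustrative sketch, not the actual reduction pipeline: the crude $1/\lambda$ scaling of the optical extinction curve and the example wavelength values are assumptions made purely for illustration.

```python
import numpy as np

def deredshift(wavelength_obs, z):
    """Shift observed wavelengths into the host-galaxy rest frame."""
    return wavelength_obs / (1.0 + z)

def deredden(flux, wavelength, ebv, r_v=3.1):
    """Remove foreground extinction using a crude 1/lambda scaling of the
    optical extinction curve (real pipelines use e.g. the Cardelli law)."""
    a_lambda = r_v * ebv * (5500.0 / wavelength)  # A_V scaled with wavelength
    return flux * 10.0 ** (0.4 * a_lambda)

z_host = 0.022365   # UGC 06109, updated Zwicky catalog
ebv_gal = 0.02      # Galactic E(B-V) towards SN 2007rt, in mag

# Observed wavelength of H-alpha at the host redshift, back to the rest frame
wl_obs = np.array([6563.0 * (1.0 + z_host)])
wl_rest = deredshift(wl_obs, z_host)  # recovers 6563 A
```

Dereddening brightens the blue end of the spectrum more than the red, which is why even the small Galactic contribution is applied before any continuum fitting.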
There is a blue excess present in the spectra in the early epochs, which flattens after more than 300 days post-explosion. We fit a blackbody to the spectrum of SN 2007rt, and obtain a temperature of $\sim$7500 K for the first epoch. This fit was in itself poor, and as the SN evolved it became increasingly difficult to simultaneously reconcile the flat red spectral region and the blue excess with a single blackbody. @smith08b showed that, in the case of SN 2005ip, spectra taken some 200 days after maximum revealed a blue pseudo-continuum composed of a forest of narrow emission lines, rather than a real thermal continuum with a blackbody-like behaviour. This indicated that for SN 2005ip the blue pseudo-continuum at late times arose from the interaction with circumstellar material, and was not a real thermal continuum. This had previously been speculated by @stat91 and @tur93 for the case of SN 1988Z. The low velocity of the CSM in SN 2005ip allowed the detection of the separate components of the emission-line spectrum in the blue which formed the pseudo-continuum at late stages. It may therefore be that the temperature derived by fitting a blackbody to the spectrum of SN 2007rt is not meaningful. This will be discussed further in Sect. \[ccsne\].[^3] Prominent features in the spectra of SN 2007rt are the narrow and broad H$_{\alpha}$ features, the intermediate-velocity H$_{\beta}$ line and the broad Ca [ii]{} infrared triplet. There are also a number of narrow emission features associated with He and heavier metals in the spectra. However, possibly the most peculiar spectral feature is the broad, relatively flat-topped He [i]{} 5875 Å emission line. In Fig. \[optspec\] these main features are identified. At all phases the H and He lines are asymmetric. At late times the H$_{\alpha}$ and He [i]{} 5875 Å lines have a pronounced blueshift, and a significant lack of redshifted emission in the broad component. 
There is also a hint of a narrow component of He [i]{} 5875 and 7065 Å in the early epochs (see Fig. \[ha\]), suggesting there is a certain amount of helium in the CSM. The narrow He [i]{} 5875 Å line diminishes from the first spectrum (102 days post-explosion), and disappears by day 208. In the later epochs, from day 475, an intermediate-velocity component appears. This suggests that there is interaction between the ejecta and the He in the CSM. In addition, a strong permitted O [i]{} 8446 Å line develops from day 273, which has velocities comparable to the intermediate H component. To determine the properties of the spectral features we have fit Gaussians to their profiles. Each Gaussian was parameterized by three quantities: the central wavelength, full-width-half-maximum ([fwhm]{}) and peak height. These parameters were freely fit by the fitting routine. In a small number of cases, the central wavelengths of the narrow lines were fixed, as the low intensity of the line relative to the continuum prohibited the detection of its peak. The resulting wavelengths, velocities and intensities of the lines are presented in Tables \[hafittbl\]-\[narrow\]. All [fwhm]{} values given in the aforementioned tables and the discussions below are corrected for the instrumental resolution. In interacting supernovae the flux calibration is subject to large uncertainties due to the presence of strong emission lines. Errors on the measured intensities of the lines are therefore on the order of 20%, whereas the [fwhm]{} uncertainties are 10%. The narrow lines which were detected and fitted in the spectrum had [fwhm]{} velocities that were not fully resolved by the low resolution spectra (these are discussed further in Sect. \[narrowdis\]). As a guide to the properties of these lines, their parameters as determined from the TNG spectrum taken on the 11$^{\rm th}$ February are presented in Table \[narrow\]. 
In addition, a number of the lines were resolved in the intermediate-resolution WHT spectrum taken on the 4$^{\rm th}$ February, and these are also presented in the aforementioned table. In the case of H$_{\alpha}$ and He [i]{} 5875 Å, multiple Gaussian fits were employed (see Sect. \[haevol\]) due to the asymmetry of the observed profiles. As discussed in more detail below, H$_{\alpha}$ was first fit with a broad, intermediate and narrow Gaussian, whilst the He [i]{} 5875 Å line was fit with a broad and narrow component for the early epochs, and a broad and intermediate component for the latest epochs. The parameters of the broad and intermediate components from these fits are shown in Table \[hafittbl\]. However, these initial fits were poor, and hence we refit the profiles with up to four Gaussians: an intermediate component, two broad components and, where appropriate, a narrow component. The parameters of the broad and intermediate components of the latter fits are presented in Table \[hafittbl2\]. Narrow H$_{\alpha}$ P-Cygni profile {#narrowha} ----------------------------------- Fig. \[ha\] shows the time-series of the H$_{\alpha}$ line. From day 102 to day 208 there is a narrow emission component present which decreases in strength over time. The luminosity of this narrow component decreases by more than a factor of 2 between our first epoch of data and that taken 58 days later. It has disappeared completely 138 days after our first spectrum. This feature is not fully resolved by the low resolution spectra which make up most of our dataset. However, we can place an upper limit on the [fwhm]{} velocity of $\leq$ 200 km s$^{-1}$ (see Table \[narrow\]). To resolve this narrow H$_{\alpha}$ feature we obtained an intermediate-resolution spectrum of SN 2007rt on the 4$^{\rm th}$ February 2008 with ISIS on the WHT. This revealed a previously unresolved P-Cygni profile (see Fig. \[hahr\]). The [fwhm]{} of the emission and absorption features are 84 and 54 km s$^{-1}$, respectively (Table \[narrow\]). 
The blue wing of the absorption profile extends out to 128 km s$^{-1}$. If we assume that the narrow feature is representative of unshocked CSM, this edge velocity of 128 km s$^{-1}$ corresponds to the terminal velocity of the wind of the progenitor star (see Sect. \[progenitor\] for more discussion of the progenitor). H$_{\alpha}$ and He [i]{} evolution {#haevol} ----------------------------------- From the H$_{\alpha}$ profiles in the first panel of Fig. \[ha\], we can see that the line is highly asymmetric and consists of multiple components. Typically, the H$_{\alpha}$ profile of Type IIn supernovae can be decomposed into three parts: broad, intermediate and narrow. These are normally attributed to shocked (intermediate) and unshocked (narrow) circumstellar matter and emission from the underlying supernova ejecta (broad). In SN 2007rt, the red wing of the broad feature appears to be less extended than the blue wing, and both the broad and intermediate features show some asymmetry. The extent of the blue and red wings decreases rapidly with time, showing a reduction in the [fwhm]{} of the broad component, and hence a change in the ejecta. There is also a notable blueshift in the 240-day spectrum, which becomes more pronounced by day 273, and which is accompanied by an increased asymmetry or decrease in the emission in the red wing. In the latest epochs, between 475 and 507 days, the blueshift is still present and there is a remarkable lack of redshifted emission. This change in the line profiles is accompanied by a simultaneous decline in the light curve of SN 2007rt (see Fig. \[phot2\]). Asymmetry in the line profile and a blue-shifted peak are often related to dust formation. Since this affects the broad component of the line, it could indicate the presence of newly formed dust in the SN ejecta [viz. SN 1987A; see @dan89; @lucy89]. In the first instance, the complex profile of H$_{\alpha}$ in SN 2007rt was fit with three Gaussian profiles, allowing all parameters to vary freely. 
In the early epochs, the He [i]{} 5875 Å line did not require an intermediate component and hence was fit only with components representing the broad and narrow features. An additional narrow component was added to account for the contribution to the profile from \[N [ii]{}\] 5755 Å. In the case of He [i]{} 5875 Å the central wavelength of the narrow component was fixed. The narrow line is no longer present in either the H or He lines after 208 days, and hence the profiles at these epochs were fit with only two Gaussians (see Fig. \[hafitonline1\][^4]). From day 475, two Gaussians were fit to the He [i]{} 5875 Å line, representing the broad and intermediate components. The top panel of Fig. \[hefit\] shows a typical fit to H$_{\alpha}$ for one epoch. The parameters determined from this process for the intermediate and broad components are reported in Table \[hafittbl\]; those of the narrow components are presented separately. Uncertainties in the central wavelengths, [fwhm]{}, and intensities of the fits are $\pm$ 5-7 Å, 10%, and 20%, respectively. The intermediate component has a [fwhm]{} velocity range of $\sim$2900-1800 km s$^{-1}$ over the first 192 days from discovery. There is a gradual reduction in the [fwhm]{} of the broad components of the two lines over the observed period. In the first spectrum the broad component has a [fwhm]{} velocity of $\sim$10,000 km s$^{-1}$, decreasing to 6000 km s$^{-1}$ some 171 days later (see upper panel of Fig. \[vels\]). The fact that there is nearly a 40% reduction in the width of the line clearly shows that this broad component is related to the ejecta. Nevertheless, the high velocity of 10,000 km s$^{-1}$ detected over three months after explosion is inconsistent with non-interacting CCSNe and requires an extremely energetic explosion. We note here that the high expansion velocities are only inconsistent if the age of SN 2007rt is greater than a month or two. However, as discussed in Sect. 
\[age\] we cannot provide an accurate age estimate of the SN; the implications of these assumptions will be discussed further in Sect. \[asymm\]. Careful inspection of Fig. \[hefit\] reveals that the combination of the three Gaussian profiles does not provide a suitable fit to the red wing of the H$_{\alpha}$ profile. In addition, the peculiarly flat-topped He [i]{} profile mentioned in Sect. \[specevol\] is not consistent with a single broad component. An enhanced fit to both lines was found by fitting up to four Gaussian profiles: an intermediate component, broader blue- and red-shifted components, and, where appropriate, a narrow component (see Fig. \[hefit\]). The significance of these fits will be discussed in more detail in Sect. \[discuss\], but briefly, the two components at rest represent the shocked and unshocked circumstellar material. The other two components represent blobs of higher-velocity material moving away from the supernova, one in the direction of the observer and one opposite to this. To produce these fits, the H$_{\alpha}$ line was fit with multiple components, fixing the central wavelengths of the blue- and red-shifted broad components in an iterative manner, whilst the other parameters were set free. For the helium line, the [fwhm]{} and separation of the broad and intermediate components determined for the H$_{\alpha}$ line were used to fix their counterparts in the He [i]{} feature. The lower panels of Fig. \[hefit\] show the resultant fits to H$_{\alpha}$ and He [i]{} 5875 Å. (Fits to the spectra for additional epochs can be seen in Fig. \[hafitonline2\], which is only available online). The parameters for the intermediate and broad components are presented in Table \[hafittbl2\]. In the first epoch the intermediate component in both profiles has a [fwhm]{} of $\sim$2700 km s$^{-1}$, whilst the blue- and red-shifted broader components are at $\sim$5800 and 6700 km s$^{-1}$, respectively (see Table \[hafittbl2\] and the lower panel of Fig. \[vels\]). 
Over the first six months of the observational period of SN 2007rt, the broad component shows a 40% reduction in [fwhm]{}, whereas the intermediate component does not experience any noticeable change. It is therefore clear that the broader components are related to the ejecta, whilst the intermediate component is consistent with shocked material. In the last epochs, taken a further 6 months later, only two components (a blue-shifted broad component and an intermediate component) were used to fit the data due to a lack of emission in the red wing of the lines. H$_{\beta}$ {#beta} ----------- A strong intermediate component of H$_{\beta}$ is detected in SN 2007rt, with some indication of a weak narrow component in the earliest epochs (see second panel of Fig. \[ha\]). Unlike H$_{\alpha}$, the higher-order Balmer lines, H$_{\beta, \gamma}$, do not appear to have a broad component. For H$_{\beta}$ this is clearer, as it has a higher intensity relative to the strong blue continuum, but the behaviour of these lines is similar. The line profile of H$_{\beta}$ has an emission peak with a [fwhm]{} of $\sim$4000 km s$^{-1}$ in the first epoch of data. This corresponds to the intermediate component seen in H$_{\alpha}$, and changes at a similar rate (see Fig. \[vels\]). In SN 1988Z, similar differences in the H$_{\alpha}$ and H$_{\beta}$ profiles were observed, with no broad component detected in the latter [@tur93]. Blended with the H$_{\beta}$ line is an absorption trough or depression in the continuum. Whilst there may be some contribution from hydrogen, the [fwhm]{} is twice the width of the emission in H$_{\beta}$, and it is unlikely to be due to hydrogen alone. Narrow emission lines {#narrowdis} --------------------- Besides the prototypical narrow H$_{\alpha}$ feature, a number of other narrow emission features, many of which are forbidden lines, were observed in the low resolution spectra of SN 2007rt. 
These narrow lines have been identified as He [i]{} 5875, 7065 Å, \[O [iii]{}\] 4363, 4959, 5007 Å, \[N [ii]{}\] 5755 Å, \[Ne [iii]{}\] 3866 Å, and very weak \[Fe [iii]{}\] 5270 Å and \[Fe [vii]{}\] 6086 Å. There is also a possible detection of \[Fe [xi]{}\] 7891 Å, although fringing in that part of the spectrum makes this identification marginal. Unfortunately the narrow lines were not resolved in our low resolution spectra. In the highest resolution spectrum in this low resolution group, taken on the 11$^{\rm th}$ February 2008, the  of the lines are typically a few 100 (see Table \[narrow\]). The \[O [iii]{}\] lines have velocities of $\sim$200 , which compares well with the narrow emission component of H$_{\alpha}$, suggesting they are produced in the same region of the CSM. An intermediate resolution spectrum of the H$_{\alpha}$ region of SN 2007rt was obtained on the 4$^{\rm th}$ February 2008 with ISIS/WHT and revealed additional narrow features of \[O [i]{}\] 6300, 6363 Å, \[N [ii]{}\] 6548, 6583 Å and the high ionisation line \[Fe [x]{}\] 6374 Å. The \[Fe [vii]{}\] 6086 Å line was also clearly detected in this spectrum. In Table \[narrow\] the parameters of the Gaussian fits to the narrow lines detected in the TNG spectrum on 11$^{\rm th}$ February 2008 (day 160) and the ISIS/WHT spectrum (day 153) are presented as a guide to their parameters. Note that the velocities of the lines from the ISIS/WHT spectrum are significantly lower than those derived from the lines in the spectrum taken seven days later, on the 11$^{\rm th}$ February. This is due to the differences in spectral resolution, as the lines in the low resolution spectra are unresolved. We note here that as the lines are unresolved there may be some unidentified contribution to these narrow emission lines from underlying H [ii]{} regions.
Discussion {#discuss} ========== In the introduction we mentioned the ambiguity surrounding the progenitors of Type IIn SNe and the debate over the mechanism for the SN 2002ic-like supernovae. The strong interaction in these so-called hybrid Type Ia/IIn objects makes it difficult to definitively conclude whether these are Type Ia or core-collapse supernovae. However, in the case of SN 2007rt the presence of He in the ejecta, indicated by the broad He [i]{} 5875 Å line, and the lack of broad forbidden Fe features at late phases, which are prominent in type Ia SNe, together suggest that SN 2007rt is a core-collapse supernova. Comparison with other interacting SNe {#ccsne} ------------------------------------- In Sect. \[photanalysis\] we noted the similarity between SN 2007rt and another type IIn supernova, SN 2005ip. At early stages this SN displayed a series of weak narrow features of both low and high ionisation [@smith08b see their Fig. 2], as detected here in SN 2007rt. SN 2005ip bears a strong resemblance to the prototype for the type IIn class, SN 1988Z, which also had a narrow, forbidden, emission line spectrum [@stat91; @tur93]. In SN 2005ip and SN 1988Z there is a strong narrow He [i]{} feature, with SN 2005ip developing an intermediate feature over time [@smith08b see their Fig. 8]. SN 2007rt also has narrow He [i]{} 5875 and 7065 Å lines, which indicate the presence of helium in the circumstellar medium. However, the main spectral difference between SN 2007rt and these SNe is the presence of a broad He [i]{} 5875 Å feature in SN 2007rt. This broad helium line is not usually detected in Type IIn spectra but is present in SN 2007rt from the first spectrum, taken $\sim$102 days post-explosion, and narrows with time. A supernova with striking similarities to SN 2007rt is the type IIn SN 1997eg [@sal02; @hoff08]. In Fig. \[05ipspec\] these two supernovae are compared along with SN 2005ip and SN 1988Z.
SN 1997eg has strong broad He [i]{} 5875 and 7065 Å lines with velocities comparable to the broad H$_{\alpha}$ profile. @hoff08 suggest that the high He/H ratio in the ejecta of SN 1997eg indicates that the progenitor star must have shed a significant amount of its hydrogen shell, possibly through an episode of mass-loss prior to the supernova explosion. Such a mass-loss episode would deposit the hydrogen-rich material into the star's immediate environment, leaving behind a helium-rich atmosphere in the progenitor star to be revealed in the supernova ejecta. We suggest that this is also a likely scenario for SN 2007rt. However, it should be noted that strong lines are produced not only by a high abundance but also depend on the temperature and density conditions of the material. Comparing the spectra of the four supernovae mentioned above, it is clear that there are varying degrees of He line strength in these objects, with SN 2007rt being intermediate between SN 2005ip/SN 1988Z and SN 1997eg. One interpretation is that there are varying degrees of H and He in the ejecta of these SNe. In addition to the high helium abundance, @hoff08 remarked that the peaked He profiles are suggestive of ejecta interacting with an asymmetric circumstellar medium, and this is substantiated by spectropolarimetry results. At early times these type IIn supernovae have strong blue continua, with flatter red spectra. In the case of SN 2005ip a blackbody of 7300 K [@smith08b] was required to fit the data. Our earliest spectrum of SN 2007rt, at 102 days post-explosion, was poorly fit by a blackbody of $\sim$7550 K. Over time the blue excess decreased, and the red part of the spectrum flattened, making it difficult to fit a single blackbody. @smith08b reported a similar effect in SN 2005ip. However, in this case the late time data resolved this continuum into a forest of high-ionisation (mainly Fe) emission lines.
They suggest these lines are ionised by X-rays formed in the shocked material of a slow wind, and refer to a pseudo-continuum. This provides support for the suggestion that the blue excess in type IIn supernovae is related to the interaction of the ejecta with the CSM. @smith08b suggest that the blue excess in SN 2006jc is a result of a similar effect, but the wind of the progenitor (likely a WR star) was faster, and hence the blue pseudo-continuum consisted of a blend of broader emission lines. If confirmed, this highlights the role that the density and geometrical configuration of the progenitor's wind play in the evolution of these interacting SNe. In the case of SN 2007rt, the narrow H$_{\alpha}$ line suggests that the progenitor's wind has a velocity similar to that of SN 2005ip ($\sim$100 ). The partially resolved features of \[O [iii]{}\] at 200  suggest we should be able to at least partially resolve a forest of narrow emission lines, if present. Nevertheless, even at 507 days post-explosion we do not see this forest of narrow emission lines in the blue, despite the broad features in the continua of the two SNe being similar. The lack of evidence for this forest in our spectrum may be due to the continuum in SN 2007rt consisting of intermediate width emission lines, possibly of permitted/forbidden iron lines, formed in the shocked region. In Sect. \[beta\] we noted the depression in the continuum to the blue of the H$_{\beta}$ line. There is another such feature to the red of the \[O [iii]{}\] 5007 Å line (see Fig. \[pseudo\]). These features are detected in many interacting supernovae such as SNe 1997eg, 1995G, and 2005la [@sal02; @hoff08; @pas02; @pas08b see their Fig. 4], and coincide well with the broad Fe [ii]{} absorption features from multiplet 42 (4924, 5018, 5169 Å) present in type Ib/c spectra. However a comparison of late time spectra of SN 2007rt and SN 2005ip, as in Fig.
\[pseudo\], shows convincing evidence that in the spectra of interacting SNe this is due to a lack of emission lines amongst a dense forest, rather than an absorption trough. This is also thought to be the case for SN 2006jc [@pas07]. The non-blackbody-like continuum, the similarity of the broad features in the blue part of the spectrum with those of many interacting supernovae, and the clear case of a pseudo-continuum in SN 2005ip lead us to believe that SN 2007rt also has a pseudo-continuum. However, we cannot rule out that a real thermal continuum contributes wholly or partially to the early time spectra of SN 2007rt. Properties of the CSM --------------------- The CSM electron density and temperature can be inferred from the narrow forbidden lines in the spectrum of SN 2007rt. Coronal lines, such as \[Fe [x]{}\] 6374 Å, as well as low ionisation \[O [iii]{}\] lines are present, suggesting that there are regions of the CSM with different temperatures and densities. This is not unexpected in the complex environment of the CSM, and the detection of high ionisation Fe lines indicates that X-rays are produced in the shocked region. The presence of \[Fe [vii]{}\] 6086 Å and \[Fe [x]{}\] 6374 Å combined with the absence of \[Fe [vi]{}\] suggests that these lines are formed in a region of the CSM with an electron temperature between 2.5 and 8 $\times$ 10$^{5}$ K [@bry08]. The \[O [iii]{}\] lines, however, are expected to form in cooler regions with electron temperatures of $\sim$1 $\times$ 10$^{5}$ K [@bry08]. Using the relationship between the \[O [iii]{}\] line ratio and electron density given by @ost89, as cited in @sal02, we can determine the density of the CSM. Thus, using the temperature range implied above, with the \[O [iii]{}\] line intensity ratio (I$_{4959+5007}$/I$_{4363}$) of 2.93, the density of the CSM where the \[O [iii]{}\] lines form is 4.6 $\times$ 10$^{7}$ cm$^{-3}$.
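The \[O [iii]{}\] diagnostic used above can be written down explicitly. The sketch below assumes the Osterbrock (1989) form of the (4959+5007)/4363 ratio, $R = 7.90\,e^{3.29\times 10^{4}/T} / (1 + 4.5\times 10^{-4}\, n_e/T^{1/2})$; the numerical constants are quoted from that reference and should be checked against it, and the inversion is shown for an assumed temperature rather than as a reproduction of the exact value quoted in the text.

```python
import math

def oiii_ratio(ne, temp):
    # [O III] I(4959+5007)/I(4363) ratio as a function of electron
    # density ne [cm^-3] and temperature temp [K] (Osterbrock 1989 form)
    return 7.90 * math.exp(3.29e4 / temp) / (1.0 + 4.5e-4 * ne / math.sqrt(temp))

def electron_density(ratio, temp):
    # invert the ratio for ne [cm^-3] at an assumed electron temperature
    return math.sqrt(temp) / 4.5e-4 * (7.90 * math.exp(3.29e4 / temp) / ratio - 1.0)

# example: invert the observed ratio of 2.93 at an assumed T = 1e5 K
ne_example = electron_density(2.93, 1.0e5)
```

Because the inferred density depends steeply on the adopted temperature, any quoted value should be accompanied by the assumed $T$; the round trip `electron_density(oiii_ratio(ne, T), T) == ne` provides a basic consistency check.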
The narrow H$_{\alpha}$ component is present in the early spectra of SN 2007rt but has completely disappeared in the spectrum taken on 1$^{\rm st}$ May 2008. If we assume that this is due to the ejecta having swept up the entire CSM, we can determine the extent of the CSM. Assuming the explosion date discussed in Sect. \[photanalysis\] and an ejecta velocity of 10,000 , the unshocked region extends out to a maximum of 2.13 $\times$ 10$^{16}$ cm. Material ejected from a star at a wind speed of $\sim$100  would take $\sim$70 years to reach such a distance, hence the mass-loss must have occurred on this timescale. However, given the high densities of the CSM, as implied above from the \[O [iii]{}\] lines, it is possible that this narrow H$_{\alpha}$ feature disappears due to recombination. This explanation has been given for the disappearance of such features in SN 1994aj and SN 1996L [@b98; @b99]. Nature of the Progenitor Star {#progenitor} ----------------------------- In Sect. \[narrowha\] we discussed the very narrow P-Cygni feature detected in SN 2007rt, the absorption component of which has a  of 54  and extends out to 128 . Narrow P-Cygni profiles have been detected in a number of type IIn SNe. However, they appear to fall into two different categories: those with very narrow profiles, with the blue edge of the absorption profile extending out to $<$200 , and those with velocities in the range 600-1000 . Some supernovae which fall into the latter category are SNe 1994aj, 1994W, 1995G, and 1996L, while those with lower velocities are SNe 1997ab, 1997eg, and 2005gj [@chug94; @sol98; @b98; @b99; @pas02; @sal98; @sal02; @tru08]. As mentioned in Sect. \[narrowha\], these P-Cygni profiles provide insight into the stellar wind velocity of the progenitor star. The high velocities observed in SN 1994W and others are consistent with the wind velocities observed in WR stars [@crow07].
Luminous blue variables (LBVs) have slower wind velocities, in the range of 100-500  [@stahl01; @stahl03; @smith07], which may explain some of the objects in the low velocity category, such as SN 1997ab. Typically red supergiants have wind velocities of approximately 10 , but there are a few cases for which 30-40  edge velocities have been detected [viz. VY CMa, see @smith09b and references therein]. Therefore it cannot be ruled out that the wind velocities of some of these lower velocity objects may be consistent with extreme red supergiants. The expansion velocity of $\sim$128  is at the lower extreme of the LBV range, comparable to quiescent LBVs such as HD160529 [@stahl03], and is quite high for red supergiants. Type II spectra show no indication of He spectral lines at optical wavelengths, except at very early stages ($\sim$1 week) when the high temperatures in the ejecta allow for their formation. Type IIb SNe are the only ones which have both He and H present in their spectra, with type Ib possibly showing a trace of H accompanied by He. If the interpretation that the broad He [i]{} 5875 Å line in SN 2007rt is formed by a high helium abundance is valid, it suggests that the atmosphere of SN 2007rt’s progenitor has a higher He/H ratio than that of many Type II SNe progenitors. This places the progenitor of SN 2007rt as a transitional object between those of normal Type IIPs and those of hydrogen-stripped core-collapse SNe. The combination of H and He in the ejecta would suggest either that the progenitor passed through an LBV phase and lost a significant amount of its H shell through its previous mass-loss history, or that the progenitor is in a more evolved WN or WNH phase, i.e. mass-loss via stellar winds has revealed H-burning products at the star's surface but not He-burning products, as would be the case in WC stars.
However, the very low (LBV-like) wind velocity of the unshocked material is not consistent with a WR wind, as velocities in such stars are typically greater than 500  [@crow07 and references therein]. If the progenitor is a WR star, the CSM detected by the narrow H components could be the result of an LBV outburst which occurred prior to the progenitor entering a WN phase. In this case the progenitor would need to be in a very early stage of the WN phase, as otherwise the LBV wind would be swept up by the WN wind [@vm07]. However, we should note that there is no clear distinction between quiescent LBVs and WNH stars, as many of the latter are known to be quiescent LBVs [viz. AG Car, see @smith08c and references therein]. Explanation of the H and He [i]{} Asymmetries {#asymm} --------------------------------------------- As discussed in Sect. \[haevol\], the H$_{\alpha}$ and He [i]{} 5875 Å  profiles in the spectra of SN 2007rt are peculiar. An asymmetry is present in these lines from the first spectrum onwards (see Fig. \[ha\]). In the last few epochs of our spectral dataset, from 240 days, a blueshift is detected. ### Late phase evolution: $\sim$240-507 days post-explosion. Once the temperature in the ejecta has dropped below the threshold for dust grains to condense, dust can form. The presence of dust grains causes a net blueshift due to the absorption of redshifted light. Hence, asymmetric and blueshifted profiles that form as the ejecta expands and cools can be explained by dust condensing in the ejecta. In Fig. \[ha\] it can be seen that the asymmetry of the H$_{\alpha}$ profile in SN 2007rt increases with time, and at late phases the profile becomes blueshifted, with significant absorption of the redshifted light. This behaviour appears to be consistent with the presence of newly formed dust. In addition there is an increasing asymmetry in the He [i]{} line. Evidence of such behaviour due to dust has been seen in a number of Type II SNe, viz. 1987A [@dan89; @lucy89].
In SN 2007rt this blueshift is first detected in the spectrum taken on day 240 and becomes more pronounced by day 273. At even later epochs (475-507 days) the H$_{\alpha}$ and He [i]{} 5875 Å  lines show significant absorption of the redshifted light. Additionally, there is a significant decrease in SN 2007rt’s magnitude during this late phase. The R-band magnitude declines at a rate of 0.01 mag d$^{-1}$ from 458 to 562 days post-maximum. In the case of the Type II SN 1987A a clear rise in the IR magnitudes was detected, accompanied by a decline in the optical bands beyond 450 days post-explosion, indicating the formation of dust [0.016 mag d$^{-1}$ at 467-562 days, @whit89]. Hence the formation of dust is the most probable explanation for the decline in SN 2007rt’s lightcurve at these late epochs. ### Early phase evolution: $\sim$102-240 days post-explosion. The explanation of the asymmetries in the H and He profiles in the earliest SN 2007rt spectra is uncertain, due in part to the uncertainty in its age. Here we outline two possible scenarios: (1) the object is young and can form dust in the fast expanding ejecta; (2) the broad components are inconsistent with the SN’s age and it has an asymmetric or bipolar outflow. As mentioned above, reduced emission in the red wing of line profiles is suggestive of dust formation; however, most dust detections have been made in late-time ($\sim$300 days) SN spectra. The asymmetry in the first spectrum, taken approximately 102 days post-explosion, would appear to be inconsistent with the late-time formation of dust. Additionally, if dust is invoked to explain the asymmetry in the profiles, an explanation for the high ejecta velocities of 10,000  at approximately 102 days after explosion is required, due to the highly energetic explosions implied by such velocities. Typically non-interacting core collapse supernovae have velocities of 10,000  and greater only within 30 days of the explosion [@pat01 see Fig. 5].
At 100 days post-explosion, 5000-7000  are more typical expansion velocities. Whilst the age of SN 2007rt is uncertain, it is unlikely to be less than one month old, as the SN was first observed 21 days after discovery, was caught post-maximum, and in an interacting SN such as this the spectrum would otherwise be expected to be significantly bluer than detected (see Sect. \[age\]). Nevertheless, assuming that SN 2007rt has an age of 100 days or less, it is difficult to form dust at such an early epoch. Unlike the case of the peculiar Ibn SN 2006jc [@sak07; @mat08; @smith08a; @dicarlo08; @noz08], the dust must have formed in the ejecta and not in a post-shock region, as the asymmetries observed are in the broad rather than the intermediate component. The only other object for which dust has been invoked to explain early-time fading of the broad component is SN 2005ip [@smith08b]. @smith08b suggest that the dust forms in the fast expanding ejecta; however, it is unclear what mechanism would allow for dust formation in the high temperature gas of the ejecta. Possible additional support for dust formation at a young age is the detection of an IR excess by @fox09 from NIR photometry of SN 2005ip from 50-200 days post-discovery. An alternative scenario requires the presence of an aspherical or bipolar outflow. This scenario does not require such high expansion velocities as the case above, and hence is more consistent with our adopted explosion epoch. It also provides fits to the H$_{\alpha}$ and He [i]{} 5875 Å  lines that are more consistent with their profiles. The profile fit in this case requires blue- and red-shifted components at velocities in the range 6000-7000 , with an additional intermediate and narrow component (see Fig. \[hefit\]). The blue- and red-shifted components represent high velocity material moving away from the SN, which may be indicative of an asymmetric outflow of material from the SN.
The profiles seen in SN 2007rt are reminiscent of the double-peaked profiles seen in Type Ib/c SNe, viz. the broad-lined Ic SN 2003jd and the Type IIb SN 2006T [@mae08; @val08a; @tau09]. In the case of these latter objects the profile shapes can be explained by aspherical jet-like explosions viewed nearly sideways on [@maz05; @mae08]. @maz05 and @mae08 suggest that these double-peaked features can be detected if viewed from angles of 60-90 degrees to the jet axis. For SN 2007rt, there is still a significant amount of H in the shell, and any model would need to account for this. Conclusions =========== We have presented a photometric and spectroscopic analysis of the Type IIn supernova SN 2007rt over more than one year after discovery. At 102 days post-explosion, SN 2007rt bears a striking resemblance to the type IIn supernovae SN 1988Z, SN 1997eg and SN 2005ip [@stat91; @tur93; @art99; @sal02; @hoff08; @smith08b], with strong narrow/intermediate/broad H$_{\alpha}$ emission lines, a strong blue continuum, as well as weak narrow emission lines from neutral to highly ionised states (viz. He [i]{}, \[O [iii]{}\], \[N [ii]{}\], \[Fe [vii]{}\], \[Fe [x]{}\]). The narrow H lines indicate the presence of a H-rich CSM surrounding the SN. An intermediate resolution spectrum of the H$_{\alpha}$ region resolved the narrow emission feature into a P-Cygni profile with an edge velocity of 128 . This suggests the SN progenitor underwent mass-loss with velocities at the low end of those detected in LBV winds. The first spectrum contains a strong intermediate H$_{\alpha}$ component, suggesting the ejecta had already begun to interact with the CSM. This is supported by the light curve of SN 2007rt, as it evolves very slowly over the early epochs of our data, declining at a rate of 0.003 mag d$^{-1}$. By day 240 the narrow H$_{\alpha}$ component has disappeared, providing an estimate of 2.13 $\times$ 10$^{16}$ cm for the maximum extent of the unshocked CSM shell.
Furthermore, a blue shift in the broad H$_{\alpha}$ feature at this late stage suggests that dust has begun to form in the ejecta. A broad He [i]{} 5875 Å component was also present in SN 2007rt. This is not a typical feature of type IIn SNe, and may be indicative of ejecta with a higher He/H ratio than generally observed in type IIn SNe. It is therefore possible that its progenitor is transitional between those of normal Type II SNe and hydrogen-stripped core-collapse SNe, having lost a large amount of its hydrogen shell as a result of its previous mass-loss history. There is also a hint of He present in the CSM, and hence the progenitor may be a WNH star in an early stage of evolution or an LBV which has lost a significant amount of H in previous mass-loss events. Throughout the spectral observations, the H$_{\alpha}$ profiles show a strong asymmetry that increases over time, the red wing being dampened compared to the blue wing. The presence of dust in the ejecta beyond 240 days is clear; however, what causes the asymmetry in the earlier spectra is less certain. Two possible scenarios, which cannot be distinguished by our current dataset, are presented to account for this: (1) the supernova is significantly younger than estimated and dust is formed through some unknown mechanism in the fast expanding ejecta, or (2) an asymmetric or bipolar outflow viewed nearly side-on accounts for the asymmetry in the early epochs. The first scenario is similar to that invoked for SN 2005ip by @smith08b; however, the mechanism for forming dust in the fast expanding ejecta is unclear. The second scenario relies on the SN being older than one or two months and on the expansion velocity behaving like that of normal non-interacting core-collapse supernovae.
Acknowledgements ================ The authors are grateful for the feedback from the referee and to Daniel Mendicini and Martin Nicholson for providing a number of photometric data points (http://ar.geocities.com/daniel\_mendicini/index.html; http://www.martin-nicholson.info/1/1a.htm). In addition we are grateful for the support from Avet Harutyunyan at the TNG, La Palma. CT acknowledges financial support from the STFC. FPK is grateful to AWE Aldermaston for the award of a William Penney Fellowship. This paper is based on observations from a number of telescopes: the 2.2-m telescope at the German-Spanish Astronomical Center at Calar Alto, operated jointly by the Max-Planck-Institut für Astronomie (MPIA) and the Instituto de Astrofísica de Andalucía (CSIC); the 1.82-m Copernico telescope at Asiago Observatory, operated by Padova Observatory; ALFOSC, owned by the Instituto de Astrofísica de Andalucía (IAA) and operated at the Nordic Optical Telescope under agreement between IAA and the NBIfAFG of the Astronomical Observatory of Copenhagen; the WHT, operated by the Isaac Newton Group; and the Italian Telescopio Nazionale Galileo (TNG), operated by the Fundación Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica), on the island of La Palma at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. Observations were also carried out with the 0.35-m SLOOH telescope at the Teide Observatory (Canary Islands, Spain). SLOOH (http://www.slooh.com) is a subscription-based website enabling affordable, user-friendly control of observatories in Teide, Chile, and Australia.
Sequence Stars in the local region of SN 2007rt ============================================ The sequence stars used to estimate the SN magnitudes are identified in Fig. \[seq\] and their magnitudes are presented in Table \[photseq\]. [^1]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. [^2]: All dates in the rest of this article refer to the adopted explosion epoch, unless otherwise stated. [^3]: From this point forward the term continuum refers to the overall shape of the spectrum, while ‘thermal continuum’ and ‘pseudo-continuum’ are used to distinguish between a thermal blackbody-like continuum and an unresolved forest of emission lines. [^4]: Fig. \[hafitonline1\] is only available online
--- author: - | Minghong G. Wu and Michael W. Deem\ Chemical Engineering Department\ University of California\ Los Angeles, CA  90095-1592 bibliography: - 'rebridge.bib' title: Efficient Monte Carlo Methods for Cyclic Peptides --- **Abstract** > We present a new, biased Monte Carlo scheme for simulating complex, cyclic peptides. Backbone atoms are equilibrated with a biased rebridging scheme, and side-chain atoms are equilibrated with a look-ahead configurational bias Monte Carlo. Parallel tempering is shown to be an important ingredient in the construction of an efficient approach. , to appear. Introduction {#sec-intro} ============ Peptides are of fundamental importance in biological systems. They regulate homeostasis, particularly thirst, feeding and pain [@Kandel], serve as important signaling molecules in the nervous system [@Li98], and are used as a chemical defense mechanism by some organisms [@Olivera]. Peptides have been used within the biotechnology industry to identify antagonists blocking various abnormal enzymatic reactions or ligand-receptor interactions [@Clackson]. Cyclic peptides or constrained peptides are often preferred for this application, since such molecules lose less configurational entropy upon binding [@Alberg]. Cyclic peptides have backbones with a cyclic topology that is formed either by the condensation of two sulfhydryl (-SH) groups from two cysteine side chains or by the dehydration of the head NH$_2$ and tail COOH groups. A classic example of using a cyclic peptide as an antagonist is the blocking of platelet aggregation by RGD peptides. The GPIIb/IIIa-fibronectin interaction is known to be responsible for blood platelet aggregation [@Ruoslahti].
Roughly eight cyclic peptides of the form CRGDxxxC, CxxxRGDC, and CxxxKGDC (cyclised through a disulfide bond between the terminal cysteines) that are effective platelet aggregation blockers were identified [@II_ONeil]. Several companies are now pursuing organic analogs of these RGD peptides in clinical trials. Although no successful drug has yet been designed by purely computational methods, the discovery of the RGD peptide and roughly thirty other pharmaceuticals has benefited in some way from computer simulation [@Boyd]. Simulation of complex biomolecules with standard Metropolis Monte Carlo or conventional molecular dynamics, however, often fails to sample conformations from the correct Boltzmann distribution. The difficulty lies in the intrinsic high energy barriers between the conformations adopted at room- or body-temperature, barriers that cannot be overcome with these methods. High temperature [@Bruccoleri] or potential-scaled [@Tsujishita] molecular dynamics can cross these barriers, but these methods sample from a distribution that is not the one of interest. The configurational bias Monte Carlo method (CBMC), first developed by Frenkel, Smit, and de Pablo [@Frenkel; @dePablo1], successfully samples complex energy landscapes by using local information when proposing moves. This method has been successfully applied to long chain molecules [@Smit2], phase behavior of long chain alkanes [@Smit3; @dePablo2], and conformations of hydrocarbons within zeolite channels [@Bates]. A combination with a generalized concerted rotation scheme, inspired by the method for alkane chains [@II_Dodd], has been applied to the simulation of linear and cyclic peptides [@I_Deem1].
This approach proved to be especially efficient in sampling cyclic peptides with barrier-separated conformations, even when the locations of the conformations and energy barriers were not known [*a priori*]{}. For cyclic peptides, this method changes conformations locally by perturbing backbone segments of two to three amino acids. Such moves have the potential to equilibrate large molecules with complex topologies. Despite the successes of the configurational bias and concerted rotation scheme, difficulties still remain for complex cyclic peptides. Long or bulky side chains are not well equilibrated, for example. In some cases, the backbone of cyclic peptides is not sampled efficiently, due to the unpredictable presence of large barriers in the multidimensional torsional-angle free-energy landscape. We present an integrated methodology for the simulation of cyclic peptides. Special attention is given to optimizing and quantifying the efficiency of our method. We propose a peptide rebridging scheme, inspired by a method proposed for polymers [@Boone; @Pant95; @Mavrantzas98], and suitable for backbone equilibration of peptides. Eight torsional degrees of freedom are altered with this backbone move. Implementation of the move is reduced in all cases to the solution of a one-dimensional numerical problem. Four approaches to biasing the rebridging moves are proposed and compared. For side chain regrowth, we propose two new methods, ‘semi-look-ahead’ and ‘look-ahead’, inspired by Meirovitch’s lattice scanning method [@Meirovitch]. We find that both methods equilibrate side chains rapidly. We compare their efficiency and discuss optimal parameter values. For the most complex cyclic peptides, biased Monte Carlo is still not optimally efficient. To overcome the remaining barriers to effective sampling, we add parallel tempering to our range of techniques.
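The core of a configurational-bias move, which underlies both the rebridging and the look-ahead schemes developed here, can be recalled in its simplest form. The sketch below is a generic single-angle Rosenbluth-weight scheme on an arbitrary energy function, not the peptide-specific algorithms of this paper; the trial-set size `k` and the torsional energy used in the test are placeholders.

```python
import math
import random

def cbmc_angle_move(energy, phi_old, beta, k=8, rng=random):
    """One configurational-bias update of a single torsional angle.

    energy(phi) -> float is any external energy function; beta = 1/kT.
    Returns the (possibly unchanged) angle after the biased move.
    """
    # grow the new angle: k uniform trials, select one with Boltzmann bias
    trials = [rng.uniform(-math.pi, math.pi) for _ in range(k)]
    weights = [math.exp(-beta * energy(p)) for p in trials]
    w_new = sum(weights)
    r = rng.uniform(0.0, w_new)
    acc = 0.0
    for phi_new, w in zip(trials, weights):
        acc += w
        if acc >= r:
            break
    # retrace the old angle: k-1 fresh trials plus the old angle itself
    old_trials = [rng.uniform(-math.pi, math.pi) for _ in range(k - 1)] + [phi_old]
    w_old = sum(math.exp(-beta * energy(p)) for p in old_trials)
    # accept with the ratio of new to old Rosenbluth weights
    if rng.random() < min(1.0, w_new / w_old):
        return phi_new
    return phi_old
```

Selecting among the trial set with Boltzmann bias and accepting with the Rosenbluth-weight ratio restores detailed balance, so the chain samples the correct canonical distribution despite the biased proposals.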
Parallel tempering is a rigorous Monte Carlo method, first proposed for the study of glassy systems with large free energy barriers [@geyer91]. This method has been successfully applied to spin glasses [@hukushima96; @marinari98], self-avoiding random walks [@tesi96], lattice QCD [@Boyd98], linear peptides [@Hansmann], and crystal structure determination [@Falcioni]. In parallel tempering, we consider a set of identical systems, each at a distinct temperature. Each system is equilibrated with both updating and swapping moves. The swapping moves couple the systems in such a way that the lowest temperature system is able to escape from local energy minima without explicit knowledge of the barriers. This method achieves rigorously correct canonical sampling, and it significantly reduces the equilibration time. We show that the combination of biased Monte Carlo and parallel tempering achieves effective sampling, quickly overcoming energy barriers and approaching the Boltzmann distribution. We define our all-atom, molecular model of peptides in Sec. \[sec-moleint\]. The peptide rebridging scheme is described in Sec. \[sec-RBsch\], where technical details are provided. This section can be skipped on a first reading, as it is simply an extension of the method in ref. [@I_Deem1] to include pre-screening. How biasing can be done is discussed in Sec. \[sec-biasRB\]. In Sec. \[sec-para\] we apply the concept of parallel tempering to our system. The ‘semi-look-ahead’ and ‘look-ahead’ methods for side chains are presented in Sec. \[sec-SLA\] and Sec. \[sec-LA\], respectively. Results for the simulation of complex, cyclic peptides are given in Sec. \[sec-results\], where the efficiency of our approach is demonstrated. We discuss the results in Sec. \[sec-discuss\], and make our conclusions in Sec. \[sec-conclude\]. Simulation Methods {#sec-simuM} ================== Molecular Model {#sec-moleint} --------------- We chose to use the AMBER force field [@Weiner] with explicit atoms. 
Other suitable potential models are ECEPP [@Kang96] and CHARMm [@Mackerell]. Dielectric theory was used to estimate solvent effects [@Smith]. Fast coordinates such as bond lengths and bond angles were fixed at their equilibrium values. Only the biologically-relevant, torsional degrees of freedom were sampled. Nonetheless, this method can be easily generalized to flexible systems. With this assumption, a molecule is composed of a set of so-called ‘rigid units’. Following the definition in ref. [@I_Deem1], a rigid unit consists of a set of atoms and bonds that form a rigid body. The relative distance between any pair of atoms within a rigid unit is constant. Adjacent rigid units are connected by a single sigma bond. The rigid units are labeled from the head NH$_2$ to the tail COOH group of the peptide. Each rigid unit has exactly one incoming bond that starts from the previous unit and ends within it. All other bonds that leave the unit are defined to be outgoing bonds. For example, a $\mathrm{C_{\alpha}H}$ unit has two outgoing bonds, the first going to the residue and the second going to the next backbone unit. For unit $i$, we define $\theta_i$ to be the angle formed by the incoming bond and the outgoing bond to the next backbone unit. The atom that ends the incoming bond is defined to be a head atom, and the atom that starts the outgoing bond is defined to be a tail atom. We define $\rv{i\mathrm{h}}$ and $\rv{i\mathrm{t}}$ to be the positions of the head and tail atoms of unit $i$, respectively. Rigid units that appear in the backbone are divided into two topological types. Type A includes all rigid units with identical head and tail atoms. Type B includes the CONH amide group, which has $\thet{i}=0$. Figure \[fig:clsunit\] illustrates the geometry of these two types and the definitions of $\thet{i\mathrm{h}}$ and $\thet{i\mathrm{t}}$, which are the angles spanned by $\rv{i\mathrm{t}}-\rv{i\mathrm{h}}$ and the incoming and outgoing bonds, respectively. Rebridging Scheme {#sec-RBsch} ----------------- We display in Fig. 
\[fig:CNWKRGDC\] a typical cyclic peptide, CNWKRGDC, in which a bond between the two terminal cysteine residues closes the ring. Although the chemical functionality of peptides lies mostly in the freely-rotating side chains, backbone equilibration is important since the backbone serves as a scaffold for the side chains. We, therefore, use two types of biased Monte Carlo moves, chosen at random: movement of a random segment of the backbone with rigid rotation of the associated side chains and regrowth of a randomly picked side chain. Here we describe the backbone move, a peptide rebridging scheme. The peptide rebridging scheme is inspired by the concerted rotation [@II_Dodd] and rebridging [@Boone] moves for alkane chains and the extension of concerted rotation to peptides [@I_Deem1]. Peptide rebridging causes a local conformational change within the molecule, leaving the rest of the molecule fixed. Rebridging moves are suitable not only for cyclic peptides but also for the internal parts of larger linear peptides and proteins. The main features that distinguish our rebridging scheme are the pre-screening process, more degrees of freedom per move, and more efficient biasing. We propose five variations of rebridging moves, differing in the probabilities of choosing one of the many possible geometric solutions. They are Metropolis (MT), no Jacobian (NJ), with Jacobian (WJ), with Jacobian and old solutions (WJO), and with Jacobian and multiple rotations (WJM). Here we describe WJ. Other variations will be described in Section \[sec-biasRB\]. Peptide rebridging is carried out in several steps: 1. Randomly select two torsional degrees of freedom that are separated by six other torsional degrees of freedom. We label the two torsional angles as $\ph{0}$ and $\ph{7}$. The eight rigid units, including both ends, are labeled from unit $0$ to unit $7$. Backbone positions are denoted $\rv{i a}$, where $i=0,\ldots,6$ and $a=\mathrm{h}$ (head) or $\mathrm{t}$ (tail). 
Figure \[fig:babaa\] depicts a segment that is selected to be rebridged. 2. The angles $\ph{0}$ and $\ph{7}$ are rotated, causing the rigid units between 0 and 6 to change while leaving the rigid units before 0 and after 6 unchanged. The range of rotation is within $\pm\Delta\ph{\mathrm{max}}$. The two rotations break the connectivity of the molecule and provide new trial positions for the ends of the bridged segment. We denote the new values by $\ph{0}'$ and $\ph{7}'$. 3. Find all geometrical solutions, $\ph{0}',\ldots,\ph{7}'$, that re-insert the backbone units in a valid way between rigid units 1 and 6. How we solve this geometrical problem will be described below. If no solution is found, this move is rejected. Otherwise, calculate the Rosenbluth factor, $\W{}{\Nst}$, which is defined as $$\begin{aligned} \label{eqn:rosen_WJn} \W{}{\Nst} &=& \sum_{i=1}^{k^\Nst} \mathrm{J}_i^\Nst\boltz{\mathrm{U}_i^\Nst}\ ,\end{aligned}$$ where $k^\Nst$ is the number of geometrical solutions found. The Jacobian associated with the constraints for the $i$th solution is $\mathrm{J}_i^\Nst$. 4. Pick a solution from these $k^\Nst$ solutions with probability $$\label{eqn:pick1} p_{i} = \frac{\mathrm{J}_i^\Nst \boltz{\mathrm{U}_i^\Nst}}{\W{}{\Nst}}\ .$$ 5. Solve the geometrical problem corresponding to the old driver angles $\ph{0}$ and $\ph{7}$. These solutions include the old configuration $\ph{0},\ldots,\ph{7}$ and are used to calculate the old Rosenbluth factor $$\begin{aligned} \label{eqn:rosen_WJo} \W{}{\Ost}& = &\sum_{i=1}^{k^\Ost}\mathrm{J}_i^\Ost \boltz{\mathrm{U}_i^\Ost}\ ,\end{aligned}$$ where $k^\Ost$ is the number of solutions in the old geometry. 6. The attempted move is accepted with the probability $$\label{eqn:acc-WJ} \acc(\mathrm{o\rightarrow n}) = \min \left(1, \frac{\W{}{\Nst}}{\W{}{\Ost}}\right)\ .$$ The Jacobian in eqs. (\[eqn:rosen\_WJn\]) and (\[eqn:rosen\_WJo\]) accounts for the fact that when we solve for the angles $\ph{1},\ldots,\ph{6}$, we do not produce uniform distributions. 
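The bookkeeping of steps 3–6 can be sketched in a few lines of code. The Python fragment below is a minimal illustration, not the authors' implementation: `new_solutions` and `old_solutions` are hypothetical lists of (Jacobian, energy) pairs for the $k^\mathrm{n}$ new and $k^\mathrm{o}$ old geometrical solutions, and the function returns the chosen solution together with the accept/reject decision of eq. (\[eqn:acc-WJ\]).

```python
import math
import random

def rosenbluth(solutions, beta):
    """Rosenbluth factor W = sum_i J_i exp(-beta U_i) over a set of
    geometrical solutions, eqs. (rosen_WJn) and (rosen_WJo)."""
    return sum(J * math.exp(-beta * U) for J, U in solutions)

def wj_move(new_solutions, old_solutions, beta, rng=random):
    """One WJ rebridging decision: pick a new solution with probability
    J_i exp(-beta U_i) / W^n, then accept with min(1, W^n / W^o).
    The old set always contains at least the old configuration."""
    if not new_solutions:              # geometrical failure: reject
        return None, False
    w_new = rosenbluth(new_solutions, beta)
    w_old = rosenbluth(old_solutions, beta)
    # pick one of the k^n solutions with the biased probability of eq. (pick1)
    r = rng.random() * w_new
    running = 0.0
    for J, U in new_solutions:
        running += J * math.exp(-beta * U)
        if r <= running:
            chosen = (J, U)
            break
    else:
        chosen = new_solutions[-1]     # guard against rounding at the end
    accepted = rng.random() < min(1.0, w_new / w_old)
    return chosen, accepted
```

Note that the Jacobians enter both the selection probability and, through the Rosenbluth factors, the acceptance test, which is what distinguishes WJ from the NJ variant described later.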
The Jacobian is defined by $$\begin{aligned} \label{eqn:jac_RB} \mathrm{J}\left(\frac{\ph{1},\ph{2},\ph{3},\ph{4},\ph{5},\ph{6}} {\rv{5\mathrm{t}},\ \uv{6},\ \gamma_6}\right) & = & \frac{\uv{6}\cdot\hat{\mathbf{e}}_{3}}{\det|\mathrm{B}|}\nonumber \\ \mathrm{B}_{ij} & = & [\uv{j}\times(\rv{5\mathrm{t}}-\rv{\mathit{j}\mathrm{h}})]_i \mbox{, if $i\leq3$}, \nonumber \\ & & [\uv{j}\times\uv{6}]_{i-3} \mbox{, if $i=4,\ 5$} .\end{aligned}$$ Here $\uv{i}$ is the unit vector of the $i$th incoming bond, and $\hat{\mathbf{e}}_3$ is a unit vector along the laboratory $z$-axis. The Eulerian angle ${\gamma_6}$ is the azimuthal angle of $\hat {\bf u}_7$ in a spherical coordinate system defined with $\hat {\bf u}_6$ as the $z$-axis. The angle $\gamma_6$ is measured with respect to the plane defined by $\hat {\bf u}_6$ and $\hat{\mathbf{e}}_3$. It is worth mentioning that in refs. [@I_Deem1] and [@II_Dodd], the Jacobian lacked the $\uv{6}\cdot\hat{\mathbf{e}}_{3}$ term. The Jacobian should be invariant under orthogonal transformations, but the Jacobians in refs. [@I_Deem1] and [@II_Dodd] are not. Despite this, proper sampling was attained, because the omitted terms cancel in the acceptance ratio. This cancellation does not occur in rebridging, since $\uv{6}$ is changed by the rotation of $\ph{7}$. The Jacobian appears as a consequence of the end-atom constraints in the canonical partition function of a constrained or cyclic molecule. The Jacobian is derived in Appendix \[AppA\], where a careful discussion of the cyclic constraint is given as well. The geometrical problem in rebridging is solved by seeking conserved quantities. It is conceptually helpful to imagine a break point in the segment to be regrown. The rigid units before the break point are built upon the positions of the preceding units, whereas the rigid units after the break point are built upon the positions of the following units. When $\rv{i\mathrm{h}}$ and $\rv{i\mathrm{t}}$ are expressed in local coordinates of the $(i-1)$th unit, the positions are said to be defined in ‘forward notation’. 
When we build up these positions from the opposite direction, the positions are said to be defined in ‘backward notation’. How we choose the break point depends on the identity of the rigid units to be regrown. Rigid units before the break point are always defined by forward notation and rigid units after the break point are always defined by backward notation. With forward notation we have $\rv{1\mathrm{t}}=\rv{1\mathrm{t}}(\ph{1})$, $\rv{2\mathrm{h}}=\rv{2\mathrm{h}}(\ph{1})$, $\rv{2\mathrm{t}}=\rv{2\mathrm{t}}(\ph{1},\ph{2})$, $\rv{3\mathrm{h}}=\rv{3\mathrm{h}}(\ph{1},\ph{2})$, and so on. With backward notation, we have $\rv{5\mathrm{h}}=\rv{5\mathrm{h}}(\ph{6})$, $\rv{4\mathrm{t}}=\rv{4\mathrm{t}}(\ph{6})$, $\rv{4\mathrm{h}}=\rv{4\mathrm{h}}(\ph{6},\ph{5})$, $\rv{3\mathrm{t}}=\rv{3\mathrm{t}}(\ph{6},\ph{5})$, and so on. We use a variant of Flory’s local coordinate system [@Flory69]. The system was modified for units with $\thet{i}=0$ to reduce the number of variables appearing in the constraint equations. The general formulas for $\rv{(i+1)\mathrm{h}}(\ph{1},\ldots,\ph{i})$ and $\rv{i\mathrm{t}}(\ph{1},\ldots,\ph{i})$ are $$\begin{aligned} \label{eqn:loctrni} \rv{(i+1)\mathrm{h}}(\ph{1},\ldots,\ph{i}) & = & \rv{i\mathrm{t}}(\ph{1},\ldots,\ph{i}) + l_{i\mathrm{t},(i+1)\mathrm{h}} \Trx{i}{lab}\Lb{i}\Ph{i} \nonumber \\ \rv{i\mathrm{t}}(\ph{1},\ldots,\ph{i}) & = & \rv{i\mathrm{h}} + l_{i\mathrm{h},i\mathrm{t}}\Trx{i}{lab}\Lb{i\mathrm{h}}\Ph{i}\end{aligned}$$ where $$\begin{aligned} \Lb{i} & \equiv & \left(\begin{array}{ccc} \cos\thet{i} & 0 & 0 \\ 0 & \sin\thet{i} & 0 \\ 0 & 0 & \sin\thet{i} \end{array} \right) \nonumber \\ \Lb{i\mathrm{h}} &\equiv &\left ( \begin{array}{ccc} \cos\thet{i\mathrm{h}} & 0 & 0 \\ 0 & \sin\thet{i\mathrm{h}}& 0 \\ 0 & 0& \sin\thet{i\mathrm{h}} \end{array} \right) \nonumber \\ \Ph{i} & \equiv & \left( \begin{array}{c} 1 \\ \cos\ph{i} \\ \sin\ph{i} \end{array} \right)\end{aligned}$$ Here $l_{i a,j b}$ denotes the constant distance between $\rv{i a}$ and $\rv{j b}$. 
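A single forward step of eq. (\[eqn:loctrni\]) can be illustrated numerically. The sketch below is a simplified illustration with hypothetical function names: an arbitrary orthogonal $\mathrm{T}^{\mathrm{lab}}$ is supplied directly rather than accumulated from the preceding units.

```python
import numpy as np

def L_mat(theta):
    """The diagonal matrix L_i of eq. (loctrni): diag(cos, sin, sin)."""
    return np.diag([np.cos(theta), np.sin(theta), np.sin(theta)])

def Phi_vec(phi):
    """The column vector Phi_i = (1, cos phi, sin phi)^T."""
    return np.array([1.0, np.cos(phi), np.sin(phi)])

def next_position(r_prev, bond_length, T_lab, theta, phi):
    """One forward step of eq. (loctrni):
    r_next = r_prev + l * T^lab L(theta) Phi(phi)."""
    return r_prev + bond_length * T_lab @ L_mat(theta) @ Phi_vec(phi)
```

Note that $|\mathrm{L}(\theta)\Ph{}(\phi)|^2 = \cos^2\theta + \sin^2\theta\cos^2\phi + \sin^2\theta\sin^2\phi = 1$ for any $\phi$, so with an orthogonal transformation the generated atom always lies at exactly the bond length $l$ from its reference atom; this is how the fixed bond lengths and bond angles enter the construction.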
We call unit $i$ the reference unit of unit $i+1$. We use the form of eq. (\[eqn:loctrni\]) because it explicitly isolates the terms involving the variable $\ph{i}$. The labels $a$ and $b$ can be either $\mathrm{h}$ or $\mathrm{t}$. This notation is dropped when the unit is an A unit. For example, $l_{1,2\mathrm{h}}$ indicates that unit 1 is an A unit and denotes the distance between $\rv{1}$ and $\rv{2\mathrm{h}}$. In this case we also drop the head or tail notation for vectors and write $\rv{1}=\rv{1\mathrm{h}}=\rv{1\mathrm{t}}$. The transformation from local coordinates to the laboratory coordinates in forward notation is $$\begin{aligned} \Trx{1}{lab} &\equiv & (\begin{array}{ccc} \uv{1} & \vv{1} & \wv{1} \end{array}) \nonumber \\ \Trx{i}{lab}(\ph{1},\ldots,\ph{i-1})& \equiv & (\begin{array}{ccc} \uv{i} & \vv{i} & \wv{i} \end{array}) \nonumber \\ & = &\Trx{1}{lab}\Trx{1\phi}{}\Trx{1\theta}{}\Trx{2\phi}{}\cdots \Trx{(i-1)\phi}{}\Trx{(i-1)\theta}{} \nonumber \\ \Trx{i\theta}{} & \equiv&\left( \begin{array}{ccc} \cos\thet{i} & -\sin\thet{i} & 0 \\ \sin\thet{i} & \cos\thet{i} & 0 \\ 0 & 0 & 1 \end{array} \right) \nonumber \\ \Trx{i\phi}{} & \equiv &\left\{ \begin{array}{ll} \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 &\cos\ph{i} & -\sin\ph{i} \\ 0 &\sin\ph{i} & \cos\ph{i} \end{array} \right) & \mbox{, if } \thet{i}\neq0 \\ \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right) & \mbox{, if } \thet{i}=0\ . \end{array} \right.\end{aligned}$$ Here $\uv{i}$, $\vv{i}$, and $\wv{i}$ are the axes of the local coordinates of unit $i$ in forward notation in the laboratory frame. The last matrix is modified from Flory’s coordinate system. It is defined so that $\Trx{i}{lab}=\Trx{i-1}{lab}$ when $\thet{i}=0$. This definition simplifies our algorithm. For the rigid units beyond the break point, we use backward notation. In backward notation, unit $i+1$ is the reference unit of unit $i$. 
The general formulas for $\rv{(i-1)\mathrm{t}}(\ph{6},\ldots,\ph{i+1})$ and $\rv{i\mathrm{h}}(\ph{6},\ldots,\ph{i+1})$ are $$\begin{aligned} \label{eqn:locrbi} \rv{(i-1)\mathrm{t}}(\ph{6},\ldots,\ph{i+1}) & = & \rv{i\mathrm{h}}(\ph{6},\ldots,\ph{i+1}) + l_{(i-1)\mathrm{t},i\mathrm{h}}\Trx{i}{lab}\Lb{i}\Ph{i+1} \nonumber \\ \rv{i\mathrm{h}}(\ph{6},\ldots,\ph{i+1}) & = & \rv{i\mathrm{t}} + l_{i\mathrm{h},i\mathrm{t}}\Trx{i}{lab}\Lb{i\mathrm{t}}\Ph{i+1} \nonumber \\ \Lb{i\mathrm{t}}&\equiv &\left( \begin{array}{ccc} \cos\thet{i\mathrm{t}} & 0 & 0 \\ 0 & \sin\thet{i\mathrm{t}} & 0 \\ 0 & 0 & \sin\thet{i\mathrm{t}} \end{array} \right)\ . \end{aligned}$$ The transformation from local coordinates to the laboratory coordinates in backward notation is $$\begin{aligned} \Trx{i}{lab} & \equiv& (\begin{array}{ccc} \xuv{i} & \yuv{i} & \zuv{i} \end{array}) \nonumber \\ & = & \Trx{5}{lab}\Trx{6\phi}{}\Trx{5\theta}{}\Trx{5\phi}{}\cdots \Trx{(i+2)\phi}{}\Trx{(i+1)\theta}{} \ .\end{aligned}$$ Here $\xuv{i}$, $\yuv{i}$, and $\zuv{i}$ are the axes of the local coordinates of unit $i$ in backward notation in the laboratory frame. For all rebridging cases that we consider, it is possible to find three constraint equations with three independent torsional angles and to determine the solutions by solving a one-dimensional equation numerically. The constraint equations vary depending on the types of the units 1 to 5. Table \[tab:0pro\] lists the six distinct cases that can occur and the corresponding constraint equations. The dependencies of the backbone positions on the torsional angles are specified explicitly in the argument, and one can tell from the arguments if the positions are in forward or backward notation. In fact, cases 4 and 5 are mirror images of cases 1 and 2, with the rigid units labeled in the opposite direction. Strictly speaking, then, there are only four distinct cases: cases 1, 2, 3, and 6. In all cases, the first two constraint equations have at most two independent variables each. 
In one special case (case 3), an equation with only one variable is found. In the first three cases, the first two equations of each set are used to derive two torsional angles as analytic functions of $\ph{1}$. These two analytic expressions are in turn substituted into the third equation, which is solved numerically in the $\ph{1}$ domain. In cases 1 and 2, two other independent angles appear in the constraint equations. Case 3 is special because one of these quantities is a constant, so only one additional torsional angle is needed in the third constraint equation. In cases 4 and 5, the approach is similar, except that the equations are solved numerically in the $\ph{6}$ domain. In case 6, the second and fourth rigid units are arbitrary and can be either A or B. This is a special case in which $\rv{3}$ can be written as a function of a single, new torsional angle. This case includes, for example, ABABA, which corresponds to $\mathrm{C}_\alpha$-amide-$\mathrm{C}_\alpha$-amide-$\mathrm{C}_\alpha$. It is obvious that the geometrical constraints keep the distances $|\rv{1}-\rv{3}|$ and $|\rv{3}-\rv{5}|$ constant in all possible solutions. We also know the trial distance $|\rv{1}-\rv{5}|$ after performing the two rotations. These distances are conserved in all solutions. Therefore, possible positions of $\rv{3}$ must fall on the intersection of two spheres centered at $\rv{1}$ and $\rv{5}$. Figure \[fig:ababa\] shows the geometry of this segment and the conserved distances. 
If the triangle inequality $$\label{eqn:triieq} l_{1,3} + l_{3,5} \geq |\rv{1}-\rv{5}|_\mathrm{trial}$$ holds, we can define a new set of local coordinates for unit 3: $$\begin{aligned} \uv{3}' &= & (\rv{5}-\rv{1})/l_{1,5} \nonumber \label{eqn:locnew}\\ \wv{3}' &= &\uv{1}\times\uv{3}'/|\uv{1}\times\uv{3}'| \nonumber \\ \vv{3}' &= &\wv{3}'\times\uv{3}'\end{aligned}$$ to obtain an expression for $\rv{3}$ as a function of a single, new angle $\ph{3}'$: $$\begin{aligned} \label{eqn:locmtxn} \rv{3}(\ph{3}') &= &\rv{1}+l_{1,3}{\Trx{3}{lab}}'\Lb{3}'\Ph{3}'\ ,\end{aligned}$$ where $$\begin{aligned} {\Trx{3}{lab}}' & \equiv & (\begin{array}{ccc} \uv{3}' & \vv{3}' & \wv{3}' \end{array}) \nonumber \\ \Lb{3}' & \equiv & \left(\begin{array}{ccc} \cos\thet{3}' & 0 & 0 \\ 0 & \sin\thet{3}' & 0 \\ 0 & 0 & \sin\thet{3}' \end{array} \right) \nonumber \\ \thet{3}' &\equiv & \left|\cos^{-1} \frac{{l_{1,3}}^2+{l_{1,5}}^2-{l_{3,5}}^2}{2l_{1,3}l_{1,5}}\right|\nonumber \\ \Ph{3}' & = & \left( \begin{array}{c} 1 \\ \cos\ph{3}' \\ \sin\ph{3}' \end{array} \right)\ .\end{aligned}$$ All of the constraint equations in table \[tab:0pro\] can be grouped by their functional forms into four types. The fourth column of table \[tab:0pro\] shows the type of each constraint equation. The general functional forms of these constraint equations are listed in table \[tab:constype\]. The first type is a quadratic function of a single variable. The other types are functions of two torsional angles. They are based on either conserved distances, as in ‘dist’, or conserved angles, as in ‘dot’ and ‘dot1’. The last column of table \[tab:constype\] lists the characteristic matrix, which is used in the pre-screening process and the evaluation of the third target function. We illustrate the peptide rebridging algorithm by taking case 6 in table \[tab:0pro\] as an example. If eq. (\[eqn:triieq\]) is not satisfied, the trial move is immediately rejected because of a geometrical failure. Otherwise, we go on. 
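The triangle-inequality pre-screen and the construction of the new unit-3 frame can be sketched as follows. This is a minimal illustration with a hypothetical function name; it returns `None` on geometrical failure, and otherwise the frame of eq. (\[eqn:locnew\]) together with $\thet{3}'$ from the law of cosines in the triangle $(\rv{1},\rv{3},\rv{5})$.

```python
import numpy as np

def unit3_frame(r1, r5, u1, l13, l35):
    """Pre-screen with the triangle inequality (eq. triieq) and, if it
    passes, build the new local frame of eq. (locnew) and the fixed
    polar angle theta_3' for unit 3."""
    l15 = np.linalg.norm(r5 - r1)
    if l13 + l35 < l15:                # eq. (triieq) fails: reject the move
        return None
    u3 = (r5 - r1) / l15
    w3 = np.cross(u1, u3)
    w3 /= np.linalg.norm(w3)
    v3 = np.cross(w3, u3)
    # law of cosines in the triangle (r1, r3, r5)
    theta3 = abs(np.arccos((l13**2 + l15**2 - l35**2) / (2.0 * l13 * l15)))
    return u3, v3, w3, theta3
```

With this frame fixed, sweeping the single remaining angle $\ph{3}'$ traces out exactly the circle on which the two conserved-distance spheres intersect.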
The first constraint equation allows us to express $\ph{1}$ in terms of $\ph{3}'$. To do this, we rewrite the constraint equation as $$\begin{aligned} 0 = [\rv{3}(\ph{3}')-\rv{2\mathrm{h}}]^{\mathrm{\top}} [\rv{3}(\ph{3}')-\rv{2\mathrm{h}}]-{l_{2\mathrm{h},3}}^2 \nonumber\end{aligned}$$ and use eqs. (\[eqn:loctrni\]) and eq. (\[eqn:locmtxn\]) to obtain $$\begin{aligned} \label{eqn:consababa} 0 &= &(l_{1,3}{\Trx{3}{lab}}'\Lb{3}'\Ph{3}'-l_{1,2\mathrm{h}} \Trx{1}{lab}\Lb{1}\Ph{1})^{\mathrm{\top}} (l_{1,3}{\Trx{3}{lab}}'\Lb{3}'\Ph{3}'-l_{1,2\mathrm{h}} \Trx{1}{lab}\Lb{1}\Ph{1})-{l_{2\mathrm{h},3}}^2\nonumber \\ &= &{l_{1,3}}^2+{l_{1,2\mathrm{h}}}^2-{l_{2\mathrm{h},3}}^2- 2l_{1,3}l_{1,2\mathrm{h}}\tp{\Ph{3}'}\tp{\Lb{3}'}\tp{{\Trx{3}{lab}}'} \Trx{1}{lab}\Lb{1}\Ph{1}\ .\end{aligned}$$ We introduce the constant matrix $$\C\equiv\left( \begin{array}{ccc} 1 &0 &0 \\ 0 &0 &0 \\ 0 &0 &0 \end{array} \right)$$ and multiply the first three constant terms in eq. (\[eqn:consababa\]) by $\tp{\Ph{3}'}\C\Ph{1}$, which is unity, to obtain $$\tp{\Ph{3}'}\Mx{}\Ph{1} = 0\ , \label{eqn:pmpp}$$ where the constant characteristic matrix is defined as $$\label{eqn:Mx_case_6} \Mx{}= ({l_{1,3}}^2+{l_{1,2\mathrm{h}}}^2-{l_{2\mathrm{h},3}}^2)\C - 2l_{1,3}l_{1,2\mathrm{h}}\tp{\Lb{3}'}\tp{({\Trx{3}{lab}}')}\Trx{1}{lab}\Lb{1}\ .$$ In each case, the first two constraint equations can be cast into the form of eq. (\[eqn:pmpp\]). The right-hand column of table \[tab:constype\] lists the constraint equations and the corresponding characteristic matrix for each case. Equation (\[eqn:Mx\_case\_6\]), for example, is a special case of the ‘dist’ type constraint equation in table \[tab:constype\], in which we have $\rv{i'}=\rv{j'}=\rv{1}$. To solve the constraint equations, we set $\omg{i}=\tan(\ph{i}/2)$ and use $$\begin{aligned} \label{eqn:triden} \cos\ph{i}& =&(1-\omg{i}^2)/(1+\omg{i}^2) \nonumber \\ \sin\ph{i}& =&2\omg{i}/(1+\omg{i}^2)\end{aligned}$$ to replace each $\cos\ph{i}$ and $\sin\ph{i}$ in eq. 
(\[eqn:pmpp\]). We rewrite eq. (\[eqn:pmpp\]) as $$\begin{aligned} \tp{\Omg{3}'}\Mpx{}\Omg{1} &= &0\ , \label{eqn:omo}\end{aligned}$$ where $$\begin{aligned} \Omg{i}&\equiv &\left( \begin{array}{c} 1 \\ \omg{i} \\ \omg{i}^2 \end{array} \right) \ .\end{aligned}$$ The matrix $\Mpx{}$ is related to $\Mx{}$ by $$\label{eqn:MtoMp} \Mpx{} = \left( \begin{array}{ccc} \Mx{11}+\Mx{12}+\Mx{21}+\Mx{22}\;\;\;\;\;\; & 2(\Mx{13}+\Mx{23}) &\;\;\;\;\;\; \Mx{11}-\Mx{12}+\Mx{21}-\Mx{22}\\ 2(\Mx{31}+\Mx{32}) & 4\Mx{33} & 2(\Mx{31}-\Mx{32}) \\ \Mx{11}+\Mx{12}-\Mx{21}-\Mx{22}\;\;\;\;\;\; & 2(\Mx{13}-\Mx{23}) &\;\;\;\;\;\; \Mx{11}-\Mx{12}-\Mx{21}+\Mx{22} \end{array} \right)\; .$$ Equation (\[eqn:omo\]) is quadratic in $\omg{1}$, and we find $$\begin{aligned} \label{eqn:quad} \omg{1} &= &\frac{1}{2c_2} \left[-c_1\pm (c_1^2-4c_0c_2)^{\frac{1}{2}}\right]\nonumber \\ &= &f_{1\pm}(\omg{3}')\ ,\end{aligned}$$ where $$\begin{aligned} c_0 &= &\Mpx{11}+\Mpx{21}\omg{3}'+\Mpx{31}{\omg{3}'}^2 \nonumber \\ c_1 &= &\Mpx{12}+\Mpx{22}\omg{3}'+\Mpx{32}{\omg{3}'}^2 \nonumber \\ c_2 &= &\Mpx{13}+\Mpx{23}\omg{3}'+\Mpx{33}{\omg{3}'}^2\; .\end{aligned}$$ Since $\omg{3}'$ must produce a non-negative discriminant in eq. (\[eqn:quad\]) in order to produce a real-valued $\omg{1}$, pre-screening can be done by solving $$\label{eqn:discri} c_1^2-4c_0c_2=0 \ .$$ Equation (\[eqn:discri\]) is a quartic polynomial equation. We use an eigenvalue method to solve this equation and to determine the valid domains of $\ph{3}'$ in the first constraint equation [@C_recipes]. Note that $\ph{1} = f_{1\pm}(\ph{3}')$ has two branches. Following a derivation parallel to that in eqs. (\[eqn:consababa\])–(\[eqn:discri\]), we can write $\ph{6}=f_{2\pm}(\ph{3}')$. A similar pre-screening process is done to determine the valid domains of $\ph{3}'$ in the second equation. This pre-screening process reduces the CPU cost and increases the efficiency of the algorithm considerably. 
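The half-angle algebra above is easy to get wrong, so a small numerical check is useful. The sketch below uses hypothetical function names, with `numpy`'s companion-matrix root finder standing in for the eigenvalue method of ref. [@C_recipes]: it implements the transformation of eq. (\[eqn:MtoMp\]) and finds the boundaries of the valid $\omg{3}'$ domains from the quartic discriminant, eq. (\[eqn:discri\]).

```python
import numpy as np

def M_to_Mprime(M):
    """Eq. (MtoMp): the 3x3 matrix acting on (1, cos, sin) vectors is
    re-expressed to act on (1, omega, omega^2) half-angle vectors."""
    return np.array([
        [M[0,0]+M[0,1]+M[1,0]+M[1,1], 2*(M[0,2]+M[1,2]), M[0,0]-M[0,1]+M[1,0]-M[1,1]],
        [2*(M[2,0]+M[2,1]),           4*M[2,2],          2*(M[2,0]-M[2,1])],
        [M[0,0]+M[0,1]-M[1,0]-M[1,1], 2*(M[0,2]-M[1,2]), M[0,0]-M[0,1]-M[1,0]+M[1,1]],
    ])

def prescreen_boundaries(Mp):
    """Real roots of the quartic c_1^2 - 4 c_0 c_2 = 0 (eq. discri);
    c_k is the quadratic in omega_3' built from column k of M'.
    These roots bound the intervals where eq. (quad) is real-valued."""
    c = [np.poly1d([Mp[2, k], Mp[1, k], Mp[0, k]]) for k in range(3)]
    disc = c[1] * c[1] - 4.0 * c[0] * c[2]
    roots = disc.roots                  # companion-matrix eigenvalues
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)
```

The identity $\tp{\Ph{3}'}\Mx{}\Ph{1} = \tp{\Omg{3}'}\Mpx{}\Omg{1}/[(1+{\omg{3}'}^2)(1+\omg{1}^2)]$, which the test below verifies for an arbitrary matrix, is what makes the two forms of the constraint equation equivalent.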
Evaluation of the third target function is performed over the valid domains of $\ph{3}'$, which are the intersections of the valid domains found by pre-screening. The independent variable is chosen to be $\ph{3}'$ instead of $\omg{3}'$, since the latter may be valid on an infinite domain. To find the acceptable new rigid unit positions, the third target function is solved. To evaluate the target function, a series of calculations is repeated for each $\ph{3}'$. First, we calculate the corresponding $\ph{1}$ and $\ph{6}$. Second, we determine $\rv{3}(\ph{3}')$, $\rv{2\mathrm{h}}(\ph{1})$, and $\rv{4\mathrm{t}}(\ph{6})$. Third, we calculate $\rv{2\mathrm{t}}$ and $\rv{4\mathrm{h}}$, which are uniquely determined by the trial $\rv{3}(\ph{3}')$, $\rv{2\mathrm{h}}(\ph{1})$, and $\rv{4\mathrm{t}}(\ph{6})$ (see figure \[fig:ababa\]). Finally, we substitute $\rv{2\mathrm{t}}$ and $\rv{4\mathrm{h}}$ into the target function. We evaluate the target function on a grid, using a grid width of 0.003 radians. A finer grid is used when the function approaches zero. The function values so obtained are used to locate the roots approximately. Brent’s method is used to refine the roots [@C_recipes]. The roots for $\ph{3}'$ are sufficient to determine all the backbone positions. Substituting each root into $f_{1\pm}$ and $f_{2\pm}$, we obtain $\ph{1}$ and $\ph{6}$, and thus $\rv{2\mathrm{h}}$, $\rv{3}$, and $\rv{4\mathrm{t}}$. Other backbone positions can be calculated easily. Side chains are rigidly rotated so as to connect to the backbone properly, and the geometrical problem is solved. For each valid $\ph{3}'$, there are two branches of the solution for $\ph{1} = f_{1\pm}(\ph{3}')$ and also two branches of the solution for $\ph{6} = f_{2\pm}(\ph{3}')$. Therefore, the target function has four branches. Figure \[fig:2-2var\] shows a typical target function. In the case shown in figure \[fig:2-2var\], there are six solutions. In summary, the algorithm for solving the geometrical problem for case 6 works as follows: 1. If the geometry does not satisfy eq. (\[eqn:triieq\]), the move is rejected. 2. 
Calculate the characteristic matrices, $\Mx{1}$ and $\Mx{2}$, of the first two constraint equations. Transform $\Mx{1}$ to $\Mpx{1}$ and $\Mx{2}$ to $\Mpx{2}$, using eq. (\[eqn:MtoMp\]). Find the intersection of valid domains using eq. (\[eqn:discri\]). If no common domain is found, the move is rejected. 3. Search for roots of the third equation on the valid $\ph{3}'$ domains. Determine all the backbone positions associated with each solution for $\ph{3}'$. Determine the positions of all associated side chains. Other cases in table \[tab:0pro\] are solved similarly, except that the independent variable is either $\ph{1}$ or $\ph{6}$. Biasing of the Rebridging Moves {#sec-biasRB} ------------------------------- There are several ways to bias solutions in the rebridging scheme. The first method is discussed in [@I_Deem1]. The Rosenbluth factors are defined as $$\begin{aligned} \label{eqn:rosenb_CR} \W{}{\Nst}& = &\sum_{i=1}^{k^\Nst} \boltz{\mathrm{U}_i^\Nst} \nonumber \\ \W{}{\Ost}& = &\sum_{i=1}^{k^\Ost} \boltz{\mathrm{U}_i^\Ost}\end{aligned}$$ The proposed move is accepted with the probability $$\begin{aligned} \label{eqn:acc-CR} \acc(\mathrm{o\rightarrow n}) &= &\min \left(1, \frac{\mathrm{J}^\Nst \W{}{\Nst}}{\mathrm{J}^\Ost \W{}{\Ost}} \right) \ ,\end{aligned}$$ where $\mathrm{J}$ is the Jacobian. This method is called no Jacobian (NJ). A second method of bias, called with Jacobian (WJ), includes the bias introduced by the Jacobian within the Rosenbluth factors, as in eqs. (\[eqn:rosen\_WJn\]) and (\[eqn:rosen\_WJo\]). The proposed move is accepted with the probability given by eq. (\[eqn:acc-WJ\]). This approach is expected to achieve better sampling than NJ, since it explicitly includes the bias introduced by the Jacobian within the move [@Escobedo_95]. A third method of bias includes the old and new solutions within a single Rosenbluth factor. 
Solutions are picked with the probability $$\begin{aligned} \label{eqn:pickboth} p_{i} & = &\frac{\mathrm{J}_i\boltz{\mathrm{U}_i}}{\W{}{}}\; \mbox{, }i=1,\ldots, (k^\Ost + k^\Nst) \nonumber \\ \W{}{} & = & \W{}{\Ost}+\W{}{\Nst} \ .\end{aligned}$$ Here the Rosenbluth factors are, again, defined as in eqs. (\[eqn:rosen\_WJn\]) and (\[eqn:rosen\_WJo\]). Such a move is always accepted, although the new state may be identical to the old state. This method is called with Jacobian and old solutions (WJO). It is possible to perform multiple rotations on $\ph{0}$ and $\ph{7}$. This scheme must be based on WJ or NJ so as to satisfy detailed balance. We choose WJ and call this method with Jacobian and multiple rotations (WJM). For a rebridging move with $k_{\max}$ rotations, $k_{\max}-1$ rotations around the old state must be performed to obtain a correct old Rosenbluth factor. A new configuration is selected from the solutions with the probability $$\label{eqn:pickWJM} p_i = \frac{\mathrm{J}_i^\Nst\boltz{\mathrm{U}_i^\Nst}} {\sum_{k=1}^{k_{\max}}\W{k}{\Nst}}\ ,$$ where $\W{k}{\Nst}$ is the Rosenbluth factor of the $k$th rotation, as calculated by eq. (\[eqn:rosen\_WJn\]). The acceptance probability is $$\begin{aligned} \label{eqn:acc-WJM} \acc(\mathrm{o\rightarrow n}) &= &\min \left(1, \frac{\sum_{k=1}^{k_\mathrm{max}}\W{k}{\Nst}} {\sum_{k'=1}^{k_\mathrm{max}}\W{k'}{\Ost}}\right)\ .\end{aligned}$$ The last method is based on Metropolis rules, in which a solution is picked at random without any bias, as in ref. [@II_Dodd]. The picking probability and acceptance criteria are $$\begin{aligned} p_{i} & = &\frac{1}{k^\Nst}\nonumber \\ \acc(\mathrm{o\rightarrow n}) &= &\min \left[1, \frac{\mathrm{J}^\Nst k^\Nst \boltz{\mathrm{U}^\Nst}} {\mathrm{J}^\Ost k^\Ost \boltz{\mathrm{U}^\Ost}}\right]\ .\end{aligned}$$ The method is called Metropolis (MT). 
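The WJO selection rule of eq. (\[eqn:pickboth\]) can be sketched as follows. This is a minimal illustration with a hypothetical function name; the solutions are again represented as (Jacobian, energy) pairs, and because the old solutions are in the pool, the pick is always 'accepted' (it may simply return the old state).

```python
import math
import random

def wjo_pick(old_solutions, new_solutions, beta, rng=random):
    """WJO selection, eq. (pickboth): pool the old and new geometrical
    solutions and pick one with probability J_i exp(-beta U_i) / (W^o + W^n)."""
    pool = list(old_solutions) + list(new_solutions)
    weights = [J * math.exp(-beta * U) for J, U in pool]
    total = sum(weights)                 # W = W^o + W^n
    r = rng.random() * total
    running = 0.0
    for sol, w in zip(pool, weights):
        running += w
        if r <= running:
            return sol
    return pool[-1]                      # guard against rounding at the end
```

Pooling old and new solutions in this way satisfies detailed balance without a separate accept/reject step, which is the design point of WJO.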
Parallel Tempering {#sec-para} ------------------ The use of biasing mitigates, but does not eliminate, the various free energy barriers in cyclic peptides. Even a small cyclic peptide is, in a sense, a ‘glassy’ system due to these significant and unpredictably-located free energy barriers. To deal with this issue, we use parallel tempering [@geyer91]. In parallel tempering we consider an extended ensemble with $n$ systems, labeled as $i=1,\ldots,n$. Each system is a copy of the original system, except that each is equilibrated at a distinct temperature, $T_i$, where $i=1,\ldots, n$ and $T_1 < T_2 <\ldots < T_n$. The partition function of this extended ensemble is given by $${\mathrm Q} = \prod_{i=1}^n {\mathrm Q}_i\ , \label{eq:part_pt}$$ where ${\mathrm Q}_i$ is the individual canonical partition function of the $i$th system. Two types of moves are performed in the ensemble. The first is a regular Monte Carlo move within a randomly chosen system. The second is a swapping move. A swapping move proposes to exchange the configurations of the two systems $i$ and $j=i+1$, $1\leq i< n$. This move is accepted with the probability $$\begin{aligned} \label{eqn:acc-pt} \acc[(i,\ j)\rightarrow (j,\ i)] & = & \min[1,\exp(-\beta_i\mathrm{U}_j-\beta_j\mathrm{U}_i+\beta_i\mathrm{U}_i+ \beta_j\mathrm{U}_j)] \nonumber \\ & = &\min[1,\exp(-\Delta\beta\Delta\mathrm{U})]\; ,\end{aligned}$$ where $\Delta\beta \equiv \beta_j-\beta_i$ and $\Delta\mathrm{U} \equiv \mathrm{U}_i-\mathrm{U}_j$. This technique forces each system to sample the Boltzmann distribution at the appropriate temperature. In our case, we are interested in the lowest temperature distribution only. The higher temperature systems are included solely to help the lowest temperature system to escape from local energy minima via the swapping moves. To achieve efficient sampling, the highest temperature should be such that no significant free energy barriers are observed. To ensure that the swapping moves are accepted, the energy histograms of adjacent systems should overlap. 
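The swapping move of eq. (\[eqn:acc-pt\]) is compact enough to sketch directly. The fragment below is an illustration with hypothetical function names, in units where $\beta_i = 1/k_{\mathrm B}T_i$ and the energies are those of the configurations currently held at each temperature.

```python
import math
import random

def swap_accept(beta_i, beta_j, U_i, U_j, rng=random):
    """Swap acceptance of eq. (acc-pt): the exponent
    -beta_i U_j - beta_j U_i + beta_i U_i + beta_j U_j
    factorizes as (beta_i - beta_j) * (U_i - U_j)."""
    delta = (beta_i - beta_j) * (U_i - U_j)
    if delta >= 0.0:                   # exp(delta) >= 1: always accept
        return True
    return rng.random() < math.exp(delta)

def tempering_sweep(betas, energies, configs, rng=random):
    """One pass of swap attempts over adjacent pairs (i, i+1); accepted
    swaps exchange the configurations (and their energies) in place."""
    for i in range(len(betas) - 1):
        if swap_accept(betas[i], betas[i + 1], energies[i], energies[i + 1], rng):
            configs[i], configs[i + 1] = configs[i + 1], configs[i]
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
```

A swap that moves a high-energy configuration from a cold (large $\beta$) system to a hotter one is always accepted, which is precisely the mechanism by which the lowest temperature system escapes local minima.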
The sampling efficiency is modestly affected by the fraction of overlap. We arbitrarily chose to adjust the temperatures so that the probability of accepting a swapping move was roughly 0.1, and no attempt was made to optimize these temperatures further. We will show that the extra computational cost of simulating the higher temperature systems is more than compensated for by the increased sampling efficiency of the lowest temperature system. Semi-Look-Ahead {#sec-SLA} --------------- Since the conformations of the side chains determine the biological activity of peptides, effective sampling of side chains is important. Our method is based on the side chain dihedral angle moves in ref. [@I_Deem1]. A finite regrowth probability is assigned to each side chain of the molecule. A side chain move proceeds by regrowing the side chain unit by unit, beginning from the bond connecting the backbone to the side chain. At each step, $n_1$ twigs are generated and used to calculate the new partial Rosenbluth factor. One of the twigs is selected with a probability proportional to the Boltzmann factor associated with that twig. The old configuration and $n_1-1$ random twigs are generated and used to calculate the old partial Rosenbluth factor. This procedure is repeated until the end of the chain is reached. The new chain is accepted with the probability $$\label{eqn:acc1} \acc(\mathrm{o\rightarrow n}) = \min\left(1, \frac{\W{}{\Nst}}{\W{}{\Ost}}\right)\ ,$$ where $\W{}{\Nst}$ is the product of the new partial Rosenbluth factors, and $\W{}{\Ost}$ is the product of the old partial Rosenbluth factors. We propose a new method called semi-look-ahead (SLA) for side chain regrowth. For each torsionally-flexible bond, we define the group of atoms included in the partial Rosenbluth factor to be the maximum set of atoms whose positions are uniquely determined by choosing the trial rotation of this bond. Figure \[fig:sidechai\] sketches this new definition of atom groups and contrasts it with the one in ref. 
[@I_Deem1]. Our definition includes atoms beyond the boundary of rigid units, including the head atoms of rigid units adjacent to the current one. We expect SLA to achieve better sampling efficiency and faster equilibration than the method without look-ahead [@I_Deem1], due to the improved energy estimate for the biasing. The incremental energy at each step can be split into the internal and external components [@Smit1]. With this decomposition, torsional angles are generated with a probability derived from the internal energy, and the partial Rosenbluth factors include the external energy only. In our system, only the torsional energies can be assigned to the internal energy, and these energies account for only a small fraction of the total energy. We find it most efficient to set the internal energy to zero and to include all of the energy within the external component. Look Ahead {#sec-LA} ---------- For long chains or chains with bulky units, a more extensive form of look-ahead may help to avoid proposing high energy configurations [@Meirovitch]. The idea is illustrated in figure \[fig:lookahead\]. If we regrow the molecule by exploring the energy landscape only one rigid unit ahead, we will choose one configuration, as in figure \[fig:lookahead\]a. If we can look ahead two rigid units at one time, we may find the high energy region associated with that configuration and choose a more likely one instead, as in figure \[fig:lookahead\]b. We propose two methods of look-ahead. The idea is to include a contribution from the energetic surroundings of the succeeding unit within the Rosenbluth factor. The first method, look-ahead (LA), generates $n_1$ trial rotations of the unit to be regrown and $n_2$ trial rotations of the succeeding unit for each of the trial rotations of the first unit. In the second method, we set $n_1=n_2=n$. We generate $n$ configurations of the first rigid unit, with $n$ configurations of the second unit associated with each. 
When regrowing the second unit, we use the $n$ configurations already proposed during the regrowth of the first configuration. We, therefore, generate only the configurations for the third unit when regrowing the second unit. This method of look-ahead with recycled configurations is abbreviated as LARC. We now describe the procedure for carrying out these methods. Suppose we want to cut and regrow rigid units $i=1,\ldots,N$. The following procedure describes how to generate and accept these units: 1. \[itm:beg\]Generate a set of $n_1$ trial torsional angles $\{{\ph{1}(\alpha)}\}$, $\alpha=1,\ldots,n_1$. Each angle is generated according to the internal potential of unit 1 $$\begin{aligned} \label{eqn:p_int_1} p_{1}^\Nst(\alpha)&=&\mathrm{C_1}\boltzb{\mathrm{U_1^{int}}[\ph{1}(\alpha)]}\ .\end{aligned}$$ Denote the external energy of unit 1 at $\ph{1}(\alpha)$ by $\mathrm{U_{1}^{ext}}(\alpha)$. 2. \[itm:ahead1\]For each trial $\ph{1}(\alpha)$, generate a set of $n_2$ torsional angles $\{{\ph{2}(\alpha,\ \gamma)}\}$. Each angle is generated according to the internal potential $$\begin{aligned} p_2^\Nst(\alpha,\ \gamma) &= &\mathrm{C_2}\boltzb{\mathrm{U_2^{int}}[\ph{2}(\alpha,\ \gamma)]}\ .\end{aligned}$$ Denote the external energy of unit 2 at $\ph{1}(\alpha)$ and $\ph{2}(\alpha,\ \gamma)$ by $\mathrm{U_2^{ext}}(\alpha,\ \gamma)$. For LA, the number $n_1$ can be different from $n_2$. For LARC $n_1=n_2$. 3. \[itm:ahead2\]Define $$\begin{aligned} \mathrm{w}_2^\Nst(\alpha)&= &\frac{\sum_{\gamma =1}^{n_2}\boltzm{\mathrm{U_2^{ext}}(\alpha,\ \gamma)}}{n_2}\end{aligned}$$ and $$\begin{aligned} \mathrm{w}_1^\Nst&= &\frac{\sum_{\alpha=1}^{n_1} \sum_{\gamma=1}^{n_2}\boltzm{\mathrm{U_1^{ext}}(\alpha)} \boltzm{\mathrm{U_2^{ext}}(\alpha,\ \gamma)}} {n_1n_2} \nonumber \\ & = & \frac{\sum_{\alpha =1}^{n_1}\boltzm{\mathrm{U_1^{ext}}(\alpha)} \mathrm{w}_2^\Nst(\alpha)}{n_1}\ .\end{aligned}$$ 4. 
\[itm:end\] Pick a $\ph{1}(\alpha)$ with the probability $$\begin{aligned} \label{eqn:pick_LA} q_1^\Nst(\alpha)&=& \frac{\boltzm{\mathrm{U_1^{ext}}(\alpha)}\mathrm{w}_2^\Nst(\alpha)} {\mathrm{w}_1^\Nst}\ .\end{aligned}$$ To simplify the notation, we switch the labels of the chosen $\alpha$th angle with the first torsional angle so that the chosen angle is first. 5. Repeat steps (\[itm:beg\])-(\[itm:end\]) for rigid unit 2 to unit N-1, except that for LARC, the $n_2$ twigs of unit 2 corresponding to the chosen unit 1 are recycled to be the $n_1$ ($n_1=n_2$) trial configurations of unit 2, and so on. 6. For the Nth unit, which is the last unit, there is no need to look ahead, so we repeat step (1) and generate $n_1$ torsional angles $\{\ph{N}(\alpha)\}$. Calculate $$\begin{aligned} \mathrm{w_N^\Nst} &\equiv&\sum_{\alpha =1}^{n_1}\boltzm{\mathrm{U_\mathit{N}^{ext}}(\alpha)}\ ,\end{aligned}$$ and pick a $\ph{N}(\alpha)$ with the probability $$\begin{aligned} q_N^\Nst(\alpha)&=& \frac{\boltzm{\mathrm{U_\mathit{N}^{ext}} (\alpha)}}{\mathrm{w}_N^\Nst}\ .\end{aligned}$$ We also need to generate and calculate the old Rosenbluth weights: 1. Generate $n_1-1$ trial torsional angles with the probability given by eq. (\[eqn:p\_int\_1\]). These angles and the original angle comprise a set of torsional angles $\{\ph{1}(\alpha)\}$. Let the original angle be labeled as $\ph{1}(1)$. 2. For each $\ph{1}(\alpha)$ other than $\ph{1}(1)$, generate $n_2$ torsional angles $\{\ph{2}(\alpha,\ \gamma)\}$. For the original angle $\ph{1}(1)$, generate $n_2$ angles if the method is LA and $n_2-1$ angles if the method is LARC. For LARC add the original to the set of angles generated and label the original angle as $\ph{2}(1,\ 1)$. 
All configurations other than the original one are generated according to the probability $$\begin{aligned} p_2^\Ost(\alpha,\ \gamma)&=&\mathrm{C_2}\boltzb{\mathrm{U_2^{int}}[\ph{2}(\alpha,\ \gamma)]}\ .\end{aligned}$$ Define $\mathrm{w}_2^\Ost(\alpha)$ and $\mathrm{w}_1^\Ost$ analogously to $\mathrm{w}_2^\Nst(\alpha)$ and $\mathrm{w}_1^\Nst$. 3. Repeat the preceding two steps for units 2 to N-1. 4. For the Nth unit, which is the last unit, generate a set of $n_1-1$ angles $\{\ph{N}(\alpha)\}$ with the probability given by eq. (\[eqn:p\_int\_1\]). Add the original angle. Calculate $\mathrm{w}_N^\Ost$. The proposed move is accepted with the probability $$\begin{aligned} \label{eqn:acc_LA} \acc(\mathrm{o\rightarrow n})&=&\min\left(1,\ \frac{\W{}{\Nst}}{\W{}{\Ost}}\right)\ ,\end{aligned}$$ where the Rosenbluth factors are defined as $$\begin{aligned} \label{eqn:globalW} \W{}{\Nst}& =&\frac{\prod_{i=1}^{N}\mathrm{w}_i^\Nst}{\prod_{j=2}^N\mathrm{w}_j^\Nst(1)} \nonumber \\ \W{}{\Ost}&= &\frac{\prod_{i=1}^{N}\mathrm{w}_i^\Ost}{\prod_{j=2}^N\mathrm{w}_j^\Ost(1)}\ .\end{aligned}$$ Note that the denominators of eq. (\[eqn:globalW\]) come from the bias introduced by eq. (\[eqn:pick\_LA\]). In Appendix \[AppB\] we prove that the LA method satisfies detailed balance. Results {#sec-results} ======= Backbone -------- We first apply the rebridging scheme to the cyclic peptide $\mathrm{CG_6C}$. Simulation results for the five different variations of the rebridging scheme were generated. All simulations were performed on a Silicon Graphics Indigo$^2$ 195 MHz R10000 workstation. The system was equilibrated at 298 K. We used an optimized value of $\Delta\ph{\max}=10^\circ$ in all simulations except for WJM, in which the optimal value was $\Delta\ph{\max}=30^\circ$. A probability of 0.05 was assigned for equilibration of either of the two side chains, NH$_2$ and COOH.
These two short side chains are well equilibrated by the method without look-ahead, which is used in our simulations. We define the acceptance probability $P_\acc$ to be the ratio of accepted backbone moves to trial backbone moves. The efficiency of the Monte Carlo scheme is measured by the average displacement of the molecule per CPU time. We define $\Delta\ph{\mathrm{avg}}$ as the average of the absolute change of torsional angles per trial backbone move: $$\begin{aligned} \Delta\ph{\mathrm{avg}}& = & \frac{\sum_{i=1}^{N_\mathrm{trial}}\sum_{j=0}^7|\Delta\ph{j}(i)|} {N_{\mathrm{trial}}}\ .\end{aligned}$$ This value is a measure of the size of successful moves and the efficiency of the rebridging scheme. There is an intrinsic energy barrier for the $\mathrm{C_{\beta}SSC_{\beta}}$ dihedral angle at $\ph{\mathrm{C_\beta SSC_\beta}}\simeq 180^\circ$. The magnitude of this barrier is estimated to be 5.5-6.5 Kcal$\cdot$mol$^{-1}$ [@I_Deem1]. A barrier-crossing event happens whenever this angle crosses $\ph{\mathrm{C_\beta SSC_\beta}}=180^\circ$. We define the barrier-crossing frequency as the total number of barrier-crossing events divided by the total number of backbone moves. Table \[tab:comp4\] lists simulation results obtained with the five different rebridging methods. Figure \[fig:CSSC\] shows histograms of the angles observed in these simulations. The NJ method yields a left peak that is slightly higher than those from other methods. The MT method yields the lowest left peak. Although the histograms are similar, they did not converge to a unique distribution within our chosen simulation time. This is because barrier crossing was not frequent enough to produce accurate statistics. To increase the sampling efficiency, we performed a parallel tempering simulation with 4 systems. The system temperatures were 298 K, 500 K, 1000 K, and 3000 K. The rebridging moves were performed using the WJ biasing method.
The probabilities for proposing swapping moves, backbone moves, and side chain moves were 0.1, 0.45, and 0.45, respectively. When a swapping move was chosen, two randomly chosen adjacent systems were proposed to swap configurations. The probabilities for swapping the two pairs with lower temperatures were doubled to accelerate decorrelation. When a backbone move or a side chain move was proposed, the system was picked with a probability that updates the two lowest temperature systems twice as frequently. We do this because of the longer correlation times at lower temperatures. The simulation consisted of 160000 Monte Carlo cycles. Each cycle proposed four swapping or updating moves, chosen at random. The total CPU time for this run was 48 hours. The initial 20000 cycles were discarded to avoid equilibration effects. The swapping moves can occur with sufficient probability only if the energy histograms of adjacent systems overlap. Figure \[fig:e\_histo\] shows that this condition is satisfied for our choice of temperatures. Table \[tab:acc\_para\] lists the acceptance probabilities of swapping moves in this simulation. Figure \[fig:csscpara\] shows the distribution of the $\mathrm{C_\beta SSC_\beta}$ angle observed in the simulation. The histogram converged to a unique distribution with very little simulation data. After 80000 cycles, the observed distribution was almost indistinguishable from the one observed at 160000 cycles. With parallel tempering, we obtain substantially better statistics in less computation time. In fact, the computation time was two-thirds of that used in the single temperature simulations in figure \[fig:CSSC\]. Note that the histogram at 3000 K is essentially flat, and so at this temperature the molecule is free to cross the barrier at $\ph{\mathrm{C_\beta SSC_\beta}}\simeq 180^\circ$.
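The swapping moves obey the standard parallel tempering criterion, $\acc = \min\left(1,\ e^{(\beta_i-\beta_j)(E_i-E_j)}\right)$ for adjacent replicas $i$ and $j$. A minimal sketch, with the replica bookkeeping reduced to the energies alone:

```python
import math
import random

def attempt_swap(replicas, i, j, rng=random):
    """Propose exchanging the configurations of replicas i and j (sketch).

    replicas -- list of dicts with keys 'beta' (1/kT) and 'energy'; in a real
                code the full configuration would be exchanged along with its
                energy, while the temperatures stay attached to the replicas.
    Accepts with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]).
    """
    delta = (replicas[i]['beta'] - replicas[j]['beta']) * \
            (replicas[i]['energy'] - replicas[j]['energy'])
    if delta >= 0.0 or rng.random() < math.exp(delta):
        replicas[i]['energy'], replicas[j]['energy'] = \
            replicas[j]['energy'], replicas[i]['energy']
        return True
    return False
```

When the colder replica carries the higher energy, $\Delta \ge 0$ and the swap is always accepted, which is exactly the mechanism that carries barrier-crossing configurations down from the hot replicas.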
Strong steric repulsion between hydrogen atoms connected to the adjacent $\mathrm{C_\beta}$ atoms still prevents the molecule from adopting a conformation with $\ph{\mathrm{C_\beta SSC_\beta}}\simeq 0^\circ$, but this does not hinder equilibration. Side Chains {#sec-sidechain} ----------- We performed simulations on the cyclic CNWKRGDC molecule to test various side chain regrowth methods. This medically-relevant molecule has long and bulky side chains. Simulations were done both on a fixed backbone scaffold and on a backbone equilibrated with rebridging and parallel tempering. First, we fixed the backbone and chose side chains at random to regrow, using the method without look-ahead and the SLA method. We tested the dependence of the equilibration on the number of trial rotations $n_1$. The backbone was fixed throughout this simulation. Figure \[fig:old\_and\_new\] shows the energy as a function of CPU time during the equilibration period. Starting from a high energy configuration, the SLA method with $n_1=100$ or $n_1=10$ reaches equilibrium rapidly. The non-look-ahead method, however, had difficulty in finding low energy regions. It took the system with $n_1=10$ more than 50 minutes to reach low energy configurations. The system with $n_1=100$, however, never reached equilibrium during the simulation. Although the associated acceptance probabilities are not small, the use of $n_1=100$ results in essentially non-ergodic sampling. We point out that the non-look-ahead method equilibrates the system faster with $n_1=1$ than with $n_1=10$ or $n_1=100$, although non-look-ahead is always slower than SLA. Figure \[fig:old\_and\_new\] may prompt the following question: How do we determine the optimal value for $n_1$? For short side chains, we expect that a small $n_1$ will work well. For longer side chains, we expect that a larger value of $n_1$ will help to explore the torsional space.
The optimal value, therefore, will differ for each side chain. We next performed parallel tempering simulations with five systems, using the SLA, LA, and LARC methods for side chain regrowth. The backbone moves were performed by the WJ biasing method. The system temperatures were 298 K, 450 K, 780 K, 1700 K, and 5000 K. The simulation consisted of 100000 Monte Carlo cycles, except in the cases of $n_1=1$ and $n_1=30$ for SLA and $n_1\times n_2=20\times 10$ for LA, for which the numbers of cycles were 200000, 200000, and 60000, respectively. Each cycle proposed five swapping or updating moves, chosen at random. The probabilities for proposing swapping moves, backbone moves, and side chain moves were 0.1, 0.45, and 0.45, respectively. When a swapping move was proposed, two adjacent systems were chosen randomly, with the probability of picking system 1, 2, 3, and 4 equal to $\frac{3}{7}$, $\frac{2}{7}$, $\frac{1}{7}$, and $\frac{1}{7}$, respectively. When an updating move, either for backbones or for side chains, was proposed, we chose system 1, 2, 3, 4, and 5 with the probabilities $\frac{3}{8}$, $\frac{2}{8}$, $\frac{1}{8}$, $\frac{1}{8}$, and $\frac{1}{8}$, respectively. We focused on the sampling efficiencies for the tryptophan, lysine, and arginine residues. The lysine residue has a large number, 5, of rigid units. The tryptophan residue has an indole group. The arginine residue has a guanidine group. Both groups are bulky and tend to have low acceptance probabilities. For each side chain, we used the total torsional displacement per computation time, $\Delta\ph{}$/CPU, as an index to the efficiency. Both side chain moves and swapping moves contributed to $\Delta\ph{}$. We define the acceptance probability $P_\acc$ to be the ratio of successful moves to trial moves in a side chain. The results are summarized in table \[tab:residues\].
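The non-uniform selection of replicas described above amounts to a weighted random choice that favors the low temperature systems. A sketch with the weights quoted in the text (function names are illustrative only):

```python
import random

# Selection weights quoted in the text: the low temperature replicas are
# updated and swapped more often because they decorrelate more slowly.
UPDATE_WEIGHTS = [3, 2, 1, 1, 1]   # systems 1..5, probabilities n/8
SWAP_WEIGHTS = [3, 2, 1, 1]        # adjacent pairs (1,2)..(4,5), probabilities n/7

def pick_update_system(rng=random):
    """0-based index of the replica chosen for a backbone or side chain move."""
    return rng.choices(range(5), weights=UPDATE_WEIGHTS)[0]

def pick_swap_pair(rng=random):
    """0-based indices of the adjacent pair proposed for a swap."""
    i = rng.choices(range(4), weights=SWAP_WEIGHTS)[0]
    return i, i + 1
```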
Among the four simulations with SLA, the choice $n_1=10$ yields the best efficiency for lysine, and $n_1=30$ yields the best efficiency for tryptophan and arginine. Among the four simulations with LA, the best efficiency for tryptophan is produced when $n_1\times n_2=10\times 5$. Lysine and arginine are equilibrated most efficiently with $n_1\times n_2=10\times 10$. With LARC the efficiency for tryptophan is the best when $n_1=5$. The efficiency for lysine is the best when $n_1=10$. Interestingly, arginine is so difficult to equilibrate, typically having such a low acceptance probability, that the efficiency was best with $n_1=15$. In general, LARC is more efficient than LA. Comparing the results from various methods, we find that tryptophan is equilibrated most efficiently by SLA, and lysine and arginine are equilibrated most efficiently by LARC. Discussion {#sec-discuss} ========== Among the five rebridging methods listed in table \[tab:comp4\], WJO gives the highest acceptance probability, where $P_\acc$ in WJO is defined to be the probability of accepting a solution other than the old one. WJO also produces the highest $\Delta\ph{\mathrm{avg}}$. The distribution generated by WJO is the smoothest of the curves, which shows that it is efficient in sampling local conformations. However, the CPU time per move for WJO is slightly higher than that for WJ or NJ, since there is no early rejection in WJO. We performed simulations with WJM using different $\Delta\ph{\max}$ and found the optimal value to be $\Delta\ph{\max}=30^\circ$. Simulations with $\Delta\ph{\max}<30^\circ$ were dominated by smaller moves that led to infrequent barrier-crossing. The computation cost per WJM move is roughly proportional to the number of trial rotations. It is seen in table \[tab:comp4\] that each WJM move takes more than twice the time of a WJ move. Therefore, WJM is less efficient than the first three schemes in table \[tab:comp4\].
As expected, MT yields a fairly low acceptance probability. Taking the CPU cost into consideration, the efficiency of WJ is close to that of WJO. The efficiency of NJ is less than that of WJ and WJO. The WJM method is less efficient than the previous three schemes. The MT method is the least efficient. Our rebridging scheme is capable of overcoming energy barriers and increasing the frequency of barrier crossing. The fourth column in table \[tab:comp4\] lists the barrier-crossing frequency. The WJ method yields the highest barrier-crossing frequency, and WJO yields the lowest. This is due to the predominance of local moves in WJO. An accepted move in WJO can be a move that reconfigures six degrees of freedom only, which is less likely to lead to a barrier-crossing event. Barrier-crossing is a rare event in a simulation of the $\mathrm{CG_6C}$ peptide. According to the potential of mean force determined by umbrella sampling, the potential at $\ph{\mathrm{C_\beta SSC_\beta}}=90^\circ$ is less than that at $\ph{\mathrm{C_\beta SSC_\beta}}=270^\circ$ by roughly 1 Kcal$\cdot$mol$\mathrm{^{-1}}$ [@I_Deem1]. We, therefore, expect the left peak to be substantially higher than the right one. Our results with biased rebridging moves are consistent with the potential of mean force, but the statistics are not good enough. Because steric repulsions are severe in our system, the correlation time for other degrees of freedom is also long, and these degrees of freedom also slow down the barrier-crossing. We suspect that there is a set of low energy conformations, separated by low-energy barriers, near $\ph{\mathrm{C_\beta SSC_\beta}}=270^\circ$. We have found that parallel tempering is an efficient and automatic means to overcome these barriers. The overlap of energy histograms guaranteed reasonable acceptance probabilities of the swapping moves.
These swapping moves transfer configurations encountered at high temperatures to systems with low temperatures, thereby helping the low-temperature systems to escape from local energy minima. Such escape from local minima is important for efficient sampling, especially in glassy systems with high energy barriers. Cyclic peptides fall in this category, because of the torsional barriers and steric repulsions associated with the cyclic constraint. Our results provide additional evidence that parallel tempering is a powerful tool for studying glassy systems. Linear peptides, on the other hand, have a fairly simple free energy landscape, and so they do not benefit substantially from the parallel tempering approach [@Hansmann]. For equilibration of side chains, we tested whether the inclusion of torsional interaction energy in the internal potential is effective. We find that the acceptance probability is lower and the simulation time is increased through the use of internal biasing. Presumably this is because $\mathrm{U^{int}}$ is only a small fraction of the total interaction energy, and so biasing the torsional angles according to this term does not lead to better sampling. CBMC without any look-ahead does not equilibrate long or bulky side chains as well as does CBMC with look-ahead. The key difference is that without look-ahead, the head atoms of succeeding units are not included. Without look-ahead, a chosen rotation, though probably a low energy configuration for the local atoms, may implicitly put adjacent head atoms in high energy positions and thereby fail to find the lowest energy region. Using fewer twigs in the non-look-ahead method resulted in better equilibration, as shown by the results for $n_1=1$ in figure \[fig:old\_and\_new\]. This occurs because with $n_1=1$ the regrowing units have a better chance to miss the incorrectly identified low energy regions. 
Increasing the number of twigs raises the acceptance probability in SLA, LA, and LARC, but at an increased CPU cost. The optimal $n_1$ is attained when these competing effects are balanced. We know that for rougher energy landscapes more trial rotations need to be generated. From the first row of table \[tab:residues\], all three residues were poorly equilibrated by SLA with $n_1=1$. The torsional displacement $\Delta\ph{}$ in this case comes mainly from the swapping moves. Equilibration is improved by using a greater $n_1$, which increases the acceptance probabilities significantly. However, the acceptance probability for arginine with $n_1=100$ is lower than that with $n_1=30$. This means that improving the local sampling does not always lead to better global sampling, and this in turn implies the necessity of more significant look-ahead sampling. The arginine residue is both long, with four rigid units, and big, with a guanidine group at the end. Therefore, look-ahead is crucial to bypass high energy regions. Comparing the results for SLA with $n_1=10$, LA with $n_1\times n_2=10\times 10$, and LARC with $n_1=10$, we see that both LA and LARC enhance the acceptance probabilities. The only exception is the shortest residue, tryptophan, for which LA yields an acceptance probability slightly lower than that from SLA. Clearly, LARC is superior to LA, because LARC costs less computation time while yielding higher acceptance probabilities. For the long residues, lysine and arginine, LARC yields the highest efficiencies among these three methods. The results suggest that, for long and bulky side chains, significant look-ahead is necessary. It is not necessary to use the same regrowth method for all side chains. Indeed, the optimal approach is to use a different regrowth method for side chains of different identity. For short side chains, SLA appears to be optimal. For longer side chains, LARC is the best method to use.
We believe there may be some cases in which look-ahead is the only efficient approach for equilibration. Likely cases are those where there is substantial crowding and steric overlap, such as docking of a drug or signaling molecule to a protein receptor site or binding of antigen by the hypervariable region of antibodies. Conclusion {#sec-conclude} ========== Peptide function comes primarily from the chemical functionality of the side chain atoms, although the side chains themselves are positioned by the backbone atoms. For cyclic peptides, both backbone and side chain atoms are difficult to equilibrate with standard simulation techniques. We have described a new and efficient Monte Carlo simulation method for complex cyclic peptides. The combination of biased, look-ahead Monte Carlo and parallel tempering leads to rapid and accurate sampling of the relevant room- or body-temperature conformations. Specifically, the look-ahead biasing is helpful for equilibrating long or bulky side chains, and the parallel tempering is essential for crossing torsional-angle free-energy barriers at a rapid rate. A variety of details, such as prescreening, improved Jacobian biasing, semi-look-ahead, and look-ahead, are important components of the method. We believe that parallel tempering will prove to be a generally useful method for simulation of ‘glassy’ atomic systems with multiple, important conformations separated by large and unpredictable free energy barriers. Explicit atom models, which are more accurate but which also increase the ruggedness of the potential energy landscape, are naturally treated within this approach. We expect that application of our peptide simulation method to high-density or crowded situations, such as peptide-receptor or antibody-antigen binding events, will further demonstrate the efficiency and power of our approach. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Marco Falcioni for many useful discussions.
This research was supported by the National Science Foundation through grants CTS-9702403 and CHE-9705165. The Jacobian in the Rebridging Scheme {#AppA} ===================================== In the rebridging scheme, each solution should be weighted by a Jacobian to correct for the non-uniform distribution of the angles $\ph{1},\ldots,\ph{6}$ generated by the non-linear solution of the geometrical problem. We derive the Jacobian here from the classical partition function. We initially consider a simple cyclic molecule with only N backbone atoms and N backbone torsional degrees of freedom. This assumption is relaxed at the end to accommodate the complicated backbone and side chain geometry of a real peptide. The momentum part of the partition function can be integrated out if we assume that the bond length and angle constraints are enforced by springs with infinite force constants. We, thus, focus on the configurational part. The configurational partition function is $$\begin{aligned} \label{eqn:fixend} Z &\equiv & \int \drs{N}\boltz{\mathrm{U}} \nonumber \\ & = &\int \drs{N}\drv{N+1}\drv{N+2}\drv{N+3} \delta^{3}(\rv{N+1}-\rv{1})\delta^{3}(\rv{N+2}-\rv{2}) \delta^{3}(\rv{N+3}-\rv{3})\boltz{\mathrm{U}}\; , \nonumber \\\end{aligned}$$ where we have introduced three vector delta functions to account for the cyclic constraint. The choice of fixed-end constraints is not unique. We will discuss an alternative form later in this section. We start the derivation by performing a transformation from $\rv{}^N$ to $\yv{}^N$ $$\begin{aligned} \yv{1} & = & \rv{1} \nonumber \\ \yv{i} & = & \rv{i} - \rv{i-1} \mbox{, }i=2,\ldots, N+3\ .\end{aligned}$$ The Jacobian of this transformation is unity. We transform again from $\yv{}^N$ to local coordinates. We define $l_i=|\yv{i}|$ and $\thet{i}$ to be the angle formed by $\yv{i}$ and $\yv{i+1}$. We transform from $\yv{2}$ to $l_2$ and $\uv{2}$, where $$\begin{aligned} \uv{2} &\equiv & \frac{\yv{2}}{|\yv{2}|}\ .
\nonumber \\\end{aligned}$$ Then we transform from $\yv{3}$ to $l_3,\ \thet{2},$ and $\gamma_2$, where $\gamma_2$ is the azimuthal angle of $\hat {\bf u}_3$ in a spherical coordinate system defined with $\hat {\bf u}_2$ as the $z$-axis. The angle $\gamma_2$ is measured with respect to the plane defined by $\hat {\bf u}_2$ and $\hat {\bf e}_3$, the fixed laboratory $z$-axis. We further transform $\yv{i}$ to a spherical coordinate system $l_i,\ \thet{i-1},$ and $\ph{i-1}$, $i=4,\ldots,N+3$. With this transformation, we obtain $$\begin{aligned} \label{eqn:car_to_int} Z &= &\int \dyv{1}{l_2}^2 dl_2 d\uv{2}{l_3}^2 dl_3 d\thet{2}\sin\thet{2}d\gamma_2\int \prod_{i=3}^{N+2}dl_{i+1} d\thet{i} d\ph{i} \nonumber\\ & \times& \delta^3\left(\sum_{j=2}^{N+1}\yv{j}\right)\, \delta^3(\yv{N+2}-\yv{2}) \, \delta^{3}(\yv{N+3}-\yv{3}) \nonumber\\ & \times & \mathrm{J} \left(\frac{\yv{4},\ldots,\yv{N+3}}{l_4,\ldots, l_{N+3},\ \thet{3},\ldots,\thet{N+2},\ \ph{3},\ldots,\ph{N+2}}\right) \boltz{\mathrm{U}}\; .\end{aligned}$$ The Jacobian is simply $\mathrm{J}=\prod_{i=3}^{N+2} {l_{i+1}}^2 \sin\thet{i}$. The fast coordinates $l_i$ and $\thet{i}$ are fixed due to the strong harmonic potentials. We denote the equilibrium values of $l_i$ and $\thet{i}$ by $l_i^0$ and $\thet{i}^0$, respectively. With very large spring constants, the dependence of the integrand along these coordinates can effectively be replaced with delta functions.
Therefore $$\begin{aligned} Z & = &\mathrm{C'''}\int \dyv{1}{l_2}^2dl_2d\uv{2} {l_3}^2dl_3d\thet{2}\sin\thet{2}d\gamma_2\ \delta(l_2-l_2^0) \delta(l_3-l_3^0)\delta(\thet{2}-\thet{2}^0) \nonumber \\ &\times & \int \prod_{i=3}^{N+2}dl_{i+1} d\thet{i} d\ph{i}\ \mathrm{J}(l_4,\ldots, l_{N+3},\ \thet{3},\ldots,\thet{N+2}) \boltz{\mathrm{U}_0} \nonumber \\ &\times & \delta^3\left(\sum_{j=2}^{N+1}\yv{j}\right) \delta^3(\yv{N+2}-\yv{2})\delta^3(\yv{N+3}-\yv{3})\nonumber\\ &\times & \prod_{k=4}^{N}\delta(l_k-l_k^0)\prod_{k'=3}^{N-1}\delta(\thet{k'}-\thet{k'}^0) \delta(|\sum_{k''=2}^{N}\yv{k''}|-l_1^0) \nonumber \\ &\times & \delta\left(\cos^{-1}\frac{\yv{2}\cdot(-\sum_{i'=2}^{N}\yv{i'})} {|\yv{2}||\sum_{i''=2}^{N}\yv{i''}|}-\thet{1}^0\right) \delta\left(\cos^{-1}\frac{\yv{N}\cdot(-\sum_{j'=2}^{N}\yv{j'})} {|\yv{N}||\sum_{j''=2}^{N}\yv{j''}|}-\thet{N}^0\right)\; ,\end{aligned}$$ where $\mathrm{U}_0$ is the potential energy measured at the ground configuration of these hard coordinates. We now integrate over the hard coordinates $l_i$ and $\thet{i}$. The Jacobian is simply a constant and can be taken out of the integral. Since the constraint $\delta^3(\sum_{i=2}^{N+1}\yv{i})$ holds, we can replace every $-\sum_{i=2}^N\yv{i}$ with $\yv{N+1}$. Similarly, we can replace $\yv{2}$ with $\yv{N+2}$.
Replacing the arguments in the last two delta functions with $\thet{N+1}-\thet{1}^0$ and $\thet{N}-\thet{N}^0$, respectively, we obtain $$\begin{aligned} Z & = &\mathrm{C''}\int \dyv{1}d\uv{2}d\gamma_2 \int\prod_{i=3}^{N+2}d\ph{i}\int\prod_{i=N}^{N+2}dl_{i+1} d\thet{i}\ \boltz{\mathrm{U}_0} \nonumber \\ &\times & \delta^3\left(\sum_{j=2}^{N+1}\yv{j}\right) \delta^3(\yv{N+2}-\yv{2})\delta^3(\yv{N+3}-\yv{3}) \delta(|\yv{N+1}|-l_1^0) \nonumber \\ &\times & \delta\left(\thet{N+1}-\thet{1}^0\right) \delta\left(\thet{N}-\thet{N}^0\right)\; .\end{aligned}$$ We use the equalities $$\begin{aligned} \delta^3(\yv{N+2}-\yv{2}) &= &\delta(l_{N+2}-l_2)\delta^2(\uv{N+2}-\uv{2})/{l_{N+2}}^2 \nonumber \\ \delta^3(\yv{N+3}-\yv{3}) &= &\delta(l_{N+3}-l_3)\delta^2(\uv{N+3}-\uv{3})/{l_{N+3}}^2 \nonumber\end{aligned}$$ to integrate over $l_{N+1}$, $l_{N+2}$, $l_{N+3}$, $\thet{N}$, and $\thet{N+1}$ to obtain $$\begin{aligned} Z & = &\mathrm{C'} \int\dyv{1}d\uv{2}d\gamma_2\int\prod_{i=3}^{N+2}d\ph{i}\int d\thet{N+2} \;\boltz{\mathrm{U}_0} \nonumber \\ & \times & \delta^3\left(\sum_{j=2}^{N+1}\yv{j}\right) \delta^2(\uv{N+2}-\uv{2})\delta^2(\uv{N+3}-\uv{3})\ .\end{aligned}$$ Note that $$\begin{aligned} \label{eqn:phi2_thet2} \int d\thet{N+2}\delta^2(\uv{N+3}-\uv{3}) & = & \int d\thet{N+2}\delta(\gamma_{N+2}-{\gamma_2}) \delta(\thet{N+2}-\thet{2})/\sin\thet{2} \nonumber \\ & = & \delta(\gamma_{N+2}-{\gamma_2})/\sin\thet{2} \; .\end{aligned}$$ where $$\begin{aligned} \thet{2}& = & \left|\cos^{-1}\frac{\yv{3}\cdot\yv{N+2}} {|\yv{3}||\yv{N+2}|}\right|\ ,\end{aligned}$$ and $\gamma_{N+2}$ and $\gamma_2$ are the azimuthal angles of $\hat {\bf u}_{N+3}$ and $\hat {\bf u}_3$ in a spherical coordinate system defined with $\hat {\bf u}_{N+2}=\uv{2}$ as the $z$-axis. The angles are measured with respect to the plane defined by $\hat {\bf u}_2$ and $\hat {\bf e}_3$.
Integrating over $\thet{N+2}$, we obtain $$\begin{aligned} \label{eqn:cyclic} Z & = & \mathrm{C}\int\dyv{1}d\uv{2}d\gamma_2\int\prod_{i=3}^{N+2}d\ph{i} \nonumber \\ &\times & \boltz{\mathrm{U}_{0}} \delta^3\left(\sum_{j=2}^{N+1}\yv{j}\right) \delta^2(\uv{N+2}-\uv{2})\delta(\gamma_{N+2}-\gamma_2)\; .\end{aligned}$$ This is the partition function of a classical, cyclic molecule. We see that it is an integral over torsional space with delta function constraints. These constraints cause an intrinsically non-uniform distribution of every torsional angle, even in the absence of any energy of interaction. It is convenient to transform the last six torsional coordinates to the variables $\rv{N+1}$, $\uv{N+2}$, and $\gamma_{N+2}$ and to integrate over these six coordinates. Then $$\begin{aligned} \label{eqn:6deltapar} Z & = & \mathrm{C}\int\dyv{1}d\uv{2}d\gamma_2 \int\prod_{i=3}^{N-4}d\ph{i}\int d\rv{N+1}d\uv{N+2}d\gamma_{N+2} \nonumber \\ & \times &\sum_{k=1}^{k_\mathrm{s}} \left\{\mathrm{J}_k\left(\frac{\ph{N-3},\ldots,\ph{N+2}} {\rv{N+1},\uv{N+2},\gamma_{N+2}}\right)\boltzm{\mathrm{U}_0(k)}\right\} \delta^3\left(\sum_{j=2}^{N+1}\yv{j}\right) \delta^2(\uv{N+2}-\uv{2})\delta(\gamma_{N+2}-\gamma_2) \nonumber \\ & = & \mathrm{C}\int\dyv{1}d\uv{2}d\gamma_2 \int\prod_{i=3}^{N-4}d\ph{i}\ \sum_{k=1}^{k_\mathrm{s}} \left\{\mathrm{J}_k\left.\left(\frac{\ph{N-3}\ldots\ph{N+2}}{\rv{N+1},\uv{N+2},\gamma_{N+2}} \right)\right|_{\rv{1},\uv{2},\gamma_2} \boltzm{\mathrm{U}_0(k)}\right\}\ .\end{aligned}$$ The index $k$ labels the solutions $\{\ph{N-3},\ldots,\ph{N+2}\}$ that satisfy the fixed-end constraints. The summation accounts for the fact that multiple solutions are possible. In the rebridging scheme, we always relabel $\ph{N-3},\ldots,\ph{N+2}$ as $\ph{1},\ldots,\ph{6}$ and $\rv{1},\uv{2},\mbox{ and }\gamma_2$ as $\rv{5},\uv{6},\mbox{ and }\gamma_6$. From eq. (\[eqn:6deltapar\]) it is clear that each solution must be given a weight, which is the Jacobian.
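In a biased rebridging move, one can then, for example, select among the $k_\mathrm{s}$ solutions with probability proportional to $\mathrm{J}_k\boltzm{\mathrm{U}_0(k)}$, the weight each solution carries in eq. (\[eqn:6deltapar\]). A minimal sketch (function and variable names are hypothetical):

```python
import math
import random

def pick_rebridging_solution(jacobians, energies, beta, rng=random):
    """Pick solution k with probability proportional to J_k * exp(-beta * U0_k),
    the weight each solution carries in the torsional-space partition function."""
    weights = [J * math.exp(-beta * U) for J, U in zip(jacobians, energies)]
    if sum(weights) == 0.0:
        return None  # no viable solution to select
    return rng.choices(range(len(weights)), weights=weights)[0]
```

A solution with vanishing Jacobian or prohibitively high energy is thus never proposed, which is the desired biasing toward geometrically and energetically likely rebridgings.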
The $6\times6$ Jacobian is actually the determinant of a $5\times5$ matrix, since the last torsional angle does not affect $\rv{5}$ or $\uv{6}$. Therefore, $$\frac{\partial \rv{5}}{\partial \ph{6}} = \frac{\partial \uv{6}}{\partial \ph{6}}=0\ .$$ We also know that $$\frac{\partial \gamma_6}{\partial \ph{6}}=1\ .$$ So we obtain $$\begin{aligned} \label{eqn:jac_RBsim} \mathrm{J}\left(\frac{\ph{1},\ph{2},\ph{3},\ph{4},\ph{5},\ph{6}} {\rv{5},\ \uv{6},\ \gamma_6}\right) & = & \frac{\uv{6}\cdot\hat{\mathbf{e}}_{3}}{|\det B|}\nonumber \\ B_{ij} & = & [\uv{j}\times(\rv{5}-\rv{j})]_i \mbox{, if } i\leq3 \nonumber \\ & = & [\uv{j}\times\uv{6}]_{i-3} \mbox{, if } i=4\mbox{ or }5.\end{aligned}$$ Since the Jacobian is independent of $\ph{6}$, we might conjecture that it is also independent of $\ph{1}$. The reason is that the Jacobian should not depend on the direction that we choose for the labeling of the rigid units. Hoffmann and Knapp derived a $4\times 4$ Jacobian depending only on $\ph{2}$, $\ph{3}$, $\ph{4}$, and $\ph{5}$ for case 6 of table \[tab:0pro\] [@Hoffmann]. We will show that, with suitable choice of end-constraint variables, a $4\times 4$ matrix can be derived in all cases. The idea is to choose a set of end coordinates that are almost independent of $\ph{N-3}$. Integrating eq.
(\[eqn:cyclic\]) over $\ph{N+2}$, we obtain $$\begin{aligned} \label{eqn:cyclic5} Z & = \mathrm{C}&\int\dyv{1}d\uv{2}d\gamma_2\int\prod_{i=3}^{N+1}d\ph{i}\; \boltz{\mathrm{U}_0} \delta^3\left(\sum_{j=2}^{N+1}\yv{j}\right) \delta^2(\uv{N+2}-\uv{2})\; .\end{aligned}$$ Let $\Delta\rv{}= \rv{N+1} - \rv{N-3}$ and introduce the following end coordinates $$\begin{aligned} R & = &|\Delta\rv{}| \nonumber \\ \thet{ b}& = & \left|\cos^{-1}\left(\frac{\Delta\rv{}\cdot\uv{N-3}}{R}\right)\right| \nonumber \\ \ph{ b} & = & \mbox{the torsional angle of $\Delta\rv{}$ in local coordinates of unit $N-3$} \nonumber \\ \thet{\mathrm{e}} & = & \left|\cos^{-1}\left(\frac{\Delta\rv{}\cdot\uv{N+2}}{R}\right)\right| \nonumber \\ \ph{\mathrm{e}} & = & \mbox{the torsional angle defined by $\uv{N-3}$, $\Delta\rv{}$, and $\uv{N+2}$}\ .\end{aligned}$$ Note that $R$, $\thet{ b}$, $\thet{\mathrm{e}}$, and $\ph{\mathrm{e}}$ are independent of $\ph{N-3}$ and that $\ph{ b}$ is linear in $\ph{N-3}$. Substituting these coordinates into eq. (\[eqn:cyclic5\]), we obtain $$\begin{aligned} \label{eqn:cyclic5A} Z & = \mathrm{C}&\int\dyv{1}d\uv{2}d\gamma_2\int\prod_{i=3}^{N+1}d\ph{i}\boltz{\mathrm{U}_0} \frac{1}{R^2\sin\thet{ b}\sin\thet{\mathrm{e}}} \nonumber \\ &\times& \delta(R - |\rv{1} - \rv{N-3}|)\delta(\ph{ b}-\ph{ b}^0) \delta(\thet{ b}-\thet{ b}^0) \delta(\thet{\mathrm{e}}-\thet{\mathrm{e}}^0) \delta(\ph{\mathrm{e}}-\ph{\mathrm{e}}^0)\; .\end{aligned}$$ Here $$\begin{aligned} \thet{ b}^0 & = & \left|\cos^{-1}\left( \frac{(\rv{1} - \rv{N-3})\cdot\uv{N-3}}{|\rv{1} - \rv{N-3}|}\right)\right| \nonumber \\ \ph{ b}^0 & = & \mbox{the torsional angle of $\rv{1}-\rv{N-3}$ in local coordinates of unit $N-3$} \nonumber \\ \thet{\mathrm{e}}^0 & = & \left|\cos^{-1}\left(\frac{(\rv{1} - \rv{N-3})\cdot\uv{2}}{|\rv{1} - \rv{N-3}|}\right)\right| \nonumber \\ \ph{\mathrm{e}}^0 & = & \mbox{the torsional angle defined by $\uv{N-3}$, $\rv{1}-\rv{N-3}$, and $\uv{2}$} \ .\end{aligned}$$ Transforming coordinates from $\ph{N-3},\ldots,\ph{N+1}$ to $R$, $\thet{ b}$, $\ph{ b}$, $\thet{\mathrm{e}}$, and $\ph{\mathrm{e}}$, we obtain 
$$\begin{aligned} \label{eqn:5by5jac} Z & = & \mathrm{C}\int\dyv{1}d\uv{2}d\gamma_2 \int\prod_{i=3}^{N-4}d\ph{i} \int dRd\thet{ b}d\ph{ b} d\thet{\mathrm{e}}d\ph{\mathrm{e}}\ \frac{1}{R^2\sin\thet{ b}\sin\thet{\mathrm{e}}} \nonumber \\ & \times & \delta(R - |\rv{1} - \rv{N-3}|) \delta(\thet{ b}-\thet{ b}^0)\delta(\ph{ b}-\ph{ b}^0) \delta(\thet{\mathrm{e}}-\thet{\mathrm{e}}^0)\delta(\ph{\mathrm{e}}-\ph{\mathrm{e}}^0) \nonumber \\ &\times & \sum_{k=1}^{k_\mathrm{s}} \left\{\mathrm{J}_k\left.\left(\frac{\ph{N-3},\ldots,\ph{N+1}} {R,\ \thet{ b},\ \ph{ b},\ \thet{\mathrm{e}},\ \ph{\mathrm{e}}} \right)\right|_{|\rv{1} - \rv{N-3}|,\thet{ b}^0,\ph{ b}^0, \thet{\mathrm{e}}^0,\ph{\mathrm{e}}^0}\boltzm{\mathrm{U}_0(k)}\right\} \nonumber \\ & = & \mathrm{C}\int\dyv{1}d\uv{2}d\gamma_2 \int\prod_{i=3}^{N-4}d\ph{i}\; \frac{1}{R^2\sin\thet{ b}\sin\thet{\mathrm{e}}}\nonumber \\ & \times & \sum_{k=1}^{k_\mathrm{s}} \left\{\mathrm{J}_k\left.\left(\frac{\ph{N-3},\ldots,\ph{N+1}} {R,\ \thet{ b},\ \ph{ b},\ \thet{\mathrm{e}},\ \ph{\mathrm{e}}} \right)\right|_{|\rv{1} - \rv{N-3}|,\thet{ b}^0,\ph{ b}^0,\thet{\mathrm{e}}^0,\ph{\mathrm{e}}^0} \boltzm{\mathrm{U}_0(k)}\right\}\; .\end{aligned}$$ The Jacobian can be rewritten as $$\mathrm{J} = \frac{1}{R^2\sin\thet{ b}\sin\thet{\mathrm{e}} |\det(\mathrm{B''})|}\; ,$$ where $$\begin{aligned} \label{eqn:jacb''} \mathrm{B''}_{1j}& =& \frac{\partial R}{\partial \ph{N-4+j}} \, , \mathrm{B''}_{2j} = \frac{\partial \thet{ b}}{\partial \ph{N-4+j}} \, , \mathrm{B''}_{3j} = \frac{\partial \ph{ b}}{\partial \ph{N-4+j}}\, , \nonumber \\ \mathrm{B''}_{4j}& =& \frac{\partial \thet{\mathrm{e}}}{\partial \ph{N-4+j}} \, , \mathrm{B''}_{5j} = \frac{\partial \ph{\mathrm{e}}}{\partial \ph{N-4+j}} \;\;\mathrm{, } \;\;j = 1,\ldots, 5\; .\end{aligned}$$ The first column of $\mathrm{B''}$ has only one non-zero element, which is $\mathrm{B''}_{31}=1$. 
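The structure of $\mathrm{B''}$ can be checked numerically. In the sketch below the chain geometry is replaced by a hypothetical smooth map from five torsional angles to five end coordinates, built so that the first four coordinates do not depend on the first angle and the third is linear in it (the same pattern as $\mathrm{B''}$); the $5\times5$ determinant then equals the $4\times4$ minor obtained by deleting that row and column:

```python
import math

def end_coords(phi):
    """Toy stand-in for (R, theta_b, phi_b, theta_e, phi_e); NOT the real
    chain geometry.  Coordinates 1, 2, 4, 5 ignore p1; coordinate 3 is
    linear in p1, mirroring B''_{31} = 1."""
    p1, p2, p3, p4, p5 = phi
    return [
        2.0 + math.cos(p2) + 0.5 * math.sin(p3),
        1.0 + 0.3 * math.sin(p2 + p4),
        p1 + 0.1 * math.sin(p4),
        0.8 + 0.2 * math.cos(p3 - p5),
        0.5 + 0.4 * math.sin(p4 * p5),
    ]

def jacobian(f, x, h=1e-6):
    """Central finite-difference Jacobian J[i][j] = d f_i / d x_j."""
    n = len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(n):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * h)
    return J

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-14:
            return 0.0
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            fct = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= fct * M[c][k]
    return d

phi0 = [0.3, 1.1, -0.4, 0.7, 2.0]
J = jacobian(end_coords, phi0)
# Delete row 3 and column 1: the cofactor expansion along the first
# column leaves only this 4x4 minor.
minor = [[J[i][j] for j in range(1, 5)] for i in (0, 1, 3, 4)]
print(det(J), det(minor))
```

With the single unit entry in the first column, expanding the determinant along that column reduces the $5\times5$ problem to the $4\times4$ minor, which is the reduction used in the text.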
Taking the cofactor of $\mathrm{B''}_{31}$, the determinant can be replaced with that of a $4\times 4$ matrix $$\begin{aligned} \label{eqn:jacb'} \mathrm{B'}_{1j}& =& \frac{\partial R}{\partial \ph{N-3+j}} \, , \mathrm{B'}_{2j} = \frac{\partial \thet{ b}}{\partial \ph{N-3+j}} \, , \mathrm{B'}_{3j} = \frac{\partial \thet{\mathrm{e}}}{\partial \ph{N-3+j}} \, , \nonumber \\ \mathrm{B'}_{4j} &= &\frac{\partial \ph{\mathrm{e}}}{\partial \ph{N-3+j}} \;\;\mathrm{, } \;\;j = 1,\ldots, 4\; . \end{aligned}$$ It is easy to extend our approach to include side chains and constrained torsional angles. Following an approach parallel to eqs. (\[eqn:fixend\])–(\[eqn:car\_to\_int\]), we obtain an integral with additional degrees of freedom contributed by side chains. These degrees of freedom are not constrained, and they can be integrated out first. Therefore, we can simply replace $\mathrm{U}$ with an influence functional. The final form of the Jacobian is unaffected. For peptides, rotation about the C-N bond in the amide group is governed by a large force constant. In our simulation, we constrain these torsional degrees of freedom as well. Each constrained bond adds a delta function to eq. (\[eqn:cyclic\]). Let $A$ be the set of torsional angles that are constrained. Then $$\begin{aligned} \label{eqn:cyclic_pi} Z & = \mathrm{C}&\int\dyv{1}d\uv{2}d\gamma_2\int\prod_{i=3}^{N+2}d\ph{i}\ \boltz{\mathrm{U}_0} \delta^3\left(\sum_{j=2}^{N+1}\yv{j}\right) \delta^2(\uv{N+2}-\uv{2})\delta(\gamma_{N+2}-\gamma_2) \nonumber \\ & \times&\prod_{k\in A}\delta(\ph{k}-\ph{k}^0)\; .\end{aligned}$$ Let $G(l,N+2)$ denote the set of the last $l$ flexible torsional angles, counting backwards from $\ph{N+2}$. 
Integrating out the other degrees of freedom, we obtain $$\begin{aligned} \label{eqn:6deltapar_pi} Z & = & \mathrm{C}\int\dyv{1}d\uv{2}d\gamma_2 \int\left(\prod_{i\notin \left[A\cup G(6,N+2)\right]}d\ph{i}\right) \nonumber \\ &\times & \sum_{k=1}^{k_{s}} \left\{\mathrm{J}_k\left.\left(\frac{G(6,N+2)}{\rv{N+1},\uv{N+2},\gamma_{N+2}} \right)\right|_{\rv{1},\uv{2},\gamma_2} \boltzm{\mathrm{U}_0(k)}\right\}\; .\end{aligned}$$ If $\ph{N+2}$ is constrained, units $N+1$ and $N+2$ together define a single rigid unit. The corresponding fixed-end coordinates in our algorithm are chosen to be $\rv{N}$, $\uv{N+1}$, and $\gamma_{N+1}$, instead of $\rv{N+1}$, $\uv{N+2}$, and $\gamma_{N+2}$. This apparent difference causes no ambiguity, since both sets define the same rigid unit. The Jacobian between these two sets is unity. Relabeling the torsional angles in $G(6,\ N+2)$ by $\ph{1},\ldots,\ph{6}$, we recover the Jacobian in eq. (\[eqn:jac\_RB\]). The $4\times4$ Jacobian in this case can be derived analogously. The final result, which is numerically equal to eq. (\[eqn:jac\_RB\]), is $$\begin{aligned} \label{eqn:jac_4x4} \mathrm{J} & = & \frac{1}{R^2\sin\thet{ b}\sin\thet{\mathrm{e}} |\det(\mathrm{B'})|}\; . 
\nonumber \end{aligned}$$ The components of $\mathrm{B'}$ are given below: $$\begin{aligned} \mathrm{B'}_{1j} & = &\frac{\partial R}{\partial \ph{j}} = \frac{1}{R}\frac{\partial\Delta\rv{}}{\partial \ph{j}}\cdot\Delta\rv{} \nonumber \\ \mathrm{B'}_{2j} & = &\frac{\partial\thet{ b}}{\partial\ph{j}} =\frac{-1}{R\sin\thet{ b}} \left[-\frac{1}{R}\mathrm{B'}_{1j}\Delta\rv{}\cdot\uv{1}+ \frac{\partial \Delta\rv{}}{\partial \ph{j}}\cdot\uv{1}\right] \nonumber \\ \mathrm{B'}_{3j} & = &\frac{\partial\thet{\mathrm{e}}}{\partial\ph{j}} = \frac{-1}{R\sin\thet{\mathrm{e}}}\left[\frac{-1}{R}\mathrm{B'}_{1j}\Delta\rv{}\cdot\uv{6} +\frac{\partial\Delta\rv{}}{\partial\ph{j}}\cdot\uv{6} +\Delta\rv{}\cdot(\uv{j}\times\uv{6})\right] \nonumber \\ \mathrm{B'}_{4j} & = & \frac{\partial\ph{\mathrm{e}}}{\partial\ph{j}} \nonumber \\ &= &\frac{-1}{R^2\sin\ph{\mathrm{e}}\sin\thet{ b}\sin\thet{\mathrm{e}}} \left\{\frac{(\uv{1}\times\Delta\rv{})\cdot(\Delta\rv{}\times\uv{6})} {R^2\sin\ph{\mathrm{e}}\sin\thet{ b}\sin\thet{\mathrm{e}}}\right. \nonumber \\ & & \times\left(2\frac{\partial \Delta\rv{}}{\partial \ph{j}}\cdot\Delta\rv{} \sin\thet{ b}\sin\thet{\mathrm{e}} +R^2\cos\thet{ b}\sin\thet{\mathrm{e}}\mathrm{B'}_{2j} +R^2\sin\thet{ b}\cos\thet{\mathrm{e}}\mathrm{B'}_{3j}\right) \nonumber \\ & &\left. +\left[(\uv{1}\times\frac{\partial \Delta\rv{}}{\partial\ph{j}}) \cdot(\Delta\rv{}\times\uv{6}) +(\uv{1}\times\Delta\rv{}) \cdot\left(\frac{\partial\Delta\rv{}}{\partial \ph{j}}\times\uv{6} +\Delta\rv{}\times(\uv{j}\times\uv{6})\right)\right] \right\}\; , \nonumber \\\end{aligned}$$ where $$\begin{aligned} \frac{\partial\Delta\rv{}}{\partial \ph{j}}&=& \uv{j}\times(\rv{5\mathrm{t}}-\rv{j\mathrm{h}})\; . \nonumber\end{aligned}$$ The quantities needed to calculate $\mathrm{B'}$ are $R$, $\Delta\rv{}$, $\uv{1}$, $\uv{6}$, and $\sin\ph{\mathrm{e}}$. Detailed Balance for LA {#AppB} ======================= In this appendix we prove that the LA method satisfies detailed balance. 
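The cancellation at the heart of the argument can be previewed numerically. The sketch below verifies the configurational-bias flow identity for a single selection step with a toy quadratic energy and made-up trial positions; the trial-set generation probabilities, which cancel in the ratio, are omitted. This is an illustration of the identity being proved, not the LA algorithm itself:

```python
import math

def boltz(u, beta=1.0):
    return math.exp(-beta * u)

def u(x):
    return x * x                     # toy external energy (assumption)

# Fixed trial sets for the forward and reverse moves; the chosen new
# position x_new belongs to trials_new, the old position x_old to
# trials_old (super detailed balance: both sets are generated, but
# their generation probabilities cancel in the flow ratio).
x_old, x_new = 0.8, 0.3
trials_new = [x_new, -0.5, 1.2, 0.1]
trials_old = [x_old, 0.9, -1.1, 0.4]

W_new = sum(boltz(u(t)) for t in trials_new)    # new Rosenbluth factor
W_old = sum(boltz(u(t)) for t in trials_old)    # old Rosenbluth factor

# Selection probability of the chosen twig and the acceptance rule
alpha_fwd = boltz(u(x_new)) / W_new
alpha_rev = boltz(u(x_old)) / W_old
acc_fwd = min(1.0, W_new / W_old)
acc_rev = min(1.0, W_old / W_new)

flow_ratio = (alpha_fwd * acc_fwd) / (alpha_rev * acc_rev)
print(flow_ratio, boltz(u(x_new)) / boltz(u(x_old)))
```

The two printed numbers coincide: the Rosenbluth factors drop out of the product of selection and acceptance probabilities, leaving the Boltzmann ratio required by detailed balance.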
The proof for LARC can be done analogously and is not presented here. In our algorithm, the old Rosenbluth factor ${\mathrm W^\Ost}$ is not evaluated until all units have been given new positions. In fact, ${\mathrm W^\Ost}$ can be calculated at any time. In the proof, we calculate the partial old Rosenbluth factor $\mathrm{w}_i^\Ost$ of unit $i$ once a new proposed move for unit $i$ is made. We first derive the probability for proposing a forward move of the first unit. By analogy, we derive the probability for proposing a reverse move of the first unit. Since we generate both the old Rosenbluth factor and the new Rosenbluth factor in a random way, their probabilities should be included. This is the so-called super detailed balance condition [@Frenkel]. We will show that LA satisfies super detailed balance. Let $\alpha_{1}\left(\mathrm{o\rightarrow n};\; \{\ph{1}^\Nst(\alpha)\}, \{\ph{2}^\Nst(\alpha,\ \gamma)\}, \{\ph{1}^\Ost(\alpha')\},\{\ph{2}^\Ost(\alpha',\ \gamma')\}\right)$ be the probability of proposing a move from $\ph{1}^\Ost(1)$ to $\ph{1}^\Nst(1)$, given $\{\ph{1}^\Nst(\alpha)\}, \{\ph{2}^\Nst(\alpha,\ \gamma)\}, \{\ph{1}^\Ost(\alpha')\}$, and $\{\ph{2}^\Ost(\alpha',\ \gamma')\}$. Consider the following three events: 1. Generating $n_1 n_2$ new twigs, which has the probability $$\prod_{\alpha=1}^{n_1}p_1^\Nst(\alpha)\prod_{\gamma=1}^{n_2}p_2^\Nst(\alpha,\ \gamma)\ .$$ 2. Picking a new twig, which has the probability $q_1^\Nst(1)$. 3. Generating $n_1 n_2$ old twigs, which has the probability $$\prod_{\gamma'=1}^{n_2}p_2^\Ost(1,\ \gamma') \prod_{\alpha'=2}^{n_1}\left(p_1^\Ost(\alpha') \prod_{\gamma'=1}^{n_2}p_2^\Ost(\alpha',\ \gamma')\right)\ .$$ The probability of the whole event, $\alpha_{1}\left(\mathrm{o\rightarrow n};\; \{\ph{1}^\Nst(\alpha)\}, \{\ph{2}^\Nst(\alpha,\ \gamma)\}, \{\ph{1}^\Ost(\alpha')\},\{\ph{2}^\Ost(\alpha',\ \gamma')\}\right)$, is the product of these three probabilities. 
Multiplying the three terms together, we obtain $$\begin{aligned} \label{eqn:dbal_on} \prod_{\alpha=1}^{n_1}\left(p_1^\Nst(\alpha) \prod_{\gamma=1}^{n_2}p_2^\Nst(\alpha,\ \gamma)\right)\times q_1^\Nst(1)\times\prod_{\gamma'=1}^{n_2}p_2^\Ost(1,\ \gamma') \prod_{\alpha'=2}^{n_1}\left(p_1^\Ost(\alpha') \prod_{\gamma'=1}^{n_2}p_2^\Ost(\alpha',\ \gamma')\right)\; .\end{aligned}$$ Similarly, the probability of proposing the reverse move,$\alpha_1\left(\mathrm{n\rightarrow o};\; \{\ph{1}^\Ost(\alpha')\}, \{\ph{2}^\Ost(\alpha',\ \gamma')\}, \{\ph{1}^\Nst(\alpha)\},\{\ph{2}^\Nst(\alpha,\ \gamma)\}\right)$, is $$\begin{aligned} \label{eqn:dbal_no} \prod_{\alpha'=1}^{n_1}\left(p_1^\Ost(\alpha') \prod_{\gamma'=1}^{n_2}p_2^\Ost(\alpha',\ \gamma')\right)\times q_1^\Ost(1)\times\prod_{\gamma=1}^{n_2}p_2^\Nst(1,\ \gamma) \prod_{\alpha=2}^{n_1}\left(p_1^\Nst(\alpha) \prod_{\gamma=1}^{n_2}p_2^\Nst(\alpha,\ \gamma)\right)\ .\end{aligned}$$ We define $\mathrm{{U'}_1^{ext}}$ as the external energy for unit 1 in the old configuration and $\mathrm{U_1^{ext}}$ as the external energy for unit 1 in the new configuration. Taking the ratio of eq. (\[eqn:dbal\_on\]) and eq. (\[eqn:dbal\_no\]), most of the probabilities for generating the twigs cancel. Replacing $q_1^\Nst(1)$ and $q_1^\Ost(1)$ with eq. 
(\[eqn:pick\_LA\]), we obtain $$\begin{aligned} \label{eqn:dtbalance_1} \frac{ \alpha_1(\mathrm{o\rightarrow n};\; \{\ph{1}^\Nst(\alpha)\}, \{\ph{2}^\Nst(\alpha,\ \gamma)\}, \{\ph{1}^\Ost(\alpha')\},\{\ph{2}^\Ost(\alpha',\ \gamma')\})} {\alpha_1(\mathrm{n\rightarrow o};\; \{\ph{1}^\Ost(\alpha')\}, \{\ph{2}^\Ost(\alpha',\ \gamma')\}, \{\ph{1}^\Nst(\alpha)\}, \{\ph{2}^\Nst(\alpha,\ \gamma)\})} \nonumber \\ =\frac{p_1^\Nst(1)}{p_1^\Ost(1)} \frac{\boltzm{\mathrm{U_1^{ext,n}}(1)}}{\boltzm{\mathrm{{U}_1^{ext,o}}(1)}} \mathrm{\frac{w_2^\Nst(1)}{w_2^\Ost(1)}} \mathrm{\frac{w_1^\Ost}{w_1^\Nst}} \nonumber \\ = \frac{\boltz{\mathrm{U}_1^\Nst}}{\boltz{\mathrm{U}_1^\Ost}} \mathrm{\frac{w_2^\Nst(1)}{w_2^\Ost(1)} \frac{w_1^\Ost}{w_1^\Nst}}\; ,\end{aligned}$$ where we have used eq. (\[eqn:p\_int\_1\]) to obtain the last line. Similarly, we can obtain the ratio of probabilities for subsequent units. The ratio of the transition probabilities is the product of these ratios and the ratios of the acceptance probabilities. Multiplying eq. (\[eqn:dtbalance\_1\]) for each unit and using eqs. 
(\[eqn:acc\_LA\]) and (\[eqn:globalW\]), we find that super detailed balance is satisfied: $$\begin{aligned} \frac{\alpha(\mathrm{o\rightarrow n})\acc(\mathrm{o\rightarrow n})} {\alpha(\mathrm{n\rightarrow o})\acc(\mathrm{n\rightarrow o})} &= &\frac{\prod_{i=1}^{N}\boltz{\mathrm{U}_i^\Nst}} {\prod_{i'=1}^{N}\boltz{\mathrm{U}_{i'}^\Ost}}\times \frac{\prod_{j=2}^N\mathrm{w}_j^\Nst(1)} {\prod_{j'=2}^N\mathrm{w}_{j'}^\Ost(1)} \frac{\prod_{k=1}^{N}\mathrm{w}_k^\Ost} {\prod_{k'=1}^{N}\mathrm{w}_{k'}^\Nst} \frac{\mathrm{W}^\Nst}{\mathrm{W}^\Ost} \nonumber \\ &= & \frac{\boltz{\mathrm{U}^\Nst}}{\boltz{\mathrm{U}^\Ost}}\ .\end{aligned}$$

  Case   Units(1-5)   Geometrical constraint                                                                                                                                                               Classification   Solves
  ------ ------------ ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------- ---------------------
  1      AABAB        $|\rv{4}(\ph{6}) - \rv{2}(\ph{1})|^2 - {l_{2,4}}^2 = 0$                                                                                                                              dist             $\ph{6}=f(\ph{1})$
                      $\uv{3}(\ph{1},\ph{2})\cdot\uv{6}- \cos\thet{4} = 0$                                                                                                                                 dot1             $\ph{2}=f(\ph{1})$
                      $\left|\rv{4}\left[\ph{6}(\ph{1})\right] - \rv{3\mathrm{h}}\left[\ph{2}(\ph{1})\right]\right|^2- {l_{3\mathrm{h},4}}^2$                                                              target           $0$
  2      AAAAB        $|\rv{4}(\ph{6}) - \rv{2}(\ph{1})|^2 - {l_{2,4}}^2 = 0$                                                                                                                              dist             $\ph{6}=f(\ph{1})$
                      $\left[\rv{5\mathrm{t}}-\rv{3}(\ph{1},\ph{2})\right]\cdot\uv{6} -l_{3,4}\cos\thet{4}-l_{4,5\mathrm{h}}- l_{5\mathrm{h},5\mathrm{t}}\cos\thet{5\mathrm{t}}= 0$                        dot              $\ph{2}=f(\ph{1})$
                      $\left|\rv{4}\left[\ph{6}(\ph{1})\right] - \rv{3}\left[\ph{2}(\ph{1})\right]\right|^2- {l_{3,4}}^2=0$                                                                                target           $0$
  3      BABAB        $|\rv{4}(\ph{6}) - \rv{2}(\ph{1})|^2 - {l_{2,4}}^2 = 0$                                                                                                                              dist             $\ph{6}=f(\ph{1})$
                      $\uv{3}(\ph{2})\cdot\uv{6}-\cos\thet{4} = 0$                                                                                                                                         quad             
                      $\left|\rv{4}\left[\ph{6}(\ph{1})\right] - \rv{3\mathrm{h}}(\ph{1},\ph{2})\right|^2- {l_{3\mathrm{h},4}}^2 = 0$                                                                      target           $0$
  4      BABAA        $|\rv{4}(\ph{6}) - \rv{2}(\ph{1})|^2 - {l_{2,4}}^2 = 0$                                                                                                                              dist             $\ph{1}=f(\ph{6})$
                      $\uv{4}(\ph{6},\ph{5})\cdot\uv{1} - \cos\thet{2} = 0$                                                                                                                                dot1             $\ph{5}=f(\ph{6})$
                      $\left|\rv{2}\left[\ph{1}(\ph{6})\right] - \rv{3\mathrm{t}}\left[\ph{5}(\ph{6})\right]\right|^2- {l_{2,3\mathrm{t}}}^2 = 0$                                                          target           $0$
  5      BAAAA        $|\rv{4}(\ph{6}) - \rv{2}(\ph{1})|^2 - {l_{2,4}}^2 = 0$                                                                                                                              dist             $\ph{1}=f(\ph{6})$
                      $\left[\rv{1\mathrm{h}} - \rv{3}(\ph{6},\ph{5})\right]\cdot\uv{1} + l_{2,3}\cos\thet{2}+l_{1\mathrm{h},1\mathrm{t}}\cos\thet{1\mathrm{h}} +l_{1\mathrm{t},2\mathrm{h}}= 0$          dot              $\ph{5}=f(\ph{6})$
                      $\left|\rv{2}\left[\ph{1}(\ph{6})\right] - \rv{3}\left[\ph{5}(\ph{6})\right]\right|^2- {l_{2,3}}^2 = 0$                                                                              target           $0$
  6      AXAXA        $|\rv{3}(\ph{3}') - \rv{2\mathrm{h}}(\ph{1})|^2 - {l_{2\mathrm{h},3}}^2=0$                                                                                                           dist             $\ph{1}=f(\ph{3}')$
                      $|\rv{3}(\ph{3}') - \rv{4\mathrm{t}}(\ph{6})|^2 - {l_{3,4\mathrm{t}}}^2=0$                                                                                                           dist             $\ph{6}=f(\ph{3}')$
                      $\left|\rv{2\mathrm{t}}\left[\ph{1}(\ph{3}'),\ph{3}'\right] - \rv{4\mathrm{h}}\left[\ph{6}(\ph{3}'), \ph{3}'\right]\right|^2 - {l_{2\mathrm{t},4\mathrm{h}}}^2 = 0$                  target           $0$, $f(\ph{3}')$

  : Constraint equations and target functions. In case 6, X can stand for either A or B, and $\ph{3}'$ is defined by eqs. (\[eqn:locnew\]) and (\[eqn:locmtxn\]).\[tab:0pro\]

  Method            $P_\acc$   $P_{\mathrm{cross}}$   Number of steps   CPU time (hrs)
  -------- -------- ---------- ---------------------- ----------------- ----------------
  NJ       6 662    0.162      0.000303               $4\times10^5$     32
  WJ       7 291    0.172      0.000414               $4\times10^5$     32
  WJO      10 006   0.177      0.000167               $4\times10^5$     34
  WJM      8 525    0.111      0.000468               $2\times10^5$     40
  MT       2 699    0.051      0.000222               $4\times10^5$     30

  : Comparison of simulation results with different rebridging methods at 298 K. For all simulations $\Delta\ph{\max}=10^\circ$, except for WJM, in which $\Delta\ph{\max}=30^\circ$. \[tab:comp4\]

                                    $P_\acc$
  --------------------------------- ----------
  298 K $\leftrightarrow$ 500 K     0.147
  500 K $\leftrightarrow$ 1000 K    0.113
  1000 K $\leftrightarrow$ 3000 K   0.136

  : Acceptance probability observed for swapping moves in the parallel tempering simulation. 
\[tab:acc\_para\]

  Method   $n_1$   $n_2$   Residue         $\Delta\ph{}$/CPU (deg$\cdot$min$^{-1}$)   CPU (min)
  -------- ------- ------- --------- ----- ------------------------------------------ -----------
  SLA      1       0       Trp       142   0.0317                                     2940
                           Lys       82    0.00229
                           Arg       90    0.00150
  SLA      10      0       Trp       158   0.0846                                     1620
                           Lys       575   0.187
                           Arg       152   0.0325
  SLA      30      0       Trp       315   0.313                                      3910
                           Lys       526   0.214
                           Arg       290   0.131
  SLA      100     0       Trp       153   0.373                                      3220
                           Lys       445   0.304
                           Arg       130   0.0821
  LA       5       5       Trp       131   0.0660                                     2040
                           Lys       271   0.0947
                           Arg       157   0.0529
  LA       10      5       Trp       147   0.0686                                     2260
                           Lys       375   0.169
                           Arg       205   0.0792
  LA       10      10      Trp       120   0.0841                                     2800
                           Lys       397   0.220
                           Arg       225   0.118
  LA       20      10      Trp       28    0.120                                      2500
                           Lys       355   0.348
                           Arg       55    0.0811
  LARC     5       5       Trp       213   0.116                                      1690
                           Lys       673   0.221
                           Arg       327   0.100
  LARC     10      10      Trp       211   0.206                                      2560
                           Lys       749   0.421
                           Arg       248   0.146
  LARC     15      15      Trp       151   0.334                                      4250
                           Lys       548   0.497
                           Arg       346   0.357
  LARC     20      20      Trp       104   0.415                                      6420
                           Lys       394   0.571
                           Arg       212   0.350

  : Simulation data for tryptophan, lysine, and arginine residues of the CNWKRGDC peptide. \[tab:residues\]

Figure \[fig:CNWKRGDC\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:clsunit\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:babaa\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:ababa\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:2-2var\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:sidechai\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:lookahead\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:CSSC\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:e\_histo\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:csscpara\]: Wu and Deem, ‘Efficient Monte Carlo…’\
Figure \[fig:old\_and\_new\]: Wu and Deem, ‘Efficient Monte Carlo…’
--- abstract: | The Frame Problem (FP) is a puzzle in philosophy of mind and epistemology, articulated by the Stanford Encyclopedia of Philosophy as follows: *“How do we account for our apparent ability to make decisions on the basis only of what is relevant to an ongoing situation without having explicitly to consider all that is not relevant?"* In this work, we focus on the *causal* variant of the FP, the Causal Frame Problem (CFP). Assuming that a reasoner’s mental causal model can be (implicitly) represented by a causal Bayes net, we first introduce a notion called Potential Level (PL). PL, in essence, encodes the relative position of a node with respect to its neighbors in a causal Bayes net. Drawing on the psychological literature on causal judgment, we substantiate the claim that PL may bear on how *time* is encoded in the mind. Using PL, we propose an inference framework, called the PL-based Inference Framework (PLIF), which permits a boundedly-rational approach to the CFP to be formally articulated at Marr’s algorithmic level of analysis. We show that our proposed framework, PLIF, is consistent with a wide range of findings in causal judgment literature, and that PL and PLIF make a number of predictions, some of which are already supported by existing findings. **Keywords:** Causal Frame Problem; Time and Causality; Bounded Rationality; Algorithmic Level Analysis author: - | [**Ardavan S. Nobandegani$^{1,2}$ Ioannis N. 
Psaromiligkos$^{1}$**]{}\ {ardavan.salehinobandegani@mail.mcgill.ca, yannis@ece.mcgill.ca}\ $^{1}$Department of Electrical & Computer Engineering, McGill University\ $^{2}$Department of Psychology, McGill University bibliography: - 'ref.bib' nocite: - '[@mahoney1998constructing]' - '[@fodor1987modules]' - '[@icard2015]' - '[@gopnik2004theory]' - '[@shachter1988probabilistic]' - '[@baddeley2003working]' - '[@ericsson1995long]' - '[@pearl2014probabilistic]' - '[@geiger1989d]' - '[@simon1957models]' - '[@marr1982vision]' - '[@glymour1987android]' - '[@baddeley1974working]' - '[@Rehder2015]' title: 'The Causal Frame Problem: An Algorithmic Perspective' --- Introduction ============ At the core of any decision-making or reasoning task, resides an innocent-looking yet challenging question: Given an inconceivably large body of knowledge available to the reasoner, what constitutes the relevant for the task and what the irrelevant? The question, as it is posed, echoes the well-known Frame Problem (FP) in epistemology and philosophy of mind, articulated by Glymour (1987) as follows: *“Given an enormous amount of stuff, and some task to be done using some of the stuff, what is the relevant stuff for the task?"* Fodor (1987) comments: *“The frame problem goes very deep; it goes as deep as the analysis of rationality."* The question posed above perfectly captures what is really at the core of the FP, yet, it may suggest an unsatisfying approach to the FP at the algorithmic level of analysis (Marr, 1982). Indeed, the question may suggest the following two-step methodology: In the first step, out of all the body of knowledge available to the reasoner (termed, the model), she has to identify what is relevant to the task (termed, the relevant submodel); it is *only then* that she advances to the second step by performing *reasoning* or *inference* on the identified submodel. 
There is something fundamentally wrong with this methodology (which we term the *sequential* approach to reasoning), which bears on the following understanding: The relevant submodel, i.e., the portion of the reasoner’s knowledge deemed relevant to the task, oftentimes is so enormous (or even infinitely large) that the reasoner—inevitably bounded in time and computational resources—would never get to the second step, had she adhered to such a methodology. In other words, in line with the notion of bounded rationality (Simon, 1957), a boundedly-rational reasoner must have the option, if need be, to merely consult a fraction of the potentially large—if not infinitely so—relevant submodel. Recent work by Icard and Goodman (2015) elegantly promotes this insight when they write: *“Somehow the mind must focus in on some “submodel" of the “full“ model (including all possibly relevant variables) that suffices for the task at hand and is not too costly to use.”*[^1] They then ask the following question: *“what kind of simpler model should a reasoner consult for a given task?"* This is an inspiring question hinting at an interesting line of inquiry as to how to formally articulate a boundedly-rational approach to the FP at Marr’s algorithmic level of analysis (1982). In this work, we focus on the causal variant of the FP, the Causal Frame Problem (CFP), stated as follows: Upon being presented with a causal query, how does the reasoner manage to attend to her causal knowledge relevant to the derivation of the query while rightfully dismissing the irrelevant? We adopt Causal Bayesian Networks (CBNs) (Pearl, 1988; Gopnik et al., 2004, *inter alia*) as a normative model to represent how the reasoner’s *internal* causal model of the world is structured (i.e., reasoner’s mental model). First, we introduce the notion of Potential Level (PL). PL, in essence, encodes the relative position of a node (representing a propositional variable or a concept) with respect to its neighbors in a CBN. 
Drawing on the psychological literature on causal judgment, we substantiate the claim that PL may bear on how *time* is encoded in the mind. Equipped with PL, we embark on investigating the CFP at Marr’s algorithmic level of analysis. We propose an inference framework, termed PL-based Inference Framework (PLIF), which aims at empowering the boundedly-rational reasoner to consult (or retrieve[^2]) parts of the underlying CBN deemed relevant for the derivation of the posed query (the relevant submodel) in a *local*, *bottom-up* fashion until the submodel is fully retrieved. PLIF allows the reasoner to carry out inference at *intermediate* stages of the retrieval process over the thus-far retrieved parts, thereby obtaining lower and upper bounds on the posed causal query. We show, in the Discussion section, that our proposed framework, PLIF, is consistent with a wide range of findings in causal judgment literature, and that PL and PLIF make a number of predictions, some of which are already supported by the findings in the psychology literature. In their work, Icard and Goodman (2015) articulate a boundedly-rational approach to the CFP at Marr’s computational level of analysis, which, as they point out, is from a “god’s eye" point of view. In sharp contrast, our proposed framework PLIF is *not* from a “god’s eye" point of view and hence could be regarded, potentially, as a psychologically plausible proposal at Marr’s algorithmic level of analysis as to how the mind both retrieves and, at the same time, carries out inference over the retrieved submodel to derive bounds on a causal query. We term this *concurrent* approach to reasoning, as opposed to the flawed sequential approach stated earlier.[^3] The retrieval process progresses in a local, bottom-up fashion, hence the submodel is retrieved *incrementally*, in a *nested* manner.[^4] Our analysis (Sec. 
\[sec\_case\_study\]) confirms Icard and Goodman’s insight (2015) that even in the extreme case of having an infinitely large relevant submodel, the portion of which the reasoner has to consult so as to obtain a “sufficiently good" answer to a query could indeed be very small. Potential Level and Time {#sec:PL_vs_time} ======================== Before proceeding further, let us introduce some preliminary notations. Random Variables (RVs) are denoted by lower-case bold-faced letters, e.g., ${\textbf{x}}$, and their realizations by non-bold lower-case letters, e.g., $x$. Likewise, sets of RVs are denoted by upper-case bold-faced letters, e.g., ${\textbf{X}}$, and their corresponding realizations by upper-case non-bold letters, e.g., $X$. $\textit{Val}(\cdot)$ denotes the set of possible values a random quantity can take on. Random quantities are assumed to be discrete unless stated otherwise. The joint probability distribution over ${\textbf{x}}_1,\cdots,{\textbf{x}}_n$ is denoted by ${\mathbb P}({\textbf{x}}_1,\cdots,{\textbf{x}}_n)$. We will use the notation ${\textbf{x}}_{1:n}$ to denote the sequence of $n$ RVs ${\textbf{x}}_1,\cdots,{\textbf{x}}_n$, hence ${\mathbb P}({\textbf{x}}_1,\cdots,{\textbf{x}}_n)={\mathbb P}({\textbf{x}}_{1:n})$. The terms “node" and “variable" will be used interchangeably throughout. To simplify presentation, we adopt the following notation: We denote the probability ${\mathbb P}({\textbf{x}}=x)$ by ${\mathbb P}(x)$ for some RV ${\textbf{x}}$ and its realization $x\in \textit{Val}({\textbf{x}})$. For conditional probabilities, we will use the notation ${\mathbb P}(x|y)$ instead of ${\mathbb P}({\textbf{x}}={x}|{\textbf{y}}={y})$. Likewise, ${\mathbb P}(X|Y)={\mathbb P}({\textbf{X}}= X|{\textbf{Y}}= Y)$ for $X \in {\textit Val}({\textbf{X}})$ and $Y \in {\textit Val}({\textbf{Y}})$. 
A generic conditional independence relationship is denoted by $({\textbf{A}} {\perp\hspace*{-5pt}\perp}{\textbf{B}}|{\textbf{C}})$ where ${\textbf{A}}, {\textbf{B}}$, and ${\textbf{C}}$ represent three mutually disjoint sets of variables belonging to a CBN. Furthermore, throughout the paper, we assume that $\epsilon$ is some negligibly small positive real-valued quantity. Whenever we subtract $\epsilon$ from a quantity, we simply imply a quantity less than but arbitrarily close to the original quantity. The rationale behind adopting such a notation will become clearer in Sec. \[sec\_PLIF\_main\]. Before formally introducing the notion of PL (unavoidably, with some mathematical jargon), we articulate in simple terms what the idea behind PL is. PL simply induces a *chronological order* on the nodes of a CBN, allowing the reasoner to encode the timing between cause and effect.[^5] As we will see, PL plays an important role in guiding the retrieval process used in our proposed framework. Next, PL is formally defined, followed by two clarifying examples. **Def. 1. (Potential Level (PL))** Let $par({\textbf{x}})$ and $child({\textbf{x}})$ denote, respectively, the sets of parents (i.e., immediate causes) and children (i.e., immediate effects) of ${\textbf{x}}$. Also let $T_0\in\mathbb{R}\cup\{-\infty\}$. 
The PL of ${\textbf{x}}$, denoted by $p_l({\textbf{x}})$, is defined as follows: (i) If $par({\textbf{x}})=\varnothing$, $p_l({\textbf{x}})=T_0$, and (ii) If $par({\textbf{x}})\neq\varnothing$, $p_l({\textbf{x}})$ is a real-valued quantity selected from the interval $(\max_{{\textbf{y}}\in par({\textbf{x}})}p_l({\textbf{y}}),\min_{{\textbf{z}}\in child({\textbf{x}})}p_l({\textbf{z}}))$ such that $p_l({\textbf{x}})-\max_{{\textbf{y}}\in par({\textbf{x}})}p_l({\textbf{y}})$ indicates the amount of time which elapses between intervening simultaneously on all the RVs in $par({\textbf{x}})$ (i.e., $do(par({\textbf{x}})=par_x)$) and ${\textbf{x}}$ taking its value $x$ in accord with the distribution ${\mathbb P}(x|par_x)$. If $child({\textbf{x}})=\varnothing$, substitute the upper bound of the given interval by $+\infty$. $\blacksquare$ Parameter $T_0$ symbolizes the origin of time, as perceived by the reasoner. $T_0=0$ is a natural choice, unless the reasoner believes that time continues indefinitely into the past, in which case $T_0=-\infty$. The next two examples further clarify the idea behind PL. In both examples we assume $T_0=0$. ![Relation between PL and time: Example.[]{data-label="fig_example_1"}](fig_PL_and_Time.pdf){width="40.00000%"} For the first example, let us consider the CBN depicted in Fig. \[fig\_example\_1\](a) containing the RVs ${\textbf{x}}, {\textbf{y}},$ and ${\textbf{z}}$ with $p_l({\textbf{x}})=4, p_l({\textbf{y}})=4.7,$ and $p_l({\textbf{z}})=5$. According to Def. 1, the given PLs can be construed in terms of the relative time between the occurrence of cause and effect as articulated next. Upon intervening on ${\textbf{x}}$ (i.e., $do({\textbf{x}}=x)$), after the elapse of $p_l({\textbf{y}})- p_l({\textbf{x}})=0.7$ units of time, the RV ${\textbf{y}}$ takes its value $y$ in accord with the distribution ${\mathbb P}(y|x)$. 
Likewise, upon intervening on ${\textbf{y}}$ (i.e., $do({\textbf{y}}=y)$), after the elapse of $p_l({\textbf{z}})- p_l({\textbf{y}})=0.3$ units of time, ${\textbf{z}}$ takes its value $z$ according to ${\mathbb P}(z|y)$. For the second example, consider the CBN depicted in Fig. \[fig\_example\_1\](b) containing the RVs ${\textbf{x}}, {\textbf{y}}, {\textbf{z}}$, and ${\textbf{t}}$ with $p_l({\textbf{x}})=4, p_l({\textbf{y}})=4.7, p_l({\textbf{z}})=5$, and $p_l({\textbf{t}})=5.6$. Upon intervening on ${\textbf{x}}$ (i.e., $do({\textbf{x}}=x)$) the following happens: (i) after the elapse of $p_l({\textbf{y}})- p_l({\textbf{x}})=0.7$ units of time, ${\textbf{y}}$ takes its value $y$ according to ${\mathbb P}(y|x)$, and (ii) after the elapse of $p_l({\textbf{z}})- p_l({\textbf{x}})=1$ unit of time, ${\textbf{z}}$ takes its value $z$ according to ${\mathbb P}(z|x)$. Also, upon intervening simultaneously on RVs ${\textbf{y}}, {\textbf{z}}$ (i.e., $do({\textbf{y}}=y,{\textbf{z}}=z)$), after the elapse of $p_l({\textbf{t}})-\max_{{\textbf{r}}\in par({\textbf{t}})}p_l({\textbf{r}})=0.6$ units of time, ${\textbf{t}}$ takes its value $t$ according to ${\mathbb P}(t|y,z)$. In sum, the notion of PL bears on the underlying time-grid upon which a CBN is constructed, and adheres to Hume’s principle of temporal precedence of cause to effect [@hume1975inquiry]. A growing body of work in psychology literature corroborates Hume’s centuries-old insight, suggesting that the timing and temporal order between events strongly influences how humans induce causal structure over them [@bramley2014order; @lagnado2006time]. 
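The ordering that Def. 1 imposes can be stated compactly in code. The sketch below uses a hypothetical dictionary encoding of the second example’s CBN; as an assumption of the sketch, root nodes are only required to sit at or after $T_0$, since the displayed fragment may omit their own ancestors:

```python
# Hypothetical encoding of the CBN from the second example:
# x -> y, x -> z, y -> t, z -> t, with the PLs given in the text.
parents = {'x': [], 'y': ['x'], 'z': ['x'], 't': ['y', 'z']}
pl = {'x': 4.0, 'y': 4.7, 'z': 5.0, 't': 5.6}

def pl_consistent(parents, pl, T0=0.0):
    """Check the ordering Def. 1 imposes: every node's PL exceeds all of
    its parents' PLs (cause precedes effect).  Roots need only sit at or
    after T0 here -- a relaxation for displayed fragments (assumption)."""
    for node, ps in parents.items():
        if not ps:
            if pl[node] < T0:
                return False
        elif pl[node] <= max(pl[p] for p in ps):
            return False
    return True

print(pl_consistent(parents, pl))                       # True
# Cause-effect delays read off the PLs: y fires 0.7 time units after
# an intervention on x; t fires 0.6 units after do(y, z).
print(pl['y'] - pl['x'], pl['t'] - max(pl[p] for p in parents['t']))
```

Swapping any parent-child pair of PLs (e.g., giving ${\textbf{y}}$ a PL below that of ${\textbf{x}}$) makes the check fail, reflecting the principle of temporal precedence of cause to effect.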
The introduced notion of PL is based on the following hypothesis: *When learning the underlying causal structure of a domain, humans may also encode the temporal patterns (or some estimates thereof) on which they rely to infer the causal structure.* This hypothesis is supported by recent findings suggesting that people have expectations about the delay length between cause and effect [@greville2010temporal; @buehner2004abolishing; @schlottmann1999seeing]. It is worth noting that we could have defined PL in terms of relative *expected time* between cause and effect, rather than relative absolute time. Under such an interpretation, the time which elapses between the intervention on a cause and the occurrence of its effect would be modeled by a probability distribution, and PL would be defined in terms of the expected value of that distribution. Our proposed framework, PLIF, is indifferent as to whether PL should be construed in terms of absolute or expected time. Prior work shows that causal relations with fixed temporal intervals are consistently judged as stronger than those with variable temporal intervals. This finding, therefore, seems to suggest that people expect, to a greater extent, fixed temporal intervals between cause and effect, rather than variable ones—an interpretation which, at least to a first approximation, favors construing PL in terms of relative absolute time (see Def. 1).[^6] Informative Example {#Sec_toy_example_I} =================== To develop our intuition, and before formally articulating our proposed framework, let us present a simple yet informative example which demonstrates: (i) how the retrieval process can be carried out in a local, bottom-up fashion, allowing for retrieving the relevant submodel incrementally, and (ii) how adopting PL allows the reasoner to obtain bounds on a given causal query at intermediate stages of the retrieval process.
Let us assume that the posed causal query is ${\mathbb P}(x|y)$ where ${\textbf{x}}, {\textbf{y}}$ are two RVs in the CBN depicted in Fig. \[fig\_motive\](a) with PLs $p_l({\textbf{x}}),p_l({\textbf{y}})$, and let $p_l({\textbf{x}})>p_l({\textbf{y}})$. The relevant information for the derivation of the posed query (i.e., the relevant submodel) is depicted in Fig. \[fig\_motive\](e). ![Example. Query variables are shown in orange.[]{data-label="fig_motive"}](fig_informative_example.pdf){width="40.00000%"} Starting from the target RV ${\textbf{x}}$ in the original CBN (Fig. \[fig\_motive\](a)) and moving one step backwards,[^7] ${\textbf{t}}_1$ is reached (Fig. \[fig\_motive\](b)). Since $p_l({\textbf{y}})< p_l({\textbf{t}}_1)$, ${\textbf{y}}$ must be a non-descendant of ${\textbf{t}}_1$, and therefore, of ${\textbf{x}}$. Hence, conditioning on ${\textbf{t}}_1$ $d$-separates ${\textbf{x}}$ from ${\textbf{y}}$ [@pearl2014probabilistic], yielding $({\textbf{x}}{\perp\hspace*{-5pt}\perp}{\textbf{y}}|{\textbf{t}}_1)$. Thus ${\mathbb P}(x|y)=\sum_{t_1\in Val({\textbf{t}}_1)}{\mathbb P}(x|y,t_1){\mathbb P}(t_1|y)=\sum_{t_1\in Val({\textbf{t}}_1)}{\mathbb P}(x|t_1){\mathbb P}(t_1|y)$ implying: $\min_{t_1\in Val({\textbf{t}}_1)}{\mathbb P}(x|t_1)\leq {\mathbb P}(x|y)\leq \max_{t_1\in Val({\textbf{t}}_1)}{\mathbb P}(x|t_1)$. It is crucial to note that the given bounds can be computed using the information thus-far retrieved, i.e., the information encoded in the submodel shown in Fig. \[fig\_motive\](b). Taking a step backwards from ${\textbf{t}}_1$, ${\textbf{t}}_2$ is reached (Fig. \[fig\_motive\](c)). Using a similar line of reasoning to the one presented for ${\textbf{t}}_1$, having $p_l({\textbf{y}})< p_l({\textbf{t}}_2)$ ensures $({\textbf{x}}{\perp\hspace*{-5pt}\perp}{\textbf{y}}|{\textbf{t}}_2)$.
Therefore, the following bounds on the posed query can be derived, which, crucially, can be computed using the information thus-far retrieved: $\min_{t_2\in Val({\textbf{t}}_2)}{\mathbb P}(x|t_2)\leq {\mathbb P}(x|y) \leq \max_{t_2\in Val({\textbf{t}}_2)}{\mathbb P}(x|t_2)$. It is straightforward to show that the bounds derived in terms of ${\textbf{t}}_2$ are tighter than the bounds derived in terms of ${\textbf{t}}_1$.[^8] Finally, taking one step backward from ${\textbf{t}}_2$, ${\textbf{y}}$ is reached (Fig. \[fig\_motive\](d)) and the exact value for ${\mathbb P}(x|y)$ can be derived, again using only the submodel thus-far retrieved (Fig. \[fig\_motive\](d)). We are now well-positioned to present our proposed framework, PLIF. PL-based Inference Framework (PLIF) {#sec_PLIF_main} =================================== In this section, we intend to elaborate on how, equipped with the notion of PL, a generic causal query of the form[^9] ${\mathbb P}({\textbf{O}}=O|{\textbf{E}}=E)$ can be derived where ${\textbf{O}}$ and ${\textbf{E}}$ denote, respectively, the disjoint sets of target (or *objective*) and observed (or *evidence*) variables. In other words, we intend to formalize how inference over a CBN whose nodes are endowed with PL as an attribute should be carried out. Before we present the main result, a few definitions are in order. **Def. 2. (Critical Potential Level (CPL))** The target variable with the least PL is denoted by ${\textbf{o}}^\ast$ and its PL is referred to as the CPL. More formally, $p_l^{\ast}:\triangleq\min_{{\textbf{o}}\in {\textbf{O}}} p_l({\textbf{o}})$ and ${\textbf{o}}^\ast:\triangleq\arg\min_{{\textbf{o}}\in {\textbf{O}}} p_l({\textbf{o}})$. E.g., for the setting given in Fig. \[fig\_motive\](a), ${\textbf{o}}^\ast={\textbf{x}}$, and $p_l^{\ast}=p_l({\textbf{x}})$. Viewed through the lens of time, ${\textbf{o}}^\ast$ is the furthest target variable into the past, with PL $p_l^{\ast}$. 
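In code, Def. 2 reduces to a minimization over the target set. The sketch below is our own illustration with hypothetical names; the PL values are chosen to mimic the chain of the informative example, where ${\textbf{x}}$ is the target of the query ${\mathbb P}(x|y)$.

```python
# Hypothetical sketch of Def. 2: the CPL is the smallest PL among targets.
pl = {"x": 4.0, "t1": 3.6, "t2": 3.3, "y": 3.0}  # illustrative PLs
O = {"x"}  # target (objective) variables of the query P(x | y)

o_star = min(O, key=lambda o: pl[o])  # furthest-into-the-past target
pl_star = pl[o_star]                  # the critical potential level

print(o_star, pl_star)  # x 4.0
```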
There are two possibilities: (a) $p_l^\ast>T_0$, or (b) $p_l^\ast=T_0$, with $T_0$ denoting the origin of time; cf. Sec. \[sec:PL\_vs\_time\]. In the sequel we assume that (a) holds. For a discussion on the special case (b), the reader is referred to the Supplementary Information. **Def. 3. (Inference Threshold (IT) and IT Root Set (IT-RS))** To any real-valued quantity, ${\mathcal{T}}$, corresponds a unique set, ${\textbf{R}}_{{\mathcal{T}}}$, obtained as follows: Start at every variable ${\textbf{x}}\in {\textbf{O}}\cup{\textbf{E}}$ with PL $\geq {\mathcal{T}}$ and backtrack along all paths terminating at ${\textbf{x}}$. Backtracking along each path stops as soon as a node with PL less than ${\mathcal{T}}$ is encountered. Such nodes, together, compose the set ${\textbf{R}}_{{\mathcal{T}}}$. It follows from the definition that: $\max_{{\textbf{t}}\in{\textbf{R}}_{{\mathcal{T}}}}p_l({\textbf{t}})<{\mathcal{T}}$. ${\mathcal{T}}$ and ${\textbf{R}}_{{\mathcal{T}}}$ are termed, respectively, the Inference Threshold (IT) and the IT Root Set (IT-RS) for ${\mathcal{T}}$. For example, the sets of variables circled at the stages depicted in Figs. \[fig\_motive\](b-d) are the IT-RSs for ${\mathcal{T}}=p_l({\textbf{x}})-\epsilon$, ${\mathcal{T}}=p_l({\textbf{t}}_1)-\epsilon$, and ${\mathcal{T}}=p_l({\textbf{t}}_2)-\epsilon$, respectively. Note that instead of, say, ${\mathcal{T}}=p_l({\textbf{x}})-\epsilon$, we could have said: for any ${\mathcal{T}}\in(p_l({\textbf{t}}_1),p_l({\textbf{x}}))$. However, expressing ITs in terms of $\epsilon$ frees us from having to express them in terms of intervals, thereby simplifying the exposition. We emphasize that this notation should *not* be construed as implying that the choice of IT values is so sensitive that the analysis would collapse were IT not chosen in such a fine-tuned manner.
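The backtracking procedure of Def. 3 can be sketched directly. The code below is our own illustration (the function and variable names are hypothetical); the graph and PLs mimic the chain of the informative example, and the computed root sets match the ones discussed there: $\{{\textbf{t}}_1\}$, $\{{\textbf{t}}_2\}$, and $\{{\textbf{y}}\}$.

```python
# Hypothetical sketch of Def. 3: compute the IT root set R_T by
# backtracking from every query variable with PL >= T; backtracking
# along a path stops at the first node whose PL falls below T.
def it_root_set(T, query_vars, parents, pl):
    roots, seen = set(), set()
    stack = [v for v in query_vars if pl[v] >= T]
    while stack:
        node = stack.pop()
        for p in parents[node]:
            if p in seen:
                continue
            seen.add(p)
            if pl[p] < T:
                roots.add(p)      # backtracking stops here
            else:
                stack.append(p)   # keep moving into the past
    return roots

# Chain y -> t2 -> t1 -> x, as in the informative example
parents = {"x": ["t1"], "t1": ["t2"], "t2": ["y"], "y": []}
pl = {"x": 4.0, "t1": 3.6, "t2": 3.3, "y": 3.0}
eps = 1e-6
for T in (pl["x"] - eps, pl["t1"] - eps, pl["t2"] - eps):
    print(it_root_set(T, {"x", "y"}, parents, pl))
# prints {'t1'}, then {'t2'}, then {'y'}
```

Note that the defining property $\max_{{\textbf{t}}\in{\textbf{R}}_{{\mathcal{T}}}}p_l({\textbf{t}})<{\mathcal{T}}$ holds by construction, since only nodes with PL below the threshold are ever added to the root set.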
To recap, in simple terms, ${\mathcal{T}}$ bears on how far into the past a reasoner is consulting her mental model in the process of answering a query, and ${\textbf{R}}_{{\mathcal{T}}}$ characterizes the furthest-into-the-past concepts entertained by the reasoner in that process. Next, we formally present the main idea behind PLIF, followed by its interpretation in simple terms. **Lemma 1.** For any chosen IT ${\mathcal{T}}<p_l^\ast$ and its corresponding ${\textbf{R}}_{{\mathcal{T}}}$, define ${\textbf{S}}:\triangleq{\textbf{R}}_{{\mathcal{T}}}\setminus{\textbf{E}}$. Then the following holds: $$\begin{aligned} \label{eq_PLIF} \min_{S\in Val({\textbf{S}})}{\mathbb P}(O|S,E)\leq {\mathbb P}(O|E) \leq\max_{S\in Val({\textbf{S}})}{\mathbb P}(O|S,E).\end{aligned}$$ Crucially, the provided bounds can be computed using the information encoded in the submodel retrieved in the very process of obtaining ${\textbf{R}}_{{\mathcal{T}}}$. $\square$ For a formal proof of Lemma 1, the reader is referred to the Supplementary Information. Mathematical jargon aside, the message of Lemma 1 is quite simple: For any chosen inference threshold ${\mathcal{T}}$ which is further into the past than ${\textbf{o}}^\ast$, Lemma 1 ensures that the reasoner can condition on ${\textbf{S}}$ and obtain the reported lower and upper bounds on the query by using *only* the information encoded in the retrieved submodel. It is natural to ask under what conditions the exact value of the posed query can be derived using the thus-far retrieved submodel (i.e., the submodel obtained during the identification of ${\textbf{R}}_{{\mathcal{T}}}$). The following remark bears on that.
**Remark 1.** If for IT ${\mathcal{T}}$, ${\textbf{R}}_{{\mathcal{T}}}$ satisfies either: (i) ${\textbf{R}}_{{\mathcal{T}}}\subseteq {\textbf{E}}$, or (ii) for all ${\textbf{r}}\in{\textbf{R}}_{{\mathcal{T}}},\hspace*{3pt} p_l({\textbf{r}})=T_0$, and $\min_{{\textbf{e}}\in {\textbf{E}}}p_l({\textbf{e}})> {\mathcal{T}}$, or (iii) the lower and upper bounds given in (\[eq\_PLIF\]) are identical, then the exact value of the posed query can be derived using the submodel retrieved in the process of obtaining ${\textbf{R}}_{{\mathcal{T}}}$. Fig. \[fig\_motive\](d) shows a setting wherein conditions (i) and (iii) are both met. The rationale behind Remark 1 is provided in the Supplementary Information. Case Study {#sec_case_study} ---------- Next, we intend to cast the Hidden Markov Model (HMM) studied in (Icard & Goodman, 2015, p. 2) into our framework. The setting is shown in Fig. \[fig\_HMM\_Icard\_Goodman\](left). We adhere to the same parametrization and query adopted therein. All RVs in this section are binary, taking on values from the set $\{0,1\}$; ${\textbf{x}}=x$ indicates the event wherein ${\textbf{x}}$ takes the value 1, and ${\textbf{x}}=\bar{x}$ indicates the event wherein ${\textbf{x}}$ takes the value 0. We assume $p_l({\textbf{x}}_{t+i})=i-2$.[^10] We should note that the assignment of the PLs for the variables in $\{{\textbf{y}}_{t-i}\}_{i=0}^{+\infty}$ does not affect the presented results in any way. The query of interest is ${\mathbb P}(x_{t+1}|y_{-\infty:t})$. Notice that after performing three steps of the sort discussed in the example presented in Sec. \[Sec\_toy\_example\_I\] (for the IT ${\mathcal{T}}=-3-\epsilon$), the lower bound on the posed query exceeds 0.5 (shown by the red dashed line in Fig. \[fig\_HMM\_Icard\_Goodman\](right)). This observation has the following intriguing implication.
Assume, for the sake of argument, that we were presented with the following Maximum A-Posteriori (MAP) inference problem: Upon observing all the variables in $\{{\textbf{y}}_{t-i}\}_{i=0}^{+\infty}$ taking on the value 1, what would be the most likely state for the variable ${\textbf{x}}_{t+1}$? Interestingly, we would be able to answer this MAP inference problem after only three backward moves (corresponding to the IT ${\mathcal{T}}=-3-\epsilon$). In Fig. \[fig\_HMM\_Icard\_Goodman\](right), the intervals within which the posed query falls (due to Lemma 1) are depicted in terms of the adopted IT ${\mathcal{T}}$. Our analysis confirms Icard and Goodman’s (2015) insight that even in the extreme case of an infinitely large relevant submodel (Fig. \[fig\_HMM\_Icard\_Goodman\](left)), the portion of it that the reasoner has to consult so as to obtain a “sufficiently good” answer to the posed query can be very small (Fig. \[fig\_HMM\_Icard\_Goodman\](right)). Discussion {#sec_discussion} ========== To our knowledge, PLIF is the first inference framework proposed that capitalizes on *time* to constrain the scope of causal reasoning over CBNs, where the term scope refers to the portion of a CBN on which inference is carried out. PLIF does not restrict itself to any particular inference scheme. The claim of PLIF is that inference should be confined within and carried out over retrieved submodels of the kind suggested by Lemma 1 so as to obtain the bounds reported therein. In this light, PLIF can accommodate all sorts of inference schemes, including Belief Propagation (BP) and sample-based inference methods using Markov Chain Monte Carlo (MCMC), as two prominent classes of inference schemes proposed in the literature.[^11] For example, casting BP into PLIF amounts to restricting BP’s message-passing to submodels of the kind suggested by Lemma 1.
In other words, assuming that BP is to be adopted as the inference scheme, upon being presented with a causal query, an IT according to Lemma 1 will be selected—at the *meta-level*—by the reasoner and the corresponding submodel, as suggested by Lemma 1, will be retrieved, over which inference will be carried out using BP. This will lead to obtaining lower and upper bounds on the query, as reported in Lemma 1. If time permits, the reasoner builds up *incrementally* on the thus-far retrieved submodel so as to obtain tighter bounds on the query.[^12] MCMC-based inference methods can be cast, in a similar fashion, into PLIF. The problem of what parts of a CBN are relevant and what are irrelevant for a given query, according to (Geiger, Verma, & Pearl, 1989), was first addressed by Shachter (1988). The approaches proposed for identifying the relevant submodel for a given query fall into two broad categories (cf. (Mahoney & Laskey, 1998) and references therein): (i) top-down approaches, and (ii) bottom-up approaches. Top-down approaches start with the full knowledge of the underlying CBN and, depending on the posed query, gradually *prune* the irrelevant parts of the CBN. In this respect, top-down approaches are inevitably from a “god’s eye” point of view—a characteristic which undermines their cognitive plausibility. Bottom-up approaches, on the other hand, start at the variables involved in the posed query and move backwards till the *boundaries* of the underlying CBN are finally reached; only then do they start to prune the parts of the constructed submodel—if any—which can be safely removed without jeopardizing the exact computation of the posed query. It is important to note that bottom-up approaches cannot stop at *intermediate* steps during the backward move and run inference on the thus-far constructed submodel without running the risk of compromising some of the (in)dependence relations structurally encoded in the CBN, which would yield erroneous inferences.
This observation is due to the fact that there exists no local signal revealing how the thus-far retrieved nodes are positioned relative to each other and to the to-be-retrieved nodes—a shortcoming circumvented in the case of PLIF by introducing PL. Another pitfall shared by both top-down and bottom-up approaches is their *sequential* methodology towards the task of inference, according to which the relevant submodel for the posed query should first be constructed, and only then is inference carried out to compute the posed query.[^13] On the contrary, PLIF submits to what we call the *concurrent* approach to reasoning, whereby retrieval and inference take place *in tandem*. The HMM example analyzed in Sec. \[sec\_case\_study\] shows the efficacy of the concurrent approach. Work on causal judgment provides support for the so-called alternative neglect, according to which subjects tend to neglect alternative causes to a much greater extent in predictive reasoning than in diagnostic reasoning [@fernbach2013cognitive; @fernbach2011asymmetries]. Alternative neglect, therefore, implies that subjects would tend to ignore parts of the relevant submodel while constructing it. Recent findings, however, seem to cast doubt on alternative neglect [@cummins2014impact; @meder2014structure]. Meder et al. (2014), Experiment 1, demonstrates that subjects appropriately take into account alternative causes in predictive reasoning. Also, Cummins (2014) substantiates a two-part explanation of alternative neglect according to which: (i) subjects interpret predictive queries as requests to estimate the probability of the effect when only the focal cause is present, an interpretation which renders alternative causes irrelevant, and (ii) the influence of inhibitory causes (i.e., disablers) on predictive judgment is underestimated, and this underestimation is incorrectly interpreted as neglect of alternative causes.
Cummins (2014), Experiment 2, shows that when predictive inference is queried in a manner that more accurately expresses the meaning of a noisy-OR Bayes net (i.e., the adopted normative model), likelihood estimates approached normative estimates. A further experiment (Experiment 4) shows that the impact of disablers on predictive judgments is far greater than that of alternative causes, while having little impact on diagnostic judgments. PLIF commits to the retrieval of enablers as well as disablers. As mentioned earlier, PLIF abstracts away from the inference algorithm operating on the retrieved submodel, and, hence, leaves it to the inference algorithm to decide how the retrieved enablers and disablers should be integrated. In this light, PLIF is consistent with the results of Experiment 4. In an attempt to explain violations of screening-off reported in the literature, researchers find strong support for the contradiction hypothesis, followed by the mediating-mechanism hypothesis, and finally conclude that people do conform to screening-off once the causal structure they are using is correctly specified. PLIF is consistent with these findings, as it adheres to the assumption that reasoners carry out inference on their *internal* causal model (including all possible mediating variables and disablers), not the potentially incomplete one presented in the cover story; see also [@Rehder2015; @sloman2015causality]. Experiment 5 in [@cummins2014impact], consistent with [@fernbach2013cognitive], shows that causal judgments are strongly influenced by memory retrieval/activation processes, and that both the number of disablers and the order of disabler retrieval matter in causal judgments. These findings suggest that the CFP and memory retrieval/activation are intimately linked.
In that light, next, we intend to elaborate on the rationale behind adopting the term “retrieve" and using it interchangeably with the term “consult" throughout the paper; this is where we relate PLIF to the concepts of Long Term Memory (LTM) and Working Memory (WM) in psychology and neurophysiology. Next, we elaborate on how PLIF could be interpreted through the lenses of two influential models of WM, namely, Baddeley and Hitch’s (1974) Multi-component model of WM (M-WM) and Ericsson and Kintsch’s Long-term Working Memory (LTWM) model (1995). The M-WM postulates that *“long-term information is downloaded into a separate temporary store, rather than simply activated in LTM"*, a mechanism which permits WM to *“manipulate and create new representations, rather than simply activating old memories"* (Baddeley, 2003). Interpreting PLIF through the lens of the M-WM model amounts to the value for IT being chosen (and, if time permits, updated so as to obtain tighter bounds) by the central executive in the M-WM and the submodel being incrementally “retrieved" from LTM into M-WM’s episodic buffer. Interpreting PLIF through the lens of the LTWM model amounts to having no retrieval from LTM into WM and the submodel suggested by Lemma 1 being merely “activated in LTM" and, in that sense, being simply “consulted" in LTM. In sum, PLIF is compatible with both of the narratives provided by the M-WM and LTWM models. A number of predictions follow from PL and PLIF. For instance, PLIF makes the following prediction: Prompted with a predictive or a diagnostic query (i.e., ${\mathbb P}({\textbf{e}}|{\textbf{c}})$ and ${\mathbb P}({\textbf{c}}|{\textbf{e}})$, respectively), subjects should not retrieve any of the effects of ${\textbf{e}}$. Introspectively, this prediction seems plausible, and can be tested, using a similar approach to [@cummins2014impact; @de2003inference], by asking subjects to “think aloud" while engaging in predictive or diagnostic reasoning. 
Also, PL yields the following prediction: Upon intervening on cause ${\textbf{c}}$, subjects should be sensitive to *when* effect ${\textbf{e}}$ will occur, even in settings where they are not particularly instructed to attend to such temporal patterns. This prediction is supported by recent findings suggesting that people do have expectations about the delay length between cause and effect [@greville2010temporal; @buehner2004abolishing]. There is a growing acknowledgment in the literature that not only are time and causality intimately linked, but that they *mutually constrain* each other in human cognition [@buehner2014time]. In line with this view, we see our work also as an attempt to formally articulate how time could guide and constrain causal reasoning in cognition. While many questions remain open, we hope to have made some progress towards a better understanding of the CFP at the algorithmic level. Acknowledgments {#acknowledgments .unnumbered} =============== We are grateful to Thomas Icard for valuable discussions. We would also like to thank Marcel Montrey and Peter Helfer for helpful comments on an earlier draft of this work. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under grant RGPIN 262017. Supplementary Information {#supplementary-information .unnumbered} ========================= S-I Proof of Lemma 1: {#s-i-proof-of-lemma-1 .unnumbered} --------------------- A simple application of the total probability lemma yields: $$\tag{S1} {\mathbb P}(O|E)=\sum_{S\in Val({\textbf{S}})}{\mathbb P}(O|S,E){\mathbb P}(S|E).$$ Equation (S1) immediately reveals a simple fact, namely, that ${\mathbb P}(O|E)$ is a convex combination of the members of the set $\{{\mathbb P}(O|S,E)\}_{S\in Val({\textbf{S}})}$, an observation which grants the validity of the expression given in (\[eq\_PLIF\]) in the main text.
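The structure behind (S1) is easy to verify numerically: since the weights ${\mathbb P}(S|E)$ sum to one, ${\mathbb P}(O|E)$ must lie between the minimum and maximum of $\{{\mathbb P}(O|S,E)\}$. The sketch below is our own check (all names are hypothetical, and the CPDs are randomly generated); it instantiates the binary chain ${\textbf{y}}\to{\textbf{t}}\to{\textbf{x}}$, where conditioning on ${\textbf{S}}=\{{\textbf{t}}\}$ screens ${\textbf{x}}$ off from ${\textbf{y}}$, and confirms the bounds of (\[eq\_PLIF\]).

```python
# Numerical check of (S1): P(O|E) is a convex combination of
# {P(O|S,E)}, so it lies between the min and the max of that set.
# Binary chain y -> t -> x; conditioning on S = {t} screens x off from y.
import random

random.seed(0)
p_t1_given_y = {y: random.random() for y in (0, 1)}  # P(t=1 | y)
p_x1_given_t = {t: random.random() for t in (0, 1)}  # P(x=1 | t)

def p_x1_given_y(y):
    # total probability: P(x=1|y) = sum_t P(x=1|t) P(t|y)
    pt1 = p_t1_given_y[y]
    return p_x1_given_t[1] * pt1 + p_x1_given_t[0] * (1 - pt1)

query = p_x1_given_y(1)
lo, hi = min(p_x1_given_t.values()), max(p_x1_given_t.values())
print(lo <= query <= hi)  # True: the bounds of Lemma 1 hold
```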
The key point which is left to be shown is the following: (Q.1) Why can the bounds given in (\[eq\_PLIF\]) be computed using the submodel retrieved in the process of obtaining the corresponding ${\textbf{R}}_{{\mathcal{T}}}$ for the adopted IT ${\mathcal{T}}<p_l^\ast$? This is where the notion of PL comes into play. To articulate the intended line of reasoning let us introduce some notations first. According to Def. 3, any chosen IT ${\mathcal{T}}$ induces an IT-RS ${\textbf{R}}_{{\mathcal{T}}}$. Let us partition the set of evidence variables ${\textbf{E}}$ into three mutually disjoint sets ${\textbf{E}}_{T}^+, {\textbf{E}}_{T},$ and ${\textbf{E}}_{T}^-$, where ${\textbf{E}}_{T}$ denotes the set of variables in ${\textbf{E}}$ which belong to the IT-RS ${\textbf{R}}_{{\mathcal{T}}}$ (i.e., ${\textbf{E}}_{T}:\triangleq{\textbf{E}}\cap{\textbf{R}}_{{\mathcal{T}}}$), ${\textbf{E}}_{T}^+$ denotes the set of variables in ${\textbf{E}}$ with PLs $\geq {\mathcal{T}}$, and finally, ${\textbf{E}}_{T}^-$ denotes the set of variables in ${\textbf{E}}$ which are neither in ${\textbf{E}}_{T}$ nor in ${\textbf{E}}_{T}^+$ (i.e., ${\textbf{E}}_{T}^-:\triangleq{\textbf{E}}\setminus({\textbf{E}}_{T}\cup{\textbf{E}}_{T}^+)$). Note that, by construction, the PLs of the variables in ${\textbf{E}}_{T}^-$ are less than the adopted IT ${\mathcal{T}}$, hence the adopted notation. For example, for the setting depicted in Fig. \[fig\_motive\](b) (corresponding to the IT ${\mathcal{T}}=p_l({\textbf{x}})-\epsilon$), ${\textbf{E}}_{T}=\varnothing, {\textbf{E}}_{T}^+=\varnothing,$ and ${\textbf{E}}_{T}^-=\{{\textbf{y}}\}$. Also, for the setting depicted in Fig. \[fig\_motive\](d) (corresponding to the IT ${\mathcal{T}}=p_l({\textbf{t}}_2)-\epsilon$), ${\textbf{E}}_{T}=\{{\textbf{y}}\}, {\textbf{E}}_{T}^+=\varnothing,$ and ${\textbf{E}}_{T}^-=\varnothing$. Next, we present a key result as a lemma. **Lemma S.1.** Let ${\mathbb P}(O|E)$ denote the posed causal query. 
For any chosen IT ${\mathcal{T}}<p_l^\ast$ and its corresponding IT-RS ${\textbf{R}}_{{\mathcal{T}}}$, the following conditional independence relation holds: $$\tag{S2} ({\textbf{O}}{\perp\hspace*{-5pt}\perp}{\textbf{E}}_{T}^-|{\textbf{R}}_{{\mathcal{T}}}\cup{\textbf{E}}_{{\mathcal{T}}}^+).$$ **Proof.** The relations between the PLs of the variables involved in the statement (S2) ensure that, according to the $d$-separation criterion (Pearl, 1988), conditioning on the variables in ${\textbf{R}}_{{\mathcal{T}}}\cup{\textbf{E}}_{{\mathcal{T}}}^+$ blocks all the paths between the variables in ${\textbf{O}}$ and ${\textbf{E}}_{T}^-$, hence follows (S2). The following two-part argument responds to the question posed in (Q.1) in the affirmative. First, notice that: $$\begin{aligned} {\mathbb P}(O|S,E)\overset{}{=} \quad &{\mathbb P}(O|S,E_{{\mathcal{T}}},E_{{\mathcal{T}}}^-,E_{{\mathcal{T}}}^+) \nonumber \\ \overset{}{=} \quad &{\mathbb P}(O|R_{{\mathcal{T}}}, E_{{\mathcal{T}}}^-,E_{{\mathcal{T}}}^+) \nonumber \\ \overset{(S2)}{=} \quad &{\mathbb P}(O|R_{{\mathcal{T}}},E_{{\mathcal{T}}}^+). \tag{S3}\end{aligned}$$ Second, note that the process of obtaining ${\textbf{R}}_{{\mathcal{T}}}$, namely, moving backwards from the variables in ${\textbf{O}}\cup{\textbf{E}}_{{\mathcal{T}}}^+$ until ${\textbf{R}}_{{\mathcal{T}}}$ is reached, ensures that the submodel retrieved in this process suffices for the derivation of ${\mathbb P}(O| R_{{\mathcal{T}}}, E_{{\mathcal{T}}}^+)$. Using the approach introduced in [@geiger1989d] for identifying the relevant information for the derivation of a query in a Bayesian network, this follows from the following fact: Conditioned on ${\textbf{R}}_{{\mathcal{T}}}\cup{\textbf{E}}_{{\mathcal{T}}}^+$, the set ${\textbf{O}}$ is $d$-separated from all the nodes in the set $An({\textbf{O}}\cup{\textbf{E}})\setminus {\textbf{R}}_{{\mathcal{T}}}$ whose PLs are less than the adopted IT ${\mathcal{T}}$.
Note that $An({\textbf{O}}\cup{\textbf{E}})$ denotes the ancestral graph for the nodes in ${\textbf{O}}\cup{\textbf{E}}$. This completes the proof. $\blacksquare$ S-II The Rationale behind Remark 1: {#s-ii-the-rationale-behind-remark-1 .unnumbered} ----------------------------------- Case (i) and Case (iii) immediately follow from Lemma 1 in the main text. Case (ii) implies that all the ancestors of variables in ${\textbf{O}}\cup{\textbf{E}}$ are retrieved, hence the sufficiency of the retrieved submodel for the exact derivation of the query; see also Sec. S-III. S-III On the Special Case of Having $p_l^{\ast}=T_0$: {#s-iii-on-the-special-case-of-having-p_lastt_0 .unnumbered} ----------------------------------------------------- In such circumstances, to derive ${\mathbb P}(O|E)$, the set of all the ancestors of variables in ${\textbf{O}}\cup{\textbf{E}}$ should be retrieved and then inference should be carried out on the retrieved submodel. [^1]: In an informative example on Hidden Markov Models (HMMs), Icard & Goodman (2015) present a setting wherein the relevant submodel is infinitely large—an example which makes plain what is wrong with the sequential approach stated earlier. [^2]: The terms “consult" and “retrieve" will be used interchangeably. We elaborate on the rationale behind that in Sec. \[sec\_discussion\], where we connect our work to Long Term Memory and Working Memory. [^3]: We elaborate more on this in the Discussion section. [^4]: The term “nested" implies that the thus-far retrieved submodel is subsumed by every later submodel (should the reasoner proceed with the retrieval process). [^5]: More precisely, PL induces a topological order on the nodes of a CBN, with temporal interpretations suggested in Def. 1. [^6]: There are cases, however, in which, despite the precedence of cause to effect, quantifying the amount of time between their occurrences may bear no meaning, e.g., when dealing with hypothetical constructs.
In such cases, PL should be simply construed as a topological ordering. From a purely computational perspective, PL is a generalization of *topological sorting* in computer science. [^7]: Taking one step backwards from variable $\bm q$ amounts to retrieving all the parents of $\bm q$. [^8]: Here we are implicitly making the assumption that the CPDs involved in the parameterization of the underlying CBN are non-degenerate. Dropping this assumption yields the following result: The bounds derived in terms of ${\textbf{t}}_2$ are equally tight or tighter than the bounds derived in terms of ${\textbf{t}}_1$. [^9]: We do not consider interventions in this work. However, with some modifications, the presented analysis/results can be extended to handle a generic causal query of the form ${\mathbb P}({\textbf{O}}=O| {\textbf{E}}=E, do({\textbf{Z}}=Z))$ where ${\textbf{Z}}$ denotes the set of intervened variables. [^10]: Note that the trends of the upper- and lower-bound curves, as well as the size of the intervals shown in Fig. \[fig\_HMM\_Icard\_Goodman\](right), are insensitive to the choice of PLs for variables $\{{\textbf{x}}_{t-i}\}_{i=-1}^{+\infty}$. [^11]: [MCMC-based methods have been successful in simulating important aspects of a wide range of cognitive phenomena, and giving accounts for many cognitive biases; cf. [@sanborn2016bayesian]. Also, work in theoretical neuroscience has suggested mechanisms for how BP and MCMC-based methods could be realized in neural circuits; cf. [@gershman2016complex; @lochmann2011neural].]{} [^12]: The very property that the submodel gets constructed incrementally in a nested fashion guarantees that the obtained lower and upper bounds get tighter as the reasoner adopts smaller ITs; cf. Fig. \[fig\_HMM\_Icard\_Goodman\](left). [^13]: The computation can be carried out to obtain either the exact value or simply an approximation to the query.
Nonetheless, what both top-down and bottom-up approaches agree on is that the relevant submodel is to be identified first, should the reasoner intend to compute the posed query either exactly or approximately.
--- abstract: 'Decoherence free subspaces (DFS) are a theoretical tool towards experimental implementation of quantum information storage and processing. However, they represent an experimental challenge, since the conditions for their existence are very stringent. This work explores the situation in which a system of $N$ oscillators coupled to a bath of harmonic oscillators is close to satisfying the conditions for the existence of DFS. We show, in the Born-Markov limit and for small deviations from the separability and degeneracy conditions, that there are [*weak decoherence subspaces*]{} which resemble the original notion of DFS.' author: - 'K. M.' - 'S. G. Mokarzel' - 'M. O.' - 'M. C. Nemes' title: Realistic Decoherence Free Subspaces --- Introduction ============ The very same mechanism responsible for the potential improvements in computation speed using quantum mechanics is the one which greatly hinders immediate technical implementation. [*Entanglement*]{} between different subsystems is essential for the production of the states used in information processing[@NC]; at the same time, as these qubits cannot be completely isolated from their environment, entanglement with the environmental degrees of freedom is a general feature. The deleterious effect of this coupling is usually called [*decoherence*]{}[@Dec]. Therefore much effort has been devoted to finding ways around decoherence in quantum computation, such as error correcting codes[@codes], dynamical decoupling[@dynadeco] and computation in decoherence free subspaces[@DFS; @DFS2; @Zanardi]. Experimental observations of decoherence free evolution have been reported[@Science1; @Science2]. Many physical implementations have been proposed including cavity QED[@CQED], ion traps[@Ion], nuclear magnetic resonance[@NMR] and semiconductor quantum dots[@QDots].
From the theoretical point of view, recent work has focused mainly on proving the existence of DF subspaces, in general related to symmetries of the system which are preserved by the interaction with the environment, on searching for mechanisms of dynamical creation of DFS[@WL], and on the analysis of their robustness[@BLW]. More realistic models[@Zanardi] are scarce, and fail to provide insight into the effects of slightly relaxing the conditions necessary for the existence of DFS. In the present work we consider the case of $N$ independent oscillators linearly coupled to a single environment, and show that strict decoherence free subspaces exist under the following two conditions: degeneracy of the oscillators, and separability of the coupling with the environment. Both can be viewed as consequences of symmetries: the first involving only the system, and the second the interaction. The exact form of the spectral density and the temperature of the environment are immaterial as far as the existence of DFS is concerned, since these states really decouple from the environment. Master equations for the evolution of the reduced density matrix of the system are derived with and without the Born-Markov approximation, and only the coefficients vary from one case to the other. For two independent oscillators we solve the dynamics of the reduced density matrix and exhibit the decoherence free subspace explicitly. Also in the case of two harmonic oscillators we study the effect of relaxing the degeneracy and separability conditions. We verify that there is no longer a DFS, but there remains a long-lived mode, which we call a [*weak decoherence mode*]{}; its counterpart, a [*strong decoherence mode*]{}, also appears. The time scales for the duration of these components are derived in terms of the appropriate parameters. We analyze these findings in the context of the robustness proof for DFS presented in Ref. [@BLW]. 
This contribution is organized as follows: in section \[squick\] an introduction to the concept of decoherence free subspaces is given; in section \[soscillators\] the model with oscillators is described, and the decoherence free modes are exhibited. A short discussion of the classical counterpart of decoherence free modes is made. Section \[smaster\] is devoted to the derivation of master equations. In section \[s2osc\] we discuss in detail the case of two harmonic oscillators. As a simplifying tool, we introduce the notion of a [*superoperator*]{}. Section \[sreal\] discusses the case of small departures from the degeneracy and separability conditions, and shows that although the concept of DFS is no longer applicable, there remains a [*weak decoherence mode*]{} which can be useful for quantum information storage. We close with some concluding remarks. Some intermediate calculations have been relegated to the appendix. A quick way to decoherence free subspaces {#squick} ========================================= The notion of decoherence free subspaces (DFS) can be easily captured by considering a special coupling between a system and its environment. Consider a system with its autonomous Hamiltonian ${{\bf \hat{H}}}_S$, an environment described by ${{\bf \hat{H}}}_E$, and the interaction between them given by ${{\bf \hat{H}}}_I$. The complete Hamiltonian is then $${{\bf \hat{H}}} = {{\bf \hat{H}}}_S + {{\bf \hat{H}}}_E + {{\bf \hat{H}}}_I.$$ Suppose the interaction term can be written in the form ${{\bf \hat{H}}}_I = {{\bf \hat{A}}}_S {{\bf \hat{B}}}_E$, with ${{\bf \hat{A}}}_S$ (resp. ${{\bf \hat{B}}}_E$) acting only on the system (resp. environment) degrees of freedom. We will call this a [*separability condition*]{}. 
Any common eigenvector ${\left| a \right\rangle}$ of ${{\bf \hat{A}}}_S$ and ${{\bf \hat{H}}}_S$ (with eigenvalues $a$ and $h_a$) does not get entangled with the environment, since in this case $$\begin{split} {{\bf \hat{H}}}{\left| a \right\rangle}\otimes {\left| \epsilon \right\rangle} &= \left( {{\bf \hat{H}}}_S {\left| a \right\rangle}\right) \otimes {\left| \epsilon \right\rangle} + {\left| a \right\rangle} \otimes \left( {{\bf \hat{H}}}_E {\left| \epsilon \right\rangle}\right) + \left( {{\bf \hat{A}}}_S {\left| a \right\rangle}\right) \otimes \left( {{\bf \hat{B}}}_E {\left| \epsilon \right\rangle}\right) \\ &= {\left| a \right\rangle}\otimes \left( h_a + {{\bf \hat{H}}}_E + a{{\bf \hat{B}}}_E\right) {\left| \epsilon \right\rangle}. \end{split}$$ By linearity, any common eigenspace of ${{\bf \hat{A}}}_S$ and ${{\bf \hat{H}}}_S$ is a DFS of the system. Degeneracy generally originates from symmetry. Therefore, if one finds a symmetric system whose interaction with the environment preserves this symmetry, then any eigenspace of the system is a DFS. In the language of Ref. [@DFS2], ${{\bf \hat{A}}}_S$ is the only *error generator*, and (common) eigenspaces of (all) error generators are DFS. An important distinction that we make is to consider as DFS only the common eigenspaces of ${{\bf \hat{A}}}_S$ and ${{\bf \hat{H}}}_S$, *i.e.:* the system Hamiltonian should not take the state out of a DFS. A model with oscillators {#soscillators} ======================== We now present a different situation in which DFS can be achieved. The system will consist of $N$ identical harmonic oscillators (frequency $\omega$, annihilation operators ${{\bf \hat{a}}}_i$; we use $\hbar = 1$). The environment will be modelled as a huge set of harmonic oscillators (frequencies $\omega _k$, annihilation operators ${{\bf \hat{b}}}_k$). Linear coupling will be considered and the rotating wave approximation applied. 
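This mechanism is easy to check numerically. The sketch below (a toy example with illustrative matrices, not tied to any specific physical system) builds ${{\bf \hat{H}}} = {{\bf \hat{H}}}_S\otimes 1 + 1\otimes{{\bf \hat{H}}}_E + {{\bf \hat{A}}}_S\otimes{{\bf \hat{B}}}_E$ with ${{\bf \hat{H}}}_S$ and ${{\bf \hat{A}}}_S$ diagonal in the same basis, evolves a product state whose system factor is a common eigenvector, and verifies that the reduced system state remains pure:

```python
import numpy as np

rng = np.random.default_rng(0)

def U(H, t):
    # exact propagator exp(-i H t) of a Hermitian matrix via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def rand_herm(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

# System: H_S and A_S diagonal in the same basis, hence commuting,
# with a degenerate common eigenspace (toy choice).
H_S = np.diag([0.0, 1.0, 1.0])
A_S = np.diag([0.3, 0.7, 0.7])

# Environment: arbitrary Hermitian H_E and coupling operator B_E.
d_E = 4
H_E, B_E = rand_herm(d_E), rand_herm(d_E)

H = (np.kron(H_S, np.eye(d_E)) + np.kron(np.eye(3), H_E)
     + np.kron(A_S, B_E))

# Product initial state |a> x |eps>, with |a> a common eigenvector.
a = np.array([0.0, 1.0, 0.0])
eps = rng.normal(size=d_E) + 1j * rng.normal(size=d_E)
eps /= np.linalg.norm(eps)

psi_t = U(H, 2.0) @ np.kron(a, eps)
M = psi_t.reshape(3, d_E)
rho_S = M @ M.conj().T          # partial trace over the environment

print(np.trace(rho_S @ rho_S).real)   # purity stays 1: no entanglement
```

The same check with a system vector that is *not* a common eigenvector gives purity below one, i.e. entanglement with the environment.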
This model is both simple enough to be studied in detail and general enough to retain the characteristic behaviour of the problem. It is also well suited to making the link with experimental implementations: one can consider vibronic states of $N$ ions trapped together, a system in which (approximate) DFS has already been demonstrated[@Science2; @comm], or modes of distinct cavities[@Switch], or even, for $N=2$, two degenerate modes of one cavity. The Hamiltonian to be considered is $${{{\bf \hat{H}}}}=\omega \sum_{i=1}^N {{{\bf \hat{a}}}_i^\dagger}{{{\bf \hat{a}}}_i} + \sum_k \omega_k{{{\bf \hat{b}}}_k^\dagger}{{{\bf \hat{b}}}_k} + \sum_{i,k} (g_{ik}^* {{{\bf \hat{a}}}_i^\dagger}{{{\bf \hat{b}}}_k} + g_{ik} {{{\bf \hat{a}}}_i}{{{\bf \hat{b}}}_k^\dagger}). \label{ham}$$ As in the previous model, we need an additional assumption on the form of the interaction. Assume the coupling constants $g_{ik}$ can be factorized as $G_iD_k$. This can be interpreted as supposing that all oscillators feel the environment in the same way, possibly with only a difference in strength, which depends only on the oscillator itself, not on the environment (the most usual models consider $G_i = G$, which is a special case of the model proposed here). 
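The factorizability assumption $g_{ik} = G_i D_k$ says precisely that the coupling matrix has rank one, and this is what singles out a single collective mode. A minimal numerical sketch (real illustrative couplings, not values from any experiment) makes the point:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_bath = 3, 6

# Hypothetical real coupling strengths (illustrative only).
G = rng.normal(size=N)
D = rng.normal(size=n_bath)
g = np.outer(G, D)            # separable coupling g_ik = G_i D_k

# A separable coupling matrix has rank one ...
assert np.linalg.matrix_rank(g) == 1

# ... so an orthogonal change of system modes whose first new mode points
# along G concentrates the whole interaction in that one collective mode.
cols = np.column_stack([G / np.linalg.norm(G),
                        rng.normal(size=(N, N - 1))])
Q, _ = np.linalg.qr(cols)     # orthonormal basis, first vector = +/- G/||G||
g_new = Q.T @ g               # coupling matrix in the collective-mode basis

print(np.abs(g_new).max(axis=1))   # only the first mode couples to the bath
```

The remaining $N-1$ rows of `g_new` vanish to machine precision: those collective modes see no environment at all.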
With this factorizability hypothesis, one can rewrite the Hamiltonian (\[ham\]) as $${{{\bf \hat{H}}}}= \omega \sum_{i=1}^N {{{\bf \hat{a}}}_i^\dagger}{{{\bf \hat{a}}}_i} + \sum_k \omega_k {{{\bf \hat{b}}}_k^\dagger {{\bf \hat{b}}}_k} + \sum_{k} \left(D_{k}^* (\sum_i G_i^* {{{\bf \hat{a}}}_i^\dagger}){{{\bf \hat{b}}}_k} + D_{k}(\sum_i G_i {{{\bf \hat{a}}}_i}) {{{\bf \hat{b}}}_k^\dagger}\right).$$ By defining the collective operators $${{{\bf \hat{ A}}}^\dagger_1} = \frac{\sum_i G_i^*{{{\bf \hat{a}}}_i^\dagger}}{\sqrt{\sum_i |G_i|^2}}, \quad {{{\bf \hat{A}}}_1} = \frac{\sum_i G_i{{{\bf \hat{a}}}_i}}{\sqrt{\sum_i |G_i|^2}},$$ it takes the form $${{{\bf \hat{H}}}}=\omega \sum_{i=1}^N {{{\bf \hat{a}}}_i^\dagger}{{{\bf \hat{a}}}_i} + \sum_k \omega_k{{{\bf \hat{b}}}_k^\dagger {{\bf \hat{b}}}_k} + {{{\bf \hat{A}}}_1^\dagger}\sum_{k} c_{k}{{{\bf \hat{b}}}_k} + {{{\bf \hat{A}}}_1}\sum_{k} c_{k}^*{{{\bf \hat{b}}}_k^\dagger},$$ where it is clear that only the collective mode ${{\bf \hat{A}}}_1$ is coupled to the environment (in the above formula $c_k = \sqrt{\sum_i |G_i|^2}\, D_k^*$). One can consider the mode ${{\bf \hat{A}}}_1$ as the first of a new set of normal modes $\left\{ {{\bf \hat{A}}}_i\right\}$, and the remaining modes thus constitute an infinite dimensional DFS. With this new set of variables, the Hamiltonian is finally written as $${{{\bf \hat{H}}}}=\omega \sum_{i=1}^N {{{\bf \hat{A}}}_i^\dagger}{{{\bf \hat{A}}}_i} + \sum_k \omega_k{{{\bf \hat{b}}}_k^\dagger {{\bf \hat{b}}}_k} + {{{\bf \hat{A}}}_1^\dagger}\sum_{k} c_{k}{{{\bf \hat{b}}}_k} + {{{\bf \hat{A}}}_1}\sum_{k} c_{k}^*{{{\bf \hat{b}}}_k^\dagger}. \label{finham}$$ While the situation in the previous section is completely quantum mechanical, the present model does have a classical analog, because the manipulation above can also be done with classical oscillators. In fact, there is a very old classical situation to which this analysis can be applied: synchronization of pendulum clocks. 
It is known that two clocks on the same wall tend to synchronize in anti-phase. Each clock can be considered as an oscillator, and their coupling to the environment can be considered in terms of the two normal modes (in-phase and anti-phase modes). The in-phase mode couples (much more) strongly to the environment, and this causes the synchronization. Another instructive analogy of the above model is with superradiance in the Dicke model[@Dicke]. In this case, $N$ two-level atoms are coupled to one field mode. In the regime in which the atoms are collectively coupled to the field ([*[i.e.]{}*]{}: the field does not distinguish which atom has emitted), the radiative process can be much stronger than the individual emissions (due to interference). The counterpart of this process is subradiance, [*[i.e.]{}*]{}: other collective states with weaker emission than the individual contributions (destructive interference). DFS can thus be compared to subradiant states of the system. Master equation {#smaster} =============== As one usually does not have control over the environment degrees of freedom, the natural approach to this problem is to study the reduced dynamics of the $N$ oscillators. 
A long but straightforward procedure[@alamos] can be applied to derive the master equation $$\label{Eq:Master} \begin{split} \frac{d{{{\bf \hat{\rho}}}}}{dt} = &\frac{1}{i\hbar}\left[{{{\bf \hat{H}}}_0},{{{\bf \hat{\rho}}}} \right] +\left(\lambda+\epsilon \right) \left( 2{{{\bf \hat{A}}}_1{{\bf \hat{\rho}}}{{\bf \hat{A}}}_1^\dagger}- {{{\bf \hat{A}}}_1^\dagger {{\bf \hat{A}}}_1{{\bf \hat{\rho}}}} -{\bf {{\bf \hat{\rho}}} {{\bf \hat{A}}}_1^\dagger{{\bf \hat{A}}}_1}\right)\\ & +\epsilon \left( 2{{{\bf \hat{A}}}_1^\dagger{{\bf \hat{\rho}}}{{\bf \hat{A}}}_1}-{{{\bf \hat{A}}}_1 {{\bf \hat{A}}}_1^\dagger{{\bf \hat{\rho}}}} -{{\bf \hat{\rho}}}{{{\bf \hat{A}}}_1 {{\bf \hat{A}}}_1^\dagger}\right), \end{split}$$ where $${{{\bf \hat{H}}}_0}=\hbar\omega \sum_{i=2}^N{{{\bf \hat{A}}}_i^\dagger{{\bf \hat{A}}}_i}+ \hbar(\omega+\delta){{{\bf \hat{A}}}_1^\dagger{{\bf \hat{A}}}_1}.$$ The real functions $\lambda,\delta,\epsilon$ are implicitly defined in terms of the auxiliary function $\eta(t)$ $$\label{eta} \eta(t)=\exp\left( -\int_0^t \lambda(t') dt'-i\omega t -i\int_0^t \delta(t')dt' \right),$$ which satisfies the integrodifferential equation $$\label{Eq:Integrodif} \dot{\eta}+i\omega \eta + \int_0^t d\tau \sum_k |c_k|^2 {\rm e}^{-i\omega_k(t-\tau)} \eta(\tau)=0,$$ subject to the initial condition $\eta(0)=1$. Moreover, considering the environment in thermal equilibrium, we have $$\epsilon(t) = \frac{|\eta(t)|^2}{2 }\frac{d}{dt} \left( \sum_k \frac {|c_k|^2 n_k (\beta)}{|\eta(t)|^2} \left| \int_0^t d\tau e^{-i\omega_k (t-\tau)} \eta(\tau) \right|^2 \right),$$ where $n_k(\beta)$ is the mean excitation number for the $k^{\text{th}}$ mode of the environment at inverse temperature $\beta= 1/k_B T$. 
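Equation (\[Eq:Integrodif\]) is a Volterra integrodifferential equation and can be solved by direct time stepping. The sketch below (illustrative parameters, memory kernel taken as $\sum_k|c_k|^2 e^{-i\omega_k(t-\tau)}$) assumes a *single* bath mode, for which the one-excitation dynamics reduces to a $2\times2$ Hermitian problem and provides an exact benchmark:

```python
import numpy as np

# Single bath mode (exactly solvable), hypothetical parameters.
omega, omega_k, c = 1.0, 1.3, 0.4
dt, T = 1e-3, 4.0
n_steps = int(T / dt)
t = dt * np.arange(n_steps + 1)

# memory kernel f(s) = sum_k |c_k|^2 exp(-i omega_k s), one term here
f = abs(c) ** 2 * np.exp(-1j * omega_k * t)

eta = np.zeros(n_steps + 1, dtype=complex)
eta[0] = 1.0
for n in range(n_steps):
    # trapezoidal rule for the memory integral at t_n, Euler step in time
    if n == 0:
        I = 0.0
    else:
        w = np.full(n + 1, dt); w[0] = w[-1] = dt / 2
        I = np.sum(w * f[n::-1] * eta[:n + 1])
    eta[n + 1] = eta[n] + dt * (-1j * omega * eta[n] - I)

# exact one-excitation solution from the 2x2 Hermitian Hamiltonian
H2 = np.array([[omega, c], [np.conj(c), omega_k]])
w2, V = np.linalg.eigh(H2)
eta_exact = (np.abs(V[0]) ** 2 * np.exp(-1j * np.outer(t, w2))).sum(axis=1)

print(np.max(np.abs(eta - eta_exact)))   # small discretization error
```

For a quasi-continuum of bath modes the same scheme applies with the full kernel; only the array `f` changes.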
If the usual Born-Markov approximations hold, then $\delta(t) =0$, $\lambda(t)= \sum_i k_i := k$, and $\epsilon= k\bar{n}$, where the $k_i$ characterize the Markovian evolution when only the $i^{\text{th}}$ original oscillator is coupled to the bath, and $\bar{n}$ is the environment mean number of thermal excitations at frequency $\omega$. In this case the master equation simplifies to $$\label{Eq:BMMaster} \begin{split} \frac{d{{{\bf \hat{\rho}}}}}{dt} = & -i\omega\sum_{i=1}^N \left[{{{\bf \hat{A}}}_i^\dagger{{\bf \hat{A}}}_i},{{{\bf \hat{\rho}}}} \right] + k \left( \bar{n}+1\right) \left( 2{{{\bf \hat{A}}}_1{{\bf \hat{\rho}}} {{\bf \hat{A}}}_1^\dagger}-{{{\bf \hat{A}}}_1^\dagger {{\bf \hat{A}}}_1{{\bf \hat{\rho}}}} -{{\bf \hat{\rho}}}{{{\bf \hat{A}}}_1^\dagger {{\bf \hat{A}}}_1}\right)\\ & + k \bar{n}\left( 2{{{\bf \hat{A}}}_1^\dagger{{\bf \hat{\rho}}}{{\bf \hat{A}}}_1}-{{{\bf \hat{A}}}_1 {{\bf \hat{A}}}_1^\dagger{{\bf \hat{\rho}}}} -{{\bf \hat{\rho}}}{{{\bf \hat{A}}}_1{{\bf \hat{A}}}_1^\dagger}\right). \end{split}$$ As one should expect from eq. (\[finham\]), the master equation above describes the dissipative evolution of the collective mode ${{\bf \hat{A}}}_1$, and the independent unitary evolution of the remaining modes ${{\bf \hat{A}}}_i$, $i\geq 2$. It should also be noted that the damping constant $k$ of mode ${{\bf \hat{A}}}_1$ is larger (in general, much larger, for large $N$) than the individual constants $k_i$ of the modes ${{\bf \hat{a}}}_i$. This should be compared to the superradiance analogy discussed in section \[soscillators\]. Two oscillators in a dissipative environment {#s2osc} ============================================ From now on, we specialize to the case of two harmonic oscillators ($N=2$). 
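Before treating $N=2$ analytically, the zero-temperature master equation (\[Eq:BMMaster\]) can be integrated numerically in the one-excitation sector, where a two-level truncation per mode suffices. The sketch below (illustrative rates) checks that a photon in the collective mode ${{\bf \hat{A}}}_2$ is untouched, while a photon in ${{\bf \hat{A}}}_1$ decays to the vacuum:

```python
import numpy as np

k1, k2, omega = 0.5, 1.0, 2.0                    # illustrative parameters
theta = np.arctan(np.sqrt(k2 / k1))
a = np.array([[0, 1], [0, 0]], dtype=complex)    # 0/1-photon truncation
I2 = np.eye(2)
a1, a2 = np.kron(a, I2), np.kron(I2, a)
A1 = np.cos(theta) * a1 + np.sin(theta) * a2     # damped collective mode
A2 = -np.sin(theta) * a1 + np.cos(theta) * a2    # decoherence-free mode
H = omega * (a1.conj().T @ a1 + a2.conj().T @ a2)
k = k1 + k2                                      # damping constant of A_1

def rhs(rho):
    # zero-temperature (nbar = 0) Lindblad generator of eq. (Eq:BMMaster)
    return (-1j * (H @ rho - rho @ H)
            + k * (2 * A1 @ rho @ A1.conj().T
                   - A1.conj().T @ A1 @ rho - rho @ A1.conj().T @ A1))

def evolve(rho, T=5.0, dt=0.005):
    for _ in range(round(T / dt)):               # classical RK4 stepping
        r1 = rhs(rho); r2 = rhs(rho + dt / 2 * r1)
        r3 = rhs(rho + dt / 2 * r2); r4 = rhs(rho + dt * r3)
        rho = rho + dt / 6 * (r1 + 2 * r2 + 2 * r3 + r4)
    return rho

vac = np.zeros(4, dtype=complex); vac[0] = 1.0
psi_df, psi_sr = A2.conj().T @ vac, A1.conj().T @ vac
rho_df = np.outer(psi_df, psi_df.conj())
rho_sr = np.outer(psi_sr, psi_sr.conj())

print(np.linalg.norm(evolve(rho_df) - rho_df))              # ~0: protected
print(np.linalg.norm(evolve(rho_sr) - np.outer(vac, vac)))  # ~0: fully decayed
```

The protected state is in fact a fixed point of the generator: the right-hand side vanishes identically for `rho_df`.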
The collective modes can thus be written as $${{{\bf \hat{A}}}_1^{\dagger}} = \cos (\theta) {{{\bf \hat{a}}}_1^{\dagger}} + \sin(\theta) {{{\bf \hat{a}}}_2^{\dagger}},\quad {{{\bf \hat{A}}}_2^{\dagger}} = -\sin (\theta) {{{\bf \hat{a}}}_1^{\dagger}} + \cos(\theta) {{{\bf \hat{a}}}_2^{\dagger}}, \label{AiRai}$$ with $\tan \theta = G_2/G_1$ (this quotient can be taken as a positive number, if necessary, by redefining the mode ${{\bf \hat{a}}}_2$). If the Markovian approximation is made, this relation takes the form $\tan \theta = \sqrt{k_2/k_1}$. The expression (\[AiRai\]) can be considered as giving $\left\{ {{\bf \hat{A}}}_i^{\dagger}\right\}$ by applying a rotation operator ${{\mathcal{R}}}\left( \theta \right)$ to the set $\left\{ {{\bf \hat{a}}}_i^{\dagger}\right\}$. It is usual to call an operator which acts on operators a [*superoperator*]{}. Thus, ${{\mathcal{R}}}\left( \theta \right)$ is a [*rotation superoperator*]{}. It is convenient to represent superoperators using the algebraic relations among the operators on which they act, and a useful notation is to introduce a dot ($\bullet$) in the position where the operator to be acted on must be placed. For example, $\left[ {{\bf \hat{A}}}, \bullet \right] {{\bf \hat{B}}} = \left[ {{\bf \hat{A}}}, {{\bf \hat{B}}}\right]$, and $\left[ {{\bf \hat{A}}}, \bullet \right]^2 {{\bf \hat{B}}} = \left[ {{\bf \hat{A}}}, \left[ {{\bf \hat{A}}}, {{\bf \hat{B}}}\right] \right]$. With this convention in mind one can verify that $${{\mathcal{R}}}\left( \theta \right) = \exp{\left\{{\theta \left[ {{\bf \hat{a}}}_1{{\bf \hat{a}}}_2^{\dagger} - {{\bf \hat{a}}}_2{{\bf \hat{a}}}_1^{\dagger}, \bullet \right]}\right\}}.$$ Superoperators are very useful to study time evolution. 
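The dot notation can be mimicked directly in code by treating superoperators as functions on matrices. The sketch below (random $4\times4$ matrices, purely illustrative) checks the rule $[{{\bf \hat{A}}},\bullet]^2{{\bf \hat{B}}}=[{{\bf \hat{A}}},[{{\bf \hat{A}}},{{\bf \hat{B}}}]]$ and the identity behind the rotation superoperator, $e^{\theta[J,\bullet]}B = e^{\theta J}Be^{-\theta J}$, which holds for arbitrary square matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

# the "dot" superoperator [A, .] as an ordinary Python function
ad_A = lambda X: A @ X - X @ A

# [A, .]^2 B = [A, [A, B]]
assert np.allclose(ad_A(ad_A(B)),
                   A @ (A @ B - B @ A) - (A @ B - B @ A) @ A)

def expm(M, terms=60):
    # matrix exponential by Taylor series (fine for small matrices)
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

theta, J = 0.7, rng.normal(size=(4, 4))
lhs, term = B.copy(), B.copy()
for n in range(1, 60):          # series for exp(theta [J, .]) applied to B
    term = theta * (J @ term - term @ J) / n
    lhs = lhs + term
rhs = expm(theta * J) @ B @ expm(-theta * J)
print(np.max(np.abs(lhs - rhs)))   # agreement to machine precision
```

This is exactly the identity used to pass between eqs. (\[SuperUA\]) and (\[SuperU\]) below, with $J = {{\bf \hat{a}}}_1{{\bf \hat{a}}}_2^{\dagger}-{{\bf \hat{a}}}_2{{\bf \hat{a}}}_1^{\dagger}$.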
Just as one can define an evolution operator ${{\bf \hat{U}}}\left( t\right)$ by ${{\bf \hat{U}}}\left( t\right) {\left| \psi \left( 0\right) \right\rangle} = {\left| \psi \left( t\right) \right\rangle}$, the [*evolution superoperator*]{} is defined by ${{\mathcal{U}}}\left( t\right) {{\bf \hat{\rho}}}\left( 0\right) = {{\bf \hat{\rho}}}\left( t\right)$. One completely solves the time evolution of a system by writing its evolution superoperator. Equation (\[Eq:Master\]) is solved by the superoperator $$\label{SuperUA} {{\mathcal{U}}}\left( t\right) = e^{-i\omega t\left[{{{\bf \hat{A}}}_2^\dagger {{\bf \hat{A}}}_2} ,\bullet \right]} v e^{\left( 1-v\right) {{{\bf \hat{A}}}^\dagger_1 \bullet {{\bf \hat{A}}}_1}} e^{x {{{\bf \hat{A}}}^\dagger_1 {{\bf \hat{A}}}_1 \bullet}} e^{x^*{\bullet {{\bf \hat{A}}}^\dagger_1 {{\bf \hat{A}}}_1}} e^{z {{{\bf \hat{A}}}_1\bullet {{\bf \hat{A}}}^\dagger_1}}$$ where the coefficients $v(t)$, $x(t)$ and $z(t)$ can be given in terms of the functions $\eta(t)$, eq. (\[eta\]), and $\mathcal{N}(t)$, $${\cal N}(t)= \int_0^t d\tau \epsilon(\tau) \left|\frac{\eta(\tau)}{\eta(t)}\right|^2,$$ as follows $$\nonumber v(t) = \frac{1}{1+{\cal N}(t)}, \quad x(t) = \ln \frac{\eta(t)}{\sqrt{1+{\cal N}(t)}}, \quad z(t) = 1-\frac{\left|\eta(t)\right|^{-2}}{1+{\cal N}(t)}.$$ If the Markovian limit is applied, the preceding formulas reduce to $$v=\frac{1}{1+\bar{n} (1-e^{-2(k_1+k_2)t})},\quad x= \ln \frac{e^{(-i\omega-k_1-k_2) t}}{\sqrt{1+\bar{n}(1-e^{-2( k_1 +k_2)t})}},\quad z=\frac{(\bar{n}+1)(1-e^{-2(k_1+k_2)t})}{1+\bar{n}(1-e^{-2(k_1+k_2)t})}.$$ The evolution superoperator ${{\mathcal{U}}}\left( t\right)$ can be expressed in terms of the original mode operators ${{\bf \hat{a}}}_i$ by using the rotation superoperator ${{\mathcal{R}}}\left( \theta \right)$ in the following way $$\label{SuperU} \mathcal{U}(t) = e^{\theta \left[{{{\bf \hat{a}}}_1 {{\bf \hat{a}}}_2^\dagger}-{{{\bf \hat{a}}}_2 {{\bf \hat{a}}}_1^\dagger},\bullet \right]} e^{-i\omega t\left[{{{\bf 
\hat{a}}}_2^\dagger {{\bf \hat{a}}}_2} ,\bullet \right]} v e^{\left( 1-v\right) {{{\bf \hat{a}}}^\dagger_1} \bullet {{{\bf \hat{a}}}_1}} e^{x {{{\bf \hat{a}}}^\dagger_1 {{\bf \hat{a}}}_1} \bullet} e^{x^*{\bullet {{\bf \hat{a}}}^\dagger_1 {{\bf \hat{a}}}_1}} e^{z {{{\bf \hat{a}}}_1}\bullet {{{\bf \hat{a}}}^\dagger_1}} e^{-\theta \left[ {{{\bf \hat{a}}}_1 {{\bf \hat{a}}}_2^\dagger}-{{{\bf \hat{a}}}_2 {{\bf \hat{a}}}_1^\dagger},\bullet \right]}.$$ Since the second collective mode is effectively decoupled from the environment, any density operator in the Hilbert space of this mode, multiplied by the asymptotic density operator of the coupled collective mode (provided the latter exists), will undergo unitary evolution. For simplicity, from here on we restrict ourselves to the zero temperature case. Any density operator of the form (kets ${\left| m,n \right\rangle}$ refer to the original modes ${{\bf \hat{a}}}_i$): $$\begin{split} {{{\bf \hat{\rho}}}} & = \sum_{n,m} \frac{{{\rho}}_{n,m}}{n!m!} \left( {{\bf \hat{A}}}_2^{\dagger}\right)^n {\left| 0,0 \right\rangle}{\left\langle 0,0 \right|} \left( {{\bf \hat{A}}}_2\right)^m \\ & = \sum_{n,m} \frac{{{\rho}}_{n,m}}{n!m!} \left(-{{{\bf \hat{a}}}_1^\dagger} \sin{\theta}+{{{\bf \hat{a}}}_2^\dagger} \cos{\theta}\right)^n {\left| 0,0 \right\rangle}{\left\langle 0,0 \right|} \left(-{{{\bf \hat{a}}}_1} \sin{\theta}+{{{\bf \hat{a}}}_2} \cos{\theta}\right)^m \\ & = \sum_{n,m,n_1,m_1} {{\rho}}_{n,m} \frac{\sqrt{n!m!}(-\sin\theta)^{n+m-n_1-m_1}(\cos\theta)^{n_1+m_1}} {\sqrt{(n-n_1)!n_1!(m-m_1)!m_1!}} \\ & \qquad \qquad \quad \times {\left| n_1,n-n_1 \right\rangle}{\left\langle m_1,m-m_1 \right|}, \end{split}$$ will be protected against dissipation and decoherence. 
In fact, applying the evolution superoperator to an initial density matrix of this form, we obtain $$\begin{split} {{{\bf \hat{\rho}}}}(t) & = \sum_{n,m,n_1,m_1} e^{-i\omega t (n-m)}{{\rho}}_{n,m} \frac{\sqrt{n!m!}(-\sin\theta)^{n+m-n_1-m_1}(\cos\theta)^{n_1+m_1}} {\sqrt{(n-n_1)!n_1!(m-m_1)!m_1!}} \\ & \qquad \qquad\qquad\qquad \qquad \times {\left| n_1,n-n_1 \right\rangle}{\left\langle m_1,m-m_1 \right|}, \end{split}$$ as expected. Now, we apply the evolution superoperator to the initial density operator $$\label{initialcondition} {{{\bf \hat{\rho}}}}\left( 0\right)=\left( \cos{\alpha} {\left| 1,0 \right\rangle}+ e^{i\phi}\sin{\alpha}{\left| 0,1 \right\rangle}\right) \left( \cos{\alpha}{\left\langle 1,0 \right|}+e^{-i\phi}\sin{\alpha}{\left\langle 0,1 \right|}\right),$$ which can be viewed as a one-photon Fock state of the mode given by the creation operator $$\label{mode} {{\bf \hat{A}}}^{\dagger} \left( \alpha ,\phi \right) = \cos{\alpha}\ {{\bf \hat{a}}}_1^{\dagger} + e^{i\phi}\sin{\alpha}\ {{\bf \hat{a}}}_2^{\dagger},$$ where $\alpha$ and $\phi$ can be compared to Stokes parameters describing polarization. We will determine how the dissipative properties of the mode $\left( \alpha, \phi \right)$ depend on these parameters. As this is a natural way to test DFS experimentally[@Science2], a more realistic situation is discussed in the next section. 
The above state asymptotically approaches the rank $2$ density operator given by $$\label{asympdens} {{{\bf \hat{\rho}}}}_{t\rightarrow\infty} = P {\left| \psi \right\rangle}{\left\langle \psi \right|} +\left( 1-P\right){\left| 0,0 \right\rangle}{\left\langle 0,0 \right|}$$ where the state ${\left| \psi \right\rangle}$ depends only on the individual decay rates $k_i$, $${\left| \psi \right\rangle} =\frac{\sqrt{k_2} {\left| 1,0 \right\rangle} -\sqrt{k_1} {\left| 0,1 \right\rangle}}{\sqrt{k_1+k_2}}$$ and the weight $P$ of this state is given by $$P = {\left\langle \psi \right|}{{\bf \hat{\rho}}}\left( 0\right) {\left| \psi \right\rangle} = \left| \frac{\sqrt{k_2}\cos{\alpha}-\sqrt{k_1}e^{i\phi}\sin{\alpha}} {\sqrt{k_1+k_2}} \right|^2.$$ Observe that by varying $\alpha$ and $\phi$ we can go from total preservation to total leakage. For example, if we set $\tan(\alpha) = \sqrt{k_2/k_1}$, and $\phi=0$, then the full state will leak to the ground state ${\left| 0,0 \right\rangle}$, since in this case $\alpha = \theta$ and the initial photon was in the “superradiant” mode ${{\bf \hat{A}}}_1$. On the other hand, if we set $\tan(\alpha) = -\sqrt{k_1/k_2}$, and $\phi=0$, then the initial state will be exactly equal to ${\left| \psi \right\rangle}$ (one photon in mode ${{\bf \hat{A}}}_2$), and will persist at all times with probability 1 (aside from an unimportant global phase). All other combinations will go to the density operator (\[asympdens\]), which can be considered as an ensemble of the pure state ${\left| \psi \right\rangle}$ with probability $P$ and the ground state ${\left| 0,0 \right\rangle}$ with probability $1-P$. One can define the asymptotic fidelity, $F_{\infty}\left( \alpha ,\phi \right)$, which is the overlap between the initial and asymptotic density matrices. 
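These limiting cases are easy to check numerically from the expression for $P$ (the rates below are illustrative):

```python
import numpy as np

k1, k2 = 0.4, 0.9                      # illustrative decay rates

def P(alpha, phi):
    # weight of the surviving component |psi>, from the formula in the text
    return abs(np.sqrt(k2) * np.cos(alpha)
               - np.sqrt(k1) * np.exp(1j * phi) * np.sin(alpha)) ** 2 / (k1 + k2)

# photon prepared in the superradiant mode A_1 (alpha = theta): total leakage
print(P(np.arctan(np.sqrt(k2 / k1)), 0.0))    # ~0
# photon prepared in the protected mode A_2: total preservation
print(P(np.arctan(-np.sqrt(k1 / k2)), 0.0))   # ~1
```

Intermediate choices of $(\alpha,\phi)$ interpolate continuously between these two extremes.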
In the above example it is given by $$F_{\infty}\left( \alpha ,\phi \right) = \left| \frac{(\sqrt{k_2}\cos(\alpha)-\sqrt{k_1}e^{i\phi}\sin(\alpha)) (\sqrt{k_2}\cos(\alpha)-\sqrt{k_1}e^{-i\phi}\sin(\alpha))} {k_1+k_2}\right|^2.$$ Effects of more Realistic Modeling {#sreal} ================================== We remark that the results above were obtained under a number of assumptions, which will be relaxed below. Notice that the use of the rotating wave approximation (RWA) is not essential in obtaining the decoupled mode: any interaction linear in the field operators would do as well (provided the other assumptions hold). Had we chosen an interaction linear in the identical oscillators but nonlinear in the environmental operators, we would also have obtained a decoupled collective mode. In these cases, however, the complication would be only of a technical nature, leading to (much) more complex dynamics. Another important hypothesis for obtaining DFS is that of identical frequencies of the original oscillators. Of course, any interaction between them would destroy the symmetry upon which the existence of DFS rests. On the other hand, we have assumed that the oscillator-environment coupling satisfies $g_{ik} = G_i D_k$, which amounts to a separable coupling. It is not an easy task to find realizations of such interactions in nature, given their nonlocal character. However, it might be a good approximation in special circumstances, as *e.g.* optical cavities. 
A particular consequence of the separability hypothesis can be seen by writing the master equation, in the zero temperature limit, in terms of the original oscillators (with different frequencies for generality) $$\begin{aligned} \label{MasterdaSonia} {\cal L}_0 & = &\left(-i\omega_{1} - {k_1}\right) {{\bf \hat{a}}}^{\dagger}_1{{\bf \hat{a}}}_1\bullet +\left( i\omega_{1}-{k_1}\right) \bullet {{\bf \hat{a}}}^{\dagger}_1{{\bf \hat{a}}}_1 +2{k_1}{{\bf \hat{a}}}_1 \bullet {{\bf \hat{a}}}^{\dagger}_1 + \nonumber\\ & & \left(-i\omega_2-{k_{2}}\right) {{\bf \hat{a}}}^{\dagger}_2{{\bf \hat{a}}}_2\bullet +\left( i\omega_{2}-{k_{2}}\right) \bullet {{\bf \hat{a}}}^{\dagger}_2{{\bf \hat{a}}}_2 +2{k_{2}}{{\bf \hat{a}}}_2 \bullet {{\bf \hat{a}}}^{\dagger}_2 + \nonumber\\ &&{k_{3}}\left(2{{\bf \hat{a}}}_1\bullet {{\bf \hat{a}}}^{\dagger}_2-{{\bf \hat{a}}}^{\dagger}_2{{\bf \hat{a}}}_1\bullet - \bullet {{\bf \hat{a}}}^{\dagger}_2{{\bf \hat{a}}}_1\right) +{k_{3}}^*\left(2{{\bf \hat{a}}}_2\bullet {{\bf \hat{a}}}^{\dagger}_1-{{\bf \hat{a}}}^{\dagger}_1{{\bf \hat{a}}}_2\bullet - \bullet {{\bf \hat{a}}}^{\dagger}_1{{\bf \hat{a}}}_2\right).\end{aligned}$$ The new quantity $k_3$ appears since we consider the same environment interacting with both oscillators. These terms can be considered as an interaction between the oscillators mediated by the environment. If the separability condition is fulfilled, $\left|k_3\right| ^2 = k_1k_2$, and if the oscillators are identical, eq. (\[MasterdaSonia\]) is the same as eq. (\[Eq:BMMaster\]). The other limiting case is to consider both oscillators interacting independently with the environment. In this situation, the independence of the phases of the interaction coefficients $g_{ik}$ makes their net contribution to $k_3$ vanish, and eq. (\[MasterdaSonia\]) will just describe two independent damped harmonic oscillators. 
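These limiting cases can be organized through the coefficient matrix of the dissipator. Writing the dissipative part of eq. (\[MasterdaSonia\]) as $\sum_{ij}K_{ij}(2{{\bf \hat{a}}}_i\bullet{{\bf \hat{a}}}_j^{\dagger}-{{\bf \hat{a}}}_j^{\dagger}{{\bf \hat{a}}}_i\bullet-\bullet{{\bf \hat{a}}}_j^{\dagger}{{\bf \hat{a}}}_i)$ (a bookkeeping convention, not a new result), complete positivity requires $K\succeq 0$, and the separable case is exactly the rank-one case, whose zero eigenvector gives the decoherence-free mode. A quick numerical sketch with illustrative rates:

```python
import numpy as np

def kossakowski(k1, k2, k3):
    # coefficient matrix of the dissipator: sum_ij K_ij (2 a_i . a_j^dag - ...)
    return np.array([[k1, k3], [np.conj(k3), k2]])

k1, k2 = 0.4, 0.9                               # illustrative rates

# separable coupling: |k3|^2 = k1 k2, the matrix is rank one -> a single
# jump operator (the collective mode A_1) and a decoherence-free partner
print(np.linalg.eigvalsh(kossakowski(k1, k2, np.sqrt(k1 * k2))))

# generic environment: |k3|^2 < k1 k2, both eigenvalues positive -> both
# collective modes are damped and no DFS survives
print(np.linalg.eigvalsh(kossakowski(k1, k2, 0.8 * np.sqrt(k1 * k2))))

# |k3|^2 > k1 k2 would violate positivity (Cauchy-Schwarz bound)
assert np.min(np.linalg.eigvalsh(
    kossakowski(k1, k2, 1.1 * np.sqrt(k1 * k2)))) < 0
```

The zero eigenvalue in the separable case is the algebraic signature of the DFS; its eigenvector is the coefficient vector of ${{\bf \hat{A}}}_2$.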
Our interest is to study the above equation when the conditions for the existence of DFS are almost satisfied, [*[i.e.]{}*]{} $\left|k_3\right| ^2 \approx k_1k_2$ and $\omega _1 \approx \omega _2$. It must be noted that, by the Cauchy-Schwarz inequality, $\left|k_3\right| ^2 \leq k_1k_2$. We should point out that we are not using the usual approach of perturbation theory, of adding a small perturbation $\varepsilon {{\bf \hat{H}}}'$. In Ref. [@BLW], the authors have shown that DFS are robust up to order $\varepsilon$ in the perturbation, and to all orders in time, so our approach must be connected to second-order (in $\varepsilon$) perturbation theory. The explicit solution to this problem is given in the appendix. As expected, there is no DFS without the separability and degeneracy assumptions, but if we are close to these conditions, we can obtain states much more robust against decoherence and dissipation than others. As in the previous section, consider the one-photon states of eq. (\[initialcondition\]). Then we can define a [*weak decoherence mode*]{} (WD), which tends to the DFS mode as degeneracy and separability are approached, and a [*strong decoherence mode*]{} (SD) which is analogous to the superradiant mode ${{\bf \hat{A}}}_1$. We want to explore the slight deviations from separability and degeneracy, so we define $\delta k$ and $\delta \omega$ by $$\delta k = \sqrt{k_1k_2} - \left| k_3\right|, \quad 2\delta \omega = \omega _2 - \omega _1,$$ and consider $\delta \omega \ll \omega _i$ and $\delta k \ll k_i$. As in the previous section, varying the parameters of the initial state (\[initialcondition\]) can be interpreted as varying the mode of the initial photon. 
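In the one-excitation sector the mean amplitudes $\langle{{\bf \hat{a}}}_i\rangle$ obey a closed linear system whose drift matrix can be read off from eq. (\[MasterdaSonia\]) (we take this form as an assumption, with real $k_3$ for simplicity). Its eigenvalues give the two decay rates directly, and for small $\delta k$ (here with $\delta\omega = 0$) the smaller one reproduces the weak-decoherence rate $2\delta k\sqrt{k_1k_2}/(k_1+k_2)$:

```python
import numpy as np

k1, k2, omega = 0.4, 0.9, 5.0          # illustrative parameters
dk = 0.01                              # small deviation from separability
k3 = np.sqrt(k1 * k2) - dk
w1 = w2 = omega                        # degenerate case, delta_omega = 0

# drift matrix of the mean amplitudes <a_i> implied by eq. (MasterdaSonia)
M = np.array([[-1j * w1 - k1, -k3],
              [-k3, -1j * w2 - k2]])
rates = np.sort(-np.linalg.eigvals(M).real)    # decay rates of the two modes

k_wd = 2 * dk * np.sqrt(k1 * k2) / (k1 + k2)   # predicted weak rate
k_sd = k1 + k2                                 # predicted strong rate
print(rates, (k_wd, k_sd))
```

Note that the two rates always sum to $k_1+k_2$ exactly, so whatever the weak mode gains in longevity, the strong mode loses.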
In the regime discussed above we obtain: $$\label{wsmodes} {{\bf \hat{A}}}^{\dagger}_{{\stackrel{\scriptstyle{\mbox{\tiny{SD}}}}{\scriptstyle{\mbox{\tiny{WD}}}}}} = \frac{1}{\sqrt{k_1 + k_2}} \left( \sqrt{k_{{\stackrel{\scriptstyle{1}}{\scriptstyle{2}}}}} {{\bf \hat{a}}}^{\dagger}_1 \pm \sqrt{k_{{\stackrel{\scriptstyle{2}}{\scriptstyle{1}}}}}e^{\pm i\frac{\delta \omega}{k}}{{\bf \hat{a}}}^{\dagger}_2 \right),$$ which must be compared to eq. (\[mode\]). The new parameter $k$ is a kind of effective mean damping, and in the regime discussed here can be taken as $k \approx \left( k_1 + k_2\right) /2$. Each mode has its own damping constant, and these two are the extrema. Explicitly we have $$k_{WD} = \frac{2\delta k \sqrt{k_1k_2}}{k_1 + k_2} \approx \delta k, \label{kwd}$$ for the weak decoherence mode, and $$k_{SD} = k_1 + k_2 \approx 2k \label{ksd}$$ for the strong decoherence mode. One must note that while $k_{SD}$ is of the same order as the individual damping constants $k_i$, the value of $k_{WD}$ can be much lower. In the experiment with ions[@Science2] it was exactly this lowering of the damping constant that was exhibited as evidence of decoherence “free” subspaces. In the same experiment, one can see that the difference in damping constants is much larger in the situation with an engineered noise applied, since in this case the interaction with the environment is much closer to the separability condition. Concluding Remarks {#scremarks} ================== We have studied decoherence and dissipation free modes for systems of harmonic oscillators. We discussed sufficient conditions for their existence. Although theoretically simple, these conditions are very difficult to implement in practical experiments. So we studied slight deviations from these conditions, and instead of decoherence free subspaces, we obtained [*weak decoherence modes*]{}. 
This suggests that weak decoherence subspaces can be used to store quantum information for times much longer than individual carriers would allow, even without being rigorous DFS. We compared DFS to the so-called super- and subradiance effects of a maser. In fact, when in the last section we compare the damping constants for weak and strong decoherence (eqs. (\[kwd\]) and (\[ksd\])), this mimics an interference problem, where we are comparing the maximum and the minimum of a certain quantity in which interference effects are recorded (in this case, the damping constant). It is important to stress that larger deviations from the rigorous conditions for DFS preclude the existence of even weak decoherence subspaces, by making the decoherence time scales for such states smaller. However, one can conjecture that this kind of mechanism is so general that whenever an experiment obtains quantum mechanical results, it is testing some kind of DFS (*e.g.* the fullerenes experiment[@Arnetal]). We gratefully acknowledge comments from A. N. Salgueiro. This work was partly funded by FAPESP, CNPq and PRONEX (Brazil), and Colciencias, DINAIN (Colombia). K.M.F.R. gratefully acknowledges the Instituto de Física, Universidade de São Paulo, for their hospitality and PRONEX for partial support. 
Appendix: Realistic model of two oscillators in details {#appendix-realistic-model-of-two-oscillators-in-details .unnumbered} ======================================================= The evolution superoperator for eq.(\[MasterdaSonia\]) can be expressed as[@Sonia] $$\begin{aligned} \label{rot} {{\mathcal{U}}}{\left( t \right)} & = & e^{{{ j}_{1}{(t)}}{{{\bf \hat{a}}_{1}\cdot}}{{{\bf \hat{a}}_{1}^{\dag}}}}e^{{{ j}_{2}{(t)}}{{{\bf \hat{a}}_{2}\cdot}}{{{\bf \hat{a}}_{2}^{\dag}}}}e^{{{ z}{{(t)}}}{{{\bf \hat{a}}_{2}\cdot}}{{{\bf \hat{a}}_{1}^{\dag}}}} e^{{{ z}^{*}{{(t)}}}{{{\bf \hat{a}}_{1}\cdot}}{{{\bf \hat{a}}_{2}^{\dag}}}} e^{{{ q}{{(t)}}}{{{\bf \hat{a}}}_{1}}{{{\bf \hat{a}}_{2}^{\dag}\cdot}}} e^{{{ q}^{*}{{(t)}}}{{{\bf \cdot\hat{a}}_{1}^{\dag}}}{{{\bf \hat{a}}}_{2}}}e^{{{ m}_{2}{(t)}}{{{\bf \hat{a}}_{2}^{\dag}}}{{{\bf \hat{a}}_{2}\cdot}}} e^{{{ m}^{*}_{2}{(t)}}{{{\bf \cdot\hat{a}}_{2}^{\dag}}}{{{\bf \hat{a}}}_{2}}} \odot\nonumber\\ &&\odot e^{{{ m}_{1}{(t)}}{{{\bf \hat{a}}_{1}^{\dag}}}{{{\bf \hat{a}}_{1}\cdot}}} e^{{{ m}^{*}_{1}{(t)}}{{{\bf \cdot\hat{a}}_{1}^{\dag}}}{{{\bf \hat{a}}}_{1}}} e^{{{ q}{{(t)}}}{{{\bf \hat{a}}_{1}^{\dag}}}{{{\bf \hat{a}}_{2}\cdot}}} e^{{{ q}^{*}{{(t)}}}{{{\bf \cdot\hat{a}}_{1}^{\dag}}}{{\bf \hat{a}}}_2}\end{aligned}$$ where $$\begin{aligned} &&R=\frac{ k_2+ k_1}{2}+\frac{i\left(\omega_2+\omega_1\right)}{2},\quad c= k_2- k_1 +i\left(\omega_2-\omega_1\right),\quad r=\sqrt{c^2+4k_3^{2}},\quad \Delta_{\pm}= c\pm r \\ &&{{ q}{{(t)}}}= 2k_3\left(1- e^{r\;t}\right)\left( \Delta_{+}e^{r\;t}-\Delta_{-}\right)^{-1}\quad {\textnormal{for}}\quad r\not=0 \\ &&e^{{{ m}_{1}{(t)}}}=\frac{e^{-R\;t}}{2r} e^{-\frac{r\;t}{2}}\left(\Delta_{+}e^{r\;t}-\Delta_{-}\right), \quad e^{{{ m}_{2}{(t)}}}=e^{-2R\;t}e^{-{{ m}_{1}{(t)}}} \\ &&{{ j}_{2}{(t)}}=\left(1+|{{ q}{{(t)}}}|^{2}\right) \left(\left|e^{{{ m}_{2}{(t)}}}\right|^{-2}\right)-1\\ &&{{ j}_{1}{(t)}}=\left|e^{-{{ m}_{1}{(t)}}}+{{ q}{{(t)}}}^2 e^{-{{ m}_{2}{(t)}}}\right|^{2} +\left(|{{ 
q}{{(t)}}}|^2\right)\left(\left| e^{{{ m}_{2}{(t)}}}\right|^{-2}\right)-1\\ &&{{ z}{{(t)}}}=-{{ q}{{(t)}}}e^{-\left({{ m}^{*}_{1}{(t)}}+{{ m}_{2}{(t)}}\right)}-{{ q}^{*}{{(t)}}}\left( 1+|{{ q}{{(t)}}}|^{2}\right)\left|e^{{{ m}_{2}{(t)}}}\right|^{-2}.\end{aligned}$$ So far in this calculation neither the separability nor the (even approximate) degeneracy conditions have been used. For the sake of comparison we use the same initial condition as in section \[s2osc\] (eq. (\[initialcondition\])). In the general case its time evolution is again of the form $${{{\bf \hat{\rho}}}}(t) = P(t) {\left| \psi(t) \right\rangle}{\left\langle \psi(t) \right|} +(1-P(t)){\left| 0,0 \right\rangle}{\left\langle 0,0 \right|}$$ but now the state ${\left| \psi(t) \right\rangle}$ is $${\left| \psi(t) \right\rangle} = \frac{(\cos(\theta)M_-(t)+\sin(\theta)e^{i\phi} Q(t)) {\left| 1,0 \right\rangle} +(\sin(\theta)e^{i\phi} M_+(t)+\cos(\theta)Q(t)){\left| 0,1 \right\rangle}}{\sqrt{P(t)}},$$ and its normalization coefficient is given by $$P(t) = \left| \cos(\theta)M_-(t)+\sin(\theta)e^{i\phi} Q(t) \right|^2+ \left| \sin(\theta)e^{i\phi} M_+(t)+\cos(\theta)Q(t) \right|^2,$$ where the functions $M_{\pm}(t)$ and $Q(t)$ are given by $$M_{\pm}(t) = \frac{e^{-R t}}{2} \left( e^{- r t/2} (1\mp\frac{c}{r}) +e^{r t/2} (1\pm\frac{c}{r}) \right) , \quad Q(t) = \frac{k_3}{r} e^{-R t}\left(e^{- r t/2} - e^{r t/2}\right).$$ Now we assume slight deviations from degeneracy and separability, that is, $\omega_1 = \omega-\delta \omega$, $\omega_2 = \omega+\delta \omega$, $k_3 = \sqrt{k_1 k_2} -\delta k$, with $\delta \omega\ll \omega$, $\delta k\ll \sqrt{k_1 k_2}$.
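As a numerical sanity check of the expressions for $M_{\pm}(t)$, $Q(t)$ and $P(t)$, they can be evaluated directly. The sketch below uses illustrative parameter values (not taken from any experiment): in the separable, degenerate case $k_3=\sqrt{k_1 k_2}$, $\omega_1=\omega_2$, the antisymmetric initial state ($\theta=\pi/4$, $\phi=\pi$) should remain decoherence free, while the symmetric one should decay superradiantly.

```python
import numpy as np

def survival_probability(t, theta, phi, k1, k2, k3, w1, w2):
    # Evaluates P(t) from the closed-form M_pm(t) and Q(t) above.
    # Assumes r != 0 (true for the parameter values used here).
    R = (k2 + k1) / 2 + 1j * (w2 + w1) / 2
    c = (k2 - k1) + 1j * (w2 - w1)
    r = np.sqrt(c**2 + 4 * k3**2 + 0j)          # complex square root
    M_minus = 0.5 * np.exp(-R * t) * (np.exp(-r * t / 2) * (1 + c / r)
                                      + np.exp(r * t / 2) * (1 - c / r))
    M_plus = 0.5 * np.exp(-R * t) * (np.exp(-r * t / 2) * (1 - c / r)
                                     + np.exp(r * t / 2) * (1 + c / r))
    Q = (k3 / r) * np.exp(-R * t) * (np.exp(-r * t / 2) - np.exp(r * t / 2))
    amp10 = np.cos(theta) * M_minus + np.sin(theta) * np.exp(1j * phi) * Q
    amp01 = np.sin(theta) * np.exp(1j * phi) * M_plus + np.cos(theta) * Q
    return abs(amp10)**2 + abs(amp01)**2
```

For $k_1=k_2=k_3=0.1$ and $\omega_1=\omega_2$, the antisymmetric state gives $P(t)=1$ at all times, while the symmetric state decays as $e^{-2(k_1+k_2)t}$.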
Then, the state ${\left| \psi(t) \right\rangle}$ can be approximated as $${\left| \psi(t) \right\rangle} = e^{-i\omega t} \left( \frac{\zeta_1{\left| 1,0 \right\rangle} +\xi_1{\left| 0,1 \right\rangle}}{\sqrt{P(t)}} e^{-(k_1+k_2)t} +\frac{\zeta_2 {\left| 1,0 \right\rangle} +\xi_2 {\left| 0,1 \right\rangle}}{\sqrt{P(t)}} e^{-\frac{2\delta k \sqrt{k_1 k_2}}{k_1+k_2}t} \right), \label{genstate}$$ where the $\zeta_i,\xi_i$ have no temporal dependence and are given by $$\begin{aligned} \zeta_{{\stackrel{\scriptstyle{1}}{\scriptstyle{2}}}} & = & \frac{(k_{{\stackrel{\scriptstyle{1}}{\scriptstyle{2}}}}\mp i\delta \omega)\cos(\alpha) \pm\sqrt{k_1k_2}\sin(\alpha)e^{i\phi}}{k_1+k_2}, \\ \xi_{{\stackrel{\scriptstyle{1}}{\scriptstyle{2}}}} & = & \frac{(k_{{\stackrel{\scriptstyle{2}}{\scriptstyle{1}}}}\pm i\delta \omega) e^{i\phi}\sin(\alpha) \pm\sqrt{k_1k_2}\cos(\alpha)}{k_1+k_2}.\end{aligned}$$ In the general case it is not possible to find initial conditions which are completely decoherence free. Nor is it possible to find two orthogonal subspaces with very different characters as far as decoherence is concerned. However, we can choose the initial condition so as to have a minimal component either in the strong decoherence (SD) or in the weak decoherence (WD) subspace, by choosing, e.g., $$\tan(\alpha)_{{\stackrel{\scriptstyle{\mbox{\tiny{SD}}}}{\scriptstyle{\mbox{\tiny{WD}}}}}} = \pm \sqrt{\frac{k_{{\stackrel{\scriptstyle{2}}{\scriptstyle{1}}}}}{k_{{\stackrel{\scriptstyle{1}}{\scriptstyle{2}}}}}}, \quad \phi_{{\stackrel{\scriptstyle{\mbox{\tiny{SD}}}}{\scriptstyle{\mbox{\tiny{WD}}}}}} = \pm \frac{\delta\omega}{k},$$ where $k$ is some average dissipation constant.
The corresponding states, apart from a phase, can be written as $$\label{psiwd} {\left| \psi_{{\stackrel{\scriptstyle{{\mbox{\tiny{SD}}}}}{\scriptstyle{\mbox{\tiny{WD}}}}}} \right\rangle} = \frac{1}{\sqrt{k_1+k_2}} \left(\sqrt{k_{{\stackrel{\scriptstyle{1}}{\scriptstyle{2}}}}}{\left| 1,0 \right\rangle} \pm\sqrt{k_{{\stackrel{\scriptstyle{2}}{\scriptstyle{1}}}}}e^{\pm i \delta \omega/k}{\left| 0,1 \right\rangle} \right).$$ If $\delta \omega\ll k_1, k_2$ the phase can be ignored. Moreover, if $k_1=k_2$ then we have $k=k_1=k_2$ and the phase can be unambiguously determined. The weak decoherence wavefunction defines a mode which is robust against decoherence. The damping constant of this mode can be read from eq. (\[genstate\]) as the value given in eq. (\[kwd\]). Analogously for the strong decoherence mode, with damping constant given by eq. (\[ksd\]). [99]{} M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, 2000). W. H. Zurek, *Physics Today* **44** (Oct.), 36 (1991). D. Giulini *et al.*, *Decoherence and the Appearance of a Classical World in Quantum Theory* (Springer, 1996). P. W. Shor, **52**, 2493 (1995); A. Ekert and C. Macchiavello, **77**, 2585 (1996); D. Gottesman, **54**, 1862 (1996); A. R. Calderbank *et al.*, **78**, 405 (1997). L. Viola, E. Knill, and S. Lloyd, **82**, 2417 (1999). D. Vitali and P. Tombesi, **65**, 012305 (2001). G. M. Palma, K.-A. Suominen, and A. K. Ekert, *Proc. R. Soc. London Ser. A* **452**, 567 (1996); L.-M. Duan and G.-C. Guo, **79**, 1953 (1997); A. Beige, D. Braun, B. Tregenna, and P. L. Knight, **85**, 1762 (2000). D. A. Lidar, I. L. Chuang, and K. B. Whaley, **81**, 2594 (1998). P. Zanardi and M. Rasetti, **79**, 3306 (1997); P. Zanardi, **60**, R729 (1999). P. G. Kwiat *et al.*, *Science* **290**, 498 (2000). D. Kielpinski *et al.*, *Science* **291**, 1013 (2001). Q. A. Turchette *et al.*, **75**, 4710 (1995); A. Imamoglu *et al.*, *ibid.* **83**, 4204 (1999); A.
Rauschenbeutel *et al.*, *ibid.* **83**, 5166 (1999). J. I. Cirac and P. Zoller, *Nature* **404**, 579 (2000), **74**, 4091 (1995); C. Monroe *et al.*, *ibid.* **75**, 4714 (1995); K. Molmer and A. Sorensen, *ibid.* **82**, 1835 (1999); C. A. Sackett *et al.*, *Nature* **393**, 133 (2000). N. A. Garshenfeld and I. L. Chuang, *Science* **275**, 350 (1997); E. Knill *et al.*, **57**, 3348 (1998); J. A. Jones *et al.*, *Nature* **393**, 344 (1998); B. E. Kane, *Nature* **393**, 133 (1998). A. Barenco [*et al.*]{}, **74**, 4083 (1995); D. Loss and D. P. DiVincenzo, **57**, 120 (1998); G. Burkard [*et al.*]{}, **59**, 2070 (1999); L. Quiroga and N. F. Johnson, **83**, 2270 (1999); J. H. Reina [*et al.*]{}, **62**, 12305 (2000); J. H. Reina [*et al.*]{}, **62**, R2267 (2000); F. Troiani [*et al.*]{}, **62**, R2263 (2000); E. Biolatti [*et al.*]{}, **85**, 5647 (2000). L.-A. Wu and D. A. Lidar, **88**, 207902 (2002). D. Bacon, D. A. Lidar, and K. B. Whaley, **60**, 1944 (1999). As is to be expected, the experimental verification could not find a completely isolated subspace as the simple theory foresees, but only a sizable reduction of the decoherence rate compared to other subspaces. This can also be viewed as a motivation for our work: to study the effects of small deviations from the ideal situation. L. Davidovich *et al.*, **71**, 2360 (1993). R. H. Dicke, *Phys. Rev.* **93**, 99 (1954). K. M. Fonseca-Romero and M. C. Nemes, Report No. quant-ph/0201107. R. P. Feynman, R. B. Leighton, and M. Sands, [*[The Feynman Lectures on Physics]{}*]{} vol. 3, Addison-Wesley (1963). S. G. Mokarzel, *Ph.D. Thesis*, State University of São Paulo (2000). M. Arndt *et al.*, *Nature* **401**, 680 (1999).
--- abstract: 'We study the quantum phase transition from a normal Bose superfluid to one that breaks an additional $Z_2$ Ising symmetry. Using the recent shaken optical lattice experiment as an example, we first show that at the mean-field level atomic interactions can significantly shift the critical point. Near the critical point, bosons can condense into a momentum state with high or even locally maximum kinetic energy due to interaction effects. Then, we present a general low-energy effective field theory that treats both the superfluid transition and the Ising transition in a unified framework, and identify a quantum tricritical point separating the normal superfluid, the $Z_2$ superfluid and the Mott insulator. Using a perturbative renormalization group method, we find that the quantum phase transition belongs to a unique universality class that is different from that of a dilute Bose gas.' author: - Wei Zheng - Boyang Liu - Jiao Miao - Cheng Chin - Hui Zhai title: Strong Interaction Effects in Superfluid Ising Quantum Phase Transition --- Critical phenomena lie at the center of modern many-body physics. Near a phase transition, a many-body system can develop universal and unconventional behaviors. Two of the most paradigmatic phase transitions are the Ising transition and the superfluid transition. Across an Ising transition a discrete $Z_2$ symmetry is spontaneously broken, while a $U(1)$ gauge symmetry is spontaneously broken across a superfluid transition. If a system can exhibit phase transitions with both types of symmetry breaking, their interplay can lead to novel critical phenomena. Such a system is not known in real materials, to the best of our knowledge, but has recently been demonstrated in cold atom systems.
So far there are at least three approaches to realize such a transition in cold atom experiments: a) Bose condensates with spin-orbit coupling induced by Raman transitions, where the transition is driven by changing the Raman coupling strength [@NIST; @USTC]; b) Bose condensates in an optical lattice with a staggered magnetic field, where the transition is driven by changing the ratio of two hopping amplitudes along two different spatial directions [@mag2]; and c) Bose condensates in a shaken optical lattice, where the transition is driven by tuning the shaking frequency [@Sengstock1] or shaking amplitude [@Cheng]. These systems share the following common feature. Let us consider the single-particle energy-momentum dispersion along one spatial direction, say, $\epsilon(k_x)$ along $\hat{x}$. As schematically illustrated in Fig. \[Ising\], initially $\epsilon(k_x)$ is a quadratic function around its unique minimum at $k_x=0$. In this regime, bosons condense into the $k_x=0$ state and form a normal superfluid. As one changes a tunable parameter, at a critical point $\epsilon(k_x)$ becomes a quartic function around $k_x=0$, and across this point $\epsilon(k_x)$ develops two degenerate minima at $k_\pm$. In this regime, without loss of generality, one can write the condensate wave function as a superposition of $\varphi(k_{+})$ and $\varphi(k_{-})$, and it is up to the interaction between bosons to determine the superposition coefficients. There exists a class of systems in which, for weak interactions, the condensate wave function favors either purely $\varphi(k_+)$ or purely $\varphi(k_{-})$, and therefore the superfluid breaks the $Z_2$ symmetry. ![Schematic of single-particle dispersion $\epsilon(k_x)$, changing from single minimum to double minima.
An Ising-type $Z_2$ quantum phase transition can be driven by shaking a lattice above a critical amplitude $f^0_\text{c}$.\[Ising\] ](Ising){width="3.0in"} To bring out the novel physics of this quantum phase transition, in this letter we show that interactions can strongly modify the above picture. Our methods include both mean-field theory and a low-energy effective theory approach. The effective theory also allows us to treat both the $U(1)$ and $Z_2$ symmetry breaking in a unified framework and beyond the mean-field level by a perturbative renormalization group method. With these two methods, we have reached the following two results: 1\) The location of the quantum critical point between the normal superfluid (SF) and the superfluid that breaks an additional $Z_2$ symmetry ($Z_2$ SF) depends strongly on the interaction between particles. Bosons can condense into a momentum state with high or even locally maximum kinetic energy near the quantum critical point. 2\) There exists a quantum tricritical point between the Mott insulator (MI), SF and $Z_2$ SF phases. Interactions between atoms dictate the universal behavior and yield new universal critical exponents. *Shaken Lattice Model.* Here we first introduce the shaken lattice model, which represents a concrete realization of the superfluid Ising transition [@Cheng]. Time-periodic modulation of the relative phase $\theta$ between two counter-propagating lasers results in a time-dependent lattice potential [@Chu; @Arimondo] $$H(t)=\frac{\hat{k}_x^{2}}{2m}+V\cos^2\left( k_{0}x+\frac{\theta \left( t\right)}{2}\right) ,$$where $\theta (t)=f\cos \left( \omega t\right) $, $f$ is the shaking amplitude, and $\Delta x=f/(2k_0)$ is the maximum lattice displacement. By employing the Bessel function expansion, the lattice potential can be expressed as $$\frac{V}{2}\sum_{n=-\infty }^{\infty }i^{n}J_{n}(f)\frac{e^{i2k_{0}x}+(-1)^{n}e^{-i2k_{0}x}}{2}e^{in\omega t}.
\label{expanding}$$The $n=0$ term gives rise to a static lattice potential $VJ_{0}\left( f\right) \cos^2(k_{0}x)$, which yields a static band structure $\varepsilon _{\lambda }\left( k_x\right) $ and Bloch wave functions $\varphi _{\lambda ,k_x}(x)$. ($\lambda $ is the band index and $k_x$ is the quasi-momentum.) ![Band structure of a shaken lattice. (a) Band structure before shaking. The red solid line on the top is the dressed $s$-band with energy shifted by a phonon energy $\hbar\protect\omega$. (b) Solid line is the dispersion for small shaking amplitude $f<f^0_\text{c}$ and dashed line is the dispersion for large shaking amplitude $f>f^0_\text{c}$. Energy is plotted in units of the lattice recoil energy $E_\text{r}=\hbar^2 k^2_0/(2m)$ []{data-label="non-int"}](shaking){width="3.2in"} Denoting by $\Delta$ the separation between the bottom of the $s$-band and the top of the $p$-band, we consider the experimental situation $\omega \gtrsim \Delta$, as shown in Fig. \[non-int\](a). Here we only need to keep the dominant $n=\pm 1$ processes in Eq. \[expanding\] and only the $s$- and $p_x$-bands, since all $|n|>1$ processes and higher bands are generically off-resonant [@higher]. The time-dependent potential is now given by $V(t)=-VJ_{1}\left( f\right) \sin \left( 2k_{0}x\right) \cos \left( \omega t\right)$, which couples the $s$- and $p_x$-bands. In the two-band basis, and upon a rotating wave approximation, it is straightforward to show that the eigen-energies are given by $$\begin{aligned} \epsilon _{\pm }\left( k_x\right) =A_{k_x}/2\pm \sqrt{\Delta _{k_x}^{2}/4+\left\vert \Omega _{k_x}\right\vert ^{2}}, \label{Ek}\end{aligned}$$ where $\Omega _{k_x}=-VJ_{1}\left( f\right) \left\langle \varphi _{p,k_x}\right\vert \sin \left( 2k_{0}x\right) \left\vert \varphi _{s,k_x}\right\rangle/2$, $A_{k_x}=\varepsilon _{p}\left( k_x\right) +\varepsilon _{s}\left( k_x\right) +\omega $, $\Delta _{k_x}=\varepsilon _{p}\left( k_x\right) -\varepsilon _{s}\left( k_x\right) -\omega $.
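The Bessel-series rewriting in Eq. \[expanding\] can be checked numerically. The sketch below uses only NumPy, computes $J_n$ from its integral representation, and restores the spatially constant $V/2$ that the expansion drops; the sample point and parameter values are arbitrary.

```python
import numpy as np

def bessel_j(n, f, num=200001):
    # J_n(f) = (1/pi) * int_0^pi cos(n*tau - f*sin(tau)) d(tau),
    # evaluated with the midpoint rule (avoids a scipy dependency).
    tau = (np.arange(num) + 0.5) * np.pi / num
    return np.mean(np.cos(n * tau - f * np.sin(tau)))

def lattice_direct(x, t, V, k0, f, w):
    # The shaken potential V cos^2(k0 x + theta(t)/2), theta = f cos(w t).
    return V * np.cos(k0 * x + 0.5 * f * np.cos(w * t)) ** 2

def lattice_expansion(x, t, V, k0, f, w, nmax=20):
    # Truncated sideband sum of Eq. [expanding], plus the constant V/2.
    s = 0.0 + 0.0j
    for n in range(-nmax, nmax + 1):
        s += (1j ** n * bessel_j(n, f)
              * (np.exp(2j * k0 * x) + (-1) ** n * np.exp(-2j * k0 * x)) / 2
              * np.exp(1j * n * w * t))
    return V / 2 + V / 2 * s
```

For moderate shaking amplitudes the sum converges very quickly in $n$, which is why only the $n=\pm 1$ sidebands matter in the near-resonant regime discussed above.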
Two eigen wave functions are denoted by $\varphi _{+,k_x}(x)$ and $\varphi _{-,k_x}(x)$, respectively. ![Interaction shifts of the SF-$Z_2$ SF quantum critical point. (a,c) Deep lattice with $V/E_\text{r}=16$ and $\hbar\omega/E_\text{r}=7.1$; (b,d) Shallow lattice with $V/E_\text{r}=4$ and $\hbar\omega/E_\text{r}=4.4$. (a and b) Condensate momentum $k_{\text{c}}$ as a function of $f$. Blue dashed line is for the non-interacting case and red solid line is for the interacting case with $gn/E_\text{r}=1$. (c and d) Interaction($gn$)-shaking amplitude($f$) phase diagram for a fixed frequency $\protect\omega$. Blue shaded areas show regions where atoms condense into states with finite kinetic energy. []{data-label="transition"}](transition){width="3.5in"} In the experiment, if one adiabatically turns on the shaking, bosons will remain in the $\epsilon _{+}(k_x)$ band since it is adiabatically connected to the $s$-band as $f\rightarrow 0$. We show in Fig. \[non-int\](b) that there exists a critical shaking amplitude $f^0_\text{c}$, across which $\epsilon _{+}(k_x)$ exhibits a transition from a single minimum at zero momentum to double minima at finite momenta $\pm k_\text{min}$. For $f>f^0_\text{c}$, without loss of generality, we can assume the condensate wave function to be a linear superposition $\Psi(x)=\sin\alpha \psi_{+,k_\text{min}}(x)+\cos\alpha \psi_{+,-k_\text{min}}(x)$. In this case, however, the interaction energy is always minimized by choosing $\alpha$ equal to zero or $\pi/2$ [@stripe]. That is to say, the condensate breaks the $Z_2$ symmetry across $f^0_\text{c}$. In fact, such a transition, as well as domain wall formation in the symmetry breaking phase, has been observed in a recent experiment [@Cheng].
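The single-minimum to double-minima transition of $\epsilon_+(k_x)$ can be illustrated with a toy version of Eq. \[Ek\]. The tight-binding forms of $\varepsilon_s$, $\varepsilon_p$, the momentum-independent coupling $\Omega\propto f$, and all numbers below are illustrative assumptions, not the experimental band structure.

```python
import numpy as np

def upper_band(k, Omega, Js=0.1, Jp=0.5, D0=4.0, w=5.3):
    # eps_+(k) = A_k/2 + sqrt(Delta_k^2/4 + |Omega|^2), with toy bands
    # eps_s(k) = -2 Js cos k and eps_p(k) = D0 + 2 Jp cos k.
    eps_s = -2 * Js * np.cos(k)
    eps_p = D0 + 2 * Jp * np.cos(k)
    A = eps_p + eps_s + w
    Delta = eps_p - eps_s - w
    return A / 2 + np.sqrt(Delta**2 / 4 + Omega**2)

def count_minima(Omega, n=4000):
    # Counts strict local minima of eps_+ on a grid over [-pi, pi).
    k = np.linspace(-np.pi, np.pi, n, endpoint=False)
    e = upper_band(k, Omega)
    return int(np.count_nonzero((e[1:-1] < e[:-2]) & (e[1:-1] < e[2:])))
```

With these toy numbers the transition occurs near $\Omega\approx 0.056$: below it the dressed band has a single minimum at $k=0$, above it two degenerate minima at $\pm k_\text{min}$.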
*Quantum Critical Point in an Interacting System.* Since the bosons all condense into a single momentum state, at the mean-field level the interaction energy can be simplified as [@supp] $$\begin{aligned} \epsilon_\text{int}(k_x)=U^{ss}_{k_x}n^2_{s,k_x}+4U^{sp}_{k_x}n_{s,k_x}n_{p,k_x}+U^{pp}_{k_x}n^2_{p,k_x}, \label{Eint}\end{aligned}$$ where $U_{k_x}^{\lambda ^{\prime }\lambda }=g\int dx|\varphi _{\lambda ^{\prime },k_x}|^{2}|\varphi _{\lambda ,k_x}|^{2}$, $g$ is the interaction constant, and $\lambda$ and $\lambda^\prime$ denote $s$ or $p_x$. With Eq. \[Ek\] and Eq. \[Eint\], the total energy of the condensate is written as $\epsilon(k_x)=\epsilon_{+}(k_x)+\epsilon_{\text{int}}(k_x)$. By minimizing $\epsilon(k_x)$ with respect to $k_x$, one can determine the condensate momentum $k_\text{c}$, and further determine the critical point $f_\text{c}$ of the superfluid Ising transition, at which $k_\text{c}$ changes from zero to finite. ![Tricritical point of interacting bosons in a shaken lattice. The three phases are the Mott insulator (MI), the normal superfluid (SF) and the superfluid that breaks the $Z_2$ symmetry ($Z_2$ SF). $g_\text{c}$ is the critical interaction strength for the normal superfluid to Mott insulator transition, and $f^0_\text{c}$ is the critical shaking amplitude for the single-particle dispersion.[]{data-label="phase_diagram"}](phase_diagram){width="2.0in"} Our findings are shown in Fig. \[transition\]. The mixing between the $s$- and $p_x$-bands is stronger for smaller momentum. For a deep lattice in the tight-binding limit (Fig. \[transition\](a) and (c)), the dominant contribution to $U^{\lambda^\prime\lambda}_{k_x}$ is the onsite interaction of localized Wannier states. Since the Wannier wave function of the $p_x$-band is more extended, the repulsive interaction energy has a minimum at $k_x=0$. Therefore, in the shaded area of Fig.
\[transition\](b), even when the kinetic energy already exhibits a double minimum, the inclusion of the interaction energy leaves the total energy with a unique minimum at zero momentum. In other words, in this regime bosons condense into the local maximum of the single-particle kinetic energy, as shown in Fig. \[transition\](c). In contrast, for a shallow lattice, where the Bloch wave functions behave like plane waves, around $k_x\approx 0$ the $s$-band Bloch wave function is nearly constant, whereas the $p_x$-band Bloch wave function behaves like $\sin(2k_0x)$, which has stronger spatial modulation. Thus, the repulsive interaction is enhanced as the $p_x$-component increases, and the interaction energy displays a local maximum at zero momentum. Therefore, in the shaded area of Fig. \[transition\](d), bosons condense into a finite-momentum state which is not the kinetic-energy minimum. In contrast to the tight-binding limit, $f_\text{c}$ decreases as the repulsive interaction increases. This phenomenon is quite intriguing since it invalidates the conventional notion that bosons always condense into the single-particle energy minimum. It happens when the self-energy correction due to interactions has a strong momentum dependence. This effect has been discussed by Li [*et al*]{} for the spin-orbit-coupled system (system (a)) [@Stringari]. In system (a), however, the shift is relatively weak because the interaction constants between different spin states are very close, in particular for $^{87}$Rb atoms [@Stringari]. In the shaken lattice the shift is due to the difference between the interaction constants of different bands, such as $U^{ss}_{k_x}$, $U^{sp}_{k_x}$ and $U^{pp}_{k_x}$, which is large because of the different behaviors of the Bloch wave functions. Therefore the effect is much more pronounced and much easier to observe experimentally.
*Effective Theory Approach.* Next we go beyond the microscopic theory and present a general low-energy effective theory that describes both the superfluid and Ising transitions. This effective field theory should capture two key ingredients of the microscopic physics discussed above: i) the kinetic energy expanded in terms of small $k_x$ is given by $ak^2_x+bk^4_x$, where $a$ can change sign; and ii) the interaction term contains both a constant term and a term proportional to $k^2_x$. For a $d$-dimensional lattice with modulation along the $x$-direction only, the partition function is given by $$\begin{aligned} &\mathcal{Z}=\int \mathcal{D}[\phi^*,\phi]\exp\{\mathcal{S}[\phi^*,\phi]\}\\ &\mathcal{S}=\int d^{d}{\bf r}d\tau\left\{K_1\phi^*\partial_\tau\phi+K_2|\partial_\tau\phi|^2+\mathcal{E}[\phi^*,\phi]\right\}\end{aligned}$$ and (setting $b=1$) $$\mathcal{E}=|\partial^2_x\phi|^2+a|\partial_x\phi|^2+\mathcal{T}+r|\phi|^2+\alpha|\phi|^4+\beta|\phi\partial_x\phi|^2, \label{Effective}$$ where $\phi$ is the order parameter, $\mathcal{T}=|\partial_y\phi|^2$ for $d=2$ and $\mathcal{T}=|\partial_y\phi|^2+|\partial_z\phi|^2$ for $d=3$, and the parameter $\alpha$ is positive for repulsive interactions. In the microscopic model discussed above, the parameter $\beta$ can be either positive or negative; here, for simplicity, we consider the case of positive $\beta$. The parameter $a$ can be controlled by tuning the single-particle dispersion and is proportional to $f-f^0_\text{c}$ by the discussion above. The parameter $r\sim g-g_\text{c}$ can be tuned by changing the interaction strength $g$, which drives the superfluid to Mott insulator transition.
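The phase structure encoded in this functional can be previewed numerically: for a plane-wave ansatz $\phi=|\phi|e^{ik_x x}$ the energy density reduces to $(k_x^4+ak_x^2+r)\,n+(\alpha+\beta k_x^2)\,n^2$ with $n=|\phi|^2$, and the optimal density at each $k_x$ follows analytically. A sketch with illustrative values of $\alpha$ and $\beta$ (the classification thresholds and grid are also illustrative):

```python
import numpy as np

def phase(a, r, alpha=1.0, beta=0.5, kmax=2.0, npts=2001):
    # E(n, k) = (k^4 + a k^2 + r) n + (alpha + beta k^2) n^2, n = |phi|^2.
    # For each k, the optimal density is n* = max(0, -lin / (2 quad)).
    k = np.linspace(0.0, kmax, npts)
    lin = k ** 4 + a * k ** 2 + r
    quad = alpha + beta * k ** 2
    nstar = np.maximum(0.0, -lin / (2 * quad))
    energy = lin * nstar + quad * nstar ** 2
    i = int(np.argmin(energy))
    if nstar[i] == 0.0:
        return "MI"              # phi = 0: Mott insulator
    return "Z2SF" if k[i] > 1e-3 else "SF"
```

For example, $(a,r)=(1,0.5)$ gives MI, $(1,-0.5)$ gives SF, and $(-1,-0.5)$ gives $Z_2$ SF, with all three phases meeting at $a=r=0$.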
Assuming $\phi=|\phi|e^{ik_x x}$, we can rewrite $\mathcal{E}$ as $$\mathcal{E}(|\phi|,k_x)=(k^4_x +ak^2_x)|\phi|^2+r|\phi|^2+(\alpha+\beta k^2_x)|\phi|^4.$$ The ground state is determined by $\partial \mathcal{E}/\partial k_x=0$ and $\partial \mathcal{E}/\partial |\phi|=0$, which gives rise to three different phases: $\phi=0$ is the Mott phase; $\phi\neq 0$ and $k_x=0$ is the normal SF phase; and $\phi\neq 0$ and $k_x\neq 0$ is the $Z_2$ SF phase. The phase diagram is given in Fig. \[phase\_diagram\] in terms of $f$ and $g$. All three phases meet at a tricritical point at $a=r=0$, around which the interaction effect is strongest. Hereafter we focus on the critical behavior near the quantum tricritical point. Our discussion can be divided into two cases: Case **A.** No particle-hole symmetry. $K_1\neq 0$ and the $K_2$-term becomes irrelevant [@Sachdev]. In this case, it is straightforward to show that the scaling dimension of $\phi$ is $\dim[\phi]=-(2d+7)/4$ and the critical dimension is $5/2$ [@supp]. For a physical system with $d=2$, the scaling dimensions of $r$, $a$ and $\alpha$ are $\dim[r]=2$, $\dim[a]=1$ and $\dim[\alpha]=1/2$, respectively, and all three terms are relevant. This is different from the conventional Bose-Hubbard model with quadratic dispersion, where $\dim[\alpha]=0$ and the $\alpha$-term is marginal in two dimensions. The scaling dimension of $\beta$ is $\dim[\beta]=-1/2$, and the $\beta$-term is irrelevant. That means that, in this case, although the momentum-dependent interaction plays an important role at the mean-field level in shifting the critical value, it does not play a significant role for fluctuations beyond mean field. The one-loop renormalization group (RG) equations are derived as [@supp] $$\begin{aligned} &\frac{da}{dl}=a; \ \ \frac{dr}{dl}=2r\nonumber; \\ &\frac{d\alpha}{dl}=\epsilon\alpha-\frac{\alpha^2}{1+r}I_2(a);\end{aligned}$$ where $\epsilon=5/2-d=1/2$ and $I_2$ is a function of $a$ defined in the supplementary material [@supp].
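On the critical surface $a=r=0$, the $\alpha$ equation above decouples into a logistic flow $d\alpha/dl=\epsilon\alpha-I_2(0)\alpha^2$, which runs to $\alpha^*=\epsilon/I_2(0)$. A minimal numerical sketch, setting $I_2(0)=1$ for illustration (its actual value only rescales the fixed-point coupling):

```python
def flow_alpha(alpha0, eps=0.5, l_max=40.0, dl=1e-3):
    # Euler integration of d(alpha)/dl = eps*alpha - alpha^2 on the
    # critical surface a = r = 0, with I2(0) set to 1.
    alpha = alpha0
    for _ in range(int(l_max / dl)):
        alpha += dl * (eps * alpha - alpha ** 2)
    return alpha
```

Starting either below or above the fixed point, the coupling flows to $\alpha^*=\epsilon=0.5$.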
In addition to the Gaussian fixed point at $(a,r,\alpha)=(0,0,0)$, these RG equations give another, non-Gaussian fixed point at $(a, r,\alpha)=(0,0, \epsilon/I_2(0))$. The flow diagram is shown in Fig. \[flow\](a). However, since in this case $r$ does not receive any correction from the interaction, the critical exponent of the superfluid transition retains its mean-field value $\nu=1/2$, as in the usual Bose gas case [@Uzunov; @Fisher]. Case **B.** Particle-hole symmetry. $K_1=0$ [@Sachdev]. In this case, the scaling dimension of $\phi$ is $\dim[\phi]=-(5+2d)/4$ and the critical dimension is $7/2$. For a system in two dimensions, $\epsilon=7/2-d=3/2$; since $\epsilon>1$, it is not accurate to treat the system by means of a perturbative expansion [@2d]. For a system with $d=3$, $\epsilon=1/2$, and the scaling dimensions of the different terms are $\dim[r]=2$, $\dim[a]=1$, $\dim[\alpha]=1/2$ and $\dim[\beta]=-1/2$, respectively. These are all identical to case **A**. However, in this case, the one-loop RG equations read $$\begin{aligned} & \frac{da}{dl}=a; \ \ \frac{dr}{dl}=2r+\frac{2\alpha I_3(a)}{\sqrt{1+r}}; \nonumber\\ &\frac{d\alpha}{dl}=\epsilon\alpha-\frac{5I_3(a)\alpha^2}{2\sqrt{(1+r)^3}};\end{aligned}$$ where $\epsilon=7/2-d=1/2$ and $I_3(a)$ is also defined in the supplementary material [@supp]. The key difference is that the $r$-term now receives a correction from the interaction. The new non-Gaussian fixed point is located at $(a,r,\alpha)=(0,-2\epsilon/5,2\epsilon/(5I_3(0)))$, and the flow diagram is shown in Fig. \[flow\](b). More importantly, the critical exponent of the superfluid transition, $\nu=1/(2-2\epsilon/5)=5/9$, now differs from the mean-field value [@supp]. This is also different from conventional bosons with $k^2$ dispersion and $K_1=0$, which belong to the universality class of the $O(2)$ rotor model. In this sense, it represents a new type of critical behavior.
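The case-**B** fixed point and the exponent quoted above can be reproduced to leading order in $\epsilon$: with $a=0$ and the illustrative normalization $I_3(0)=1$, $d\alpha/dl=0$ gives $\alpha^*=2\epsilon/5$, $dr/dl=0$ gives $r^*=-\alpha^*$, and linearizing $dr/dl$ in $r$ at the fixed point gives the relevant eigenvalue $2-\alpha^*$, hence $\nu=1/(2-2\epsilon/5)$.

```python
def case_b_fixed_point(eps=0.5):
    # Leading order in eps: expand (1+r)^(-1/2) and (1+r)^(-3/2) to 1.
    alpha_star = 2 * eps / 5        # from eps*alpha - (5/2)*alpha^2 = 0
    r_star = -alpha_star            # from 2*r + 2*alpha = 0
    # d(dr/dl)/dr = 2 - alpha*(1+r)^(-3/2) ~ 2 - alpha_star + O(eps^2)
    y_r = 2 - alpha_star
    return r_star, alpha_star, 1.0 / y_r
```

At $\epsilon=1/2$ this yields $\nu=5/9\approx 0.556$, the value quoted in the text.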
![Renormalization group flow diagram of case **A** (no particle-hole symmetry) (a) and case **B** (particle-hole symmetry) (b).[]{data-label="flow"}](flow){width="3.3in"} We note that in both cases $\epsilon=1/2$, because the quartic dispersion gives rise to a fractional critical dimension, whereas in many other systems, where the critical dimension is an integer, $\epsilon$ is at least equal to one for a physical system below the critical dimension. Thus, the $\epsilon$-expansion is expected to work more accurately in our cases. Previously, for the conventional Bose-Hubbard model the critical exponent $\nu=1/2$ has been measured with *in-situ* density measurements [@Cheng2]. In the shaken lattice system, one can tune the interaction to the vicinity of the Mott transition, tune the chemical potential to the particle-hole symmetric point, and tune the band dispersion by shaking to the vicinity of the Ising transition. Thus, case **B** can be realized and, with the same *in-situ* method, the new critical behavior predicted here can be verified experimentally. [*Acknowledgment*]{}: We thank Xiaoliang Qi, Fa Wang and Hong Yao for helpful discussions. This work is supported by the Tsinghua University Initiative Scientific Research Program (HZ), NSFC Grants No. 11004118 and No. 11174176 (HZ), NKBRSFC Grant No. 2011CB921500 (HZ), NSF MRSEC (DMR-0820054) (CC), NSF Grant No. PHY-0747907 (CC) and ARO-MURI No. 63834-PH-MUR (CC). [99]{} Y.-J. Lin, K. Jiménez-García, and I. B. Spielman, Nature **471**, 83 (2011). J. Y. Zhang, S. C. Ji, L. Zhang, Z. D. Du, W. Zheng, Y. J. Deng, H. Zhai, S. Chen, and J. W. Pan, arXiv:1305.7054. M. Aidelsburger, M. Atala, S. Nascimbène, S. Trotzky, Y.-A. Chen, and I. Bloch, Phys. Rev. Lett. **107**, 255301 (2011). J. Struck, C. Ölschläger, R. Le Targat, P. Soltan-Panahi, A. Eckardt, M. Lewenstein, P. Windpassinger, and K. Sengstock, Science **333**, 996 (2011). C. V. Parker, L. C. Ha, and C. Chin, Nature Phys. [**9**]{}, 769 (2013). N.
Gemelke, E. Sarajlic, Y. Bidel, S. Hong, and S. Chu, Phys. Rev. Lett. [**95**]{}, 170404 (2005). H. Lignier, C. Sias, D. Ciampini, Y. Singh, A. Zenesini, O. Morsch, and E. Arimondo, Phys. Rev. Lett. [**99**]{}, 220403 (2007). We also verified numerically that our results are not changed qualitatively by including these processes and higher bands. This is different from the situation where degenerate minima are generated by spin-orbit coupling. In that case there exists a regime where interactions favor a superposition state exhibiting spatial stripe order due to spin-dependent interactions, see C. Wang, C. Gao, C.-M. Jian, and H. Zhai, Phys. Rev. Lett. **105**, 160403 (2010) and T.-L. Ho and S. Zhang, Phys. Rev. Lett. **107**, 150403 (2011). See the supplementary material for detailed derivations of the mean-field interaction energy, the counting of scaling dimensions and the renormalization group equations. Y. Li, L. P. Pitaevskii, and S. Stringari, Phys. Rev. Lett. **108**, 225301 (2012). S. Sachdev, Quantum Phase Transitions, Second Edition, Cambridge University Press (2011), Chapter 9. D. I. Uzunov, Phys. Lett. [**87**]{}A, 11 (1981). M. P. A. Fisher, P. B. Weichman, G. Grinstein, and D. S. Fisher, Phys. Rev. B [**40**]{}, 546 (1989). Though it is not accurate, if one still applies the perturbative expansion to the two-dimensional case, it predicts a critical exponent $\nu=1/(2-2\epsilon/5)=5/7$ different from the mean-field value. X. Zhang, C. L. Hung, S. K. Tung, and C. Chin, Science [**335**]{}, 1070 (2012). Supplementary Materials ======================= The band structure and the interaction energy in the shaking lattice -------------------------------------------------------------------- By keeping only the $n=0,\pm 1$ terms, the single-particle Hamiltonian in Eq.
(1) can be separated into two parts: $H\left( t\right) =H_{0}+V\left( t\right) $,$$\begin{aligned} H_{0} &=&\frac{\hat{k}_{x}^{2}}{2m}+VJ_{0}\left( f\right) \cos ^{2}\left( k_{0}x\right) , \\ V(t) &=&-VJ_{1}\left( f\right) \sin \left( 2k_{0}x\right) \cos \left( \omega t\right) .\end{aligned}$$The first part is time-independent and gives a static band structure, $$H_{0}=\sum_{\lambda ,k_{x}}\varepsilon _{\lambda }\left( k_{x}\right) \left\vert \varphi _{\lambda ,k_{x}}\right\rangle \left\langle \varphi _{\lambda ,k_{x}}\right\vert , \label{H0}$$where $\left\vert \varphi _{\lambda ,k_{x}}\right\rangle $ is the Bloch function, $\lambda $ is the band index and $k_{x}$ is the quasi-momentum. Expanding $V\left( t\right) $ in the Bloch basis, one obtains$$V\left( t\right) =-VJ_{1}\left( f\right) \cos \left( \omega t\right) \sum_{\lambda ^{\prime }\lambda ,k_{x}^{\prime }k_{x}}\left\vert \varphi _{\lambda ^{\prime },k_{x}^{\prime }}\right\rangle \left\langle \varphi _{\lambda ^{\prime },k_{x}^{\prime }}\right\vert \sin \left( 2k_{0}x\right) \left\vert \varphi _{\lambda ,k_{x}}\right\rangle \left\langle \varphi _{\lambda ,k_{x}}\right\vert .$$Since the momentum transferred by $V\left( t\right) $ is $2k_{0}$, which is just the reciprocal lattice vector, the matrix elements of $V\left( t\right) $ are proportional to $\delta _{k_{x}^{\prime }k_{x}}$.
So the Hamiltonian simplifies to:$$\begin{aligned} H\left( t\right) &=&\sum_{k_{x}}\left[ \varepsilon _{s}\left( k_{x}\right) \left\vert \varphi _{s,k_{x}}\right\rangle \left\langle \varphi _{s,k_{x}}\right\vert +\varepsilon _{p}\left( k_{x}\right) \left\vert \varphi _{p,k_{x}}\right\rangle \left\langle \varphi _{p,k_{x}}\right\vert \right] \notag \\ &&-VJ_{1}\left( f\right) \cos \left( \omega t\right) \sum_{k_{x}}\left\vert \varphi _{p,k_{x}}\right\rangle \left\langle \varphi _{p,k_{x}}\right\vert \sin \left( 2k_{0}x\right) \left\vert \varphi _{s,k_{x}}\right\rangle \left\langle \varphi _{s,k_{x}}\right\vert +h.c.\end{aligned}$$One notes that the Hamiltonian is diagonal in momentum space, $H\left( t\right) =\sum_{k_{x}}H\left( k_{x},t\right) $, and $H\left( k_{x},t\right) $ can be rewritten in matrix form,$$H\left( k_{x},t\right) =\left( \begin{array}{cc} \varepsilon _{p}\left( k_{x}\right) & 2\Omega _{k_{x}}\cos \left( \omega t\right) \\ 2\Omega _{k_{x}}^{\ast }\cos \left( \omega t\right) & \varepsilon _{s}\left( k_{x}\right) \end{array}\right) ,$$where $\Omega _{k_{x}}=-\frac{1}{2}VJ_{1}\left( f\right) \left\langle \varphi _{p,k_{x}}\right\vert \sin \left( 2k_{0}x\right) \left\vert \varphi _{s,k_{x}}\right\rangle $. Making a unitary transformation, $$U\left( t\right) =\left( \begin{array}{cc} 1 & 0 \\ 0 & e^{i\omega t} \end{array}\right) ,$$one obtains the Hamiltonian in the rotating frame,$$H^{\prime }\left( k_{x},t\right) =\left( \begin{array}{cc} \varepsilon _{p}\left( k_{x}\right) & \Omega _{k_{x}}\left( 1+e^{2i\omega t}\right) \\ \Omega _{k_{x}}^{\ast }\left( 1+e^{-2i\omega t}\right) & \varepsilon _{s}\left( k_{x}\right) +\omega \end{array}\right) .
\notag$$Omitting the high frequency oscillating terms (the so-called rotating-wave approximation), one obtains the time-independent Hamiltonian,$$H^{\prime }\left( k_{x}\right) =\left( \begin{array}{cc} \varepsilon _{p}\left( k_{x}\right) & \Omega _{k_{x}} \\ \Omega _{k_{x}}^{\ast } & \varepsilon _{s}\left( k_{x}\right) +\omega \end{array}% \right) .$$Diagonalizing $H^{\prime }\left( k_{x}\right) $, one obtains the energy bands $$\epsilon _{\pm }\left( k_{x}\right) =A_{k_{x}}/2\pm \sqrt{\Delta _{k_{x}}^{2}/4+\left\vert \Omega _{k_{x}}\right\vert ^{2}},$$where $A_{k_{x}}=\varepsilon _{p}\left( k_{x}\right) +\varepsilon _{s}\left( k_{x}\right) +\omega $, $\Delta _{k_{x}}=\varepsilon _{p}\left( k_{x}\right) -\varepsilon _{s}\left( k_{x}\right) -\omega $. The corresponding two eigenstates are$$\begin{aligned} \varphi _{+,k_{x}}\left( x\right) &=&c_{+,s}\left( k_{x}\right) \varphi _{s,k_{x}}\left( x\right) +c_{+,p}\left( k_{x}\right) \varphi _{p,k_{x}}\left( x\right) , \label{wf_up} \\ \varphi _{-,k_{x}}\left( x\right) &=&c_{-,s}\left( k_{x}\right) \varphi _{s,k_{x}}\left( x\right) +c_{-,p}\left( k_{x}\right) \varphi _{p,k_{x}}\left( x\right) ,\end{aligned}$$where $c_{\pm ,\lambda }\left( k_{x}\right) $ are the expansion coefficients obtained from the diagonalization of $H^{\prime }\left( k_{x}\right) $.
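The two-level structure above can be checked directly: for any placeholder values of $\varepsilon_s(k_x)$, $\varepsilon_p(k_x)$, $\omega$ and $\Omega_{k_x}$ (the numbers below are purely illustrative, not taken from the actual band structure), the quoted $\epsilon_\pm(k_x)$ are exactly the eigenvalues of $H^\prime(k_x)$. A minimal sketch:

```python
import math

def rwa_bands(eps_p, eps_s, omega, Omega):
    """epsilon_+/- = A/2 +/- sqrt(Delta^2/4 + |Omega|^2)."""
    A = eps_p + eps_s + omega            # A_{k_x}
    Delta = eps_p - eps_s - omega        # Delta_{k_x}
    gap = math.sqrt(Delta ** 2 / 4 + abs(Omega) ** 2)
    return A / 2 + gap, A / 2 - gap

def char_poly(eps_p, eps_s, omega, Omega, e):
    """det(H' - e), with H' = [[eps_p, Omega], [Omega*, eps_s + omega]];
    this vanishes exactly when e is an eigenvalue of H'."""
    return (eps_p - e) * (eps_s + omega - e) - abs(Omega) ** 2

# illustrative parameters (placeholders, not the paper's band structure)
eps_p, eps_s, omega, Omega = 2.3, -1.1, 3.0, 0.4 + 0.2j
e_plus, e_minus = rwa_bands(eps_p, eps_s, omega, Omega)
assert abs(char_poly(eps_p, eps_s, omega, Omega, e_plus)) < 1e-12
assert abs(char_poly(eps_p, eps_s, omega, Omega, e_minus)) < 1e-12
```

The trace and determinant of $H^\prime$ fix $A_{k_x}$ and the gap, which is why the closed form holds for any Hermitian $2\times 2$ matrix of this shape.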
Transforming the wave functions back to the laboratory frame, we obtain $$\begin{aligned} \varphi _{+,k_{x}}\left( x,t\right) &=&e^{-i\omega t}c_{+,s}\left( k_{x}\right) \varphi _{s,k_{x}}\left( x\right) +c_{+,p}\left( k_{x}\right) \varphi _{p,k_{x}}\left( x\right) , \\ \varphi _{-,k_{x}}\left( x,t\right) &=&e^{-i\omega t}c_{-,s}\left( k_{x}\right) \varphi _{s,k_{x}}\left( x\right) +c_{-,p}\left( k_{x}\right) \varphi _{p,k_{x}}\left( x\right) .\end{aligned}$$Considering Bose condensation in the upper band, the time-averaged interaction energy is a function of the condensate quasi-momentum,$$\begin{aligned} \epsilon _{\mathrm{int}}\left( k_{x}\right) &=&\frac{1}{T}% g\int_{0}^{T}dt\int dx\left\vert \varphi _{+,k_{x}}\left( x,t\right) \right\vert ^{4} \\ &=&U_{k_{x}}^{ss}\left\vert c_{+,s}\left( k_{x}\right) \right\vert ^{4}+4U_{k_{x}}^{sp}\left\vert c_{+,s}\left( k_{x}\right) \right\vert ^{2}\left\vert c_{+,p}\left( k_{x}\right) \right\vert ^{2}+U_{k_{x}}^{pp}\left\vert c_{+,p}\left( k_{x}\right) \right\vert ^{4},\end{aligned}$$where $U_{k_{x}}^{\lambda ^{\prime }\lambda }=g\int dx\left\vert \varphi _{\lambda ^{\prime },k_{x}}\left( x\right) \right\vert ^{2}\left\vert \varphi _{\lambda ,k_{x}}\left( x\right) \right\vert ^{2}$, $g$ is the interaction constant, and $\lambda ^{\prime }$ and $\lambda $ denote $s$ or $p$. $|c_{+,\text{s}}\left( k_{x}\right)|^2$ and $|c_{+,\text{p}}\left( k_{x}\right)|^2$ are denoted by $n_{\text{s},k_x}$ and $n_{\text{p},k_x}$ in Eq. (5) of the main text, respectively. The mean-field study of the low energy effective theory ------------------------------------------------------- A low energy effective theory that describes both the superfluid and Ising transitions can be constructed as Eqs. (6)-(8) in the main text.
In the momentum space the energy density can be written as $$\begin{aligned} \mathcal{E}=(k_c^4+ak_c^2)|\phi|^2+r|\phi|^2+(\alpha+\beta k_c^2)|\phi|^4,\end{aligned}$$ where we ignore the $k_y$ and $k_z$ dependent terms since the energy minima would always be located at $k_y=k_z=0$. The ground state is determined by minimizing the energy density as follows: $$\begin{aligned} && \frac{\partial\mathcal{E}}{\partial k_c}=0\Rightarrow[(2k_c^2+a)+\beta|% \phi|^2]k_c|\phi|^2=0,\cr&& \frac{\partial \mathcal{E}}{\partial |\phi|}% =0\Rightarrow [(k_c^4+ak_c^2+r)+2(\alpha+\beta k_c^2)|\phi|^2]|\phi|=0.\end{aligned}$$ We study the energy minimum in four regions of the parameter space, where $% \alpha$ and $\beta$ are always positive. 1. In the region $a>0$ and $r>0$ neither of the equations $% (2k_c^2+a)+\beta|\phi|^2=0$ and $(k_c^4+ak_c^2+r)+2(\alpha+\beta k_c^2)|\phi|^2=0$ has a solution. The energy minimum is at $|\phi|=0$. 2. In the region $a>0$ and $r<0$ the equation $(2k_c^2+a)+\beta|\phi|^2=0$ has no solution. Then we take $k_c=0$ and solve the equation $% (k_c^4+ak_c^2+r)+2(\alpha+\beta k_c^2)|\phi|^2=0$. The energy minimum is located at $k_c=0, ~|\phi|^2=-\frac{r}{2\alpha}$. 3. In the region $a<0$ and $r>0$ both equations $(2k_c^2+a)+\beta|% \phi|^2=0$ and $(k_c^4+ak_c^2+r)+2(\alpha+\beta k_c^2)|\phi|^2=0$ can have solutions, which are $$\begin{aligned} &&|\phi|^2=-\frac{2k_c^2+a}{\beta},\cr && |\phi|^2=-\frac{k_c^4+ak_c^2+r}{% 2(\alpha+\beta k_c^2)}.\end{aligned}$$ Then $k_c^2$ can be solved from the equation $$\begin{aligned} -\frac{2k_c^2+a}{\beta}=-\frac{k_c^4+ak_c^2+r}{2(\alpha+\beta k_c^2)}.\end{aligned}$$ The above equation has two roots as $$\begin{aligned} k_c^2=\frac{-(4r\alpha+a\beta)\pm\sqrt{(4r\alpha+a\beta)^2-12\beta(2a\alpha-% \beta r)}}{6\beta}.\end{aligned}$$ Here we ignore the negative root since $k_c^2>0$. To guarantee that the solution of Eq. (5) is valid, two restriction conditions should be satisfied as follows: 1.
We have $$\begin{aligned} -\frac{2k_c^2+a}{\beta}>0\end{aligned}$$ since $|\phi|^2>0$ in the Eq. (3). 2. We have $$\begin{aligned} -(4r\alpha+a\beta)+\sqrt{(4r\alpha+a\beta)^2-12\beta(2a\alpha-\beta r)}>0\end{aligned}$$ since $k_c^2>0$ in the Eq. (5). In the region $a<0$ and $r>0$ it is straightforward to check that the condition (b) is always satisfied, while the condition (a) generates an upper bound. We can obtain this boundary by plugging the root of $k_c^2=\frac{% -(4r\alpha+a\beta)+\sqrt{(4r\alpha+a\beta)^2-12\beta(2a\alpha-\beta r)}}{% 6\beta}$ into the condition (a). Then the upper bound is $$\begin{aligned} r<\frac{a^2}{4}.\end{aligned}$$ Below the boundary of Eq. (8) we have the energy minimum at $$\begin{aligned} &&k_c^2=\frac{-(4r\alpha+a\beta)+\sqrt{(4r\alpha+a\beta)^2-12\beta(2a\alpha-% \beta r)}}{6\beta}, \cr && |\phi|^2=\frac{-4r\alpha+2a\beta+\sqrt{% (4r\alpha+a\beta)^2-12\beta(2a\alpha-\beta r)}}{3\beta^2}.\end{aligned}$$ 4. In the region $a<0$ and $r<0$ the energy minimum is also given by Eq. (9). Here the restriction condition (b) generates a lower bound as $$\begin{aligned} r>\frac{2\alpha}{\beta} a.\end{aligned}$$ Renormalization group analysis ------------------------------ ### The system without particle-hole symmetry In the system without particle-hole symmetry we have $K_1\neq0$ in Eq. (7) of the main text. The term of $K_2$ becomes irrelevant and can be ignored.
Then the partition function in $d$ dimensions is cast as $$\begin{aligned} \mathcal{Z}=\int D[\phi^\ast,\phi]e^{-S[\phi^\ast,\phi]},\end{aligned}$$ where $$\begin{aligned} S[\phi^\ast,\phi]=&&\int d^d \vec x d\tau \Big\{\phi^\ast(\vec x,\tau)\partial_\tau\phi(\vec x,\tau) +|\partial_x^2\phi(\vec x,\tau)|^2+a|\partial_x\phi(\vec x,\tau)|^2+\mathcal{T }\cr &&+r|\phi(\vec x,\tau)|^2+\alpha|\phi(\vec x,\tau)|^4+\beta|\phi(\vec x,\tau)\partial_x\phi(\vec x,\tau)|^2\Big\},\end{aligned}$$ where $\mathcal{T}=|\partial_y\phi|^2$ for $d=2$ and $\mathcal{T}% =|\partial_y\phi|^2+|\partial_z\phi|^2$ for $d=3$. We Fourier transform $\phi(\vec x,\tau)$ as $$\begin{aligned} \phi(\vec x,\tau)=\int^\infty_{-\infty}\frac{d\omega}{2\pi}\int\frac{d^dk}{% (2\pi)^d}\phi(\omega,\vec k)e^{i(\vec k\cdot\vec x-\omega\tau)}.\end{aligned}$$ Then the action can be written in the momentum space as $$\begin{aligned} S=&&\int^\infty_{-\infty}\frac{d\omega}{2\pi}\int\frac{d^dk}{(2\pi)^d} \phi^\ast(\omega,\vec k)(-i\omega+k_x^4+ak_x^2+\mathcal{T}% _k+r)\phi(\omega,\vec k)\cr&&+\int_{\omega k}^\Lambda\Big\{(\alpha+\beta k_{3x}k_{1x})\phi_i^\ast(\omega_4,\vec k_4)\phi_i^\ast(\omega_3,\vec k_3)\phi_i(\omega_2,\vec k_2)\phi_i(\omega_1,\vec k_1)\Big\},\end{aligned}$$ where $\mathcal{T}_k=k_y^2$ for $d=2$ and $\mathcal{T}_k=k_y^2+k_z^2$ for $d=3$. Here we used the shorthand notation $\int_{\omega k}^\Lambda=\prod^4_{i=1}\int^\infty_{-\infty} \frac{d\omega_i}{2\pi}% \int^\Lambda_0\frac{d^d k_i}{(2\pi)^d}(2\pi)^d\delta(\vec k_4+\vec k_3-\vec k_2-\vec k_1)\cdot(2\pi)\delta(\omega_4+\omega_3-\omega_2-\omega_1)$.
Following Wilson’s approach the renormalization group transformation involves three steps: (i) integrating out all momenta between $\Lambda/s$ and $\Lambda$; for a tree level analysis one just discards the part of the action in this momentum shell; (ii) rescaling frequencies and momenta as $(\omega, k_x, k_y)\rightarrow (s^{[\omega]}\omega, s^{[k_x]}k_x, sk_y)$ so that the cutoff in $k$ is once again at $\pm\Lambda$; and finally (iii) rescaling fields $\phi \rightarrow s^{[\phi]} \phi$ to keep the free-field action $S_0$ invariant. After we integrate out a thin momentum shell of high energy modes the limit of $k_y$ changes from $[0, \Lambda]$ to $[0, \Lambda/s]$ and the limit of $% k_x$ changes from $[0, \Lambda]$ to $[0, \Lambda/\sqrt{s}]$, where $% s\gtrapprox1$. In order to compare the action with the original one we need to rescale the momenta as $$k_x^{\prime }=\sqrt s k_x, ~~~k_y^{\prime }=sk_y.$$ Hence, the cutoffs in $k_x$ and $k_y$ are back again at $\Lambda$. Here we define the scaling dimension: if a quantity scales as $$A^{\prime }=s^{[A]}A,$$ we call $[A]$ the scaling dimension of $A$. In this manner the scaling dimensions of the momenta $k_x$ and $k_y$ are $$[k_x]=\frac{1}{2}~~~\mbox{and}~~~ [k_y]=1.$$ A straightforward scaling analysis shows that the scaling dimensions of the parameters are $$\begin{aligned} &&[\omega]=2,~~~[a]=1, ~~~[r]=2,\cr &&[\phi]=-\frac{7+2d}{4}, ~~~[\alpha]=% \frac{5}{2}-d,~~~ [\beta]=\frac{3}{2}-d.\end{aligned}$$ We see that the upper critical dimension is $5/2$. For a $d=2$ system the scaling dimension of $\beta$ is $-\frac{1}{2}$, which is irrelevant. Therefore, we ignore the $\beta$ term in the following calculations. The one-loop correction to the parameter $a$ is fully generated by the $\beta$ term. Since the $\beta$ term is ignored we do not have a one-loop correction to the parameter $a$.
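The tree-level power counting just quoted follows from demanding that each term of the action be scale invariant under $k_x\to s^{1/2}k_x$, $k_y\to s\,k_y$, $\omega\to s^{2}\omega$. A short stdlib sketch reproduces the coupling dimensions (here $[\phi]$ is computed in the real-space convention, which differs from the momentum-space value quoted above only by the measure factors, so only the coupling dimensions are compared):

```python
from fractions import Fraction as F

def tree_dims(d):
    """Tree-level scaling dimensions for the action without particle-hole
    symmetry, with [k_x]=1/2, [k_y]=1, [omega]=2 (dynamic exponent z=2)."""
    kx, ky, w = F(1, 2), F(1), F(2)
    vol = -(kx + (d - 1) * ky + w)        # dim of the measure d^dx dtau
    phi = -(vol + w) / 2                  # from phi* d_tau phi (real space)
    a     = -(vol + 2 * phi + 2 * kx)     # from a|d_x phi|^2
    r     = -(vol + 2 * phi)              # from r|phi|^2
    alpha = -(vol + 4 * phi)              # from alpha|phi|^4
    beta  = -(vol + 4 * phi + 2 * kx)     # from beta|phi d_x phi|^2
    return a, r, alpha, beta

for d in (2, 3):
    a, r, alpha, beta = tree_dims(d)
    assert (a, r) == (1, 2)
    assert alpha == F(5, 2) - d           # upper critical dimension 5/2
    assert beta == F(3, 2) - d            # irrelevant already at d = 2
```

At $d=2$ this gives $[\alpha]=1/2$ and $[\beta]=-1/2$, matching the values used in the text.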
Then the flow equation of $a$ just includes the tree-level scaling as $$\begin{aligned} \frac{da}{d\ell}=a.\end{aligned}$$ The one-loop corrections to $r$ and $\alpha$ are presented in Fig. 1. ![The one-loop Feynman graphs contributing to the renormalization of (a) the parameter $r$, (b) the parameter $\protect\alpha$ in systems without particle-hole symmetry.[]{data-label="fig:oneloop1"}](oneloop1){width="7cm"} By integrating out the momentum shell we get the one-loop correction to $r$ as $$\begin{aligned} 4\alpha \int^\infty_{-\infty}\frac{d\omega}{2\pi}\int_{\mathrm{shell}}\frac{% d^2k}{(2\pi)^2}\frac{1}{-i\omega+k_x^4+ak_x^2+k^2_y+r}.\end{aligned}$$ The integration over the Matsubara frequency $\omega$ can be calculated by performing a contour integration. $$\begin{aligned} &&\int^\infty_{-\infty}\frac{d\omega}{2\pi}\frac{1}{-i% \omega+k_x^4+ak_x^2+k^2_y+r}\cr =&&\int_{C}\frac{dz}{2\pi i}\frac{e^{z0^+}}{% -z+k_x^4+ak_x^2+k^2_y+r}\cr =&&-\theta(-(k_x^4+ak_x^2+k^2_y+r)),\end{aligned}$$ where $z=i\omega$ and contour C is over the left plane. We start our RG flow from the Gaussian fixed point, where $a=0, r=0$. Then the $\theta$ function $% \theta(-(k_x^4+k^2_y))$ vanishes since $k_x^4+k^2_y>0$. The one-loop correction to the parameter $r$ is zero. Then the flow equation of $r$ just includes a tree-level scaling term as $$\begin{aligned} \frac{dr}{d\ell}=2r.\end{aligned}$$ The one-loop correction to the parameter $\alpha$ is $$\begin{aligned} -2\alpha^2\int^\infty_{-\infty}\frac{d\omega}{2\pi}\int_{\mathrm{shell}}% \frac{d^2k}{(2\pi)^2}\frac{1}{-i\omega+k_x^4+ak_x^2+k^2_y+r}\cdot\frac{1}{% i\omega+k_x^4+ak_x^2+k^2_y+r}.\end{aligned}$$ The integration over the Matsubara frequency $\omega$ can be done by performing a contour integration.
$$\begin{aligned} &&\int^\infty_{-\infty}\frac{d\omega}{2\pi}\frac{1}{-i% \omega+k_x^4+ak_x^2+k^2_y+r}\cdot\frac{1}{i\omega+k_x^4+ak_x^2+k^2_y+r} \cr% =&&\frac{1}{2(k_x^4+ak_x^2+k^2_y+r)}.\end{aligned}$$ The integration over the momentum is as the following: $$\begin{aligned} && \int_{\mathrm{shell}}\frac{dk_xdk_y}{(2\pi)^2}\frac{1}{% 2(k_x^4+ak_x^2+k^2_y+r)}\cr=&&\int_{\mathrm{shell}}\frac{kdkd\theta}{(2\pi)^2% }\frac{1}{2(k_x^4+ak_x^2+k^2_y+r)}.\end{aligned}$$ $k_x$ scales as $k_x^{\prime }=\sqrt s k_x=e^{\frac{1}{2}\ell}k_x$, then $% dk_x=\frac{1}{2}k_xd\ell=\frac{1}{2}k\cos\theta d\ell$. In the same manner we have $dk_y=k\sin\theta d\ell$. Then $dk=\sqrt{(dk_x)^2+(dk_y)^2}=k\sqrt{% \frac{1}{4}\cos^2\theta+\sin^2\theta}d\ell$. The cutoff is set as $% k_x^4+ak_x^2+k^2_y=\Lambda^2$. $k^2$ can be solved as $k^2=\frac{% -(\sin^2\theta+a\cos^2\theta)+\sqrt{(\sin^2\theta+a\cos^2\theta)^2+4% \Lambda^2\cos^4\theta}}{2\cos^4\theta}$. In the following calculations we will conveniently set $\Lambda=1$ and henceforth measure all lengths in units of $\Lambda^{-1}$. The integration becomes $$\begin{aligned} &&\int_{\mathrm{shell}}\frac{kdkd\theta}{(2\pi)^2}\frac{1}{% 2(k_x^4+ak_x^2+k^2_y+r)}\cr=&&\frac{d\ell}{2(1+r)}\int^{2\pi}_0 \frac{d\theta% }{(2\pi)^2} \frac{-(\sin^2\theta+a\cos^2\theta)+\sqrt{(\sin^2\theta+a\cos^2% \theta)^2+4\cos^4\theta}}{2\cos^4\theta}\sqrt{\frac{1}{4}\cos^2\theta+\sin^2% \theta}\cr=&&\frac{d\ell}{2(1+r)} \cdot I_2(a),\end{aligned}$$ where the function $I_2(a)$ is defined as$$I_2(a)=\int^{2\pi}_0 \frac{d\theta}{(2\pi)^2} \frac{-(\sin^2\theta+a\cos^2% \theta)+\sqrt{(\sin^2\theta+a\cos^2\theta)^2+4\cos^4\theta}}{2\cos^4\theta}% \sqrt{\frac{1}{4}\cos^2\theta+\sin^2\theta}.$$ After the third step of rescaling in the renormalization group transformation the flow equation of $\alpha$ is calculated as $$\begin{aligned} \frac{d\alpha}{d\ell}=\epsilon\alpha-\frac{\alpha^2}{1+r}\cdot I_2(a),\end{aligned}$$ where $\epsilon=\frac{5}{2}-d$. 
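The angular integral $I_2(a)$ has no obvious closed form, but it is easy to evaluate numerically. A small sketch (midpoint quadrature, stdlib only) also confirms that the coupling $\alpha^\ast=\epsilon/I_2(0)$ annihilates the one-loop term in the $\alpha$ flow at the Gaussian-like fixed point:

```python
import math

def I2(a, n=20000):
    """Midpoint-rule estimate of I_2(a).  The factor
    (-S + sqrt(S^2 + 4c)) / (2c), with S = sin^2(th) + a cos^2(th) and
    c = cos^4(th), is rewritten in the algebraically identical form
    2 / (S + sqrt(S^2 + 4c)), which avoids cancellation near th = pi/2."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        s2, c2 = math.sin(th) ** 2, math.cos(th) ** 2
        S, c = s2 + a * c2, c2 * c2
        total += 2.0 / (S + math.sqrt(S * S + 4 * c)) * math.sqrt(c2 / 4 + s2)
    return total * h / (2 * math.pi) ** 2

eps = 0.5                    # epsilon = 5/2 - d at d = 2
alpha_star = eps / I2(0.0)   # Gaussian-like fixed point coupling
# d(alpha)/dl = eps*alpha - alpha^2 * I2(0) vanishes at alpha = alpha_star
assert abs(eps * alpha_star - alpha_star ** 2 * I2(0.0)) < 1e-12
assert 0 < I2(0.0) < 1
```

Since the integrand is smooth and $2\pi$-periodic, the midpoint rule converges rapidly, so a moderate $n$ already suffices.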
Thus, we have all the flow equations as the following: $$\begin{aligned} &&\frac{da}{d\ell}=a,\cr&&\frac{dr}{d\ell}=2r,\cr&&\frac{d\alpha}{d\ell}% =\epsilon\alpha-\frac{\alpha^2}{1+r}\cdot I_2(a).\end{aligned}$$ There are two fixed points. One is the Gaussian fixed point $(a, r, \alpha)=(0,0,0)$; the other is the Gaussian-like fixed point $% (a^\ast,r^\ast, \alpha^\ast)=(0,0,\frac{\epsilon}{I_2(0)})$. Now we study the structure of the flows near the new fixed point. Defining $% a=a^\ast+\delta a$, $r=r^\ast+\delta r$ and $\alpha=\alpha^\ast+\delta \alpha $ yields the linearized flow equations $$\begin{aligned} &&\frac{d\delta a}{d\ell}=\delta a,\cr&&\frac{d\delta r}{d\ell}=2\delta r,\cr% &&\frac{d\delta \alpha}{d\ell}=-\epsilon\delta \alpha.\end{aligned}$$ Then the scaling exponent of $r$ is $y_r=2$. If we use $\delta=|r-r_c|$ to define the distance to the critical point, the correlation length should scale as $\xi\sim \delta^{-\nu}$. The scaling analysis shows that $% \nu=1/y_r=1/2$. ### The system with particle-hole symmetry In the system with particle-hole symmetry the $K_1$ term vanishes in Eq. (7) of the main text. Then the partition function in $d$ dimensions is written as $$\begin{aligned} \mathcal{Z}=\int D[\phi^\ast,\phi]e^{-S[\phi^\ast,\phi]},\end{aligned}$$ where $$\begin{aligned} S[\phi^\ast,\phi]=&&\int d^d \vec x d\tau \Big\{|\partial_\tau\phi(\vec x,\tau)|^2 +|\partial_x^2\phi(\vec x,\tau)|^2+a|\partial_x\phi(\vec x,\tau)|^2+\mathcal{T }\cr &&+r|\phi(\vec x,\tau)|^2+\alpha|\phi(\vec x,\tau)|^4+\beta|\phi(\vec x,\tau)\partial_x\phi(\vec x,\tau)|^2\Big\}.\end{aligned}$$ A straightforward scaling analysis shows that the scaling dimensions of the parameters are $$\begin{aligned} &&[k_x]=\frac{1}{2}, ~~~[k_y]=1,\cr && [\omega]=1,~~~[a]=1, ~~~[r]=2,\cr % &&[\phi]=-\frac{5+2d}{4}, ~~~[\alpha]=\frac{7}{2}-d,~~~ [\beta]=\frac{5}{2}% -d.\end{aligned}$$ The upper critical dimension is $\frac{7}{2}$.
In two dimensions $[\alpha]=% \frac{3}{2}$ and $[\beta]=\frac{1}{2}$. Both of them are relevant. In three dimensions $[\alpha]=\frac{1}{2}$ is relevant and $[\beta]=-\frac{1}{2}$ is irrelevant. Here we consider the $d=3$ system. In this case the $\beta$ term is irrelevant and will be ignored in our consideration. Then the parameter $a$ receives no correction at one-loop level. The flow equation of $a$ is $$\frac{da}{d\ell}=a.$$ The Feynman graphs contributing to the parameters $r$ and $\alpha$ are shown in Fig. 2. ![The one-loop Feynman graphs contributing to the renormalization of (a) the parameter $r$, (b) the parameter $\protect\alpha$ in systems with particle-hole symmetry.[]{data-label="fig:oneloop2"}](oneloop2){width="9cm"} The one-loop correction to $r$ is given as $$\begin{aligned} 4\alpha \int^\infty_{-\infty}\frac{d\omega}{2\pi}\int_{\mathrm{shell}}\frac{% d^3k}{(2\pi)^3}\frac{1}{\omega^2+k_x^4+ak_x^2+k^2_y+k_z^2+r}.\end{aligned}$$ The integration over the Matsubara frequency $\omega$ can be calculated by performing a contour integration. $$\begin{aligned} &&\int^\infty_{-\infty}\frac{d\omega}{2\pi}\frac{1}{% \omega^2+k_x^4+ak_x^2+k^2_y+k_z^2+r}\cr =&&\int_{C}\frac{dz}{2\pi i}\frac{1}{% \Big(-z+\sqrt{k_x^4+ak_x^2+k^2_y+k_z^2+r}\Big)\Big(z+\sqrt{% k_x^4+ak_x^2+k^2_y+k_z^2+r}\Big)}\cr =&&\frac{1}{2\sqrt{% k_x^4+ak_x^2+k^2_y+k_z^2+r}},\end{aligned}$$ where $z=i\omega$ and contour C is over the left plane. Analogous to the procedure in Eq.
(25)-(28) the momentum shell integration can be performed as $$\begin{aligned} && \int_{\mathrm{shell}}\frac{dk_xdk_ydk_z}{(2\pi)^3}\frac{1}{2\sqrt{% k_x^4+ak_x^2+k^2_y+k_z^2+r}}\cr=&&\frac{d\ell}{2\sqrt{1+r}}\cdot I_3(a),\end{aligned}$$ where the function $I_3$ is defined as $$\begin{aligned} I_3(a)=&& \int^{2\pi}_0 \frac{d\theta}{(2\pi)^2} \Bigg(\frac{% -(\sin^2\theta+a\cos^2\theta)+\sqrt{(\sin^2\theta+a\cos^2\theta)^2 +4\cos^4\theta}}{2\cos^4\theta}\Bigg)^\frac{3}{2}\cdot\sin\theta \sqrt{\frac{% 1}{4}\cos^2\theta+\sin^2\theta}.\end{aligned}$$ Then we have the flow equation of $r$ as $$\begin{aligned} \frac{dr}{d\ell}=2r+\frac{2\alpha}{\sqrt{1+r}}\cdot I_3(a).\end{aligned}$$ The one-loop correction to the parameter $\alpha$ is $$\begin{aligned} -10\alpha^2\int^\infty_{-\infty}\frac{d\omega}{2\pi}\int_{\mathrm{shell}}% \frac{d^3k}{(2\pi)^3}\frac{1}{(\omega^2+k_x^4+ak_x^2+k^2_y+k_z^2+r)^2}.\end{aligned}$$ The integration over the Matsubara frequency $\omega$ can be done by performing a contour integration. $$\begin{aligned} &&\int^\infty_{-\infty}\frac{d\omega}{2\pi}\frac{1}{% (\omega^2+k_x^4+ak_x^2+k^2_y+k_z^2+r)^2} \cr=&&\int_{C}\frac{dz}{2\pi i}% \frac{1}{\Big(-z+\sqrt{k_x^4+ak_x^2+k^2_y+k_z^2+r}\Big)^2 \Big(z+\sqrt{% k_x^4+ak_x^2+k^2_y+k_z^2+r}\Big)^2}\cr =&&\frac{1}{% 4(k_x^4+ak_x^2+k^2_y+k_z^2+r)^\frac{3}{2}}.\end{aligned}$$ Then it is straightforward to obtain the flow equation of $\alpha$ as $$\begin{aligned} \frac{d\alpha}{d\ell}=\epsilon\alpha-\frac{5}{2}\frac{I_3(a)}{(1+r)^\frac{3}{% 2}}\alpha^2.\end{aligned}$$ Then all the flow equations are as the following: $$\begin{aligned} &&\frac{da}{d\ell}=a,\cr&& \frac{dr}{d\ell}=2r+\frac{2\alpha}{\sqrt{1+r}}% \cdot I_3(a), \cr &&\frac{d\alpha}{d\ell}=\epsilon\alpha-\frac{5}{2}\frac{% \alpha^2} {(1+r)^\frac{3}{2}}\cdot I_3(a).\end{aligned}$$ Then we have a new fixed point at $(r^\ast, \alpha^\ast, a^\ast)=(-\frac{2}{5% }\epsilon,\frac{2}{5I_3(0)}\epsilon,0)$.
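As a quick numerical sanity check (stdlib only), one can verify that this fixed point cancels the one-loop flow to leading order in $\epsilon$: the residuals of both beta functions are $O(\epsilon^2)$. The value of $I_3(0)$ below is a placeholder, since only its positivity enters the check:

```python
def flows(r, alpha, eps, I3):
    """RHS of the one-loop flow equations in the particle-hole symmetric
    case at a = 0 (da/dl decouples and is not needed here)."""
    dr = 2 * r + 2 * alpha * I3 / (1 + r) ** 0.5
    dalpha = eps * alpha - 2.5 * I3 * alpha ** 2 / (1 + r) ** 1.5
    return dr, dalpha

I3 = 0.1                       # placeholder positive constant
for eps in (1e-3, 1e-4):
    r_star, alpha_star = -0.4 * eps, 0.4 * eps / I3   # (-2/5)eps, 2eps/(5 I3)
    dr, dalpha = flows(r_star, alpha_star, eps, I3)
    # the fixed point is exact only to leading order: residuals are O(eps^2)
    assert abs(dr) < eps ** 2 and abs(dalpha) < eps ** 2
```

Expanding $(1+r^\ast)^{-1/2}$ and $(1+r^\ast)^{-3/2}$ around $r^\ast=0$ shows analytically that the leftover terms start at order $\epsilon^2$, consistent with the standard $\epsilon$-expansion bookkeeping.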
Around this fixed point we define $% r=r^\ast+\delta r$, $\alpha=\alpha^\ast+\delta\alpha$, $a=a^\ast+\delta a$ and have the linearized equations,$$\begin{aligned} \frac{d}{d\ell}\left(% \begin{array}{c} \delta r \\ \delta\alpha \\ \delta a% \end{array}% \right)=\left(% \begin{array}{ccc} 2-\frac{2}{5}\epsilon & 2 I_3(0) & \frac{4}{5 I_3(0)}\frac{\partial I_3(a)}{% \partial a}|_{a=0} \\ 0 & -\epsilon & 0 \\ 0 & 0 & 1% \end{array}% \right)\left(% \begin{array}{c} \delta r \\ \delta\alpha \\ \delta a% \end{array}% \right).\end{aligned}$$ The eigenvalues are $2-\frac{2}{5}\epsilon$, $-\epsilon$, and $1$. In three dimensions the correlation length exponent is $\nu=\frac{1}{2-\frac{2}{5}% \epsilon}=\frac{5}{9}$.
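Because the stability matrix above is upper triangular, its eigenvalues are simply the diagonal entries, and the quoted exponent follows from exact rational arithmetic:

```python
from fractions import Fraction as F

eps = F(1, 2)   # epsilon = 7/2 - d at d = 3
# Upper-triangular stability matrix: eigenvalues sit on the diagonal.
eigenvalues = (2 - F(2, 5) * eps, -eps, F(1))
assert eigenvalues == (F(9, 5), -F(1, 2), 1)
# r is the relevant coupling driving the transition, so nu = 1/y_r.
nu = 1 / eigenvalues[0]
assert nu == F(5, 9)
```

This reproduces $\nu=1/(2-\tfrac{2}{5}\epsilon)=5/9$ in three dimensions.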
--- abstract: 'We calculate high energy massive scattering amplitudes of closed bosonic string compactified on the torus. We obtain infinite linear relations among high energy scattering amplitudes. For some kinematic regimes, we discover that some linear relations break down and, simultaneously, the amplitudes enhance to power-law behavior due to the space-time T-duality symmetry in the compact direction. This result is consistent with the coexistence of the linear relations and the softer exponential fall-off behavior of high energy string scattering amplitudes as we pointed out previously. It is also reminiscent of hard (power-law) string scatterings in warped spacetime proposed by Polchinski and Strassler.' address: - 'Department of Electrophysics, National Chiao-Tung University, Hsinchu, Taiwan, R.O.C.' - 'Department of Electrophysics, National Chiao-Tung University and Physics Division, National Center for Theoretical Sciences, Hsinchu, Taiwan, R.O.C.' author: - 'Jen-Chi Lee' - Yi Yang title: 'Power-law Behavior of High Energy String Scatterings in Compact Spaces' --- Introduction and Overview ========================= It is well known that there are two fundamental characteristics of high energy string scattering amplitudes, which make them very different from field theory scatterings. These are the softer exponential fall-off behavior (in contrast to the hard power-law behavior of field theory scatterings) and the existence of infinite Regge-pole structure in the form factor of the high energy string scattering amplitudes. For the last few years, the high-energy, fixed angle behavior of string scattering amplitudes [@GM; @Gross; @GrossManes] was intensively reinvestigated for massive string states at arbitrary mass levels [@ChanLee1; @ChanLee2; @CHL; @CHLTY; @PRL; @paperB; @susy; @Closed; @HL]. An infinite number of linear relations among string scattering amplitudes of different string states were discovered.
An important new ingredient of these calculations is the zero-norm states (ZNS) [@ZNS1; @ZNS3; @ZNS2] in the old covariant first quantized (OCFQ) string spectrum. The discovery of these infinite linear relations constitutes the *third* fundamental characteristic of high energy string scatterings, which is not shared by the usual point-particle field theory scatterings. More recently, it has been tempting to conjecture [@Dscatt; @Wall; @Decay] that the newly discovered linear relations, or stringy symmetries, are responsible for the softer exponential fall-off string scatterings at high energies. One way to justify this conjecture (that is, the coexistence of the infinite linear relations and the softer exponential fall-off behavior of high energy string scatterings) is to find more examples of high energy string scatterings which show the unusual hard power-law behavior and, simultaneously, give the breakdown of the infinite linear relations. With this in mind, in this report [@Compact] we calculate high energy, fixed angle massive scattering amplitudes of the closed bosonic string compactified on the torus [@Mende]. In the Gross regime (GR), for each fixed mass level with given quantized and winding momenta $\left( \frac{m}{R},\frac{1}{2}nR\right) $, we obtain infinite linear relations among high energy scattering amplitudes of different string states. Moreover we discover that, for some kinematic regime, the so-called Mende regime (MR), infinite linear relations with $N_{R}=N_{L}$ break down and, simultaneously, the amplitudes enhance to power-law behavior [@Compact]. It is the space-time T-duality symmetry that plays a role here. There was another motivation to study the unusual high energy hard power-law behavior of string scattering. This is mainly motivated by the Gauge/String duality in the Type II B string theory on $AdS_{5}$ background [@Maldacena].
The work of Polchinski and Strassler and others [@PS; @And] suggested that the high energy behavior of string scattering in warped spacetime gives a consistent hard power-law behavior. It would be an interesting problem to understand the common features of the power-law string scatterings in these two different string backgrounds. High Energy Scattering ====================== We consider the 26D closed bosonic string with one coordinate compactified on $S^{1}$ with radius $R$. The closed string boundary condition for the compactified coordinate is $$X^{25}(\sigma+2\pi,\tau)=X^{25}(\sigma,\tau)+2\pi Rn,$$ where $n$ is the winding number. The momentum in the $X^{25}$ direction is then quantized to be$$K=\frac{m}{R},$$ where $m$ is an integer. The left and right momenta are defined to be$$K_{L,R}=K\pm L=\frac{m}{R}\pm\dfrac{1}{2}nR\Rightarrow K=\dfrac{1}{2}\left( K_{L}+K_{R}\right) ,$$ and the mass spectrum can be calculated to be$$\left\{ \begin{array} [c]{c}% M^{2}=\left( \dfrac{m^{2}}{R^{2}}+\dfrac{1}{4}n^{2}R^{2}\right) +N_{R}% +N_{L}-2\equiv K_{L}^{2}+M_{L}^{2}\equiv K_{R}^{2}+M_{R}^{2}\\ N_{R}-N_{L}=mn \end{array} \right. , \label{mass}%$$ where $N_{R}$ and $N_{L}$ are the number operators for the right and left movers, which include the counting of the compactified coordinate. We have also introduced the left and the right level masses as$$M_{L,R}^{2}\equiv2\left( N_{L,R}-1\right) .
\label{level mass}%$$ In the center of momentum frame, the kinematics can be set up as $$\begin{aligned} k_{1L,R} & =\left( +\sqrt{p^{2}+M_{1}^{2}},-p,0,-K_{1L,R}\right) ,\\ k_{2L,R} & =\left( +\sqrt{p^{2}+M_{2}^{2}},+p,0,+K_{2L,R}\right) ,\\ k_{3L,R} & =\left( -\sqrt{q^{2}+M_{3}^{2}},-q\cos\phi,-q\sin\phi ,-K_{3L,R}\right) ,\\ k_{4L,R} & =\left( -\sqrt{q^{2}+M_{4}^{2}},+q\cos\phi,+q\sin\phi ,+K_{4L,R}\right)\end{aligned}$$ where $p\equiv\left\vert \mathrm{\vec{p}}\right\vert $ and $q\equiv\left\vert \mathrm{\vec{q}}\right\vert $ and$$\begin{aligned} k_{i} & \equiv\dfrac{1}{2}\left( k_{iR}+k_{iL}\right) ,\\ k_{i}^{2} & =K_{i}^{2}-M_{i}^{2},\\ k_{iL,R}^{2} & =K_{iL,R}^{2}-M_{i}^{2}\equiv-M_{iL,R}^{2}.\end{aligned}$$ With this setup, the center of mass energy $E$ is$$E=\dfrac{1}{2}\left( \sqrt{p^{2}+M_{1}^{2}}+\sqrt{p^{2}+M_{2}^{2}}\right) =\dfrac{1}{2}\left( \sqrt{q^{2}+M_{3}^{2}}+\sqrt{q^{2}+M_{4}^{2}}\right) . \label{COM}%$$ The conservation of momentum in the compactified direction gives$$m_{1}-m_{2}+m_{3}-m_{4}=0, \label{kk}%$$ and T-duality symmetry implies conservation of winding number$$n_{1}-n_{2}+n_{3}-n_{4}=0. \label{wind}%$$ The left and the right Mandelstam variables are defined to be$$\begin{aligned} s_{L,R} & \equiv-(k_{1L,R}+k_{2L,R})^{2},\\ t_{L,R} & \equiv-(k_{2L,R}+k_{3L,R})^{2},\\ u_{L,R} & \equiv-(k_{1L,R}+k_{3L,R})^{2}.\end{aligned}$$ We now proceed to calculate the high energy scattering amplitudes for general higher mass levels with fixed $N_{R}+N_{L}$. With one compactified coordinate, the mass spectrum of the second vertex of the amplitude is$$M_{2}^{2}=\left( \dfrac{m_{2}^{2}}{R^{2}}+\dfrac{1}{4}n_{2}^{2}R^{2}\right) +N_{R}+N_{L}-2.$$ We now have more mass parameters to define the “high energy limit”. We are going to use three quantities $E^{2},M_{2}^{2}$ and $N_{R}+N_{L}$ to define different regimes of the “high energy limit”. The high energy regime defined by $E^{2}\simeq M_{2}^{2}$ $\gg$ $N_{R}+N_{L}$ will be called the Mende regime (MR).
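The mass formula and level-matching condition above can be checked with exact rational arithmetic: for any state satisfying $N_R-N_L=mn$, the identity $M^2=K_L^2+M_L^2=K_R^2+M_R^2$ holds, and the T-duality map $R\to 2/R$, $m\leftrightarrow n$ (in the units of this paper, where $K_{L,R}=m/R\pm nR/2$) leaves $K_L$ invariant while flipping the sign of $K_R$:

```python
from fractions import Fraction as F

def spectrum(m, n, NL, NR, R):
    """Mass-shell data for a closed-string state with quantized momentum m
    and winding n; the numbers here are an arbitrary illustrative state."""
    assert NR - NL == m * n                     # level-matching
    K_L, K_R = F(m) / R + n * R / 2, F(m) / R - n * R / 2
    M2 = (F(m) / R) ** 2 + F(n * n, 4) * R ** 2 + NR + NL - 2
    ML2, MR2 = 2 * (NL - 1), 2 * (NR - 1)       # left/right level masses
    assert M2 == K_L ** 2 + ML2 == K_R ** 2 + MR2
    return K_L, K_R, M2

R = F(3)
KL, KR, M2 = spectrum(m=2, n=3, NL=1, NR=7, R=R)
# T-duality R -> 2/R with m <-> n (oscillator numbers untouched):
KLd, KRd, M2d = spectrum(m=3, n=2, NL=1, NR=7, R=2 / R)
assert (KLd, KRd, M2d) == (KL, -KR, M2)
```

The self-dual radius in these conventions is $R=\sqrt{2}$, where $m/R$ and $nR/2$ coincide.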
The high energy regime defined by $E^{2}\gg M_{2}^{2}$, $E^{2}\gg$ $N_{R}+N_{L}$ will be called the Gross regime (GR). In the high energy limit, the polarizations on the scattering plane for the second vertex operator are defined to be $$\begin{aligned} e^{\mathbf{P}} & =\frac{1}{M_{2}}\left( \sqrt{p^{2}+M_{2}^{2}},p,0,0\right) ,\\ e^{\mathbf{L}} & =\frac{1}{M_{2}}\left( p,\sqrt{p^{2}+M_{2}^{2}},0,0\right) ,\\ e^{\mathbf{T}} & =\left( 0,0,1,0\right)\end{aligned}$$ where the fourth component refers to the compactified direction. In the MR, we will use [@Compact]$$\left\vert N_{L,R},q_{L,R}\right\rangle \equiv\left( \alpha_{-1}^{\mathbf{T}% }\right) ^{N_{L}-2q_{L}}\left( \alpha_{-2}^{\mathbf{P}}\right) ^{q_{L}% }\otimes\left( \tilde{\alpha}_{-1}^{\mathbf{T}}\right) ^{N_{R}-2q_{R}% }\left( \tilde{\alpha}_{-2}^{\mathbf{P}}\right) ^{q_{R}}\left\vert 0\right\rangle \label{new}%$$ as the second vertex operator in the calculation of high energy scattering amplitudes. The high energy scattering amplitudes in the MR can be calculated to be$$\begin{aligned} A & \simeq\left( -\dfrac{q\sin\phi\left( s_{L}+t_{L}\right) }{t_{L}% }\right) ^{N_{L}}\left( -\dfrac{q\sin\phi\left( s_{R}+t_{R}\right) }% {t_{R}}\right) ^{N_{R}}\left( \frac{1}{2M_{2}q^{2}\sin^{2}\phi}\right) ^{q_{L}+q_{R}}\nonumber\\ & \cdot\left( \left( t_{R}-2\vec{K}_{2R}\cdot\vec{K}_{3R}\right) +\dfrac{t_{R}^{2}\left( s_{R}-2\vec{K}_{1R}\cdot\vec{K}_{2R}\right) }% {s_{R}^{2}}\right) ^{q_{R}}\nonumber\\ & \cdot\left( \left( t_{L}-2\vec{K}_{2L}\cdot\vec{K}_{3L}\right) +\dfrac{t_{L}^{2}\left( s_{L}-2\vec{K}_{1L}\cdot\vec{K}_{2L}\right) }% {s_{L}^{2}}\right) ^{q_{L}}\nonumber\\ & \cdot\frac{\sin\left( \pi s_{L}/2\right) \sin\left( \pi t_{R}/2\right) }{\sin\left( \pi u_{L}/2\right) }B\left( -1-\dfrac{t_{R}}{2},-1-\dfrac {u_{R}}{2}\right) B\left( -1-\dfrac{t_{L}}{2},-1-\dfrac{u_{L}}{2}\right) .\label{amplitude}%\end{aligned}$$ Eq.(\[amplitude\]) is valid for $E^{2}\gg N_{R}+N_{L},$ $M_{2}^{2}\gg N_{R}+N_{L}.$ The infinite
linear relations in the GR --------------------------------------- For the special case of GR with $E^{2}\gg M_{2}^{2}$, Eq.(\[amplitude\]) can be further reduced to$$\begin{aligned} \lim_{E^{2}\gg M_{2}^{2}}A & \simeq\left( -\frac{2\cot\frac{\phi}{2}}% {E}\right) ^{N_{L}+N_{R}}\left( -\frac{1}{2M_{2}}\right) ^{q_{L}+q_{R}% }E^{-1}\left( \sin\frac{\phi}{2}\right) ^{-3}\left( \cos\frac{\phi}% {2}\right) ^{5}\nonumber\\ & \cdot\frac{\sin\left( \pi s_{L}/2\right) \sin\left( \pi t_{R}/2\right) }{\sin\left( \pi u_{L}/2\right) }\exp\left( -\frac{t\ln t+u\ln u-(t+u)\ln(t+u)}{4}\right) .\label{linear}%\end{aligned}$$ We see that, in the GR, for each fixed mass level with given quantized and winding momenta $\left( \frac{m}{R},\frac{1}{2}nR\right) $, we have obtained infinite linear relations among high energy scattering amplitudes of different string states with various $(q_{L},q_{R})$. Note also that this result reproduces the correct ratios $\left( -\frac{1}{2M_{2}}\right) ^{q_{L}% +q_{R}}$ obtained in the previous works [@Dscatt; @Wall; @Decay]. However, the mass parameter $M_{2}$ here depends on $\left( \frac{m}{R},\frac{1}% {2}nR\right) $. Power-law and breakdown of the infinite linear relations in the MR ------------------------------------------------------------------ The power-law behavior of high energy string scatterings in a compact space was first suggested by Mende. Here we give a mathematically more concrete description. It is easy to see that the “power law” condition, i.e. 
Eq.(3.7) in Mende’s paper [@Mende]$$k_{1L}\cdot k_{2L}+k_{1R}\cdot k_{2R}=\text{constant,}% \label{mandy's condition}%$$ turns out to be$$\begin{aligned} & \sqrt{p^{2}+M_{1}^{2}}\cdot\sqrt{p^{2}+M_{2}^{2}}+p^{2}+2\left( \vec {K}_{1}\cdot\vec{K}_{2}+\vec{L}_{1}\cdot\vec{L}_{2}\right) \nonumber\\ & =\text{constant.}%\end{aligned}$$ As $p\rightarrow\infty$, due to the existence of winding modes in the compactified closed string, it is possible to choose $\left( \vec{K}_{1}% ,\vec{K}_{2};\vec{L}_{1},\vec{L}_{2}\right) $ such that$$\vec{K}_{1}\cdot\vec{K}_{2}+\vec{L}_{1}\cdot\vec{L}_{2}<0,$$ and let $\left( \vec{K}_{1}\cdot\vec{K}_{2}+\vec{L}_{1}\cdot\vec{L}% _{2}\right) \rightarrow-\infty$ to make $$\begin{aligned} k_{1L}\cdot k_{2L}+k_{1R}\cdot k_{2R}\simeq & \text{ constant}\\ \Rightarrow s_{L}+s_{R}\simeq & \text{ constant.}%\end{aligned}$$ In our calculation, this condition implies the beta functions in Eq.(\[amplitude\]) reduce to$$\begin{aligned} & B\left( -1-\dfrac{t_{R}}{2},-1-\dfrac{u_{R}}{2}\right) B\left( -1-\dfrac{t_{L}}{2},-1-\dfrac{u_{L}}{2}\right) \nonumber\\ & =\frac{\sin\left( \pi s_{R}/2\right) \Gamma(-\frac{t_{R}}{2}% -1)\Gamma(-\frac{u_{R}}{2}-1)\Gamma(-\frac{t_{L}}{2}-1)\Gamma(-\frac{u_{L}}% {2}-1)}{\pi\frac{s_{R}}{2}\left( 1+\frac{s_{R}}{2}\right) \left( -1+\frac{s_{R}}{2}\right) },\end{aligned}$$ which behaves as *power-law* in the high energy limit! 
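The two asymptotic regimes of the Euler beta function can be illustrated numerically with standard $\Gamma$-function estimates (this is a generic sketch of the mechanism, not the paper's full amplitude): when both arguments grow linearly with a scale $\lambda$, as at fixed angle, $\ln|B|$ falls linearly in $\lambda$ (exponential fall-off), while when one argument stays bounded, as when $s_L+s_R$ is held fixed, $B(x,y_0)\sim\Gamma(y_0)\,x^{-y_0}$ is a power law:

```python
from math import lgamma, log

def lnB(x, y):
    """log of the Euler beta function B(x, y) = G(x)G(y)/G(x+y), x, y > 0."""
    return lgamma(x) + lgamma(y) - lgamma(x + y)

# Fixed-angle analogue: both arguments ~ lambda; slope of ln B is -2 ln 2
# for x = y = lambda (exponential fall-off).
slope = (lnB(100.0, 100.0) - lnB(50.0, 50.0)) / 50.0
assert abs(slope + 2 * log(2)) < 0.02

# Bounded-argument analogue: B(x, y0) ~ Gamma(y0) x^{-y0}, a power law in x.
y0 = 1.5
for x in (1e3, 1e4):
    assert abs(lnB(x, y0) + y0 * log(x) - lgamma(y0)) < 1e-2
```

In the paper's kinematics the bounded combination is supplied by the winding modes, which is what allows the beta-function factors to escape the fixed-angle exponential suppression.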
On the other hand, it is obvious that the $(q_{L},q_{R})$ dependent power factors of the amplitude in Eq.(\[amplitude\])$$\begin{aligned} A_{q_{L},q_{R}} & \simeq\left( \frac{1}{2M_{2}q^{2}\sin^{2}\phi}\right) ^{q_{L}+q_{R}}\nonumber\\ & \cdot\left( \left( t_{R}-2\vec{K}_{2R}\cdot\vec{K}_{3R}\right) +\dfrac{t_{R}^{2}\left( s_{R}-2\vec{K}_{1R}\cdot\vec{K}_{2R}\right) }% {s_{R}^{2}}\right) ^{q_{R}}\nonumber\\ & \cdot\left( \left( t_{L}-2\vec{K}_{2L}\cdot\vec{K}_{3L}\right) +\dfrac{t_{L}^{2}\left( s_{L}-2\vec{K}_{1L}\cdot\vec{K}_{2L}\right) }% {s_{L}^{2}}\right) ^{q_{L}}%\end{aligned}$$ show *no* linear relations in the MR. Note that the mechanism to break the linear relations and the mechanism to enhance the amplitude to power-law are both due to $E\simeq M_{2}$ in the MR. In our notation, Eq.(\[mandy’s condition\]) is equivalent to the following condition$$\lim_{p\rightarrow\infty}\frac{\sqrt{p^{2}+M_{1}^{2}}\cdot\sqrt{p^{2}% +M_{2}^{2}}+p^{2}}{\vec{K}_{1}\cdot\vec{K}_{2}+\vec{L}_{1}\cdot\vec{L}_{2}% }\sim\frac{E^{2}}{\left( \dfrac{m_{1}m_{2}}{R^{2}}+\dfrac{1}{4}n_{1}% n_{2}R^{2}\right) }\sim-\text{ }\mathcal{O}(1).\label{condition}%$$ For our purpose here, as we will see soon, it is good enough to choose only one compactified coordinate to realize Eq.(\[condition\]). First of all, in addition to Eq.(\[kk\]) and Eq.(\[wind\]), Eq.(\[mass\]) implies$$m_{i}n_{i}=0,i=1,2,3,4\text{ (no sum on }i\text{).}%$$ This is because three of the four vertices are tachyons. Also, since we are going to take $n_{2}$ to infinity with fixed $N_{R}+N_{L}$ in order to satisfy Eq.(\[condition\]), we are forced to take $m_{2}=0$. In sum, we can take, say, $m_{i}=0$ for $i=1,2,3,4,$ and $n_{1}=-n_{2}=-n,n_{3}=-2n,n_{4}=0,$ and then let $n\rightarrow\infty$ to realize Eq.(\[condition\]). Note that it is crucial to choose different signs for $n_{1}$ and $n_{2}$ in order to achieve the minus sign in Eq.(\[condition\]). We stress that there are other choices to realize the condition.
One notes that all of these choices imply$$N_{R}=N_{L}.$$ It is obvious that one can also compactify more than one coordinate to realize the Mende condition. We conclude that the high energy scatterings of the “highly winding string states” of the compactified closed string in the MR behave as the unusual UV power-law, and the usual linear relations among scattering amplitudes break down due to the unusual power-law behavior. This work is supported in part by the National Science Council, the 50-billion project of the Ministry of Education and the National Center for Theoretical Science, Taiwan, R.O.C. [99]{} D. J. Gross and P. F. Mende, Phys. Lett. B **197**, 129 (1987); Nucl. Phys. B **303**, 407 (1988). D. J. Gross, Phys. Rev. Lett. **60**, 1229 (1988); Phil. Trans. R. Soc. Lond. A **329**, 401 (1989). D. J. Gross and J. L. Manes, Nucl. Phys. B **326**, 73 (1989). See section 6 for details. C. T. Chan and J. C. Lee, Phys. Lett. B **611**, 193 (2005). J. C. Lee, \[arXiv:hep-th/0303012\]. C. T. Chan and J. C. Lee, Nucl. Phys. B **690**, 3 (2004). C. T. Chan, P. M. Ho and J. C. Lee, Nucl. Phys. B **708**, 99 (2005). C. T. Chan, P. M. Ho, J. C. Lee, S. Teraguchi and Y. Yang, Nucl. Phys. B **725**, 352 (2005). C. T. Chan, P. M. Ho, J. C. Lee, S. Teraguchi and Y. Yang, Phys. Rev. Lett. **96**, 171601 (2006). C. T. Chan, P. M. Ho, J. C. Lee, S. Teraguchi and Y. Yang, Nucl. Phys. B **749**, 266 (2006). C. T. Chan, J. C. Lee and Y. Yang, Nucl. Phys. B **738**, 93 (2006). C. T. Chan, J. C. Lee and Y. Yang, Nucl. Phys. B **749**, 280 (2006). P.-M. Ho and X.-Y. Lin, Phys. Rev. D **73**, 126007 (2006). J. C. Lee, Phys. Lett. B **241**, 336 (1990); Phys. Rev. Lett. **64**, 1636 (1990). J. C. Lee and B. Ovrut, Nucl. Phys. B **336**, 222 (1990); J. C. Lee, Phys. Lett. B **326**, 79 (1994). T. D. Chung and J. C. Lee, Phys. Lett. B **350**, 22 (1995); Z. Phys. C **75**, 555 (1997). J. C. Lee, Eur. Phys. J. C **1**, 739 (1998). H. C. Kao and J. C. Lee, Phys. Rev. D **67**, 086003 (2003). C. T. Chan, J. C. Lee and Y. 
Yang, Phys. Rev. D **71**, 086005 (2005). C. T. Chan, J. C. Lee and Y. Yang, “Scatterings of massive string states from D-brane and their linear relations at high energies”, Nucl. Phys. B **764**, 1 (2007). C. T. Chan, J. C. Lee and Y. Yang, “Power-law Behavior of Strings Scattered from Domain-wall and Breakdown of Their High Energy Linear Relations”, hep-th/0610219. J. C. Lee and Y. Yang, “Linear Relations of High Energy Absorption/Emission Amplitudes of D-brane”, Phys. Lett. B **646**, 120 (2007), hep-th/0612059. J. C. Lee and Y. Yang, “Linear Relations and their Breakdown in High Energy Massive String Scatterings in Compact Spaces”, Nucl. Phys. B **784**, 22 (2007). P. F. Mende, “High Energy String Collisions in a Compact Space”, Phys. Lett. B **326**, 216 (1994), hep-th/9401126. J. Maldacena, Adv. Theor. Math. Phys. **2**, 231 (1998). J. Polchinski and M. Strassler, Phys. Rev. Lett. **88**, 031601 (2002). O. Andreev, Phys. Rev. D **70**, 027901 (2004).
--- abstract: 'We investigate a quantum key distribution (QKD) scheme which utilizes a biased basis choice in order to increase the efficiency of the scheme. The optimal bias between the two measurement bases, a more refined error analysis, and finite key size effects are all studied in order to assure the security of the final key generated with the system. We then implement the scheme in a local entangled QKD system that uses polarization entangled photon pairs to securely distribute the key. A 50/50 non-polarizing beamsplitter with different optical attenuators is used to simulate a variable beamsplitter in order to allow us to study the operation of the system for different biases. Over 6 hours of continuous operation with a total bias of 0.9837/0.0163 (Z/X), we were able to generate 0.4567 secure key bits per raw key bit as compared to 0.2550 secure key bits per raw key bit for the unbiased case. This represents an increase in the efficiency of the key generation rate by 79%.' address: - '$^1$ Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada' - '$^2$ Perimeter Institute, 31 Caroline Street North, Waterloo, ON, N2L 2Y5, Canada' - '$^3$ Institut für Experimentalphysik, Universität Innsbruck, Technikerstrasse 25, 6020 Innsbruck, Austria' author: - 'Chris Erven,$^1$ Xiongfeng Ma,$^1$ Raymond Laflamme,$^{1,2}$ and Gregor Weihs$^{1,3}$' bibliography: - 'Paper4\_Bibliography.bib' title: Entangled Quantum Key Distribution with a Biased Basis Choice --- Introduction {#sec.Introduction} ============ Quantum key distribution (QKD) allows two distant parties, Alice and Bob, to create a random secret key even when the quantum channel they share is accessible to an eavesdropper, Eve, so long as they also have an authenticated public classical channel. 
The security of QKD is built on the fundamental laws of physics in contrast to existing classical public key cryptography whose security is based on unproven computational assumptions. There are mainly two types of QKD schemes: prepare-and-measure schemes, the best known of which is the original BB84 protocol proposed by Bennett and Brassard [@BB84] in 1984; and entanglement based schemes, the simplest of which is the BBM92 scheme developed by Bennett *et al.* [@BBM92] in 1992. The BBM92 scheme essentially symmetrizes the BB84 protocol by distributing entangled pairs of qubits to Alice and Bob and having them both measure their half of each pair in one of two complementary bases. For a more complete overview of both QKD theory and experiments, please refer to the recent review articles by Gisin *et al.* [@GRTZ02], Dušek *et al.* [@DLH06], and Scarani *et al.* [@SBCDLP08]. A key feature of the BB84 protocol and many others is that the bases used are chosen randomly, independently, and uniformly. Most security proofs, including the seminal work by Shor and Preskill [@SP00], rely heavily on the symmetry which uniformity of basis choice provides. For example, it allows the sifted data from both bases to be grouped together and a simple error correction algorithm to be performed on the grouped data, producing a single error rate. However, uniformly chosen bases have the consequence that on average half of the raw data is rejected, leading to an efficiency limited to at most 50%. This is the reason for the curious factor of $\frac{1}{2}$ that appears in the key rates of many security proofs [@Lut00; @MFL07b]. This symmetry requirement was removed by Lo *et al.* [@LCA02] in 2004 when they proposed a simple modification to the BB84 scheme that could in principle allow one to asymptotically approach an efficiency of 100%. Their scheme relies on two changes to the BB84 protocol: non-uniformity in the choice of bases, and a refined data analysis. 
The first change of non-uniformity allowed Alice and Bob to achieve much higher efficiencies with their raw data. In fact, Lo *et al.* showed that this efficiency could be made arbitrarily close to 100% in the long key limit. The second change of a refined data analysis allowed Alice and Bob to maintain the security of their system since a simple error analysis was no longer sufficient. Acín *et al.* [@AMP06] have also studied this protocol in 2006 using a CHSH test for security under the conditions of no-signalling eavesdroppers. In this article, we detail the experimental implementation of the biased basis choice protocol with a simulated variable non-polarizing beamsplitter and a local entangled QKD system. We begin by first reviewing the theory for the biased protocol. We study the optimal bias ratio and the important parameters necessary to maintain security. We then follow with a description of the experimental setup for the implementation of the biased scheme. Lastly, we report on the results of the experiment and compare the efficiency of the biased protocol with those of an unbiased protocol. Theory {#sec.Theory} ====== Lo *et al.* [@LCA02] proposed their protocol as a modification to the original BB84 protocol which is a prepare-and-measure scheme. Here we make the simple extension to the entanglement based BBM92 scheme developed by Bennett *et al.* [@BBM92] in 1992. In the original scheme, a source of polarization entangled photon pairs in the singlet Bell state is placed in between Alice and Bob, and one photon from each pair is sent to Alice and Bob. Alice and Bob then randomly, independently, and uniformly choose to measure in either the rectilinear (H/V) basis or the diagonal ($+45^{\circ}$/$-45^{\circ}$) basis. Now we extend this scheme to remove the uniformity in the basis choices just as Lo *et al.* did for the BB84 protocol. 
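The efficiency gain from biasing can be quantified before any security analysis: if each party measures in the $Z$ basis with probability $q$, the bases agree — and a raw bit survives sifting — with probability $q^2 + (1-q)^2$. A minimal sketch (the value 0.9837 is the total bias of the final experiment quoted in the abstract):

```python
def sift_prob(q):
    """Probability that Alice and Bob happen to choose the same basis,
    each picking Z with probability q and X with probability 1 - q."""
    return q**2 + (1 - q)**2

print(sift_prob(0.5))     # 0.5   -> the usual 50% of raw data is lost
print(sift_prob(0.9837))  # ~0.97 -> almost all raw data survives sifting
```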
We are allowed to do so without violating the security of the scheme because the proof by Lo *et al.* already relies on expanding the BB84 scheme into an imagined entanglement based scheme. Thus, their proof of security holds for performing the actual biased entanglement based scheme as well as performing the biased BB84 scheme. Additionally, we use the recently developed squashing model [@BML08b; @TT08b; @KAYI08] which allows us to assume that we are dealing with qubits so that the proof by Lo *et al.* holds (double clicks are assumed to be rare and ignored). The biased basis scheme has two main changes: first, Alice and Bob still choose their measurement bases randomly and independently, but non-uniformly, with substantially different probabilities. This allows for a much higher probability of Alice and Bob using the same basis and thus allows them to achieve much higher efficiencies with their raw data. With uniformity removed, the second change necessary is a refined error analysis since an eavesdropper could now easily break a system which performed a simple error analysis on the lumped data by eavesdropping primarily along the predominant basis. To ensure security, it is crucial for Alice and Bob to divide their data into two subsets according to the bases used and compute an error rate for each subset separately. It is only with this second addition that one can ensure the security of this biased scheme. First, we define the necessary quantities that we will use in our security analysis. Define $e_{bx}$ $(\delta_{px})$ and $e_{bz}$ $(\delta_{pz})$ to be the bit (phase) error rates in the $X$ (diagonal) and $Z$ (rectilinear) bases respectively, where an $e$ is used to denote a measurable quantity and a $\delta$ is used to denote a quantity that has to be inferred. Note that the bit error rates, $e_{bx}$ and $e_{bz}$, are known exactly by Alice and Bob once they perform error correction since they can count the number of errors found during error correction. 
However, the phase error rates, $\delta_{px}$ and $\delta_{pz}$, need to be estimated from the bit error rates since they are not directly accessible from Alice and Bob’s measurement data. In order to estimate the phase error rates, we define the quantities $p_{bx}$ $(p_{px})$ and $p_{bz}$ $(p_{pz})$ to be the bit (phase) error probabilities in the $X$ and $Z$ bases respectively. Since a basis independent source is assumed, we know that $$\label{eq.Probs} p_{pz} = p_{bx} \qquad p_{px} = p_{bz}.$$ Now in the long key limit $e_{bx}$ converges to $p_{bx}$ while $\delta_{pz}$ converges to $p_{pz}$. Using Eq. (\[eq.Probs\]) now allows us to say that $$\label{eq.PhaseEqualsBit} \delta_{px} = e_{bz} \qquad \delta_{pz} = e_{bx}$$ in the long key limit. Obviously for finite key lengths this equality will not hold exactly and we will have to consider statistical fluctuations. We will address the finite key size effects in a moment, but for now assume that Eq. (\[eq.PhaseEqualsBit\]) holds. The key point of the security analysis rests in the privacy amplification part [@BBCM95]. Privacy amplification is usually performed via the 2-universal hash functions discovered by Wegman and Carter [@CW79] and that is what is done in the experimental implementation detailed in this paper. Alice and Bob need to take care of the bits exposed during error correction which they can directly count during the error correction process since an eavesdropper can learn this information listening in to the classical channel. They also need to estimate the phase error rates for the two bases in order to estimate the amount of an eavesdropper’s information on the quantum signals that were distributed. Privacy amplification then needs to be applied to reduce the information of the eavesdropper from these two sources to an arbitrarily small amount. This leads to the following key generation rate according to [@MFL07b], expressed in terms of secure bits per raw bit, using Eq. 
(\[eq.PhaseEqualsBit\]) above $$\begin{aligned} \label{eq.KeyRate} R & \geq & (1-q)^2[1 - f(e_{bx})h_{2}(e_{bx}) - h_{2}(\delta_{px})] \nonumber \\ & & + q^2[1 - f(e_{bz})h_{2}(e_{bz}) - h_{2}(\delta_{pz})] \nonumber \\ & \geq & (1-q)^2[1 - f(e_{bx})h_{2}(e_{bx}) - h_{2}(e_{bz})] \nonumber \\ & & + q^2[1 - f(e_{bz})h_{2}(e_{bz}) - h_{2}(e_{bx})]\end{aligned}$$ where $q$ is defined to be Alice’s or Bob’s bias probability of measuring in the $Z$ basis and $(1-q)$ is their probability of measuring in the $X$ basis, $f(x)$ is the error correction inefficiency as a function of the error rate, normally $f(x) \geq 1$ with $f(x) = 1$ at the Shannon limit, and $h_{2}(x) = -x \log x - (1-x) \log (1-x)$ is the binary entropy function. For this initial analysis, the key rate formula assumes that Alice and Bob each pick the same bias. We benefit from separating the key rate into contributions from the $X$ and $Z$ bases since the key rate can still be positive for an error rate higher than 11% in one basis so long as the other error rate is low enough. This can be seen in Fig. 2 of Ref. [@MFDCTL06] since we use local operations and one-way classical communication (1-LOCC) in our post-processing. Note that the cascade error correction algorithm, which we use, is considered 1-LOCC post-processing since the two-way classical communication is not used to perform advantage distillation. Also note that in this treatment of cascade we assume that Eve learns both the positions of the errors and the revealed parity bits. The last things to take care of are the finite key size effects since these will be very important in determining how to balance the optimal biasing ratio. For this analysis we assume the main finite key size effect is due to the parameter estimation; namely, the statistical fluctuations in the phase error rates compared to what is estimated from the bit error rates. 
The other possible finite key size effects are authentication, the leakage of information during error correction (typically referred to as $\mathrm{leak_{EC}}$), the probability of failure for error correction and error verification (typically referred to as $\varepsilon_{EC}$), and the probability of failure of privacy amplification (typically referred to as $\varepsilon_{PA}$) [@SR08b; @CS08]. For our experiment the probability of failure for our parameter estimation is on the order of $10^{-6}$ since, as will be discussed below, we choose a safety margin on our phase error estimates so that the probability that the actual phase error rates are outside of this range is less than $10^{-6}$. Also, we do not implement authentication on the classical channel, so we ignore its effect on our key rate. Note though that the resources required for authentication scale logarithmically in the length of the secret key generated by a QKD session [@ABBDDGGGLLLPPPPRRRRSSWZ07]. The information being leaked during error correction does not contribute any finite key size effects since we directly count the number of bits revealed and take care of them in the privacy amplification step [@Lut99]. Thus, there are no fluctuations in this contribution since we know it exactly. The error correction and verification algorithm we implement is due to Sugimoto and Yamazaki [@SY00] and allows us to bound the failure probability and thus set its value. For this experiment we implement the error correction algorithm so that its probability of failure is $< 10^{-10}$. Lastly, the probability of failure for the privacy amplification step is assumed to be negligible since it depends on the privacy amplification algorithm used and we assume that it is possible to make one with a sufficiently small failure probability. For a strict finite key analysis, one should follow Refs. [@HHHT07; @SR08; @SR08b; @CS08]. 
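Before turning to the statistical fluctuations, the asymptotic rate of Eq. (\[eq.KeyRate\]) is simple to evaluate numerically. The sketch below (the error rates and the constant $f = 1.2$ are illustrative placeholders, not the experiment's values) reproduces the point made above that separating the two bases can keep the rate positive even when one error rate exceeds 11%:

```python
from math import log2

def h2(x):
    """Binary entropy function h_2(x)."""
    if x <= 0 or x >= 1:
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def key_rate(q, ebx, ebz, f=1.2):
    """Secure bits per raw bit, Eq. (eq.KeyRate), in the long key
    limit where delta_px = e_bz and delta_pz = e_bx."""
    return ((1 - q)**2 * (1 - f * h2(ebx) - h2(ebz))
            + q**2 * (1 - f * h2(ebz) - h2(ebx)))

# A noisy basis (12% > 11%) kills the rate if both bases are that
# noisy at q = 0.5, but with a clean Z basis and a strong bias
# toward it the overall rate stays positive.
print(key_rate(0.5, ebx=0.12, ebz=0.12))   # negative: no key
print(key_rate(0.95, ebx=0.12, ebz=0.01))  # positive: key survives
```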
We need to consider the statistical fluctuations in order to discuss the optimal bias between the two bases. As was stated above, both error rates $e_{bx}$ and $\delta_{pz}$ converge to $p_{bx} = p_{pz}$ in the long key limit. We use standard random sampling theory to determine the following formula given in [@MFL07b] for the estimates of our phase error rates $$\begin{aligned} \label{eq.RandomSampling} P_{\epsilon_{z}} & \equiv & \mathrm{Prob} \{\delta_{pz} > e_{bx}+\epsilon_{z}\} \nonumber \\ & \leq & \exp[-\frac{\epsilon_{z}^2 n_{xx}}{4 e_{bx} (1-e_{bx})}],\end{aligned}$$ where $n_{xx}$ is the number of bits generated from Alice and Bob both measuring in the $X$ basis, and $\epsilon_{z}$ is a small deviation from the measured bit error rate. Eq. (\[eq.RandomSampling\]) allows us to put a bound on the probability that the phase error rate deviates from our measured bit error rate by more than $\epsilon_{z}$. For example, if Alice and Bob measure 10,000 qubits in the $X$ basis ($n_{xx} = 10,000$), find an error rate of 5% ($e_{bx} = 0.05$), and set their desired deviation to 1% ($\epsilon_{z} = 0.01$), then Eq. (\[eq.RandomSampling\]) would allow them to say that the probability that their phase error rate ($\delta_{pz}$) deviates by more than $\epsilon_{z}$ is less than 0.52%. Or said another way, if we use $\delta_{pz} = 0.06$ in our key rate formula then we are confident that our key was generated securely with a probability of 99.48%. So our key rate formula has now become $$\begin{aligned} \label{eq.KeyRateWithEpsilon} R & \geq & (1-q)^2[1 - f(e_{bx})h_{2}(e_{bx}) - h_{2}(e_{bz} + \epsilon_{x})] \nonumber \\ & & + q^2[1 - f(e_{bz})h_{2}(e_{bz}) - h_{2}(e_{bx} + \epsilon_{z})],\end{aligned}$$ which is the same as before except for the addition of $\epsilon_{x}$ and $\epsilon_{z}$ which are now necessary to deal with the finite key statistics. We should note that the random sampling formula used in Eq. 
(\[eq.RandomSampling\]) is a good approximation in the long key limit. Now we can try to find the optimal bias between the two bases. Given estimates for $e_{bx}$, $e_{bz}$, and $N$ (the total coincidence count), and picking $P_{\epsilon} = P_{\epsilon_{x}} + P_{\epsilon_{z}}$, one can optimize the bias $q$ and the deviations $\epsilon_{x}$ and $\epsilon_{z}$ according to Eqs. (\[eq.KeyRateWithEpsilon\]) and (\[eq.RandomSampling\]). For example, with estimates of the parameters we expect to see in our system, we can graph the key rate (normalized in terms of secure key bits per raw key bit) versus the bias ratio $q$, shown in Figure \[fig:Rr\]. The inset shows the estimates for the parameters needed in order to find the optimum bias: $N$ is the total number of entangled pairs sent to Alice and Bob over the course of the experiment, $P_{\epsilon}$ is the confidence probability desired for our phase error statistics, $e_{bx}$ and $e_{bz}$ are the observed bit error rates in the $X$ and $Z$ bases over the course of the experiment, and $f$ is the observed error correction efficiency. From Fig. \[fig:Rr\] we can see that the key rate is maximized for a bias of $q = 0.97$. Once we have found the optimum bias we can then use it, along with Eq. (\[eq.RandomSampling\]), to determine the optimum deviations $\epsilon_{x}$ and $\epsilon_{z}$ that will still allow us to achieve our desired confidence probability $P_{\epsilon}$. ![Plot of the key generation rate ($R$) in terms of the bias ratio ($q$).[]{data-label="fig:Rr"}](BiasVsKeyRate2D.pdf){width="15cm"} Examining Fig. \[fig:Rr\] we see that a maximum also occurs around $q = 0.03$, though it is slightly lower than the one at $q = 0.97$. One might expect the efficiency curve to be symmetric about the middle point $q = 0.5$, since biasing the protocol towards the $X$ basis should be just as good as biasing it towards the $Z$ basis. 
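Both the worked example above and the optimisation of $q$ can be reproduced in a few lines. In the sketch below the inputs ($N = 3\times10^{7}$, $e_{bx} = 0.038$, $e_{bz} = 0.002$, $f = 1.3$, $P_{\epsilon} = 10^{-6}$) are plausible placeholders rather than the actual inset values of Fig. \[fig:Rr\]; with them the grid search lands near $q \approx 0.97$, with a lower secondary maximum on the $X$-heavy side:

```python
from math import exp, log, log2, sqrt

def h2(x):
    """Binary entropy, clamped: an error rate >= 1/2 costs a full bit."""
    if x <= 0.0:
        return 0.0
    if x >= 0.5:
        return 1.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def p_eps(eps, n, eb):
    """Eq. (eq.RandomSampling): bound on Prob{phase error > eb + eps}."""
    return exp(-eps**2 * n / (4 * eb * (1 - eb)))

# Worked example from the text: n_xx = 10,000, e_bx = 5%, eps_z = 1%.
print(p_eps(0.01, 10_000, 0.05))   # ~0.0052, i.e. the quoted 0.52%

def eps_for(p_target, n, eb):
    """Invert Eq. (eq.RandomSampling): the deviation eps that achieves
    failure probability p_target from n sampled bits at error rate eb."""
    return sqrt(-4 * eb * (1 - eb) * log(p_target) / n)

def optimise_bias(N, ebx, ebz, f, p_eps_total=1e-6):
    """Grid search for the bias q maximising Eq. (eq.KeyRateWithEpsilon),
    splitting the failure-probability budget evenly between the bases."""
    best_R, best_q = 0.0, 0.5
    for i in range(1, 1000):
        q = i / 1000
        n_xx = N * (1 - q)**2          # expected X-X coincidences
        n_zz = N * q**2                # expected Z-Z coincidences
        eps_z = eps_for(p_eps_total / 2, n_xx, ebx)
        eps_x = eps_for(p_eps_total / 2, n_zz, ebz)
        R = ((1 - q)**2 * (1 - f * h2(ebx) - h2(ebz + eps_x))
             + q**2 * (1 - f * h2(ebz) - h2(ebx + eps_z)))
        if R > best_R:
            best_R, best_q = R, q
    return best_q, best_R

q_opt, R_opt = optimise_bias(N=3e7, ebx=0.038, ebz=0.002, f=1.3)
print(q_opt, R_opt)   # q_opt close to 0.97 for these placeholder inputs
```

Because the placeholder error rates differ between the bases, the search also exhibits a lower local maximum near $q \approx 0.03$, mirroring the shape of Fig. \[fig:Rr\].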
However, the curve is not entirely symmetric because the error rates and error correction efficiencies are not identical in the two bases. The error correction efficiency plays the largest part in the overall rate since it costs a factor of $f(e)h(e)$ in the final key generation rate. Thus, since error correction is more costly in the $X$ basis than in the $Z$ basis ($f(e_{bx})h_{2}(e_{bx}) > f(e_{bz})h_{2}(e_{bz})$), the optimum rate is also lower when the bias is skewed towards the $X$ basis. It is important to understand how the biased protocol utilizes the optimum bias in order to make the most efficient use of the raw key data. The optimum bias, $q$, along with the optimum deviations, $\epsilon_{x}$ and $\epsilon_{z}$, are chosen such that only the minimum number of measurements are made in the weak basis in order to achieve the desired confidence probability $P_{\epsilon}$. This has the consequence that privacy amplification of the measurement results from the strong basis has to be deferred until the end of the entangled photon distribution phase. It is only with the last distributed photon that Alice and Bob gain enough statistics in the weak basis to allow them to privacy amplify all the error corrected key generated in the strong basis over the course of the experiment. They will obtain small amounts of privacy amplified key from the weak basis over the course of the experiment, but the majority of the key that comes from the strong basis will not be available until after the distribution phase is completed. Experimental Implementation {#sec.ExperimentalImplementation} =========================== The purpose of this experiment was the experimental investigation of the biased basis QKD protocol and all of the practicalities associated with implementing the protocol, such as choosing the proper bias, performing a more refined error analysis, and accounting for finite key size effects. 
In order to focus on these issues, we did not involve the complications of a free-space link and instead chose to have Alice and Bob locally detect their halves of the photon pairs next to the source, each connected to it by a short optical fibre. Additionally, since we wanted to investigate many different biases we decided to simulate a variable non-polarizing beamsplitter since, to the best of the authors’ knowledge, such devices do not exist. Indeed, even fixed biased non-polarizing beamsplitters are extremely expensive and difficult to manufacture; therefore, it is worthwhile to study the performance of the system for many different biases with a simulated variable biased beamsplitter first. The difficulties associated with biased beamsplitters might suggest one possible reason for employing an active basis selection scheme over a passive one since biasing might be done more easily. However, the active schemes would have their own complications, such as generating truly random, biased bit strings at a high enough rate for their polarization modulators. We are also currently investigating the development of a variable non-polarizing beamsplitter to allow the flexible adjustment of the bias in our system without throwing away counts. The experimental setup consists of a compact spontaneous parametric down-conversion (SPDC) source, two compact passive polarization analysis modules, avalanche photodiode (APD) detectors, time-stampers, GPS time receivers, two laptop computers, and custom-written software. The SPDC source comprises a 1[$\;\mathrm{mm}$]{} thick $\beta$-BBO crystal pumped with a 50[$\;\mathrm{mW}$]{} laser which produces entangled photon pairs at a degenerate wavelength of 815[$\;\mathrm{nm}$]{}. A $\beta$-BBO crystal 0.5[$\;\mathrm{mm}$]{} thick in each arm compensates for walk-off effects, which can ruin the entanglement. 
Typically, a total single photon count rate of 100,000[$\;\mathrm{s^{-1}}$]{} in each arm of the source and a coincident detection rate of 11,000[$\;\mathrm{s^{-1}}$]{} are measured locally. More details on the setup can be found in our earlier papers [@WE07; @ECLW08b; @ECLW08]. In order to implement the biased protocol we simulated a biased beamsplitter by placing the appropriate attenuators in the transmission arm of the original 50/50 beamsplitter (BS) used to perform the basis choice, as shown in Fig. \[fig:PolAnaMod\]. ![Schematic of the polarization analysis module with a neutral density (ND) filter placed in the transmission arm of the 50/50 beamsplitter (BS) in order to achieve the desired bias in the measurement results. (PBS: polarizing beamsplitters.)[]{data-label="fig:PolAnaMod"}](PolarizationAnalysisModuleSchematicBiased.pdf){width="7cm"} The transmitted arm analyzes the photons in the diagonal basis. Also, in order to directly compare the efficiencies of experiments with different biases we used the appropriate attenuators ahead of the beamsplitter to make the rates of the experiments with different biases equal. The attenuators are placed in the transmission arm ($X$, diagonal basis) so that the $Z$ (rectilinear) basis is the one which Alice and Bob predominantly measure in. As was mentioned earlier, we make this choice because error correction costs a factor of $f(e)h(e)$ (with $f(e) > 1$) while privacy amplification costs a factor of $h(e)$ in our key generation rate. Thus, since the $Z$ (rectilinear) basis has a much lower intrinsic error rate, we would like to make it the predominant basis since less error correction will be needed than if we chose the $X$ (diagonal) basis to be the predominant one. Our custom-written software needed to be modified in order to analyze the error rates in both bases separately and defer the privacy amplification until the end of the entangled pair distribution phase. 
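The privacy amplification step itself can be sketched concretely. The system uses Wegman–Carter 2-universal hashing; as an illustration only we use a random binary Toeplitz matrix, one standard 2-universal family — this particular family is an assumption of the sketch, since the text does not name the construction actually implemented:

```python
import random

def toeplitz_hash(key_bits, m, seed_bits):
    """Compress an n-bit key to m bits with a random binary Toeplitz
    matrix T, where T[i][j] = seed_bits[n - 1 + i - j].  A matrix that
    is constant along its diagonals needs only n + m - 1 seed bits,
    and the family {key -> T.key mod 2} is 2-universal.  Illustrative
    sketch only, not the paper's implementation."""
    n = len(key_bits)
    assert len(seed_bits) == n + m - 1
    out = []
    for i in range(m):
        row = seed_bits[i:i + n][::-1]   # row i of the Toeplitz matrix
        out.append(sum(r & k for r, k in zip(row, key_bits)) % 2)
    return out

random.seed(1)
key = [random.randint(0, 1) for _ in range(32)]           # toy corrected key
seed = [random.randint(0, 1) for _ in range(32 + 8 - 1)]  # public random seed
final_key = toeplitz_hash(key, 8, seed)                   # 8 "secure" bits
print(final_key)
```

The map is linear over GF(2), which is what makes the Wegman–Carter security analysis of the extracted key tractable.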
Error correction is first performed on the $X$ basis measurements followed by the $Z$ basis measurements, revealing the bit error rates $e_{bx}$ and $e_{bz}$. The number of bits revealed during the error correction of the $X$ and $Z$ measurements is recorded. After the distribution phase, we use the actual experimental results for $N$, $n_{xx}$, $n_{zz}$, $q$, $e_{bx}$, $e_{bz}$, $f(e_{bx})$, and $f(e_{bz})$, along with the desired $P_{\epsilon_{x}}$ and $P_{\epsilon_{z}}$ to calculate the optimum $\epsilon_{x}$ and $\epsilon_{z}$ according to Eq. (\[eq.RandomSampling\]). This allows us to distill the maximum amount of key from our raw data. The appropriate privacy amplification factor, according to Eq. (\[eq.KeyRateWithEpsilon\]), is now calculated for each measurement set. Privacy amplification using a 2-universal hash function [@BBR88; @BBCM95; @CW79] then takes care of the bits revealed during error correction, the estimated phase error rates, and the additional safety margin needed for the finite key statistics to produce the final secure key. For this experiment we improved our error correction algorithm since knowing the bit error rates $e_{bx}$ and $e_{bz}$ as precisely as possible is extremely important. To do this we improved the cascade error correction algorithm [@BBSBS92; @BS94], which was initially used in a modified form in our system, to the optimized algorithm outlined by Sugimoto and Yamazaki [@SY00]. In previous experiments with our original modified cascade algorithm we achieved a residual bit error rate of $1.92 \times 10^{-3}$ [@ECLW08], which is clearly insufficient for realistic key sizes, especially once finite key size effects have to be taken into account. In order to be secure, it has been shown that privacy amplification needs to work with key lengths on the order of $\sim 10^{7}$ bits [@CS08]. 
Consequently, the probability of residual errors needs to be at least two orders of magnitude less than this in order for the privacy amplification to succeed with high probability. With the improved optimized cascade algorithm we are now able to set a parameter, $s$, which then determines the residual bit error rate according to $P_{residual} < 2^{-s}$. For this experiment we chose $s = 40$, which should give us a maximum residual bit error rate of $9.09 \times 10^{-13}$. Results {#sec.Results} ======= On the day of the experiment, visibilities of 99.6% and 92.4% were directly measured in the rectilinear and diagonal bases, respectively. This corresponds to baseline error rates in the two bases of $e_{bx} = 0.038$ and $e_{bz} = 0.002$ due to the source. The limited visibility in the diagonal basis (high $e_{bx}$) is likely due to the broad spectral filtering (10[$\;\mathrm{nm}$]{}) in the polarization detector box which allows some unentangled photon pairs through the filters which still have a strong correlation in the rectilinear basis, but have almost no correlation in the diagonal basis. The limited visibility is also likely due to uncompensated transverse walk-off in the $\beta$-BBO crystal which is aggravated by the narrow pump beam spot. We performed four different experiments with the varying biases shown in Table \[tab.Biases\], each approximately six hours in duration in order to compare their efficiencies. While the appropriate attenuators were put into the transmission arms of both Alice’s and Bob’s detectors to simulate the desired bias, differences in coupling efficiencies within the polarization analysis modules produced slightly asymmetric biases between Alice and Bob for the $Z$ basis choice. In order to determine the optimum $\epsilon$’s needed and the proper privacy amplification factor, the simple uniform bias analysis above had to be expanded into a more complex analysis that allowed Alice and Bob to have non-identical biases. Fig. 
\[fig.BiasVsKeyRate3D\] is the generalization of Fig. \[fig:Rr\] and plots the key rate (R) versus Alice’s bias (shown along the left axis) versus Bob’s bias (shown along the right axis). Using this analysis we proceeded to find the optimum $\epsilon$’s, calculate the privacy amplification factors, and complete the key generation process.

  -------- --------------- --------------- ---------------
  Exp \#   Alice           Bob             Total
  1        0.4570/0.5430   0.4752/0.5248   0.4343/0.5657
  2        0.5660/0.4340   0.6074/0.3926   0.6639/0.3361
  3        0.7398/0.2602   0.7606/0.2394   0.8938/0.1062
  4        0.8804/0.1196   0.9062/0.0938   0.9837/0.0163
  -------- --------------- --------------- ---------------

  : The observed biases (Z/X) in each of the experiments.
  \[tab.Biases\]

![Plot of the key generation rate (R) in terms of Alice’s bias ratio ($q_{A}$) and Bob’s bias ratio ($q_{B}$). Alice’s bias is plotted along the left axis while Bob’s is plotted along the right.[]{data-label="fig.BiasVsKeyRate3D"}](BiasVsKeyRate3D_61_small.pdf){width="16cm"} Fig. \[fig.QBER\] shows the QBERs in the $X$ and $Z$ bases measured over the course of each experiment. The average QBERs in the $X$ and $Z$ bases for each experiment are tabulated in Table \[tab.Results\]. The increase in the QBERs from the baseline 3.8% and 0.2% to those observed is attributed to the typical leakage of the polarizing beamsplitters in the polarization analysis modules, the uncompensated birefringence in the singlemode fiber used to transport the photons between the source and the polarization analysis modules, and to accidental coincidences. 
  -------- -------- -------- ------------ ------------ ------------ ------------- ---------------
  Exp \#   QBER     QBER     Raw          Sifted       Final        Secure Bits   Efficiency
           (Z)      (X)                                             Per Raw Bit   (rel. to \#1)
  1        1.39%    5.55%    28,655,075   14,779,423   7,365,984    0.2550        -
  2        0.90%    5.76%    29,705,827   15,713,427   8,392,528    0.2825        1.11
  3        0.89%    5.36%    29,319,830   18,627,251   10,568,944   0.3605        1.41
  4        0.82%    5.80%    32,162,313   26,154,132   14,687,016   0.4567        1.79
  -------- -------- -------- ------------ ------------ ------------ ------------- ---------------

  : The QBERs and key rates for each of the experiments.
  \[tab.Results\]

![Plot of the QBERs in the X (blue) and Z (green) bases over the course of the experiments.[]{data-label="fig.QBER"}](QBERs_small.pdf){width="18cm"} Fig. \[fig.KeyRates\] shows the raw key rate, sifted key rate, and average final key rate over the course of each experiment. The statistics for each experiment are grouped by the same colour: the upper box holds the raw key rates, the middle box holds the sifted key rates, and the lower box holds the average final key rates for each experiment. Note that we can only show an average final key rate since privacy amplification has to be deferred until the end of the entangled photon distribution phase and error correction phase, and then is performed on the “entire” sifted key at once [@Note1]. The results of each experiment are summarized in Table \[tab.Results\] along with their efficiencies compared to the first experiment, which we take as the “unbiased” experiment. As is clearly shown in Table \[tab.Results\], the efficiency of each experiment increases with the bias, reaching a value of 0.4567 secure key bits per raw bit for the final experiment, which had a bias of 0.9837/0.0163. Thus, by implementing the biased QKD protocol we were able to increase the secure key generation rate by 79% over the unbiased case. 
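The efficiency column of Table \[tab.Results\] follows directly from the secure-bits-per-raw-bit figures; a quick check reproduces the quoted 79% improvement:

```python
per_raw = [0.2550, 0.2825, 0.3605, 0.4567]   # Table tab.Results, Exps 1-4
baseline = per_raw[0]                         # Exp #1, the "unbiased" case
for i, r in enumerate(per_raw, start=1):
    print(f"Exp #{i}: {r / baseline:.2f}x")   # 1.00x, 1.11x, 1.41x, 1.79x
print(f"Increase: {(per_raw[-1] / baseline - 1) * 100:.0f}%")   # 79%
```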
Clearly, the use of a biased protocol for the generation of secure key bits results in a more efficient use of the distributed entangled photon pairs and allows Alice and Bob to distill more secret key from the same number of distributed pairs than an unbiased protocol would allow. Additionally, as the number of entangled photon pairs is increased, the number of secure final key bits per raw key bit will approach 1.0 as was pointed out earlier. ![Plot of the raw, sifted, and average final key rates over the course of each experiment. The rates are grouped by colour with Experiment \#1 in blue, Experiment \#2 in green, Experiment \#3 in red, and Experiment \#4 in magenta. The upper box holds the raw key rates, the middle box holds the sifted key rates, and the lower box holds the average final key rates.[]{data-label="fig.KeyRates"}](KeyRates_small.pdf){width="18cm"}

--------- -------- -------- --------
          Pass 1   Pass 2   Pass 3
$X$ key   16       33       65
$Z$ key   72       144      289
--------- -------- -------- --------
: Average block sizes used during error correction of the sifted keys. \[tab.BlockSizes\]

Since the error correction algorithms were greatly improved during this experiment, we include actual data for their operation during experiment \#1. We use the error correction algorithm developed by Sugimoto and Yamazaki [@SY00] as an optimization of the cascade error correction algorithm developed by Brassard *et al.* [@BS94], which was first mentioned in an earlier form in [@BBSBS92]. Table \[tab.BlockSizes\] shows the average block sizes used during the error correction of the sifted $X$ and $Z$ keys. Table \[tab.ErrorSequence\] shows the number of errors corrected during each pass and sequence of cascade. A pass is defined as a new random shuffling of the bits to form blocks of the sizes above, which are then error corrected with BINARY [@BS94]. When an error is found, cascade then goes back through all previous sequences of shufflings of bits to correct errors that were missed beforehand. 
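As a toy illustration of the BINARY primitive described above (a minimal sketch, not the implementation used in the experiment; the function names are ours), the following locates a single error in a block whose overall parities disagree, by repeatedly halving the block and comparing sub-block parities. Each parity comparison corresponds to one publicly exchanged parity bit.

```python
def parity(bits, lo, hi):
    """Parity of bits[lo:hi]; in the real protocol this value is
    exchanged publicly between Alice and Bob."""
    return sum(bits[lo:hi]) % 2

def binary_locate(alice, bob, lo, hi):
    """Bisect a block with mismatched total parity down to the
    position of one error (the core of the BINARY primitive)."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid   # error is in the first half
        else:
            lo = mid   # error is in the second half
    return lo

alice = [1, 0, 1, 1, 0, 1, 0, 0]
bob   = [1, 0, 1, 0, 0, 1, 0, 0]   # single error at index 3
assert binary_locate(alice, bob, 0, len(alice)) == 3
```

Cascade's passes then reshuffle the bits into fresh blocks and revisit earlier shufflings whenever a newly found error changes a previously matched parity.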
Table \[tab.CascadeStats\] shows the average numbers of errors corrected during the use of the BINARY and BICONF [@BS94] primitives and the corresponding number of bits revealed. Additionally, it shows the average key block lengths and QBERs found during cascade. Knowing these allows the calculation of the error correction efficiencies for our algorithm by first calculating the number of bits which an error correction algorithm operating at the Shannon limit would have revealed via $h_{2}(QBER) \times \mathrm{AvgKeyLength}$. These efficiencies are also shown in Table \[tab.CascadeStats\] relative to an error correction algorithm operating at the Shannon limit.

$X$ key:

-------- -------- ------------ ------------ ------------
         Totals   Sequence 1   Sequence 2   Sequence 3
Pass 1   31.4     31.4         -            -
Pass 2   27.2     13.6         13.6         -
Pass 3   7.1      2.7          1.1          3.3
-------- -------- ------------ ------------ ------------

$Z$ key:

-------- -------- ------------ ------------ ------------
         Totals   Sequence 1   Sequence 2   Sequence 3
Pass 1   5.6      5.6          -            -
Pass 2   4.4      2.2          2.2          -
Pass 3   1.2      0.5          0.2          0.5
-------- -------- ------------ ------------ ------------

: The number of errors corrected during each pass and sequence of cascade. \[tab.ErrorSequence\]

------------------------------ -------- -------- -------
                               BINARY   BICONF   Total
Errors corrected ($X$)         65.8     1.2      67.0
Bits revealed ($X$)            437.5    53.3     490.8
Errors corrected ($Z$)         11.2     1.7      12.9
Bits revealed ($Z$)            98.2     57.6     155.8
------------------------------ -------- -------- -------

------------------------------- ----------- ---------
                                $X$         $Z$
Average key length              1,207.7     927.2
QBER (%)                        5.4         1.2
Error correction efficiency     1.31        1.59
------------------------------- ----------- ---------

: Cascade error correction statistics. \[tab.CascadeStats\]

As was discussed above, the new algorithm allows us to set the desired residual error rate via the parameter $s$, which determines the rate according to $P_{residual} < 2^{-s}$. However, as the residual error rate requirements grow more stringent, the efficiency of the error correction algorithm relative to the Shannon limit, $f(x)$, will begin to deteriorate. For all experiments there were no residual errors left in the error-corrected key after error correction was performed. 
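The efficiency calculation described above can be sketched in a few lines of Python (an illustrative sketch only; the function names are ours, and the figures in the comment are the rounded averages from Table \[tab.CascadeStats\]):

```python
from math import log2

def h2(q):
    """Binary (Shannon) entropy in bits per symbol."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * log2(q) - (1.0 - q) * log2(1.0 - q)

def ec_efficiency(bits_revealed, key_length, qber):
    """Efficiency relative to the Shannon limit, which would reveal
    h2(QBER) * key_length bits during error correction."""
    return bits_revealed / (h2(qber) * key_length)

# 490.8 bits revealed on an average key of 1,207.7 bits at ~5.4% QBER:
print(round(ec_efficiency(490.8, 1207.7, 0.054), 2))  # ~1.34 with these
# rounded inputs, close to the 1.31 reported in the table.
```

An efficiency of 1.0 corresponds to an ideal code operating exactly at the Shannon limit; values above 1.0 quantify the extra disclosed bits.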
Conclusions {#sec.Conclusions} =========== In conclusion, we have implemented the first experiment to utilize a biased (non-uniform) basis choice in order to increase the efficiency of the number of secure key bits generated from raw key bits. We investigated many of the issues associated with its implementation, including choosing the optimal bias, doing a more refined error analysis, and taking care of finite size key effects. We simulated a biased non-polarizing beamsplitter in order to study different biases for the system and their resulting efficiencies. All other aspects of the experiment were implemented in their entirety so that our 50/50 beamsplitter can be easily exchanged with a new fixed non-polarizing beamsplitter with *no* other changes needed to the system. For the near optimal biases of 0.8804/0.1196 (Z/X) for Alice and 0.9062/0.0938 (Z/X) for Bob, we were able to generate 0.4567 secure key bits per raw key bit; whereas, the unbiased case generated 0.2550 secure key bits per raw key bit. This represents an increase in the efficiency of the key generation rate of 79% over the unbiased case. An improved error correction algorithm was also implemented, and statistics for its operation on actual generated key material were discussed. Acknowledgements {#acknowledgements .unnumbered} ================ Support for this work by NSERC, QuantumWorks, CIFAR, CFI, CIPI, ORF, ORDCF, ERA, and the Bell family fund is gratefully acknowledged. The authors would like to thank N. Lütkenhaus, R. Kaltenbaek, T. Moroder, H. Hasseler, and O. Moussa for their helpful discussions. We would also like to thank R. Horn and M. Wesolowski for their help with the setup of the experiment. Lastly, the authors would like to thank the anonymous referees for their many comments, which were very useful in improving the quality of this paper. References {#references .unnumbered} ==========
--- abstract: 'State-of-the-art speaker recognition systems comprise an x-vector (or i-vector) *speaker embedding* front-end followed by a *probabilistic linear discriminant analysis* (PLDA) backend. The effectiveness of these components relies on the availability of a large collection of labeled training data. In practice, it is common that the domain (e.g., language, demographics) in which the system is deployed differs from the one in which it was trained. To close the gap due to the domain mismatch, we propose an unsupervised PLDA adaptation algorithm to learn from a small amount of unlabeled in-domain data. The proposed method was inspired by prior work on a feature-based domain adaptation technique known as *correlation alignment* (CORAL). We refer to the model-based adaptation technique proposed in this paper as CORAL+. The efficacy of the proposed technique is experimentally validated on the recent NIST 2016 and 2018 Speaker Recognition Evaluation (SRE’16, SRE’18) datasets.' address: 'Biometrics Research Laboratories, NEC Corporation, Japan\' bibliography: - 'refs.bib' title: | The CORAL+ Algorithm for Unsupervised Domain\ Adaptation of PLDA --- Speaker recognition, domain adaptation, unsupervised, discriminant analysis Introduction {#sec:intro} ============ Speaker recognition is the task of recognizing a person from his/her voice given a small amount of speech from the speaker [@hansen2015]. Recent progress has shown the successful application of deep neural networks to derive deep *speaker embeddings* from speech utterances [@snyder2017deep; @variani2014deep]. Analogous to word embeddings [@bengio2000; @mikolov2013], a speaker embedding is a fixed-length continuous-value vector that provides a succinct characterization of a speaker’s voice rendered in a speech utterance. 
Similar to the classical i-vectors [@Dehak10frontend], deep speaker embeddings live in a simpler Euclidean space where distance could be measured easily, compared to the much more complex input patterns. Techniques like within-class covariance normalization (WCCN) [@Hatch2006], linear discriminant analysis (LDA) [@Bishopbook], and probabilistic LDA (PLDA) [@Princepaper; @Ioffe; @kenny2010bayesian] can be applied. Systems comprising x-vector (or i-vector) speaker embeddings followed by PLDA have shown state-of-the-art performance on the speaker verification task [@snyder2018x]. Training an x-vector PLDA system typically requires over a hundred hours of training data with speaker labels, with the requirement that the training set must contain multiple recordings of a speaker under different settings (recording devices, transmission channels, noise, reverberation, etc.). These knowledge sources contribute to the robustness of the system against such nuisance factors. The challenging problem of domain mismatch arises when a speaker recognition system is used in a different domain (e.g., different languages, demographics, etc.) than that of the training data. Its performance degrades considerably. It is impractical to re-train the system for each and every domain as the effort of collecting large labelled data sets is expensive and time-consuming. A more viable solution is to adapt the already trained model using a smaller, and possibly unlabeled, set of in-domain data. Domain adaptation could be accomplished at different stages of the x-vector (or i-vector) PLDA pipeline. PLDA adaptation is preferable in practice since the same feature extraction and speaker embedding front-end could be used while domain-adapted PLDA backends are used to cater for the condition in each specific deployment. PLDA adaptation involves the adaptation of its mean vector [^1] and covariance matrices. 
In the case of unsupervised adaptation (i.e., no labels are given), the major challenge is how the adaptation could be performed on the within and between class covariance matrices given that only the total covariance matrix could be estimated directly from the in-domain data. In this paper, we show that this could be accomplished by applying a similar principle as in the feature-based correlation alignment (CORAL) [@Sun2016], from which pseudo-in-domain within and between class covariance matrices could be computed. We further improve the robustness by introducing an additional adaptation parameter and regularization to the adaptation equation. The proposed unsupervised adaptation method is referred to as CORAL+. Domain adaptation of PLDA {#sec:plda_adaptation} ========================= This section presents a brief description of *probabilistic linear discriminant analysis* (PLDA) widely used in state-of-the-art speaker recognition systems. We then draw attention to the domain mismatch issue and how the *correlation alignment* (CORAL) [@Sun2016; @Alam2018] technique deals with it via feature transformation. Probabilistic LDA {#sec:plda} ----------------- Let the vector $\phi$ be a speaker embedding (e.g., x-vector, i-vector, etc.). We assume that the vector $\phi$ is generated from a linear Gaussian model [@Bishopbook], as follows [@Princepaper; @prince2012computer] $$p\left(\phi|{\bf h},{\bf x}\right) = \mathcal{N} \left( \left. \phi \right| \mu + \mathbf{Fh} + \mathbf{Gx} , \mathbf{\Sigma} \right) \label{eq:plda}$$ The vector $ {\bf \mu} $ represents the global mean, while $ {\bf F} $ and $ {\bf G} $ are the speaker and channel loading matrices, and the diagonal matrix $\Sigma$ models the residual variances. The variables $ {\bf h} $ and ${\bf x}$ are the latent speaker and channel variables, respectively. A PLDA model is essentially a Gaussian distribution in the speaker embedding space. 
This could be seen more clearly in the form of the marginal density: $$p\left(\phi\right) = \mathcal{N}\left(\left. \phi \right| \mu, \mathbf{\Phi}_{\rm b}+\mathbf{\Phi}_{\rm w} \right) \label{eq:plda_marginal}$$ The main idea here is to account for the speaker and channel variability with a between-class and a within-class covariance matrix $$\begin{array}{l} \mathbf{\Phi}_{\rm b} = \mathbf{FF}^{\mathsf T} \\ \mathbf{\Phi}_{\rm w} = \mathbf{GG}^{\mathsf T} + \mathbf{\Sigma} \end{array}$$ respectively. We refer the readers to [@Princepaper; @Ioffe; @prince2012computer] for details on the model training procedure. In a speaker verification task, the PLDA model serves as a backend classifier. For a given pair of enrolment and test utterances, i.e., their speaker embeddings $\phi_1$ and $\phi_2$, we compute the log-likelihood ratio score $$l\left(\phi_1, \phi_2\right) = \log\frac{p\left(\phi_1, \phi_2\right)}{p\left(\phi_1\right)p\left(\phi_2\right)}$$ corresponding to the hypothesis test of whether the two belong to the same or different speakers. The denominator is evaluated by substituting $\phi_1$ and $\phi_2$ in turn into the marginal density above. The numerator is computed using $$p\left(\phi_1, \phi_2 \right) = \mathcal{N}\left(\left. \begin{bmatrix} \phi_1 \\ \phi_2 \end{bmatrix} \right| \begin{bmatrix} \mu \\ \mu \end{bmatrix}, \begin{bmatrix} \mathbf{C} & \mathbf{\Phi}_{\rm b} \\ \mathbf{\Phi}_{\rm b} & \mathbf{C} \end{bmatrix} \right) \label{eq:plda_joint}$$ where $\mathbf{C} = \mathbf{\Phi}_{\rm b} + \mathbf{\Phi}_{\rm w}$ is the total covariance matrix. The assumption is that the unseen data follow the same distribution as given by the within and between class covariance matrices derived from the training set (i.e., the dataset we used to train the PLDA). A problem arises when the training set was drawn from a domain (out-of-domain) different from that of the enrollment and test utterances (in-domain). 
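The scoring recipe above can be sketched directly in numpy (a toy illustration with synthetic covariances, not the authors' implementation; the function names are ours):

```python
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log-density of a multivariate Gaussian."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet
                   + d @ np.linalg.solve(cov, d))

def plda_llr(phi1, phi2, mu, Phi_b, Phi_w):
    """Log-likelihood ratio of the same- vs different-speaker
    hypotheses, using only the PLDA mean and covariances."""
    C = Phi_b + Phi_w                         # total covariance
    joint_mean = np.concatenate([mu, mu])
    joint_cov = np.block([[C, Phi_b], [Phi_b, C]])
    num = gaussian_logpdf(np.concatenate([phi1, phi2]), joint_mean, joint_cov)
    den = gaussian_logpdf(phi1, mu, C) + gaussian_logpdf(phi2, mu, C)
    return num - den

mu = np.zeros(2)
Phi_b, Phi_w = np.eye(2), np.eye(2)           # toy covariances
phi = np.array([1.0, 0.5])
# Identical embeddings should favour the same-speaker hypothesis
# more strongly than opposed embeddings do.
assert plda_llr(phi, phi, mu, Phi_b, Phi_w) > plda_llr(phi, -phi, mu, Phi_b, Phi_w)
```

Note that only $\mu$, $\mathbf{\Phi}_{\rm b}$, and $\mathbf{\Phi}_{\rm w}$ enter the score; this is what makes covariance-level adaptation sufficient for scoring later on.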
Correlation Alignment {#sec:coral} --------------------- *Correlation alignment* (CORAL) [@Sun2016] aims to align the second-order statistics, i.e., covariance matrices, of the out-of-domain (OOD) features to match the in-domain (InD) features. No class (i.e., speaker) label is used and therefore it belongs to the class of unsupervised adaptation techniques. The algorithm consists of two steps, namely, whitening followed by re-coloring. Let $\mathbf{C}_{\rm o}$ and $\mathbf{C}_{\rm I}$ be the covariance matrices of the OOD and InD data, respectively. Denoting by $\phi$ an OOD vector, domain adaptation is performed by first whitening and then re-coloring, as follows $$\phi^{'} = \mathbf{C}_{\rm I}^{\frac{1}{2}} \mathbf{C}_{\rm o}^{-\frac{1}{2}} \phi$$ where $$\mathbf{C}_{\rm o}^{-\frac{1}{2}} = \mathbf{Q}_{\rm o} \mathbf{\Lambda}_{\rm o}^{-\frac{1}{2}} \mathbf{Q}_{\rm o}^{\mathsf T} \notag$$ whitens the input vector, and $$\mathbf{C}_{\rm I}^{\frac{1}{2}} = \mathbf{Q}_{\rm I} \mathbf{\Lambda}_{\rm I}^{\frac{1}{2}} \mathbf{Q}_{\rm I}^{\mathsf T} \notag$$ does the re-coloring. Here, $\mathbf{Q}$ and $\mathbf{\Lambda}$ are the eigenvectors and eigenvalues pertaining to the covariance matrices[^2]. Such a simple and “frustratingly easy” approach [@Alam2018] has been shown to outperform a more complicated non-linear transformation reported in [@Lin2018]. In [@Alam2018], CORAL is performed on the OOD x-vector (or i-vector) embeddings, and the transformed (pseudo-in-domain) vectors are used to re-train the PLDA. Note that the speaker labels of the OOD training data remain the same. The CORAL+ Algorithm {#sec:coral+} =================== CORAL is a feature-based domain adaptation technique [@Sun2016]. We propose integrating CORAL into PLDA, leading to a model-based domain adaptation. 
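A minimal numpy sketch of the whitening/re-coloring step might look as follows (illustrative only; we follow the $\phi^{'} = \mathbf{C}_{\rm I}^{1/2} \mathbf{C}_{\rm o}^{-1/2} \phi$ convention above, and the function names are ours):

```python
import numpy as np

def zca_power(C, p):
    """Symmetric matrix power C**p via eigendecomposition (the ZCA
    form: Q diag(w**p) Q^T)."""
    w, Q = np.linalg.eigh(C)
    return Q @ np.diag(w ** p) @ Q.T

def coral_transform(X_ood, C_in):
    """Whiten the rows of X_ood with their own covariance, then
    re-color with the in-domain covariance C_in."""
    C_o = np.cov(X_ood, rowvar=False)
    A = zca_power(C_in, 0.5) @ zca_power(C_o, -0.5)
    return X_ood @ A.T        # apply phi' = A phi to every row

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # correlated OOD data
C_in = np.diag([1.0, 2.0, 3.0, 4.0])
Xt = coral_transform(X, C_in)
# After alignment, the sample covariance matches the in-domain one.
assert np.allclose(np.cov(Xt, rowvar=False), C_in, atol=1e-6)
```

Re-training the PLDA on such transformed vectors is exactly the feature-level recipe of [@Alam2018].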
Domain adaptation ----------------- It is commonly known that a linear transformation on a normally distributed vector leads to an equivalent transformation on the mean vector and covariance matrix of its density function. Let $\mathbf{A} =\mathbf{C}_{\rm I}^{1/2} \mathbf{C}_{\rm o}^{-1/2}$ be the transformation matrix and $\phi^{'} = \mathbf{A}^{\mathsf{T}}\phi$ the transformed vector. The covariance matrix of the pseudo in-domain vector $\phi^{'}$ is given by $$\begin{aligned} \mathbf{C}_{\rm o}^{'} = \mathbf{A}^{\mathsf{T}} \mathbf{C}_{\rm o} \mathbf{A} = \mathbf{A}^{\mathsf{T}} \mathbf{\Phi}_{\rm w,o} \mathbf{A} + \mathbf{A}^{\mathsf{T}} \mathbf{\Phi}_{\rm b,o} \mathbf{A} \end{aligned}$$ Here, we have considered a PLDA trained on OOD data with a total covariance matrix $\mathbf{C}_{\rm o} = \mathbf{\Phi}_{\rm w,o} + \mathbf{\Phi}_{\rm b,o}$ given by the sum of within and between class covariance matrices, as noted in Section \[sec:plda\]. The above equation shows that training a PLDA on the transformed vectors $\phi^{'}$, as proposed in [@Alam2018], is equivalent to transforming the within-class, between-class, and total covariance matrices of a PLDA trained on OOD data. Model-level adaptation ---------------------- Instead of replacing the covariance matrices in an OOD PLDA with pseudo in-domain matrices, model-level adaptation allows us to consider their interpolation $$\begin{aligned} \mathbf{\Phi}^{+}_{\rm b} =& (1-\beta)\mathbf{\Phi}_{\rm b,o} + \beta \mathbf{A}^{\mathsf{T}} \mathbf{\Phi}_{\rm b,o} \mathbf{A} \\ \mathbf{\Phi}^{+}_{\rm w} =& (1-\gamma)\mathbf{\Phi}_{\rm w,o} + \gamma \mathbf{A}^{\mathsf{T}} \mathbf{\Phi}_{\rm w,o} \mathbf{A} \end{aligned} \notag$$ where $\{\beta, \gamma\}$ are the adaptation parameters constrained to lie between zero and one. Notice that the first term on the right-hand-side of the equations is the OOD between/within covariance matrix while the second term is the pseudo-in-domain covariance matrix. 
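The interpolated update can be written in a few lines of numpy (an illustrative sketch; `adapt_cov` is our name), which also makes the algebraic equivalence with the simplified form of the next subsection easy to check numerically:

```python
import numpy as np

def adapt_cov(Phi, A, alpha):
    """Interpolate an OOD PLDA covariance with its pseudo-in-domain
    counterpart A^T Phi A; alpha in [0, 1] plays the role of the
    adaptation parameters beta (between-class) or gamma (within-class)."""
    return (1.0 - alpha) * Phi + alpha * (A.T @ Phi @ A)

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
Phi = M @ M.T                                # a synthetic SPD covariance
A = np.eye(3) + 0.1 * rng.normal(size=(3, 3))
out = adapt_cov(Phi, A, 0.8)
# Identical to the simplified form Phi + alpha*(A^T Phi A - Phi).
assert np.allclose(out, Phi + 0.8 * (A.T @ Phi @ A - Phi))
```

With `alpha = 1` this reduces to replacing the covariance outright, i.e., the feature-level CORAL recipe; with `alpha = 0` the OOD model is kept unchanged.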
For clarity, we further simplify the adaptation equations, as follows $$\begin{aligned} \mathbf{\Phi}^{+}_{\rm b} =& \mathbf{\Phi}_{\rm b,o} + \beta \left(\mathbf{A}^{\mathsf{T}} \mathbf{\Phi}_{\rm b,o} \mathbf{A} - \mathbf{\Phi}_{\rm b,o} \right) \\ \mathbf{\Phi}^{+}_{\rm w} =& \mathbf{\Phi}_{\rm w,o} + \gamma \left(\mathbf{A}^{\mathsf{T}} \mathbf{\Phi}_{\rm w,o} \mathbf{A} - \mathbf{\Phi}_{\rm w,o} \right) \end{aligned} \label{eq:model_adapt}$$ The second term on the right-hand side of the equations represents the new information seen in the in-domain data to be added to the PLDA model. ![The effects of regularization. Elements with negative variances are removed automatically.](pldakaldi_coral3v2.png "fig:"){width="2.8in"} \[fig:regul\] Regularized adaptation ---------------------- The central idea of domain adaptation is to propagate the uncertainty seen in the in-domain data to the PLDA model. The adaptation equations above do not guarantee that the variances, and therefore the uncertainty, increase. In this section, we achieve this goal in the transform space where both the OOD and pseudo-in-domain matrices are simultaneously diagonalized. Let $\mathbf{B}$ be a full-rank matrix such that $\mathbf{B}^{\mathsf{T}} \mathbf{\Phi} \mathbf{B} = \mathbf{I}$ and $\mathbf{B}^{\mathsf{T}} \left(\mathbf{A}^{\mathsf{T}} \mathbf{\Phi} \mathbf{A} \right) \mathbf{B} = \mathbf{E}$, where $\mathbf{E}$ is a diagonal matrix. This procedure is referred to as *simultaneous diagonalization*. The transformation matrix $\mathbf{B}$ is obtained by performing the *eigenvalue decomposition* (EVD) twice: first on the matrix $\mathbf{\Phi}$, and then on $\mathbf{A}^{\mathsf{T}} \mathbf{\Phi} \mathbf{A}$ after the first transformation has been applied. The procedure is illustrated in Algorithm \[alg:coral+\]. 
By applying the simultaneous diagonalization to the adaptation equations, the following adaptation could be obtained: $$\begin{aligned} \mathbf{\Phi}^{+}_{\rm b} =& \mathbf{\Phi}_{\rm b,o} + \beta \mathbf{B}_{\rm b}^{-\mathsf{T}} \left(\mathbf{E}_{\rm b} - \mathbf{I} \right) \mathbf{B}_{\rm b}^{-1} \\ \mathbf{\Phi}^{+}_{\rm w} =& \mathbf{\Phi}_{\rm w,o} + \gamma \mathbf{B}_{\rm w}^{-\mathsf{T}} \left(\mathbf{E}_{\rm w} - \mathbf{I} \right) \mathbf{B}_{\rm w}^{-1} \end{aligned} \label{eq:coral+_wo_reg}$$ As before, the between and within class covariance matrices are adapted separately. Notice that the term $\left(\mathbf{E} - \mathbf{I} \right)$ will end up with negative variances if any diagonal element of $\mathbf{E}$ is less than one. We propose the following regularized adaptation: $$\begin{aligned} \mathbf{\Phi}^{+}_{\rm b} =& \mathbf{\Phi}_{\rm b,o} + \beta \mathbf{B}_{\rm b}^{-\mathsf{T}} \max\left(\mathbf{E}_{\rm b} - \mathbf{I}, \mathbf{0} \right) \mathbf{B}_{\rm b}^{-1} \\ \mathbf{\Phi}^{+}_{\rm w} =& \mathbf{\Phi}_{\rm w,o} + \gamma \mathbf{B}_{\rm w}^{-\mathsf{T}} \max\left(\mathbf{E}_{\rm w} - \mathbf{I}, \mathbf{0} \right) \mathbf{B}_{\rm w}^{-1} \end{aligned} \label{eq:coral+}$$ The element-wise $\max(\cdot)$ operator ensures that the variances do not decrease. We refer to the regularized adaptation as the CORAL+ algorithm, while the un-clamped update above corresponds to the CORAL+ algorithm without regularization. Algorithm \[alg:coral+\] summarizes the CORAL+ algorithm. Figure \[fig:regul\] shows a plot of the diagonal elements of the term $(\mathbf{E}_{\rm b}$ - $\mathbf{I})$ in the regularized adaptation. Those entries with negative variances were removed automatically by the $\max(\cdot)$ operator. It ensures that the uncertainty increases (or stays the same) in the adaptation process. It is worth noticing that one could recover the subspace matrices $\{\mathbf{F}, \mathbf{G}\}$ via EVD. 
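The core covariance update of Algorithm \[alg:coral+\] can be sketched in numpy as follows (an illustrative reimplementation under our own naming, not the authors' code; the same function is applied separately to $\mathbf{\Phi}_{\rm b}$ with $\beta$ and to $\mathbf{\Phi}_{\rm w}$ with $\gamma$):

```python
import numpy as np

def coral_plus_cov(Phi, A, alpha):
    """One regularized CORAL+ covariance update:
    Phi+ = Phi + alpha * B^{-T} max(E - I, 0) B^{-1},
    where B simultaneously diagonalizes Phi (to I) and A^T Phi A (to E)."""
    P = A.T @ Phi @ A                      # pseudo-in-domain covariance
    w, Q = np.linalg.eigh(Phi)             # first EVD
    W = Q @ np.diag(w ** -0.5)             # W^T Phi W = I (whitening)
    e, V = np.linalg.eigh(W.T @ P @ W)     # second EVD, in whitened space
    B = W @ V                              # B^T Phi B = I, B^T P B = diag(e)
    Binv = np.linalg.inv(B)
    D = np.diag(np.maximum(e - 1.0, 0.0))  # clamp: variances may only grow
    return Phi + alpha * Binv.T @ D @ Binv
```

Two properties are easy to verify numerically: the update never shrinks any variance (the added term is positive semi-definite), and with `A = I` (no domain shift) the model is left unchanged.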
Nevertheless, this is not generally required as scores could be computed by plugging the adapted covariance matrices $\mathbf{\Phi}^{+}_{\rm b}$, $\mathbf{\Phi}^{+}_{\rm w}$ and $\mathbf{C}^{+} = \mathbf{\Phi}^{+}_{\rm b} + \mathbf{\Phi}^{+}_{\rm w}$ into the scoring expressions of Section \[sec:plda\]. ![image](coralplus.png){width="3.2in"} \[alg:coral+\] Experiment ========== Experiments were conducted on the recent SRE’16 and SRE’18 datasets. The performance was measured in terms of the *equal error rate* (EER) and *minimum detection cost* (MinCost) [@sre16; @sre18]. The latest SREs organized by NIST have been focusing on domain mismatch as one of the technical challenges. In both SRE’16 and SRE’18, the training set consists primarily of English speech corpora collected over multiple years in North America. This dataset encompasses the Switchboard, Fisher, and MIXER corpora used in SREs 04 – 06, 08, 10, and 12. The enrollment and test segments are in Tagalog and Cantonese for SRE’16, and Tunisian Arabic for SRE’18. Domain adaptation was performed using the unlabeled subsets provided for the evaluation. The enrollment utterances have a nominal duration of 60 seconds, while the test duration ranges from 10 to 60 seconds. We used x-vector speaker embeddings, which have been shown to be very effective for the speaker verification task on short utterances. (Recent results show that the i-vector is more effective for longer utterances of over 2 minutes.) The x-vector extractor follows the same configuration and was trained using the same setup as the Kaldi recipe [^3]. A slight difference here is that we used an attention model in the pooling layer and extended the data augmentation [@okabe2018attentive]. In our experiments, the dimension of the x-vector was 512. As commonly done in most state-of-the-art systems, LDA was used to reduce the dimensionality. We investigated the cases of 150- and 200-dimensional x-vectors after LDA projection. The CORAL [@Sun2016] transformation was applied to the raw x-vectors before LDA. 
The transformed, and then projected, x-vectors were used to train a PLDA for the [CORAL PLDA]{} baseline. It is worth noticing that the LDA projection matrix was computed from the raw x-vectors, from which the CORAL transformation was also derived. We find that this gives better performance than that reported in [@Alam2018]. The proposed CORAL+ is a model-based adaptation technique. Domain adaptation is achieved by adapting the parameters (i.e., covariance matrices) pertaining to the [OOD PLDA]{} as in Algorithm \[alg:coral+\] using the unlabeled in-domain dataset. The adaptation parameters were set empirically to $0.80$ in the experiments. Tables \[table:sre16\] and \[table:sre18\] show the performance of the baseline PLDA model trained on the out-of-domain English dataset ([OOD PLDA]{}), the PLDA trained on the x-vectors which have been adapted using CORAL ([CORAL PLDA]{}), and the OOD PLDA adapted to the in-domain data with the CORAL+ algorithm ([CORAL+ PLDA]{}). Also shown in the tables is the CORAL+ adaptation without regularization ([w/o reg]{}). This corresponds to using the un-regularized update in Algorithm \[alg:coral+\]. The results on both SRE’16 and SRE’18 show consistent improvement of [CORAL+ PLDA]{} compared to the [OOD PLDA]{} baseline. The relative improvement amounts to $36.6\%$ and $22.35\%$ reduction in EER, and $32.0\%$ and $23.0\%$ reduction in MinCost on SRE’16 and SRE’18, respectively, for an LDA dimension of $200$. Also shown in the tables is an unsupervised adaptation method implemented in Kaldi ^\[kaldi\]^ ([Kaldi PLDA]{}). The proposed [CORAL+ PLDA]{} consistently outperforms this baseline on both SRE’16 and SRE’18, though the improvement over this baseline is more apparent on SRE’18. At an LDA dimension of 200, the relative improvement amounts to a $10.5\%$ reduction in EER, and a $6.0\%$ reduction in MinCost on SRE’18. Compared to the feature-based CORAL ([CORAL PLDA]{}), the benefit of CORAL+ ([CORAL+ PLDA]{}) is more apparent on SRE’18. 
We obtained a relative reduction of $9.7\%$ in EER and $9.1\%$ in MinCost at an LDA dimension of $200$. It is worth mentioning that SRE’16 has an unlabeled set of about the same size as that of SRE’18. Nevertheless, the SRE’18 unlabeled set exhibits less variability (speaker and channel). This also explains the benefit of regularized adaptation on SRE’18, where a smaller and more constrained unlabelled dataset is available for domain adaptation. Conclusion ========== We have presented the CORAL+ algorithm for unsupervised adaptation of the PLDA backend to deal with the domain mismatch issue in practical applications. Similar to the feature-based correlation alignment (CORAL) technique, CORAL+ domain adaptation is accomplished by matching the out-of-domain statistics to those of the in-domain data. We show that statistics matching could be applied directly to the PLDA model. We further improve the robustness by introducing an additional adaptation parameter and regularization to the adaptation equation. The proposed method shows significant improvement compared to the PLDA baseline. Results also show the benefit of model-based adaptation especially when the data available for adaptation is relatively small and constrained.

----------------- ----------- ----------- ----------- -----------
                  EER (%)     MinCost     EER (%)     MinCost
[OOD PLDA]{}      9.69        0.783       9.94        0.813
[Kaldi PLDA]{}    6.82        0.552       6.57        0.558
[CORAL PLDA]{}    **6.50**    **0.539**   6.31        **0.543**
[CORAL+ PLDA]{}   6.62        0.540       **6.30**    0.553
[w/o reg]{}       6.93        0.544       6.51        0.547
----------------- ----------- ----------- ----------- -----------
: [*Performance comparison on SRE’16 (CMN). The dimension of x-vector after LDA is $150$ and $200$. 
Boldface denotes the best performance for each column.*]{} \[table:sre16\]

----------------- ----------- ----------- ----------- -----------
                  EER (%)     MinCost     EER (%)     MinCost
[OOD PLDA]{}      7.19        0.538       7.47        0.569
[Kaldi PLDA]{}    6.25        0.435       6.48        0.466
[CORAL PLDA]{}    6.22        0.449       6.42        0.482
[CORAL+ PLDA]{}   **5.95**    **0.421**   **5.80**    **0.438**
[w/o reg]{}       6.49        0.441       6.33        0.460
----------------- ----------- ----------- ----------- -----------
: [*Performance comparison on SRE’18 (CMN2). The dimension of x-vector after LDA is $150$ and $200$. Boldface denotes the best performance for each column.*]{} \[table:sre18\]

[^1]: Mean shift due to domain mismatch could be solved by centralizing the datasets to a common origin [@Lee2017]. [^2]: The whitening and re-coloring procedures are better known as the zero-phase component analysis (ZCA) transformation [@Kessy2018]. As opposed to principal component analysis (PCA) and Cholesky whitening (and re-coloring), ZCA preserves the maximal similarity of the transformed feature to the original space. [^3]: https://github.com/kaldi-asr/kaldi/tree/master/egs/sre16/v2 \[kaldi\]
--- abstract: 'We studied experimentally the effect of turbulent thermal diffusion in a multi-fan turbulence generator which produces a nearly homogeneous and isotropic flow with a small mean velocity. Using Particle Image Velocimetry and Image Processing techniques we showed that in a turbulent flow with an imposed mean vertical temperature gradient (stably stratified flow) particles accumulate in the regions with the mean temperature minimum. These experiments detected the effect of turbulent thermal diffusion in a multi-fan turbulence generator for relatively high Reynolds numbers. The experimental results are consistent with the results of the previous experimental studies of turbulent thermal diffusion in oscillating grids turbulence (Buchholz et al. 2004; Eidelman et al. 2004). We demonstrated that turbulent thermal diffusion is a universal phenomenon. It occurs independently of the method of turbulence generation, and the qualitative behavior of the particle spatial distribution in these very different turbulent flows is similar. Competition between the turbulent fluxes caused by turbulent thermal diffusion and turbulent diffusion determines the formation of particle inhomogeneities.' author: - 'A. Eidelman' - 'T. Elperin' - 'N. Kleeorin' - 'I. Rogachevskii' - 'I. Sapir-Katiraie' title: 'Turbulent Thermal Diffusion in a Multi-Fan Turbulence Generator with the Imposed Mean Temperature Gradient' --- Introduction ============ The main goal of this study is to describe the experimental investigation of the effect of turbulent thermal diffusion in a multi-fan turbulence generator. Turbulent thermal diffusion is associated with the correlation between temperature and velocity fluctuations in a turbulent flow with an imposed mean temperature gradient and causes a relatively strong non-diffusive mean flux of particles in the direction of the mean heat flux. 
This effect results in the formation of large-scale inhomogeneities in the particle spatial distribution whereby particles accumulate in the vicinity of the minimum of the mean fluid temperature. Turbulent thermal diffusion was predicted theoretically by Elperin et al. (1996; 1997) and detected experimentally by Buchholz et al. (2004) and Eidelman et al. (2004) in oscillating grids turbulence. The mechanism of the phenomenon of turbulent thermal diffusion for inertial solid particles is as follows. Inertia causes particles inside the turbulent eddies to drift out to the boundary regions between the eddies (i.e., regions with low vorticity and maximum fluid pressure). Therefore, particles accumulate in regions with maximum pressure of the turbulent fluid. Similarly, there is an outflow of particles from regions with minimum pressure of fluid. In homogeneous and isotropic turbulence without large-scale external gradients of temperature, a drift from regions with increased or decreased concentration of particles by a turbulent flow of fluid is equiprobable in all directions, and the pressure and temperature of the surrounding fluid do not correlate with the turbulent velocity field. Therefore, only turbulent diffusion determines the turbulent flux of particles. In a turbulent fluid flow with a mean temperature gradient, the mean heat flux is not zero, i.e., the fluctuations of temperature and the velocity of the fluid are correlated. Fluctuations of temperature cause fluctuations of fluid pressure. These fluctuations result in fluctuations of the number density of particles. Indeed, an increase of pressure of the surrounding fluid is accompanied by an accumulation of particles due to their inertia. Therefore, the direction of the mean flux of particles coincides with that of the heat flux, the mean flux of particles is directed to the region with minimum mean temperature, and the particles accumulate in this region (Elperin et al. 1996). 
The mechanism of turbulent thermal diffusion is associated with a nonzero divergence of the particle velocity field. The latter is caused either by particle inertia or by the inhomogeneity of the fluid density in a non-isothermal low-Mach-number turbulent fluid flow. Therefore, the effect of turbulent thermal diffusion can also be observed in a suspension of non-inertial particles (e.g., particles with a Stokes time of the order of $10^{-6} - 10^{-5}$ s in air flows) or for gaseous admixtures in non-isothermal low-Mach-number turbulent fluid flows (see Elperin et al., 1997). Note that when we refer to compressible non-isothermal fluid flow with low Mach numbers, it means that ${\rm div} \, (\rho \, {\bf v}) \approx 0$, where ${\bf v}$ is the fluid velocity and $\rho$ is the fluid density. The latter implies that ${\rm div} \, {\bf v} \approx - ({\bf v} \cdot {\mbox{\boldmath $ \nabla$}}) \rho / \rho \not = 0 .$ In particular, in a non-isothermal fluid flow with a temperature gradient ${\rm div} \, {\bf v} \approx - ({\bf v} \cdot {\mbox{\boldmath $ \nabla$}}) \rho / \rho \approx ({\bf v} \cdot {\mbox{\boldmath $ \nabla$}}) T / T \not = 0$, where $T$ is the fluid temperature. Numerical simulations, laboratory experiments and observations in atmospheric turbulence revealed the formation of long-lived inhomogeneities in the spatial distribution of small inertial particles and droplets in turbulent fluid flows (see, e.g., Wang and Maxey 1993; Korolev and Mazin 1993; Eaton and Fessler 1994; Fessler et al. 1994; Maxey et al. 1996; Aliseda et al. 2002; Shaw 2003). The origin of these inhomogeneities was intensively studied by Elperin et al. (1996; 1997; 2000a; 2000b). It was pointed out that the effect of turbulent thermal diffusion is important for understanding different atmospheric phenomena (e.g., atmospheric aerosols, smog formation, etc.). 
In particular, the existence of a correlation between the appearance of temperature inversions and the aerosol layers (pollutants) in the vicinity of the temperature inversions is well known (see, e.g., Csanady 1980; Seinfeld 1986; Flagan and Seinfeld 1988). Turbulent thermal diffusion can cause the formation of large-scale aerosol layers in the vicinity of temperature inversions in atmospheric turbulence (Elperin et al. 2000a; 2000b). Observations of the vertical distributions of pollutants in the atmosphere show that maximum concentrations can occur within temperature inversion layers (see, e.g., Csanady 1980; Seinfeld 1986; Jaenicke 1987). The characteristic parameters of the atmospheric turbulent boundary layer are: the maximum scale of turbulent flow is $L \sim 10^3 - 10^4 $ cm; the turbulent fluid velocity at the scale $L$ is $ u \sim 30 - 100 $ cm/s; the Reynolds number is $ {\rm Re} = u L /\nu \sim 10^6 - 10^7$ (see, e.g., Csanady 1980; Seinfeld 1986; Blackadar 1997), where $\nu$ is the kinematic viscosity. For instance, for particles with material density $ \rho_p \sim 1 - 2 $ g/cm$^3 $ and radius $a = 20 \, \mu$m the characteristic time of formation of inhomogeneities is of the order of $ 1 $ hour for a temperature gradient of $ 1 $ K$/100$ m and $ 2 $ hours for a temperature gradient of $ 1 $ K$/200$ m. The effect of turbulent thermal diffusion might also be of relevance in different industrial non-isothermal turbulent flows (Elperin et al. 1998). Turbulent thermal diffusion is a new and fundamental phenomenon, and it should therefore be studied for different types of turbulence and different experimental set-ups. Previously, the phenomenon of turbulent thermal diffusion was studied experimentally only in oscillating grids turbulence, where the Reynolds numbers were relatively low (see for details Buchholz et al. 2004; Eidelman et al. 2004).
In order to study the effect of turbulent thermal diffusion at higher Reynolds numbers we constructed a multi-fan turbulence generator. A similar apparatus was used in the past in turbulent combustion studies (Birouk et al. 1996) and in studies of turbulence-induced preferential concentration of solid particles in microgravity conditions (Fallon and Rogers 2002). The multi-fan turbulence generator allows us to produce a nearly homogeneous isotropic turbulent fluid flow with a small mean velocity. Using Particle Image Velocimetry and Image Processing Techniques we determined the velocities and spatial distribution of tracer particles in isothermal and non-isothermal turbulent flows. Our experiments with non-isothermal turbulent flows were performed for a stably stratified fluid flow with a vertical mean temperature gradient, which was formed by a cooled bottom wall and a heated top wall of the chamber. We found that particles accumulate in the vicinity of the bottom wall of the chamber (where the mean fluid temperature is minimum) due to the effect of turbulent thermal diffusion. Sedimentation of particles in the gravity field in these experiments was very slow in comparison with the accumulation of particles caused by turbulent thermal diffusion. In the present study we demonstrated that turbulent thermal diffusion occurs independently of the method of turbulence generation.

Experimental set-up
===================

The multi-fan turbulence generator includes eight fans (120 mm in outer diameter and with controlled rotation frequency of up to 2800 rpm) mounted in the corners of a cubic Perspex box and facing the center of the box. The Perspex box is a cube $400 \times 400 \times 400$ mm$^3$, with eight 272 mm equilateral triangles mounted in its corners used as solid bases for the fans (see Fig. 1). Each fan was calibrated separately, and the input current and rotation speed were measured and logged. We also tested one fan operating above an electric heat source.
We placed a thermocouple in the vicinity of the fan’s motor, and repeated this test for different temperatures (290, 310, 330 K). These experiments showed that the fan rotation speed does not depend on the temperature.

![\[Fig1\] The scheme of the test section.](fig1.eps){width="7cm"}

At the top and bottom walls of the Perspex box we installed two heat exchangers with rectangular $3 \times 3 \times 15$ mm$^3$ fins. The upper wall was heated up to 343 K, and the bottom wall was cooled to 283 K. Therefore, a comparatively large vertical mean temperature gradient $(\sim 92$ K/m) was formed in the core of the flow. The temperature was measured with a high-frequency response thermocouple ($0.005$ inches in diameter) which was glued externally to a wire ($2.2$ mm Cu in diameter, coated with $0.85$ mm Teflon). The accuracy of the temperature measurements was of the order of $0.1$ K. We found that the temperature measurements affected the flow only in a very small area around the wire. Two additional fans were installed at the bottom and top walls of the chamber in order to produce a large mean temperature gradient in the core of the flow. Our measurements showed that these two additional fans only weakly affected the homogeneity and isotropy of the turbulent flow. Velocity fields were measured using the Particle Image Velocimetry (PIV) technique. The flow was seeded with incense smoke and was illuminated by a Surelite LSI-10 (Continuum) Nd:YAG pulsed laser with a power of 170 mJ/pulse. The light sheet optics includes spherical and cylindrical Galilei telescopes with tuneable divergence and adjustable focus length. We used a progressive-scan 12 bit digital CCD camera (pixel size $6.7 \, \mu$m $\times 6.7 \, \mu$m each) with a dual-frame technique for cross-correlation processing of captured images. A programmable Timing Unit (PC interface card) generated sequences of pulses to control the laser, the camera and the data acquisition rate.
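The dual-frame cross-correlation step mentioned above can be illustrated with a minimal FFT-based sketch on a synthetic image pair; the window size matches the $32 \times 32$ pixel interrogation windows used here, but the random texture and the imposed shift are illustrative assumptions, not the DaVis implementation:

```python
import numpy as np

# Minimal sketch of FFT-based cross-correlation between two PIV
# interrogation windows; the correlation peak gives the mean particle
# displacement. (Commercial PIV codes add sub-pixel peak fitting,
# window overlap and outlier validation, all omitted here.)
rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))                   # first exposure, 32 x 32 px
shift = (3, 5)                                   # imposed displacement (dy, dx), px
frame_b = np.roll(frame_a, shift, axis=(0, 1))   # second exposure

# Cross-correlation via the convolution theorem.
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real

# The peak position (accounting for FFT wrap-around) is the displacement.
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = [p if p <= 16 else p - 32 for p in peak]
print(dy, dx)   # recovers the imposed shift (3, 5)
```

In practice, sub-pixel fitting of the correlation peak brings the accuracy to the $\sim 0.1$ pixel level quoted below.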
The data was processed using standard cross-correlation techniques (DaVis 7.0 code, LaVision, Göttingen). Incense smoke with sub-micron particles, used as a tracer for the PIV measurements, was produced by high-temperature sublimation of solid incense particles. Analysis of the smoke particles using a microscope (Nikon, Epiphot with an amplification of 560) and a PM-300 portable laser particulate analyzer showed that these particles have an approximately spherical shape and that their mean diameter is of the order of $0.7 \, \mu$m. In order to prevent any thermal effects caused by the incense smoke generator, we placed it far away from the test section behind a wall, so that the incense smoke was transported through a five-meter-long pipeline and was cooled before it entered the test section. The smoke was fed into the test section at room temperature. We measured the smoke temperature inside the pipeline at two locations: 0.5 m and 3.5 m from the generator. At the first location the smoke temperature was 318 K, while at the second location it was 294 K. The number density of smoke particles inserted into the test section in the experiments was of the order of $10^4$ cm$^{-3}$. We determined the mean and r.m.s. velocities, two-point correlation functions and an integral scale of turbulence from the measured velocity fields. A series of 130 pairs of images acquired with a frequency of 4 Hz was stored for calculating the velocity maps and for ensemble and spatial averaging of the turbulence characteristics. The center of the measurement region coincides with the center of the chamber. We measured the velocity in a flow area of $92 \times 92$ mm$^2$ with a spatial resolution of $1024 \times 1024$ pixels. These regions were analyzed with interrogation windows of $32 \times 32$ pixels. A velocity vector was determined in every interrogation window, allowing us to construct a velocity map comprising $32 \times 32$ vectors. The mean and r.m.s.
velocities for each point of the velocity map (1024 points) were determined by averaging over 130 independent maps, and then over the 1024 points. The two-point correlation functions of the velocity field were determined for each point of the central part of the velocity map ($16 \times 16$ vectors) by averaging over 130 independent velocity maps, and then over the 256 points. Our tests showed that 130 image pairs contain enough data to obtain reliable statistical estimates. The integral scale $L$ of turbulence was determined from the two-point correlation functions of the velocity field. To this end we used an exponential approximation of the correlation function, since the experimentally measured correlation function did not reach zero values.

![\[Fig2\] The scheme of the temperature measurements. The wire that holds the thermocouple is inserted from the bottom of the chamber (some parts of the test section are not shown).](fig2.eps){width="8cm"}

Table 1: \[tab1\] Fluid flow parameters in the horizontal ($Y$) and vertical ($Z$) directions.

\begin{tabular}{|l|c|c|}
\hline
 & $Y$ & $Z$ \\
\hline
Reynolds number ${\rm Re} = u \, L / \nu$ & 703 & 875 \\
Integral length scale $L$ (mm) & 14.85 & 16.4 \\
r.m.s. velocity $u$ (m/s) & 0.71 & 0.8 \\
Turbulence integral time scale $\tau = L/u$ (ms) & 20.9 & 20.5 \\
Rate of dissipation $\varepsilon = u^3/L$ (m$^2$/s$^3$) & 24.1 & 31.2 \\
Taylor microscale $\lambda = \sqrt{15 \, \nu \, \tau}$ (mm) & 2.17 & 2.15 \\
Kolmogorov length scale $\eta = L \, {\rm Re}^{-3/4}$ ($\mu$m) & 109 & 102 \\
Reynolds number $Re_\lambda = u \, \lambda / \nu$ based on the Taylor microscale & 103 & 115 \\
\hline
\end{tabular}

The spatial resolution of the velocity measurements was about $2.9$ mm for a probed area of $92 \times 92$ mm$^2$ with an interrogation window of $32 \times 32$ pixels. The maximum tracer particle displacement in the experiment was of the order of $8$ pixels, i.e., $1/4$ of the interrogation window. The average displacement of tracer particles was of the order of $2.5$ pixels.
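The derived quantities in Table 1 follow directly from the measured r.m.s. velocity $u$ and integral scale $L$ via the listed formulas; a minimal sketch for the vertical-direction column, where the value $\nu = 1.5 \times 10^{-5}$ m$^2$/s for air is our assumption:

```python
import math

# Sketch: derived turbulence parameters of Table 1 (vertical direction),
# assuming nu = 1.5e-5 m^2/s for air; the measured inputs are the r.m.s.
# velocity u and the integral scale L.
nu = 1.5e-5          # kinematic viscosity of air, m^2/s (assumed)
u = 0.8              # r.m.s. velocity, m/s
L = 16.4e-3          # integral length scale, m

Re = u * L / nu                      # Reynolds number        -> ~875
tau = L / u                          # integral time scale    -> ~20.5 ms
eps = u**3 / L                       # dissipation rate       -> ~31.2 m^2/s^3
lam = math.sqrt(15 * nu * tau)       # Taylor microscale      -> ~2.15 mm
eta = L * Re**(-0.75)                # Kolmogorov scale       -> ~102 micron
Re_lam = u * lam / nu                # Taylor-scale Reynolds  -> ~115
```

With the assumed $\nu$, all six derived values reproduce the vertical column of Table 1.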
Therefore, for an accuracy of the correlation peak detection in the interrogation window of the order of $0.1$ pixels, the average accuracy of the velocity measurements was of the order of $4 \%$ (see, e.g., Adrian 1991; Westerweel 1997; 2000). In order to create a large mean temperature gradient, the top and bottom fans were run at different speeds than the peripheral fans (see below). This regime was found empirically. Clearly, this introduces a weak anisotropy in the velocity fluctuations in the vertical direction, which is less than $10 \%$. Note that the anisotropy of turbulence is not essential for the validation of the phenomenon of turbulent thermal diffusion. Fluid flow parameters (at a rotation speed of $1500$ rpm for the eight corner fans and $2300$ rpm for the top and bottom fans) in the horizontal $Y$ and vertical $Z$ directions are presented in Table 1. In the experiments the maximum mean flow velocity was of the order of $0.1 - 0.2$ m/s, while the r.m.s. velocity was of the order of $0.7 - 1.1$ m/s. Thus, the measured r.m.s. velocity was much higher than the mean fluid velocity in the core of the flow. Figure 3 shows the two-point correlation functions of the velocity field in the horizontal and vertical directions. In particular, in Fig. 3 we plotted the longitudinal velocity correlation coefficients, $f(y) = \langle u_y(0) u_y(y) \rangle / \langle u_y^2(0) \rangle $ and $f(z) =\langle u_z(0) u_z(z) \rangle / \langle u_z^2(0) \rangle $, where ${\bf u}$ are the fluid velocity fluctuations. Figure 3 and Table 1 demonstrate that the multi-fan turbulence generator produced a weakly anisotropic turbulent fluid flow with a small mean velocity. The energy spectrum of the fluid flow is shown in Fig. 4. The one-dimensional longitudinal energy spectrum was determined by a standard procedure.
In particular, we determined the Fourier components $u_y(k_y)$ and $u_z(k_z)$ of the fluctuating velocity field, and then determined $\langle |u_y(k_y)|^2 \rangle$ and $\langle |u_z(k_z)|^2 \rangle$, where $\langle ... \rangle$ denotes ensemble averaging over 130 independent velocity maps. In Fig. 4 we plotted the horizontal and vertical energy component spectra. As can be seen from Fig. 4, the energy spectrum differs from the $-5/3$ law. However, our study showed that the existence of the effect of turbulent thermal diffusion is independent of the slope of the energy spectrum.

![\[Fig3\] The two-point longitudinal correlation functions of the velocity field in the horizontal direction (filled squares) and vertical direction (unfilled squares).](fig3.eps){width="8cm"}

![\[Fig4\] The energy spectrum of the fluid flow.](fig4.eps){width="8cm"}

Spatial particle number density distributions were obtained using a single frame from the double frame captured for the PIV measurements. For this purpose the intensity of laser light Mie scattering by tracer particles was recorded and averaged over 130 single frames. We probed the central $ 9.2 \times 9.2 $ cm region in the chamber by determining the mean intensity of scattered light in $ 32 \times 16 $ interrogation windows with a size of $ 32 \times 64 $ pixels. The vertical distribution of the intensity of the scattered light was determined in 16 vertical strips, which are composed of 32 interrogation windows each. Variations of the obtained vertical distributions between these strips were very small. We used spatial averaging across the strips and ensemble averaging over 130 images of the vertical distributions of the intensity of scattered light. The turbulent diffusion coefficient in the test section was of the order of $D_{_{T}} \sim 40$ cm$^2$/s. The turbulent diffusion time is $\tau_{_{TD}} = L_d^2 / D_{_{T}} \sim 3 $ seconds at the scale of the core flow, $L_d=10$ cm.
Thus the steady-state particle spatial distribution is reached within several seconds, while the turbulence integral time scale is $\tau \sim 2 \times 10^{-2}$ s. The measurements were started several minutes after the seed was inserted in the chamber.

Turbulent thermal diffusion: theory and experiment
==================================================

Now let us discuss the effect of turbulent thermal diffusion and compare the theoretical predictions with the experimental results. The mean number density of particles $\bar N$ advected by a turbulent fluid flow is given by $$\begin{aligned} && {\partial \bar N \over \partial t} + {\rm div} \, [\bar N (\bar{\bf V} + {\bf V}_{\rm eff}) - D_{_{T}} {\mbox{\boldmath $ \nabla$}} \bar N] = 0 \;, \label{A2}\\ && {\bf V}_{\rm eff} = - \tau \, \langle {\bf u}_p \, {\rm div} \, {\bf u}_p \rangle = - D_{_{T}} (1 + \kappa) {{\mbox{\boldmath $ \nabla$}} \bar T \over \bar T} \;, \label{P3}\end{aligned}$$ where $ D_{_{T}} = (\tau /3) \langle {\bf u}^2 \rangle $ is the turbulent diffusion coefficient, $\tau$ is the momentum relaxation time of the turbulent velocity field, ${\bf u}$ are the fluctuations of the fluid velocity, $\bar{\bf V}$ is the mean fluid velocity, ${\bf u}_p$ are the fluctuations of the particle velocity, and $\bar T$ is the mean fluid temperature. The coefficient $\kappa$ depends on the particle inertia (the particle size $a$), the Reynolds number and the mean fluid temperature. In Eq. (\[A2\]) we neglected the small molecular mean flux of particles caused by molecular (Brownian) diffusion, molecular thermal diffusion (or molecular thermophoresis) and the small particle terminal fall velocity. Equation (\[A2\]) was previously derived by different methods (see Elperin et al. 1996; 1997; 1998; 2000b; 2001; Pandya and Mashayek 2002; Reeks 2005).
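The magnitude of the effective velocity in Eq. (\[P3\]) can be estimated from the measured parameters; the sketch below assumes the non-inertial limit $\alpha = 1 + \kappa = 1$ and uses $D_{_{T}} \approx 40$ cm$^2$/s, the $\sim 92$ K/m core gradient and $\bar T \approx 300$ K:

```python
# Order-of-magnitude estimate of the effective velocity V_eff,
# assuming the non-inertial limit alpha = 1 + kappa = 1; D_T, the mean
# temperature gradient and the mean temperature are the measured values
# quoted in the text.
D_T = 40e-4        # turbulent diffusion coefficient, m^2/s (40 cm^2/s)
grad_T = 92.0      # mean temperature gradient, K/m
T_mean = 300.0     # mean fluid temperature, K (assumed core value)
alpha = 1.0        # non-inertial limit (assumed)

V_eff = D_T * alpha * grad_T / T_mean   # m/s
print(V_eff * 100)   # ~0.12 cm/s, far above the <0.01 cm/s settling velocity
```

Even in this conservative limit the effective velocity exceeds the particle terminal fall velocity by more than an order of magnitude, consistent with sedimentation being negligible in these experiments.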
![\[Fig5\] Mechanism of turbulent thermal diffusion of inertial particles.](fig5.eps){width="8cm"}

For non-inertial particles, their velocity coincides with the fluid velocity $ {\bf v} =\bar{\bf V} + {\bf u},$ and ${\rm div} \, {\bf v} \approx - ({\bf v} \cdot {\mbox{\boldmath $ \nabla$}}) \rho / \rho \approx ({\bf v} \cdot {\mbox{\boldmath $ \nabla$}}) T / T ,$ where $\rho$ and $T$ are the density and temperature of the fluid, and $\bar{\bf V} = \langle {\bf v} \rangle $. Therefore, the effective velocity of non-inertial particles ${\bf V}_{\rm eff} = - \tau \, \langle {\bf u} \, {\rm div} \, {\bf u} \rangle$ is given by $ {\bf V}_{\rm eff} = - D_{_{T}} ({\mbox{\boldmath $ \nabla$}} \bar T) / \bar T .$ Here we used the equation of state for an ideal gas, and neglected small gradients of the mean fluid pressure. This effective velocity causes an additional turbulent flux of particles directed to the minimum of the mean fluid temperature (the phenomenon of turbulent thermal diffusion). Note that turbulent thermal diffusion for non-inertial particles is a purely kinematic effect. Indeed, the equation for the instantaneous mass concentration, $C= m_p \, n / \rho$, of non-inertial particles in a non-isothermal flow reads $$\begin{aligned} {\partial C \over \partial t} + ({\bf v} \cdot {\mbox{\boldmath $ \nabla$}}) C = {1 \over \rho} \, {\rm div} \, (D \, \rho \, {\mbox{\boldmath $ \nabla$}} C) \;, \label{RE1}\end{aligned}$$ where $m_p$ is the particle mass and $n$ is the instantaneous number density of particles. For very small molecular diffusion $D$, this equation reads $$\begin{aligned} {\partial C \over \partial t} + ({\bf v} \cdot {\mbox{\boldmath $ \nabla$}}) C \approx 0\;, \label{RE2}\end{aligned}$$ which implies that the mass concentration $C$ is conserved along the fluid particle trajectory. In homogeneous turbulence all trajectories are similar.
Therefore, the number density of non-inertial particles $n \propto \rho$, i.e., the number density of non-inertial particles behaves locally as the fluid density. In particular, the location of the maximum of the number density of non-inertial particles coincides with the location of the maximum of the fluid density, and vice versa. For very small mean fluid pressure gradients, ${\mbox{\boldmath $ \nabla$}} \bar \rho / \bar \rho \approx - {\mbox{\boldmath $ \nabla$}} \bar T / \bar T$, where $\bar \rho$ is the mean fluid density. Therefore, the location of the maximum of the mean number density of non-inertial particles coincides with the location of the minimum of the mean fluid temperature, and vice versa. The equation for the mean number density of non-inertial particles $\bar N$ is equivalent to the following equation for the mean mass concentration $ \bar C= m_p \, \bar N / \bar \rho$ of non-inertial particles: $$\begin{aligned} {\partial \bar C \over \partial t} + (\bar {\bf V} \cdot {\mbox{\boldmath $ \nabla$}}) \bar C = {1 \over \bar \rho} \, {\rm div} \, (D_{_{T}} \, \bar \rho \, {\mbox{\boldmath $ \nabla$}} \bar C) \; . \label{RE5}\end{aligned}$$ If one ignores the turbulent thermal diffusion term in Eq. (\[A2\]) for the mean number density of non-inertial particles $\bar N$, then the equation for the mean mass concentration $\bar C$ of non-inertial particles has an incorrect form. For inertial particles, their velocity ${\bf v}_p$ depends on the velocity of the surrounding fluid ${\bf v}$. In particular, for a small Stokes time ${\bf v}_p \approx {\bf v} - \tau_p d{\bf v} / dt + {\rm O}(\tau_p^2)$ (see Maxey 1987), where $ \tau_p $ is the Stokes time. Using the Navier-Stokes equation it can be shown that ${\rm div} \, {\bf v}_p \approx {\rm div} \, {\bf v} + \tau_p \Delta P / \rho + {\rm O}(\tau_p^2)$ (Elperin et al. 1996).
The effective velocity (\[P3\]) for inertial particles is given by $ {\bf V}_{\rm eff} = - D_{_{T}} \alpha ({\mbox{\boldmath $ \nabla$}} \bar T) / \bar T $, where the coefficient $\alpha = 1 + \kappa(a)$, $\, \, \kappa(a) \propto \tau_p \propto a^2$ and $a$ is the particle size. Therefore, the mean particle velocity is $\bar{\bf V}_p = \bar{\bf V} + {\bf V}_{\rm eff}$. Turbulent thermal diffusion implies an additional non-diffusive turbulent flux of inertial particles to the minimum of the mean fluid temperature (i.e., an additional turbulent flux of inertial particles in the direction of the turbulent heat flux). In order to demonstrate that the directions of the turbulent flux of inertial particles and the turbulent heat flux coincide, let us assume that the mean temperature $\bar T_2$ at point $2$ is larger than the mean temperature $\bar T_1$ at point $1$ (see Fig. 5). Consider two small control volumes $"a"$ and $"b"$ located between these two points (see Fig. 5), and let the direction of the local turbulent velocity at the control volume $"a"$ at some instant be the same as the direction of the turbulent heat flux $\langle {\bf u} \theta \rangle $ (i.e., directed to the point $1$). Let the local turbulent velocity at the control volume $"b"$ be directed at this instant opposite to the turbulent heat flux (i.e., directed to the point $2$). In a fluid flow with an imposed mean temperature gradient, pressure $p$ and velocity ${\bf u}$ fluctuations are correlated, and regions with a higher level of pressure fluctuations have higher temperature and velocity fluctuations. Indeed, using the equation of state of an ideal gas it can easily be shown that the fluctuations of the temperature $\theta$ and pressure $p$ at the control volume $"a"$ are positive, and at the control volume $"b"$ they are negative.
Therefore, the fluctuations of the particle number density $n$ are positive in the control volume $"a"$ (because inertial particles are locally accumulated in the vicinity of the maximum of pressure fluctuations), and they are negative at the control volume $"b"$ (because there is an outflow of inertial particles from regions with a low pressure). The mean flux of particles $\langle {\bf u} n \rangle$ is positive in the control volume $"a"$ (i.e., it is directed to the point $1$), and it is also positive at the control volume $"b"$ (because both the fluctuations of velocity and of the number density of particles are negative at the control volume $"b"$). Therefore, the mean flux of inertial particles $\langle {\bf u} n \rangle$ is directed, as is the turbulent heat flux $\langle {\bf u} \theta \rangle$, towards the point 1.

![\[Fig6\] Vertical temperature profile. Here $Z$ is a dimensionless vertical coordinate measured in units of the height of the chamber, and $Z=0$ at the bottom of the chamber.](fig6.eps){width="8cm"}

The contribution of turbulent thermal diffusion to the turbulent flux of particles is given by $$\begin{aligned} J_{_{T}}^{TTD} = - D_{_{T}} \alpha {{\mbox{\boldmath $ \nabla$}} \bar T \over \bar T} \bar N = - D_{_{T}} k_{_{T}} {{\mbox{\boldmath $ \nabla$}} \bar T \over \bar T} \;, \label{RE7}\end{aligned}$$ where $ D_{_{T}} k_{_{T}} $ is the coefficient of turbulent thermal diffusion, $k_{_{T}} = \alpha \bar N = (1+\kappa) \bar N$ is the turbulent thermal diffusion ratio, and the coefficient $\alpha = k_{_{T}} / \bar N $ is the specific turbulent thermal diffusion ratio. Neglecting the term $\bar N {\bf V}_{\rm eff}$ in Eq. (\[A2\]) for the mean number density of particles, we arrive at a simple diffusion equation: $\partial \bar N / \partial t = D_{_{T}} \Delta \bar N ,$ where we neglected the small mean velocity $\bar{\bf V}$. The steady-state solution of this equation is $\bar N= \, $ const, i.e., a uniform spatial distribution of particles.
On the other hand, our measurements in both the multi-fan turbulence generator and the oscillating grids turbulence generator (Buchholz et al. 2004; Eidelman et al. 2004) demonstrate that the solution $\bar N= \, $ const is valid only for an isothermal turbulent flow. Let us take into account the effect of turbulent thermal diffusion in Eq. (\[A2\]). Then the steady-state solution of Eq. (\[A2\]) reads: ${\mbox{\boldmath $ \nabla$}} \bar N / \bar N = - \alpha {\mbox{\boldmath $ \nabla$}} \bar T / \bar T ,$ which yields $$\begin{aligned} {\bar N \over \bar N_0} = 1 - \alpha {\bar T - \bar T_0 \over \bar T_0} \;, \label{R10}\end{aligned}$$ where $\bar N_0 = \bar N(\bar T = \bar T_0)$ and $\bar T_0$ is the reference mean temperature. Now let us discuss the measurements of the mean temperature and particle spatial distribution in the multi-fan turbulence generator, and compare the experimental results with the theoretical predictions. The mean temperature vertical profile in the multi-fan turbulence generator is shown in Fig. 6. Here $Z$ is a dimensionless vertical coordinate measured in units of the height of the chamber, and $Z=0$ at the bottom of the chamber.

![\[Fig7\] Ratio $E^T / E$ of the normalized average distributions of the intensity of scattered light versus the normalized vertical coordinate $Z$.](fig7.eps){width="8cm"}

![\[Fig8\] A typical normalized $E^T/E$ image in the $YZ$ plane. Here $Y$ and $Z$ are the horizontal and vertical coordinates.](fig8.eps){width="8cm"}

Measurements performed using different concentrations of the incense smoke in the flow showed that the distribution of the scattered light intensity, normalized by the light intensity averaged over the vertical coordinate, is independent of the mean particle number density in the isothermal flow.
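Extracting $\alpha$ from Eq. (\[R10\]) amounts to a least-squares line through $\bar N / \bar N_0$ versus $(\bar T - \bar T_0)/\bar T_0$; a minimal sketch on synthetic data (the assumed $\alpha$, noise level and temperature range are illustrative, not the measured profiles of Figs. 6 and 9):

```python
import numpy as np

# Sketch: recover the specific turbulent thermal diffusion ratio alpha
# from the linear relation N/N0 = 1 - alpha (T - T0)/T0 by a
# least-squares fit. Synthetic data with alpha = 2.68 and 1% noise
# stand in for the measured profiles (illustrative only).
rng = np.random.default_rng(1)
alpha_true = 2.68
T_z = np.linspace(0.0, 0.1, 20)     # (T - T0)/T0 over the probed region
N_z = 1.0 - alpha_true * T_z + 0.01 * rng.standard_normal(T_z.size)

slope, intercept = np.polyfit(T_z, N_z, 1)   # fit N/N0 vs T_z
alpha_fit = -slope                           # recovers alpha within the noise
print(alpha_fit)
```

The same fit applied to the measured $N_z$ and $T_z$ data yields the experimental value of $\alpha$ discussed below.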
In order to characterize the spatial distribution of the particle number density, $ \bar N \propto E^T / E $, in a non-isothermal flow, the distribution of the scattered light intensity $E$ for the isothermal case was used to normalize the scattered light intensity $E^T$ obtained in a non-isothermal flow under the same conditions. The scattered light intensities $E^T$ and $E$ in each experiment were normalized by the corresponding scattered light intensities averaged over the vertical coordinate. The ratio $E^T / E$ of the normalized average distributions of the intensity of scattered light as a function of the normalized vertical coordinate $Z$ is shown in Fig. 7. A typical normalized $E^T/E$ image in the $YZ$ plane is shown in Fig. 8. Inspection of Figs. 7-8 demonstrates that particles are redistributed in a turbulent flow with a mean temperature gradient, i.e., they accumulate in regions with minimum mean temperature (in the lower part of the chamber).

![\[Fig9\] Normalized particle number density $N_z \equiv \bar N / \bar N_0$ versus normalized temperature difference $T_z \equiv (\bar T - \bar T_0) / \bar T_0 $.](fig9.eps){width="8cm"}

In order to determine the specific turbulent thermal diffusion ratio, $\alpha$, in Fig. 9 we plotted the normalized particle number density $N_z \equiv \bar N / \bar N_0$ versus the normalized temperature difference $ T_z \equiv (\bar T - \bar T_0) / \bar T_0 ,$ where $\bar T_0$ is the reference temperature and $ \bar N_0 = \bar N(\bar T = \bar T_0).$ Figure 9 was plotted using the mean temperature vertical profile shown in Fig. 6. The normalized local mean temperatures \[the relative temperature differences $ (\bar T - \bar T_0) / \bar T_0 $\] in Fig. 9 correspond to different locations inside the probed region. In particular, in Fig. 9 the location of the point with the reference temperature $\bar T_0$ is $Z=0$ (the lowest point of the probed region with a maximum $\bar N)$.
In these experiments we found that the coefficient $\alpha \approx 2.68$. The specific turbulent thermal diffusion ratio $\alpha$ in the experiments with oscillating grids turbulence (Eidelman et al., 2004; Buchholz et al., 2004) was $\alpha = 1.29-1.87$ (depending on the frequency of the grid oscillations and on the direction of the imposed vertical mean temperature gradient), while in the multi-fan turbulence generator $\alpha = 2.68$. The latter value of the coefficient $\alpha$ is larger than that obtained in the experiments in oscillating grids turbulence, where the Reynolds numbers were smaller than those achieved in the multi-fan turbulence generator. Therefore, we demonstrated that the specific turbulent thermal diffusion ratio $\alpha$ increases with increasing Reynolds number. Note also that the experiments with oscillating grids turbulence were performed with two directions of the imposed vertical mean temperature gradient (for stable and unstable stratifications). The specific turbulent thermal diffusion ratio, $\alpha = 1 + \kappa(a)$, comprises two terms: the first one (which equals 1) is independent of the particle size, and the second term depends on the size of the particles. In particular, $\kappa(a) \propto \tau_p \propto a^2$, where $a$ is the particle size. For non-inertial particles, $\kappa(a)=0$ and $\alpha = 1$. The deviation of the coefficient $\alpha$ in both experiments from $\alpha=1$ is caused by a small yet finite inertia of the particles and also by the dependence of the coefficient $\kappa$ on the Reynolds number. The exact value of the parameter $\alpha$ for inertial particles cannot be found within the framework of the theory of turbulent thermal diffusion (Elperin et al. 1996; 1997; 1998; 2000b; 2001) for the conditions of our experiments (i.e., for large mean temperature gradients).
However, in the experiments performed for different ranges of parameters and different directions of the mean temperature gradient, and in two different experimental set-ups, the coefficient $\alpha$ was larger than $1$, which agrees with the theory. Therefore, we demonstrated that turbulent thermal diffusion occurs independently of the method of turbulence generation. The size of the probed region did not affect our results. The variability of the results obtained in different experiments was within $0.5 \%$. This variability is caused by the variability of optical conditions, light intensity variations in the light sheet and errors in light intensity detection. Therefore, it can be concluded that the error in the particle number density measurements is less than $0.5 \%$. Note that the contribution of the mean flow to the spatial distribution of particles is negligibly small. In particular, the normalized distributions of the scattered light intensity measured in the different vertical strips, in the regions where the mean flow velocity and the coefficient of turbulent diffusion may vary, are practically identical (the difference being only about $1 \%)$. The effect of the gravitational settling of small particles ($0.5 - 1 \, \mu$m) is negligibly small (the terminal fall velocity of these particles being less than $0.01$ cm/s). Due to the effect of turbulent thermal diffusion, particles are redistributed in the vertical direction in the chamber: they accumulate in the lower part of the chamber, i.e., in the region with minimum mean temperature. Some fraction of the particles sticks to the fan propellers and chamber walls, so that without feeding fresh smoke the total number of particles slowly decreases. The characteristic time of this decrease is about 15 minutes. However, the spatial distribution of the normalized number density of particles does not change over time.
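The quoted bound on the settling velocity can be checked with the Stokes formula $v_t = 2 \rho_p g a^2 / 9 \mu$; the sketch assumes a particle material density of $1.5$ g/cm$^3$ and the dynamic viscosity of air (both assumptions):

```python
# Sketch: Stokes terminal velocity of the largest tracer particles
# (1 micron diameter), assuming rho_p = 1.5 g/cm^3 and the dynamic
# viscosity of air mu = 1.8e-5 Pa s (both values are assumptions).
rho_p = 1500.0      # particle material density, kg/m^3 (assumed)
g = 9.81            # gravitational acceleration, m/s^2
a = 0.5e-6          # particle radius, m (1 micron diameter)
mu = 1.8e-5         # dynamic viscosity of air, Pa s (assumed)

v_t = 2.0 * rho_p * g * a**2 / (9.0 * mu)   # Stokes settling velocity, m/s
print(v_t * 100)    # ~0.005 cm/s, below the 0.01 cm/s bound quoted above
```
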
It must be noted that the accuracy of the measurements in these experiments is high: the measurement uncertainty $(\sim 0.5 \%)$ is considerably smaller than the magnitude of the observed effect $(\sim 5 \%)$. Therefore, our experiments detected the effect of turbulent thermal diffusion in the multi-fan turbulence generator. These results are consistent with the results of the previous experiments in oscillating grids turbulence (see Buchholz et al. 2004; Eidelman et al. 2004).

Conclusions
===========

We studied experimentally the effect of turbulent thermal diffusion in a multi-fan turbulence generator using Particle Image Velocimetry and Image Processing Techniques. In a turbulent flow with an imposed vertical mean temperature gradient (stably stratified flow) particles accumulate in regions of minimum mean temperature. Therefore, our experiments detected the effect of turbulent thermal diffusion in the multi-fan turbulence generator, i.e., a non-diffusive mean flux of particles in the direction of the mean heat flux. Turbulent thermal diffusion is a universal phenomenon. In particular, using two very different turbulent flows created by an oscillating grids turbulence generator (Buchholz et al. 2004; Eidelman et al. 2004) and a multi-fan turbulence generator, we demonstrated that the qualitative behavior of the particle spatial distribution in a non-isothermal turbulent flow is similar. The same physics is responsible for the formation of particle inhomogeneities, i.e., the competition between the turbulent fluxes caused by turbulent thermal diffusion and turbulent diffusion. We are grateful to two anonymous referees for their very helpful and important comments. This research was partly supported by the German-Israeli Project Cooperation (DIP) administered by the Federal Ministry of Education and Research (BMBF) and by the Israel Science Foundation governed by the Israeli Academy of Science.

Adrian RJ (1991) Particle imaging techniques for experimental fluid mechanics. Ann. Rev. Fluid Mech.
23: 261-304 Aliseda A; Cartellier A; Hainaux F; Lasheras JC (2002) Effect of preferential concentration on the settling velocity of heavy particles in homogeneous isotropic turbulence. J. Fluid Mech. 468: 77-105 Birouk M; Chauveau C; Sarh B; Quilgars A; Gokalp I (1996) Turbulence effects on the vaporization of mono-component single droplets. Combustion Science and Technology 413: 113-114 Blackadar AK (1997) Turbulence and diffusion in the atmosphere. Springer, Berlin Buchholz J; Eidelman A; Elperin T; Grünefeld G; Kleeorin N; Krein A; Rogachevskii I (2004) Experimental study of turbulent thermal diffusion in oscillating grids turbulence. Experiments in Fluids 36: 879-887 Csanady GT (1980) Turbulent diffusion in the environment. Reidel, Dordrecht Eaton JK; Fessler JR (1994) Preferential concentration of particles by turbulence. Int J Multiphase Flow 20: 169-209 Eidelman A; Elperin T; Kleeorin N; Krein A; Rogachevskii I; Buchholz J; Grünefeld G (2004) Turbulent thermal diffusion of aerosols in geophysics and in laboratory experiments. Nonlinear Processes in Geophysics 11: 343-350 Elperin T; Kleeorin N; Rogachevskii I (1996) Turbulent thermal diffusion of small inertial particles. Phys Rev Lett. 76: 224-228 Elperin T; Kleeorin N; Rogachevskii I (1997) Turbulent barodiffusion, turbulent thermal diffusion and large-scale instability in gases. Phys Rev E 55: 2713-2721 Elperin T; Kleeorin N; Rogachevskii I (1998) Formation of inhomogeneities in two-phase low-mach-number compressible turbulent flows. Int J Multiphase Flow 24: 1163-1182 Elperin T; Kleeorin N; Rogachevskii I (2000a) Mechanisms of formation of aerosol and gaseous inhomogeneities in the turbulent atmosphere. Atmosph Res 53: 117-129 Elperin T; Kleeorin N; Rogachevskii I; Sokoloff D (2000b) Turbulent transport of atmospheric aerosols and formation of large-scale structures. 
Physics and Chemistry of the Earth A25: 797-803 Elperin T; Kleeorin N; Rogachevskii I; Sokoloff D (2001) Mean-field theory for a passive scalar advected by a turbulent velocity field with a random renewal time. Phys Rev E 64: 026304 (1-9) Fallon T; Rogers CB (2002) Turbulence-induced preferential concentration of solid particles in microgravity conditions. Experiments in Fluids 33: 233-241 Fessler JR; Kulick JD; Eaton JK (1994) Preferential concentration of heavy particles in a turbulent channel flow. Phys. Fluids 6: 3742-3749 Flagan R; Seinfeld JH (1988) Fundamentals of air pollution engineering. Prentice Hall, Englewood Cliffs Jaenicke R (1987) Aerosol physics and chemistry. Springer, Berlin Korolev AV; Mazin IP (1993) Zones of increased and decreased concentration in stratiform clouds. J. Appl. Meteorol. 32: 760-773 Maxey MR (1987) The gravitational settling of aerosol particles in homogeneous turbulence and random flow field. J Fluid Mech 174: 441-465 Maxey MR; Chang EJ; Wang LP (1996) Interaction of particles and microbubbles with turbulence. Experim. Thermal and Fluid Science 12: 417-425 Pandya RVR; Mashayek F (2002) Turbulent thermal diffusion and barodiffusion of passive scalar and dispersed phase of particles in turbulent flows. Phys Rev Lett 88: 044501 (1-4) Reeks MW (2005) On model equations for particle dispersion in inhomogeneous turbulence. Int J Multiphase Flow 31: 93-114 Seinfeld JH (1986) Atmospheric chemistry and physics of air pollution. John Wiley, New York Shaw RA (2003) Particle-turbulence interactions in atmospheric clouds. Ann. Rev. Fluid Mech. 35: 183-227 Wang LP; Maxey MR (1993) Settling velocity and concentration distribution of heavy particles in homogeneous isotropic turbulence. J. Fluid Mech. 256: 27-68 Westerweel J (1997) Fundamentals of digital particle image velocimetry. Meas. Sci. Technology 8: 1379-1392 Westerweel J (2000) Theoretical analysis of the measurement precision of particle image velocimetry. Exp. Fluids, Suppl. 
29: S3-S12
--- abstract: 'If a component of cosmological dark matter is made up of massive particles - such as sterile neutrinos - that decay with cosmological lifetime to emit photons, the reionization history of the universe would be affected, and cosmic microwave background anisotropies can be used to constrain such a decaying particle model of dark matter. The optical depth depends rather sensitively on the decaying dark matter particle mass $m_\mathrm{dm}$, lifetime $\tau_\mathrm{dm}$, and the mass fraction of cold dark matter $f$ that they account for in this model. Assuming that there are no other sources of reionization and using the WMAP 7-year data, we find that 250 eV $\apprle$ $m_\mathrm{dm}$ $\apprle$ 1 MeV, whereas $2.23\times 10^{3}$ yr $\apprle$ $\tau_\mathrm{dm}/f$ $\apprle$ $1.23\times 10^{18}$ yr. The best fit values for $m_\mathrm{dm}$ and $\tau_\mathrm{dm}/f$ are 17.3 keV and $2.03\times 10^{16}~\mathrm{yr}$ respectively.' author: - 'S. Yeung, M. H. Chan, and M. -C. Chu' title: Cosmic Microwave Background constraints of decaying dark matter particle properties --- Introduction ============ There has been tremendous progress in cosmology in the past decade. The availability of high quality observational data such as those from WMAP [@kom10] has led to tight constraints on cosmological parameters and models. There is now a standard model of cosmology, in which only a small portion of the total mass-energy in the universe is ordinary matter, the rest being dark components which we have little understanding of. In some of the proposed dark matter models, such as the sterile neutrino [@dol94], the dark matter particles may decay and emit photons [@bor08; @cen01], which are redshifted with the expansion of the universe and may ionize hydrogen and helium at later times. Therefore, decaying dark matter particles may contribute to reionization and imprint their signatures on the cosmic microwave background anisotropies (CMBA). 
In this paper, we constrain the mass and lifetime of decaying dark matter particles by using the WMAP data of CMBA. There is strong evidence for reionization in the late universe [@bec01], and many sources of it have been proposed, such as star formation [@gne97], UV radiation from black holes [@sas96], and supernova-driven winds [@teg93]. @bie06 point out that the X-ray photons produced in the decays of sterile neutrinos can boost the production of molecular hydrogen, and as a result the rates of gas cooling and early star formation are increased, leading to reionization at a redshift consistent with the WMAP results. @boy06 use the extragalactic diffuse X-ray background to constrain the decay rate of sterile neutrinos as a warm dark matter candidate. @sel06 use the Ly-alpha forest power spectrum measured by the Sloan Digital Sky Survey and high-resolution spectroscopy observations, in combination with WMAP data and galaxy clustering, to constrain sterile neutrino masses. The lower limits obtained are 13.1 keV at 95% C.L. and 9.0 keV at 99.9% C.L. In @zha07, decaying dark matter is also considered to be an energy source of reionization. However, several approximations are made in that paper; in particular, the fraction of the decay energy deposited in baryonic gas is simply characterized by a phenomenological parameter. In this paper, we do not make approximations about the amount of energy absorbed by the baryons. The ionization and heating rates are calculated using the appropriate cross sections. Furthermore, we vary both the decaying dark matter particle parameters and the cosmological parameters to fit the CMBA data, while in @zha07 only the decaying dark matter particle parameters and the scalar amplitude are varied to fit the CMBA spectrum. 
There is also an earlier work [@map05] studying the effect of the decay of sterile neutrinos on reionization, where a relation between the sterile neutrino mass and lifetime is used, based on the assumption that sterile neutrinos are the dominant component of dark matter. However, in this paper we do not assume any relation between the decaying sterile neutrino mass and lifetime [^1]; we treat them as two independent parameters instead, and we introduce another free parameter, the mass fraction $f$ of dark matter that is decaying. The optical depth depends rather sensitively on the mass $m_\mathrm{dm}$ and lifetime $\tau_\mathrm{dm}$ of the dark matter particles, as well as on $f$. However, to a good approximation, the effects of $f$ and $\tau_\mathrm{dm}$ are degenerate and only their ratio is an independent parameter. We then constrain these parameters, $m_\mathrm{dm}$ and $\tau_\mathrm{dm}/f$, by the WMAP 7-year data. Assuming that such a decaying process is the only source of reionization [^2], we find that $m_\mathrm{dm}$ is less than about 1 MeV, and $\tau_\mathrm{dm}/f$ is less than about $10^{18}~\mathrm{yr}$, with the best-fit values being 17.3 keV and $2.03\times 10^{16}~\mathrm{yr}$ respectively. In Section 2, we present the calculation of the ionization fraction and optical depth in this decaying dark matter model. The results of the Markov Chain Monte Carlo fitting to the WMAP data and a discussion are presented in Section 3, and Section 4 is a summary and conclusion. 
The Model ========= The evolution of the ionization fractions $x_\mathrm{H}(z)$ and $x_\mathrm{He}(z)$ (for hydrogen and helium respectively) and the matter temperature $T_m(z)$ satisfy the coupled ordinary differential equations $$\begin{aligned} \frac{dx_\mathrm{H}(z)}{dz}=\frac{-1}{(1+z)n_\mathrm{H}(z)H(z)}[R_{i\mathrm{H}}(z)+R_{sx\mathrm{H}}(z)]\label{ode1} \\ \frac{dx_\mathrm{He}(z)}{dz}=\frac{-1}{(1+z)n_\mathrm{He}(z)H(z)}[R_{i\mathrm{He}}(z)+R_{sx\mathrm{He}}(z)]\label{ode2} \\ \frac{dT_m(z)}{dz}=\frac{-2}{3k_\mathrm{B}(1+z)H(z)}\frac{R_{h\mathrm{H}}(z)+R_{h\mathrm{He}}(z)+R_{sT}(z)}{[1+x_\mathrm{H}(z)+x_\mathrm{He}(z)]n_\mathrm{H}(z)},\label{ode3}\end{aligned}$$ where $R_{sx}(z)$ and $R_{sT}(z)$ are the standard net recombination and net heating rates respectively, and $R_{iS}(z)$ and $R_{hS}(z)$ are the additional ionization and heating rates respectively due to the decaying dark matter particles. The additional terms include the contribution from both hydrogen ($S=\mathrm{H}$) and helium ($S=\mathrm{He}$) atoms. The hydrogen ionization rate $R_{i\mathrm{H}}(z)$ due to the decaying dark matter particles is: $$\label{ionequi} R_{i\mathrm{H}}(z) = n_\mathrm{H}(z) \left[1-x_\mathrm{H}\left(z\right)\right] \int^\infty_{E_\mathrm{th,H}} \frac{4\pi J(E)}{E} \sigma_\mathrm{H}\left(E\right) dE,$$ where $x_\mathrm{H}\left(z\right)$ is the ionization fraction at redshift $z$, $n_\mathrm{H}(z)$ is the total hydrogen number density including neutral and ionized hydrogen at redshift $z$, $J\left(E\right)$ is the photon energy flux per unit energy per unit solid angle at energy $E$, and $\sigma_\mathrm{H}\left(E\right)$ is the photoionization cross section of hydrogen at energy $E$. 
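As a concrete illustration, the ionization rate in Eq. (\[ionequi\]) can be evaluated by straightforward numerical quadrature once a photon flux is specified. The sketch below (plain Python, composite trapezoidal rule) uses the hydrogen cross-section fit quoted later in this section; the flux `J` is a caller-supplied toy function, not the model flux of Eq. (\[JE\]):

```python
import math

E_TH_H = 13.6  # hydrogen photoionization threshold (eV)

def sigma_H(E):
    # Hydrogen photoionization cross section (cm^2), using the fit
    # sigma_th { beta (E/E_th)^-s + (1 - beta) (E/E_th)^-(s+1) }.
    sigma_th, beta, s = 6.30e-18, 1.34, 2.99
    x = E / E_TH_H
    return sigma_th * (beta * x ** -s + (1.0 - beta) * x ** -(s + 1))

def ionization_rate_H(n_H, x_H, J, E_max, n_steps=2000):
    # R_iH = n_H (1 - x_H) * int_{E_th,H}^{E_max} 4 pi J(E)/E sigma_H(E) dE,
    # approximated with the composite trapezoidal rule.
    a, b = E_TH_H, E_max
    h = (b - a) / n_steps
    f = lambda E: 4.0 * math.pi * J(E) / E * sigma_H(E)
    integral = h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n_steps)))
    return n_H * (1.0 - x_H) * integral
```

A fully ionized gas ($x_\mathrm{H}=1$) gives a vanishing rate, and the rate scales linearly with the neutral fraction, as Eq. (\[ionequi\]) requires.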
The hydrogen heating rate $R_{h\mathrm{H}}(z)$ due to the decaying dark matter particles is: $$\label{heat_eqn1} R_{h\mathrm{H}}(z) = n_\mathrm{H}[1-x_\mathrm{H}(z)]\int^\infty_{E_\mathrm{th,H}} \frac{4\pi J(E)(E-E_\mathrm{th,H})}{E} \sigma_\mathrm{H}\left(E\right) dE.$$ Similarly, the helium ionization rate $R_{i\mathrm{He}}(z)$ due to the decaying dark matter particles is: $$\label{ionequiHe} R_{i\mathrm{He}}(z) = n_\mathrm{He}(z) \left[1-x_\mathrm{He}\left(z\right)\right] \int^\infty_{E_\mathrm{th,He}} \frac{4\pi J(E)}{E} \sigma_\mathrm{He}\left(E\right) dE,$$ and the helium heating rate $R_{h\mathrm{He}}(z)$ due to the decaying dark matter particles is: $$\label{heat_eqn1He} R_{h\mathrm{He}}(z) = n_\mathrm{He}[1-x_\mathrm{He}(z)]\int^\infty_{E_\mathrm{th,He}} \frac{4\pi J(E)(E-E_\mathrm{th,He})}{E} \sigma_\mathrm{He}\left(E\right) dE.$$ The photon flux $J(E)$ is $$J(E)=\frac{1}{4\pi}\left[n_\mathrm{dm0}(1+z)^3\right]\frac{c~e^{-t(z_\mathrm{em})/\tau_\mathrm{dm}}e^{-\tau_\mathrm{abs}(z,z_\mathrm{em})}}{H(z_\mathrm{em})\tau_\mathrm{dm}}, \label{JE}$$ where $z_\mathrm{em}$ is the redshift at which the dark matter particle decayed, $E=E_0\frac{1+z}{1+z_\mathrm{em}}$, $E_0$ being the energy of the emitted photon, $t(z)$ is the time elapsed since the big bang at redshift $z$, $n_\mathrm{dm0}$ is the present number density of the decaying dark matter particles if they did not decay so that $n_\mathrm{dm0}(1+z)^3$ is the number density at redshift $z$, and $\tau_\mathrm{abs}$ is the optical depth caused by absorption of photons by ionization defined by $$\begin{split} \label{eq:tau_abs} \tau_\mathrm{abs}(z,z_\mathrm{em})\equiv\int^{z_\mathrm{em}}_z \frac{1}{(1+z^\prime)H(z^\prime)}\left[1-x_\mathrm{H}\left(z^\prime\right)\right]n_\mathrm{H}(z^\prime)\sigma_\mathrm{H}(E=E_0\frac{1+z^\prime}{1+z_\mathrm{em}})cdz^\prime\\ +\int^{z_\mathrm{em}}_z 
\frac{1}{(1+z^\prime)H(z^\prime)}\left[1-x_\mathrm{He}\left(z^\prime\right)\right]n_\mathrm{He}(z^\prime)\sigma_\mathrm{He}(E=E_0\frac{1+z^\prime}{1+z_\mathrm{em}})cdz^\prime. \end{split}$$ We consider photon energy $E$ between the hydrogen threshold energy $E_\mathrm{th,H}$(=13.6 eV) and $E_0$ at redshift $z$. The lower limit is $E_\mathrm{th,H}$ since photons with $E$ below $E_\mathrm{th,H}$ do not have enough energy to ionize hydrogen. The upper limit is $E_0$ since the redshift due to the expansion of the universe would cause photons to have energy $E$ smaller than $E_0$. Hence the integral in (\[ionequi\]) can be written as: $$\begin{aligned} &\int^\infty_{E_\mathrm{th,H}} \frac{4\pi J(E)}{E} \sigma_\mathrm{H}\left(E\right) dE\nonumber \\ &=\frac{c~n_\mathrm{dm0}(1+z)^3}{\tau_\mathrm{dm}}\int^{E_0}_{E_\mathrm{th,H}} \frac{e^{-t(z_\mathrm{em})/\tau_\mathrm{dm}}e^{-\tau_\mathrm{abs}(z,z_\mathrm{em})}}{E~H(z_\mathrm{em})} \sigma_\mathrm{H}\left(E\right) dE\nonumber \\ &=\frac{c~n_\mathrm{dm0}(1+z)^3}{\tau_\mathrm{dm}}\int^{E_0/(1+z)}_{E_\mathrm{th,H}/(1+z)} \frac{e^{-t(z_\mathrm{em}=E_0/E_\mathrm{obs}-1)/\tau_\mathrm{dm}}e^{-\tau_\mathrm{abs}(z,z_\mathrm{em}=E_0/E_\mathrm{obs}-1)}}{E_\mathrm{obs}H(z_\mathrm{em}=E_0/E_\mathrm{obs}-1)} \sigma_\mathrm{H}\left(E_\mathrm{obs}(1+z)\right) dE_\mathrm{obs}\label{eq:ion_int},\end{aligned}$$ where we have made a change of integration variable in the third line from $E$ to $E_\mathrm{obs}$, the present observed energy of a photon produced in the past by the decaying process. The integral for the helium contribution can be rewritten similarly. 
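The substitution $E = E_\mathrm{obs}(1+z)$ used above leaves the combination $dE/E = dE_\mathrm{obs}/E_\mathrm{obs}$ invariant, which is why no explicit Jacobian appears in the last line of Eq. (\[eq:ion\_int\]). The toy check below (plain Python; the exponential weight merely stands in for the $z_\mathrm{em}$-dependent factors and is not the physical integrand) confirms the change of variables numerically:

```python
import math

def trapz(f, a, b, n=4000):
    # Composite trapezoidal rule on [a, b].
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

z = 3.0                     # redshift at which the rate is evaluated (toy value)
E_th, E0 = 13.6, 8650.0     # eV; E0 = m_dm/2 for a hypothetical 17.3 keV particle

# Left-hand side: integrate a weight g(E) times dE/E over E in [E_th, E0].
lhs = trapz(lambda E: math.exp(-E / E0) / E, E_th, E0)

# Right-hand side: same integral after E -> E_obs = E/(1+z);
# dE/E = dE_obs/E_obs, so only the argument of the weight is rescaled.
rhs = trapz(lambda Eo: math.exp(-Eo * (1 + z) / E0) / Eo,
            E_th / (1 + z), E0 / (1 + z))
```

Both quadratures agree to machine precision, since the uniform grid in $E$ maps exactly onto a uniform grid in $E_\mathrm{obs}$.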
The photoionization cross section $\sigma(E)$ is approximated by @astrophy-book: $$\sigma(E)=\sigma_\mathrm{th} \left\{ \beta\left[ \frac{E}{E_\mathrm{th}}\right] ^{-s}+(1-\beta)\left[ \frac{E}{E_\mathrm{th}}\right] ^{-(s+1)} \right\},$$ where $\sigma_\mathrm{th}=6.30\times 10^{-18} \mathrm{cm}^2$, $\beta=1.34$ and $s=2.99$ for hydrogen, and $\sigma_\mathrm{th}=7.42\times 10^{-18} \mathrm{cm}^2$, $\beta=1.66$ and $s=2.05$ for helium. To account for the energetic secondary electrons that could ionize and heat up hydrogen and helium atoms, we multiply the cross sections in the rate integrals by an additional factor $\{1+\phi[x(z)]E(z)/E_{\rm th}\}$ [@map05]. For the cross sections in the ionization rates, $\phi(x)=C(1-x^a)^b$ and $C=0.3908$, $a=0.4092$ and $b=1.7592$ for hydrogen, and $C=0.0554$, $a=0.4614$ and $b=1.6660$ for helium [@shu85]. For the cross sections in the heating rates, $\phi(x)=C[1-(1-x^a)^b]$ and $C=0.9971$, $a=0.2663$ and $b=1.3163$ for hydrogen and helium. In this model we take the Hubble parameter $H(z)$ to be that given in the $\Lambda$CDM model, $H_0\sqrt{\Omega_\Lambda+\Omega_m(1+z)^3}$ where $H_0 = H(z = 0)$ is the present Hubble parameter, $\Omega_m$ is the present matter density, and $\Omega_\Lambda$ is the dark energy density. There are three free parameters $m_\mathrm{dm}$, $\tau_\mathrm{dm}$, and $f$ in this model, where $m_\mathrm{dm}$ and $\tau_\mathrm{dm}$ are the mass and lifetime of the decaying dark matter particles, and $f$ is the present mass fraction of the total dark matter accounted for by these hypothetical decaying particles. 
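A small helper collecting the cross-section fit and the secondary-electron factors above may make the parameterization concrete. The sketch below is plain Python; the helium threshold $E_\mathrm{th,He} = 24.6$ eV is the standard first-ionization energy, which is not quoted explicitly in the text:

```python
# Photoionization cross-section fit and secondary-electron factors,
# with the numerical parameters quoted in the text.
SIGMA_PARAMS = {          # species: (sigma_th [cm^2], beta, s)
    "H":  (6.30e-18, 1.34, 2.99),
    "He": (7.42e-18, 1.66, 2.05),
}
E_TH = {"H": 13.6, "He": 24.6}   # thresholds in eV (24.6 eV: standard value)

def sigma(E, species):
    # sigma_th { beta (E/E_th)^-s + (1 - beta) (E/E_th)^-(s+1) }
    sigma_th, beta, s = SIGMA_PARAMS[species]
    x = E / E_TH[species]
    return sigma_th * (beta * x ** -s + (1.0 - beta) * x ** -(s + 1))

ION_PHI = {               # species: (C, a, b) for phi(x) = C (1 - x^a)^b
    "H":  (0.3908, 0.4092, 1.7592),
    "He": (0.0554, 0.4614, 1.6660),
}

def phi_ion(x, species):
    # phi entering the factor {1 + phi[x] E/E_th} for the ionization rates.
    C, a, b = ION_PHI[species]
    return C * (1.0 - x ** a) ** b

def phi_heat(x):
    # phi(x) = C [1 - (1 - x^a)^b] for the heating rates (same for H and He).
    C, a, b = 0.9971, 0.2663, 1.3163
    return C * (1.0 - (1.0 - x ** a) ** b)
```

At threshold the fit reduces to $\sigma(E_\mathrm{th}) = \sigma_\mathrm{th}$, since $\beta + (1-\beta) = 1$, and $\phi(1) = 0$ for the ionization factor: a fully ionized gas leaves no neutrals for secondary electrons to ionize.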
$m_\mathrm{dm}$ and $f$ are directly related to $E_0$ and $n_\mathrm{dm0}$ in the above equations: $$\begin{aligned} &E_0=m_\mathrm{dm}/2&, \\ &n_\mathrm{dm0}=f\Omega_\mathrm{dm} \frac{3H_0^2}{8\pi G m_\mathrm{dm}c^2}e^{t_0/\tau_\mathrm{dm}}&,\label{params_ndm0}\end{aligned}$$ where $\Omega_\mathrm{dm}$ is the present dark matter density, $G$ is the gravitational constant, and $t_0$ is the age of the universe. If we assume $\tau_\mathrm{dm}$ is much larger than $t_0$, the exponential factors in Eqs. (\[eq:ion\_int\]) and (\[params\_ndm0\]) can be ignored. The ionization and heating rates due to decaying dark matter particles then depend only on the ratio between $f$ and $\tau_\mathrm{dm}$. By solving the coupled ordinary differential equations (\[ode1\])-(\[ode3\]) we obtain the ionization fraction $x$ and matter temperature $T_m$ as functions of $z$. The subprogram RECFAST [@sea00] used in CAMB [@lew00] is modified to include the additional terms due to the decaying dark matter particles. We have changed CAMB so that the reionization history $x(z)$ is calculated by our model, rather than the built-in *ad hoc* $\tanh$ function. This introduces three new parameters: $f$, $\tau_\mathrm{dm}$ and $m_\mathrm{dm}$, which are varied together with the other cosmological parameters. We have not changed the Boltzmann equations since the effects introduced by the decaying cold dark matter are very small and are of higher order. For example, the photons from the dark matter decay affect other photons only by first scattering with electrons, which then scatter with other photons. Also, very few dark matter particles have decayed by the time of recombination. The fraction of electrons affected by the decay photons can be approximated by the ratio between the number densities of the decay photons and the CMB photons, which is at most of the order $10^{-7}$ within the ranges of $f$ and $\tau_{\rm dm}$ of interest. 
Therefore only very few electrons, and hence CMB photons, are affected directly by the decaying dark matter particles. COSMOMC [@lew02] and the 7-year WMAP data are used to constrain the parameters in this model. MCMC Results and Discussion =========================== By varying $m_\mathrm{dm}$, $\tau_\mathrm{dm}$, $f$ and the other standard parameters in COSMOMC and comparing the resulting CMBA spectrum with the WMAP 7-year data, we obtain constraints on these parameters. The lower limits of $m_\mathrm{dm}$, $\tau_\mathrm{dm}$ and $\tau_\mathrm{dm}/f$ are set to be 13.6 eV, $10^{10}~\mathrm{yr}$ and $10^{10}~\mathrm{yr}$ respectively. The minimum value for $\tau_\mathrm{dm}$ is chosen to be 10 Gyr because it has to be large enough that not too much of the dark matter has decayed by now. The result is shown in Figure \[fig:cosmomc\_results\]. The set of best fit parameters from the COSMOMC run is shown in Table \[xbestfit\], and the marginalized limits are shown in Table \[xlimits\].

  Parameter                                                            Symbol                 Value
  -------------------------------------------------------------------- ---------------------- -----------------------
  Hubble parameter ($\mathrm{km}~\mathrm{s}^{-1} \mathrm{Mpc}^{-1}$)    $H_0$                  $69.6$
  Physical baryon density                                               $\Omega_bh^2$          $0.02238$
  Physical dark matter density                                          $\Omega_ch^2$          $0.1141$
  Curvature fluctuation amplitude                                       $A_s$                  $2.17 \times 10^{-9}$
  Scalar spectral index                                                 $n_s$                  $0.963$
  Decaying dark matter particle mass ($\mathrm{keV}$)                   $m_\mathrm{dm}$        $17.3$
  Decaying dark matter particle lifetime over fraction (yr)             $\tau_\mathrm{dm}/f$   $2.03\times 10^{16}$

  : Best fit parameters in the decaying dark matter model. 
$A_s$ is defined at the pivot scale of 0.05 Mpc$^{-1}$.[]{data-label="xbestfit"}

  Symbol                                                    Prior          Limits (68%)         Limits (95%)
  --------------------------------------------------------- -------------- -------------------- --------------------
  $H_0$ ($\mathrm{km}~\mathrm{s}^{-1} \mathrm{Mpc}^{-1}$)   {40, 100}      {67.5, 72.3}         {65.3, 74.6}
  $\Omega_bh^2$                                             {0.005, 0.1}   {0.02176, 0.02284}   {0.02123, 0.02339}
  $\Omega_ch^2$                                             {0.01, 0.99}   {0.1076, 0.1185}     {0.1024, 0.1243}
  $\ln(10^{10}A_s)$                                         {2.7, 4}       {3.047, 3.115}       {3.013, 3.149}
  $n_s$                                                     {0.5, 1.5}     {0.948, 0.975}       {0.935, 0.987}
  $\log_{10}(m_\mathrm{dm}/\mathrm{eV})$                    {1.13, 9}      {3.79, 4.84}         {2.41, 6.05}
  $\log_{10}(\tau_\mathrm{dm}/f/\mathrm{Gyr})$              {1, 10}        {6.10, 7.90}         {3.36, 9.09}

  : List of the marginalized limits for different parameters in the decaying dark matter model, together with standard cosmological parameters.[]{data-label="xlimits"}

We can see that $m_\mathrm{dm}$ and $\tau_\mathrm{dm}/f$ are highly correlated, and are less than about 1 MeV and $10^{18}$ yr respectively. This mass range overlaps with that of sterile neutrinos in some models, for example @boy06. For comparison, the ionization fraction obtained with the best fit parameters of the decaying dark matter model and the one assumed in the original CAMB are plotted together in Figure \[fig:xe-fit\], and the corresponding CMBA spectra are plotted in Figure \[fig:cl-fit\]. From Figure \[fig:xe-fit\] we can see that the two ionization fractions agree quite well for most values of $z$, but the difference is significant for $z\approx 50$. Nevertheless the resulting CMBA spectra are still nearly the same. We can also see that the ionization fraction in the decaying dark matter model agrees quite well with the *ad hoc* $\tanh$ function imposed in the default CAMB. 
Recently, joint CMBA-quasar absorption line constraints on the reionization history, using a model independent principal component decomposition method, suggest that reionization is 50% complete between $9.0 < z < 11.8$, and 99% complete between $5.8 < z < 10.4$ (95% CL) [@mit12]. A similar study obtained a best-fit reionization history very close to our result presented in Figure \[fig:xe-fit\] (Figure 1 in @pan11). Another recent work based on the patchy kinetic Sunyaev-Zel’dovich effect concludes that reionization ended at $z > 5.8$ or 7.2 (95% CL), depending on whether correlation with the cosmic infrared background is assumed or not [@zah11]. Our result shown in Figure \[fig:xe-fit\] is consistent with these recent constraints. ![CMBA temperature power spectra, calculated using the decaying dark matter model with the best fit parameters in Table \[xbestfit\], and the original CAMB with the best fit WMAP parameters. The values of the best fit $-\ln(\mathrm{likelihood})$ in the decaying dark matter model and standard $\Lambda$CDM model are 5527.44 and 5532.39 respectively.[]{data-label="fig:cl-fit"}](he_cls.eps){width="1\linewidth"} Our results are consistent with the constraint from the diffuse X-ray background [@boy06]. The empirical bound in @boy06 is $\log_{10}(\tau_\mathrm{dm}/\mathrm{Gyr}/f)$ $\apprge$ 8.5, which gives a further reduction of the allowed region in the contour plot in Figure \[fig:cosmomc\_results\]. Summary and Conclusion ====================== We have investigated the effects on CMBA of a component (mass fraction $f$) of dark matter particles with mass $m_\mathrm{dm}$ that decay with cosmological lifetime $\tau_\mathrm{dm}$. The photons emitted are redshifted and may ionize hydrogen and helium at later times, affecting the reionization history of the universe. If $\tau_\mathrm{dm}$ is much longer than the age of the universe, the optical depth depends only on the ratio of $\tau_\mathrm{dm}$ and $f$. 
We obtained constraints on these parameters by using the WMAP 7-year data and modified RECFAST, CAMB and COSMOMC codes and assuming that the only reionization source is the decaying dark matter. In the long lifetime limit, we find that 250 eV $\apprle$ $m_\mathrm{dm}$ $\apprle$ 1 MeV, $2.23\times 10^{3}$ yr $\apprle$ $\tau_\mathrm{dm}/f$ $\apprle$ $1.23\times 10^{18}$ yr, and the best fit values of $m_\mathrm{dm}$ and $\tau_\mathrm{dm}/f$ are 17.3 keV and $2.03\times 10^{16}~\mathrm{yr}$ respectively. Sterile neutrinos with mass 17.4 keV are possible within our marginal limits at 95% CL, which may account for the 8.7 keV emission observed by the $Suzaku$ mission [@cha10; @pro10]. The allowed range of $\tau_\mathrm{dm}/f$ is reduced further if the constraint from diffuse X-ray background is taken into account: $3.16\times 10^{17}$ yr $\apprle$ $\tau_\mathrm{dm}/f$ $\apprle$ $1.23\times 10^{18}$ yr [@boy06]. We have shown that the reionization history of the universe is sensitive to decaying dark matter parameters, and future experiments may lead to tighter constraints on dark matter models. This work is partially supported by grants from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Nos. 400805 and 400910). We thank the ITSC of the Chinese University of Hong Kong for providing its clusters for computations. Barger, V., Phillips, R. J. N. and Sarkar, S. 1995, Phys. Lett. B, 352, 365 Becker, R. H. et al. 2001, , 122, 2850 Becker, G. D., Rauch, M., Sargent, W. L. W., 2007, , 662, 72 Biermann, P. L., Kusenko, A., 2006, , 96, 091301 Borzumati, F., Bringmann, T., & Ullio, P., 2008, , 77, 063514 Boyarsky, A., Neronov, A., Ruchayskiy, O., & Shaposhnikov, M. 2006, , 370, 213 Cen, R., 2001, , 546, L77 Chan, M. H., Chu, M. C., 2011, , 727, L47 Dodelson, S., Widrow, L. M., 1994, , 72, 1 Gnedin, N. Y., Ostriker, J. P. 1997, , 486, 581 Komatsu, E. et al. 2010, , 192, 18 Lewis, A., Challinor, A., & Lasenby, A. 
2000, , 538, 473 Lewis, A., Bridle, S. 2002, , 66, 103511 Mapelli, M. & Ferrara, A. 2005, , 364, 2 McGreer, I. D., Mesinger, A., Fan, X., 2011, , 415, 3237 Mitra, S., Choudhury, T. R., Ferrara, A., 2012, , 419, 1480 Osterbrock, D. E., 1974, Astrophysics of Gaseous Nebulae (W. H. Freeman and Company, San Francisco) Pandolfi, S. et al. 2011, arXiv:1111.3570v1 Prokhorov, D. A. and Silk, J. 2010, arXiv:1001.0215 Sasaki, S., & Umemura, M. 1996, , 462, 104 Seager, S., Sasselov, D. D., & Scott, D. 2000, , 128, 407 Seljak, U., Makarov, A., McDonald, P., & Trac, H. 2006, , 97, 191303 Shull, J. M., van Steenberg, M. E. 1985, , 298, 268 Tegmark, M., Silk, J., & Evrard, A. 1993, , 417, 54 Zahn, O. et al. 2011, arXiv:1111.6386v1 Zhang, L., Chen, X., Kamionkowski, M., Si, Z.-G., & Zheng, Z. 2007, , 76, 061301 [^1]: The lifetime refers to the radiative channel only. Since the sterile neutrinos can also decay into 3 active neutrinos, the total lifetime $\approx \tau_\mathrm{dm}/128$ [@bar95]. [^2]: There is a well-known discrepancy between reionization redshifts deduced from CMBA and quasar absorption line observations. However, the constraints based on quasar absorption line observations are highly model-dependent [@mcg11]. In particular, the steep rise in the Gunn-Peterson effective optical depth at z $\apprge$ 6 is highly controversial, as it is very sensitive to the assumed density field and continuum fitting [@bec07]. Recently, direct and model-independent limits on the fraction of neutral Hydrogen at z $\approx$ 5-6 were obtained using the simple statistic of the covering fraction of dark pixels, and they can be consistent with the ionization history derived from CMBA observations [@mcg11]. On the other hand, a recent work shows that model independent joint CMBA-quasar absorption line constraints still permit a broad range of reionization history for $z > 6$ [@mit12].
--- abstract: 'Cumulative Prospect Theory (CPT) is a modeling tool widely used in behavioral economics and cognitive psychology that captures subjective decision making of individuals under risk or uncertainty. In this paper, we propose a dynamic pricing strategy for Shared Mobility on Demand Services (SMoDSs) using a passenger behavioral model based on CPT. This dynamic pricing strategy together with dynamic routing via a constrained optimization algorithm that we have developed earlier, provide a complete solution customized for SMoDS of multi-passenger transportation. The basic principles of CPT and the derivation of the passenger behavioral model in the SMoDS context are described in detail. The implications of CPT on dynamic pricing of the SMoDS are delineated using computational experiments involving passenger preferences. These implications include interpretation of the classic fourfold pattern of risk attitudes, strong risk aversion over mixed prospects, and behavioral preferences of self reference. Overall, it is argued that the use of the CPT framework corresponds to a crucial building block in designing socio-technical systems by allowing quantification of subjective decision making under risk or uncertainty that is perceived to be otherwise qualitative.' author: - 'Yue Guan[^1]' - 'Anuradha M. Annaswamy' - 'H. Eric Tseng' bibliography: - 'references.bib' title: | \ **Cumulative Prospect Theory Based Dynamic Pricing for Shared Mobility on Demand Services** --- #### Index Terms[:]{.nodecor} Cumulative Prospect Theory, Dynamic Pricing, Shared Mobility on Demand, Smart Cities, Risk Attitudes. Introduction ============ Until recently, available solutions for urban transportation have been clearly binary, with the first option represented by public transportation that provides low cost and reduced flexibility and the second corresponding to private automobiles that have high cost and improved flexibility. 
The emergence of ride sharing platforms such as Uber, Lyft, and Didi Chuxing has changed this landscape, introducing a continuum of services at various levels of cost, flexibility, and carbon footprint. With a projected total of 2 billion vehicles on roads by the year 2035 [@12Billio40:online], new concepts such as Mobility on Demand [@ambrosino2004demand; @chong2013autonomy] are urgently needed. One such paradigm is the notion of Shared Mobility on Demand Services (SMoDSs), which consists of customized dynamic routing and dynamic pricing for multiple passengers. This paper pertains to an SMoDS that can provide a customized combination of affordability, flexibility, and carbon footprint. We build on our earlier work in [@guan2019dynamicrouting] and [@annaswamy2018transactive], and offer a solution based on Cumulative Prospect Theory for determining dynamic tariffs. The results of [@guan2019dynamicrouting] correspond to designing dynamic routes for passengers who request the SMoDS, based upon the requested pickup and drop-off locations and a pre-specified bound on the walking distance of each passenger. An Alternating Minimization (AltMin) based algorithm was presented that optimizes a relevant time cost. The SMoDS server then offered pickup and drop-off locations as well as walking, waiting and riding times to each passenger, derived via the AltMin algorithm. The notion of *Transactive Control* was introduced in [@annaswamy2018transactive] to enable the SMoDS to offer a dynamic tariff to the passenger, which can serve as an incentive for the passenger's decision on the offer. A passenger behavioral model based on Utility Theory [@von2007theory] was derived, with the utility of the passenger being a function of both travel times and tariff. 
The resulting socio-technical model, which combines the passenger behavioral model and the optimization of dynamic routes, was used to derive a desired probability of acceptance that led to the average estimated waiting time of passengers on the SMoDS platform being regulated around a desired value. The derivation of the actual dynamic tariffs was, however, not addressed; the tariffs were assumed to be such that the desired probability of acceptance from each passenger was realized. The results mentioned above have two deficiencies. The first is that actual passenger behavior is significantly more complex than the model considered in [@annaswamy2018transactive]. Strategic decision making, adjustments based upon the framing effect, loss aversion, and probability distortion are several key features of subjective decision making of individuals facing uncertainty, which make classic Expected Utility Theory (EUT) inadequate. Moreover, an intrinsic feature of the SMoDS is uncertainty in the realized travel times, as the route of the passenger could be updated at any time due to the need to accommodate new passengers during the current ride. An important concept that can be utilized towards a more accurate behavioral model for decision making under uncertainty is *Prospect Theory* [@kahneman2013prospect; @tversky1992advances] in general, and Cumulative Prospect Theory (CPT) in particular, where the distortion is applied to cumulative probabilities so as to avoid violations of first order stochastic dominance [@tversky1992advances]. The second deficiency is the lack of focus on specific dynamic tariffs related to the SMoDS. We address both of these deficiencies in this paper. The main contribution of this paper is a CPT based dynamic pricing strategy, where decisions of passengers are based on the subjective utility of the travel times and tariff offered by the SMoDS server. 
The overall framing, probability distortion, parameterization of the behavioral model, and impact of risk attitudes on dynamic pricing are all discussed. Computational experiments involving passenger preferences are exploited to analyze various scenarios of passengers' risk attitudes via the proposed CPT based behavioral model. Since being introduced by Kahneman and Tversky in 1979 [@kahneman2013prospect], Prospect Theory has achieved remarkable successes in behavioral economics [@barberis2013thirty] and cognitive psychology [@arkes1985psychology]. More recently, PT has been widely applied in engineering applications where uncertainty plays an important role, such as cloud storage defense [@xiao2017cloud], energy storage of smart grids [@wang2014integrating], and common-pool resource sharing [@hota2016fragility]. In the context of transportation, PT has been explored in [@han2005integrating] through a Stackelberg game that studies the interplay between the objectives of individual travelers and those of the policy maker, and in [@xu2011prospect] through travelers' route choices under uncertain travel times, deriving the static tolls that result in the optimal system performance. Though PT has been investigated in the areas of smart cities/transportation and asset pricing [@barberis2001prospect], to the best of our knowledge, no prior work has been reported related to the application of PT in SMoDS or for evaluating dynamic tariffs.

Dynamic Routing and Dynamic Pricing {#background}
===================================

The problem considered in this paper is an SMoDS which accommodates ride requests from passengers in real time. The overall schematic of the CPT based dynamic pricing strategy is illustrated in Fig. \[fig:diagram\], which consists of three main building blocks.
The first block updates the dynamic route for each passenger via the AltMin algorithm developed in [@guan2019dynamicrouting] when a new request is received, and calculates the updated $\text{EWT}(t)$ right after the moment of request if the passenger decides to accept the offer. $\text{EWT}(t)$ denotes the average *Estimated Waiting Time* of all passengers who are in the pickup queue at timestamp $t$, i.e., who have accepted the SMoDS offers but are yet to be picked up. Given this definition, $\text{EWT}(t)$ can be regarded as a Key Performance Indicator (KPI) [@hall2015effects] that measures the degree of balance between demand and supply. We therefore use this KPI to define a desired target, ${\text{EWT}}^*$, for the economic efficiency of the proposed SMoDS platform. The second block determines the desired probability of acceptance $p^*$ for the new passenger required by the SMoDS platform so as to ensure that the expected $\text{EWT}(t)$ after the passenger's decision approaches ${\text{EWT}}^*$ [@annaswamy2018transactive]. Finally, the third block utilizes the CPT framework to determine the dynamic tariff $\gamma$ that will nudge the passenger towards $p^*$, and forms the focus of this paper. The details of the first two blocks are described in Sections \[dynamicrouting\] and \[dynamicpricing\] respectively. With this overall background, we then proceed to elaborate the CPT framework starting from Section \[CPT\].
![Overall schematic of the CPT based dynamic pricing strategy.[]{data-label="fig:diagram"}](diagram.pdf){width="90.00000%"}

Dynamic Routing via AltMin Algorithm {#dynamicrouting}
------------------------------------

An AltMin based optimization algorithm was developed in [@guan2019dynamicrouting] to design the optimal routes given the requested pickup and drop-off locations and pre-specified bounds on the walking distances of the passengers, using an objective function that minimizes a weighted sum of various travel time cost terms, including the total travel time of the vehicle and the walking, waiting, and riding times of each passenger. The optimization procedure is carried out iteratively by determining a set of routing points through which the vehicle picks up and/or drops off passengers, and the sequence in which these routing points are visited. It has been demonstrated in [@guan2019dynamicrouting] that the AltMin algorithm is capable of accommodating real time requests, and outperforms standard Mixed Integer Quadratically Constrained Programming based approaches with an order of magnitude improvement in computational efficiency and with comparable optimality.

Dynamic Pricing via Utility Theory {#dynamicpricing}
----------------------------------

The behavioral model of passengers in [@annaswamy2018transactive] was based on utility theory and utilized to determine the probability of the passenger accepting the SMoDS offer. For this purpose, a utility function of taking any transportation option $$\label{objective_utility} u = a_1 t_{\text{walk}} + a_2 t_{\text{wait}} + a_3 t_{\text{ride}} + b \gamma + c$$ was proposed, where $t_{\text{walk}}, t_{\text{wait}}, t_{\text{ride}}$ denote the walking, waiting, and riding times, respectively, $\gamma$ denotes the tariff, and $c$ denotes a constant summarizing all other unobservables that might count, such as the need for private space or the positive externalities of reducing greenhouse gas emissions by sharing a trip.
$a_1, a_2, a_3$ and $b$ are nonpositive weights which depend on the passenger's preference regarding the transportation option. If the resulting utility perceived for the SMoDS is denoted as $U^{\ell}$, with $U^j \in \mathbb{R}, j \in \{1, \, \cdots, \, N\}$ corresponding to the perceived utilities of all $N \in {\mathbb{Z}}_{>0}$ available transportation options to choose from, the probability of accepting the SMoDS offer can be determined using a discrete choice model [@ben1985discrete] as $$\label{discrete_choice_model} p^{\ell} = \frac{e^{U^{\ell}}}{\sum_{j=1}^N e^{U^j}}, \, \ell \in \{1, \, \cdots, \, N\}$$ While (\[discrete\_choice\_model\]) denotes the actual probability that the passenger will accept the SMoDS offer, from the perspective of the SMoDS platform, it is desired to provide a service that generates the desired collective performance for the platform. Let $\text{EWT}(t^-)$ and $\text{EWT}(t^+)$ denote the value of $\text{EWT}(t)$ immediately before timestamp $t$, and right after timestamp $t$ if the new passenger takes the offer, respectively. With this in mind, for the request received at $t_r$, a desired probability of acceptance $p^*$ was chosen in [@annaswamy2018transactive] to be a function of $\Delta \text{EWT}({t_r}^+) = \text{EWT}({t_r}^+) - {\text{EWT}}^*$ such that $\Delta \text{EWT}(t)$ was regulated around zero after $t_r$. In general one can design $p^*$ as $$\label{mapping} p^* = H\big[\text{EWT}({t_r}^-), \text{EWT}({t_r}^+) \, \big | \, {\text{EWT}}^*\big]$$ with the mapping $H(\cdot, \cdot | \cdot)$ designed such that the expected $\text{EWT}({t_r}^+)$ after the decision of the passenger approaches ${\text{EWT}}^*$. It was demonstrated in [@annaswamy2018transactive] that $H(\cdot, \cdot | \cdot)$ can be chosen such that an overall acceptance rate close to 80% is realized while $\text{EWT}(t)$ is regulated around ${\text{EWT}}^*$.
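The discrete choice model in (\[discrete\_choice\_model\]) is a standard multinomial logit and can be sketched in a few lines; the utility values below are illustrative, not the Table 1 estimates:

```python
import math

def choice_probabilities(utilities):
    """Multinomial-logit choice probabilities p^l = e^{U^l} / sum_j e^{U^j}.

    `utilities` is a list of perceived utilities U^j for the N options.
    Utilities are shifted by their maximum before exponentiating, which
    leaves the probabilities unchanged but avoids overflow.
    """
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Two options: an SMoDS offer with utility -3.2 and an alternative at -5.17.
p = choice_probabilities([-3.2, -5.17])
```

Since the utilities enter only through their differences, a higher tariff, which lowers $U^{\ell}$ via the nonpositive weight $b$, directly lowers $p^{\ell}$.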
This is comparable to the statistics of 60-70% reported for other ride sharing platforms [@cohen2016using]. We will therefore attempt to design the dynamic tariffs for the SMoDS so as to drive the passenger's actual probability of acceptance defined in (\[discrete\_choice\_model\]) towards the targeted value $p^*$.

Behavioral Model using CPT {#CPT}
==========================

An important feature of the SMoDS is the presence of uncertainty, as the vehicle may have to accommodate new passengers at any time along the route. As a result, the scheduled pickup and drop-off times for a given passenger may stochastically vary over an interval (see Fig. \[fig:uncertainty\]), making the SMoDS an uncertain prospect, i.e., a prospect with stochastic outcome, which motivates the use of CPT. In contrast, certain prospects are ones whose outcomes are always deterministic.

![Source of uncertainty in the SMoDS. $t^p$ and $t^d$ denote the actual pickup and drop-off time respectively, $\underline{t^p} < \overline{t^p}$ and $\underline{t^d} < \overline{t^d}$ denote the possibly earliest and latest timestamps.[]{data-label="fig:uncertainty"}](uncertainty.pdf){width="80.00000%"}

The key axioms of CPT state that when making decisions under uncertainty, individuals normally perceive the utility in a subjective and irrational fashion influenced by the following [@kahneman2013prospect; @tversky1992advances]:

- *Framing effect*: Individuals value prospects with respect to a reference point instead of an absolute value, and perceive gains and losses differently.

- *Loss aversion*: Individuals are affected much more by losses than by gains.

- *Diminishing sensitivity*: In both the gain and loss regimes, sensitivity diminishes as the prospect gets farther from the reference. Therefore, the perceived value is concave in the gain regime and convex for losses.

- *Probability distortion*: Individuals overweight small probability events and underweight large probability events.
A quantitative description of these axioms is enabled by defining the value function $V(\cdot)$ and the probability weighting function $\pi(\cdot)$, both of which are illustrated in Fig. \[fig:CPT\_figures\]. The details of the two functions are elaborated as follows.

![Illustrations of $V(\cdot)$ and $\pi(\cdot)$ in the CPT framework.[]{data-label="fig:CPT_figures"}](CPT_figures.pdf){width="90.00000%"}

We first define $U$ as a random variable to denote the objective utility of an uncertain prospect, and $F_U(u)$ as the corresponding Cumulative Distribution Function (CDF). If $U$ takes on discrete values $u_i \in \mathbb{R}, \forall i \in \{1, \, \, \dots, \, n\}$ with $u_1 < \cdots < u_n$, where $n \in {\mathbb{Z}}_{>0}$ is the number of possible outcomes, one can determine the objective utility $U^o$ as the expectation of $U$ according to EUT [@von2007theory], i.e., $$\label{EUT_calculate_discrete} U^o = \sum_{i=1}^n p_i u_i$$ where $p_i \in (0, 1)$ is the probability of outcome $u_i$, and $\sum_{i=1}^n p_i=1$. The subjective utility $U^s_R$ perceived by the passenger within the CPT framework is given by $$\label{CPT_calculate_discrete} U^s_R = \sum_{i=1}^n w_i V(u_i)$$ where $R$ denotes the reference corresponding to the framing effect[^2], and $w_i$ denotes the weighting that represents the subjective perception of $p_i$. Suppose that $k$ out of the $n$ outcomes are losses, $0 \leq k \leq n, k \in \mathbb{Z}_{\geq 0}$, and the rest are non-losses, i.e., $u_i < R$ if $1 \leq i \leq k$ and $u_i \geq R$ if $k < i \leq n$; then $$\label{weights} w_i = \begin{cases} \pi\big[F_U(u_i)\big] - \pi\big[F_U(u_{i-1})\big], & \text{if} \quad i \in [1, k] \\ \pi\big[1-F_U(u_{i-1})\big] - \pi\big[1-F_U(u_i)\big], & \text{otherwise} \end{cases}$$ where we let $F_U(u_0) = 0$ for ease of notation.
In what follows, we will adopt the representations for $V(\cdot)$ and $\pi(\cdot)$ as in [@tversky1992advances] and [@prelec1998probability], given by $$\label{V_function} V(u) = \begin{cases} {(u-R)}^{{\beta}^+}, & \text{if} \quad u \geq R \\ -\lambda{(R-u)}^{{\beta}^-}, & \text{otherwise} \end{cases}$$ $$\label{pi_function} \pi(p) = e^{-{[-\text{ln}(p)]}^{\alpha}}$$ It is clear that in contrast to $U^o$, $U^s_R$ is centered on $R$; loss aversion is captured by choosing $\lambda > 1$, diminishing sensitivity by choosing $0< {\beta}^+, {\beta}^- < 1$, and probability distortion by choosing $0 < \alpha < 1$. The extension from (\[CPT\_calculate\_discrete\]) to the continuous case of $U^s_R$ is $$\label{CPT_calculate} U^s_R = \int_{-\infty}^{R} V(u) \frac{d}{du}\Big\{\pi\big[F_U(u)\big]\Big\}du + \int_{R}^{\infty} V(u)\frac{d}{du}\Big\{-\pi\big[1-F_U(u)\big]\Big\}du$$

CPT based Passenger Behavioral Model in SMoDS {#formulation}
=============================================

The overall passenger behavioral model that we will derive in this section consists of a subjectively perceived utility $U^s_R$ and a subjective probability of acceptance $p^s_R$, both of which will be determined using CPT. The interpretation of risk attitudes, reference points, subjective weighting of probability distributions, and key properties of CPT in the SMoDS context are the topics of Sections \[preliminaries\] through \[properties\].

Objective and Subjective Utilities {#preliminaries}
----------------------------------

The starting point for deriving $U^s_R$ for the SMoDS is the determination of the possible outcomes of its objective utility.
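The ingredients in (\[CPT\_calculate\_discrete\]), (\[weights\]), (\[V\_function\]), and (\[pi\_function\]) can be combined into a short computational sketch for a discrete prospect. The defaults $\beta^\pm = 0.88$ and $\lambda = 2.25$ are the estimates of [@tversky1992advances], while $\alpha = 0.7$ is an illustrative placeholder, not the survey-based value used later:

```python
import math

def value(u, R, beta_plus=0.88, beta_minus=0.88, lam=2.25):
    # Value function V(u) of eq. (V_function), centered at the reference R.
    if u >= R:
        return (u - R) ** beta_plus
    return -lam * (R - u) ** beta_minus

def prelec(p, alpha=0.7):
    # Probability weighting pi(p) of eq. (pi_function); pi(0) := 0.
    if p <= 0.0:
        return 0.0
    return math.exp(-((-math.log(p)) ** alpha))

def subjective_utility(outcomes, probs, R, alpha=0.7):
    """U^s_R of eq. (CPT_calculate_discrete) with weights from eq. (weights).

    `outcomes` (sorted ascending) and `probs` describe the prospect; losses
    (u < R) weight distorted cumulative probabilities, non-losses weight
    distorted decumulative ones.
    """
    F, c = [], 0.0
    for p in probs:
        c += p
        F.append(c)
    U = 0.0
    for i, u in enumerate(outcomes):
        F_prev = F[i - 1] if i > 0 else 0.0
        if u < R:
            w = prelec(F[i], alpha) - prelec(F_prev, alpha)
        else:
            w = prelec(1.0 - F_prev, alpha) - prelec(1.0 - F[i], alpha)
        U += w * value(u, R)
    return U
```

For a symmetric 50/50 prospect of $\pm 1$ around $R = 0$, loss aversion makes `subjective_utility([-1.0, 1.0], [0.5, 0.5], 0.0)` strictly negative even though the objective expectation is zero.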
In order to accommodate the stochastic aspects of travel times, the possible realization of the objective utility $u$ in (\[objective\_utility\]) is replaced by a random variable $$\label{define_X} U = X + b \gamma$$ where $b \gamma$ depends on the tariff from the SMoDS ride offer and is deterministic once the offer is given, and $$\label{calculate_X} X=a_1T_{\text{walk}}+a_2T_{\text{wait}}+a_3T_{\text{ride}}+c$$ captures the uncertainty in travel times and is stochastic. Each term of the travel times is assumed to lie within a known interval specified by the SMoDS offer, defined as $T_{\text{walk}} \in [\underline{t_{\text{walk}}}, \overline{t_{\text{walk}}}], T_{\text{wait}} \in [\underline{t_{\text{wait}}}, \overline{t_{\text{wait}}}], T_{\text{ride}} \in [\underline{t_{\text{ride}}}, \overline{t_{\text{ride}}}]$. From these bounds one can determine $\underline{x}$ and $\overline{x}$, which correspond to the worst and the best cases of the travel times, respectively, so that $X \in [\underline{x}, \overline{x}]$ with CDF $F_X(x)$. Note that $F_X(x) = F_U(x+b\gamma)$ from (\[define\_X\]). With $U$ defined in (\[define\_X\])-(\[calculate\_X\]), the subjective utility $U^s_R$ is calculated via (\[CPT\_calculate\_discrete\])-(\[CPT\_calculate\]), and the objective utility $U^o$ as in (\[EUT\_calculate\_discrete\]). The dependence of $U^s_R$ on $R$ is described in Section \[references\].

Interpretation of Risk Attitudes {#attitudes}
--------------------------------

As shown in (\[discrete\_choice\_model\]), the evaluation of the probability of acceptance requires the utility of the alternative transportation options available to the passenger. Without loss of generality, each passenger is assumed to choose between two options: the SMoDS and another option such as public transportation or UberX, which is considered a certain prospect[^3] and therefore has a constant objective utility $A^o \in \mathbb{R}$.
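Because $a_1, a_2, a_3$ are nonpositive, the bounds $\underline{x}, \overline{x}$ of $X$ follow directly from the travel-time intervals: the worst case is attained at the longest times and the best case at the shortest. A minimal sketch, with purely illustrative weights and intervals:

```python
def x_bounds(a, c, t_short, t_long):
    """Bounds of X = a1*T_walk + a2*T_wait + a3*T_ride + c over the offer.

    a = (a1, a2, a3) are nonpositive utility weights, so the worst case
    x_lo uses the longest times t_long and the best case x_hi uses the
    shortest times t_short.
    """
    x_lo = sum(ai * ti for ai, ti in zip(a, t_long)) + c
    x_hi = sum(ai * ti for ai, ti in zip(a, t_short)) + c
    return x_lo, x_hi

# Illustrative weights (utils/min) and walking/waiting/riding intervals (min).
x_lo, x_hi = x_bounds((-0.1, -0.2, -0.05), 0.0, (5, 2, 10), (9, 6, 14))
```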
The objective probability of acceptance is given by $$\label{prob_a_binary_objective} p ^ o = \frac{e ^ {U ^ o}}{e^{U^o}+e^{A^o}}$$ where $A^o$ can be calculated using (\[objective\_utility\]). The subjective probability of acceptance is given by $$\label{prob_a_binary_subjective} p^s_R=\frac{e^{U^s_R}}{e^{U^s_R}+e^{A^s_R}}$$ where $A^s_R$ denotes the subjective utility of the alternative perceived by the passenger, which can be derived via (\[objective\_utility\]), (\[CPT\_calculate\_discrete\]), and (\[V\_function\]). We now interpret the risk attitudes of passengers based on the above objective and subjective probabilities of acceptance. Since the alternative is certain, a higher probability of acceptance indicates an attitude that is more risk seeking. A passenger who is inclined to choose $p^o$ is regarded as rational. If $p^s_R > p^o$, a passenger is said to be risk seeking compared with rational passengers. Further, for any two references $R_1$ and $R_2$, if $p^s_{R_1} < p^s_{R_2}$, the passenger with reference $R_1$ is said to be more risk averse than the passenger with reference $R_2$, and more risk seeking if the inequality is reversed.

Reference Points {#references}
----------------

The central parameter related to CPT is $R$, which is discussed in this subsection. Three different categories are considered:

1. *Static reference points*: These correspond to any fixed quantities that are independent of the SMoDS offer. Examples include the objective utility of the alternative, i.e., $R = A ^ o$, or the utility to the passenger of making the trip itself, independent of the transportation mode.

2. *Dynamic reference points*: Here $R$ is dependent on the uncertain prospect itself. In the SMoDS context, $R$ can be chosen as $R = \tilde{x} + b\gamma$, where $\tilde{x}$ could be $\underline{x}, \overline{x}, \mathbb{E}_{f_X}(X)$, or any statistic preferred by the passenger. All of these examples however still correspond to deterministic references.

3.
*Stochastic reference points*: Instead of the above two categories, it is possible for the reference point itself to vary stochastically. However, little evidence has been found that supports the usage of this case [@baillon2017searching], and hence we do not consider it in the rest of the paper.

Subjective Weighting of Probability Distributions {#distribution}
-------------------------------------------------

In this subsection, we discuss the subjective perception of a probability distribution $f_X(x)$ by the passenger. $f_X(x)$ denotes the Probability Mass Function (PMF) if $X$ is discrete, or the Probability Density Function (PDF) if $X$ is continuous. In the current problem, $f_X(x)$ represents the passenger's prediction of how long the actual travel times will be within the given intervals offered by the SMoDS server. Therefore $f_X(x)$ is objective and based upon the passenger's prior experience and assessment of demand at the time of request. In what follows, we address the subjective perception of $f_X(x)$ in both the discrete and continuous cases.

### Continuous Distributions

In some cases, the underlying distribution can be a truncated Normal distribution of the form $$\label{define_normal} f_X^n(x) = \frac{1}{Z^n}\frac{1}{\sqrt{2\pi{\sigma}^2}} e ^ {- \frac{{(x-\mu)}^2}{2{\sigma}^2}}, \, x \in [\underline{x}, \overline{x}]$$ where $\mu=\frac{\underline{x} + \overline{x}}{2}$ and $\sigma = \overline{x} - \underline{x}$ denote the mean and standard deviation, respectively, and $Z^n = \int_{\underline{x}}^{\overline{x}} \frac{1}{\sqrt{2\pi{\sigma}^2}} e ^ {- \frac{{(x-\mu)}^2}{2{\sigma}^2}} dx > 0$ is a normalization constant. In some other cases, a truncated exponential distribution may be valid.
These are given by $$\label{define_exponential_optimistic} f_X^{e, o}(x) = \frac{1}{Z^{e, o}} {\lambda}^o e^{-{\lambda}^o(\overline{x}-x)}, \, x \in [\underline{x}, \overline{x}]$$ $$\label{define_exponential_pessimistic} f_X^{e, p}(x) = \frac{1}{Z^{e, p}} {\lambda}^p e^{-{\lambda}^p(x-\underline{x})}, \, x \in [\underline{x}, \overline{x}]$$ where ${\lambda}^o = {\lambda}^p = \frac{1}{\overline{x} - \underline{x}}$, and $Z^{e, o} = \int_{\underline{x}}^{\overline{x}} {\lambda}^o e^{-{\lambda}^o(\overline{x}-x)} dx > 0, Z^{e, p} = \int_{\underline{x}}^{\overline{x}} {\lambda}^p e^{-{\lambda}^p(x-\underline{x})} dx > 0$ are normalization constants. (\[define\_exponential\_optimistic\]) and (\[define\_exponential\_pessimistic\]) correspond to an optimistic and a pessimistic subcase, respectively, since the corresponding mode is at $\overline{x}$ and $\underline{x}$.

### Discrete Distributions

A reasonable choice for this case is a truncated Poisson distribution of the form $$\label{define_Poisson} f_X^P(x)= \begin{cases} \frac{1}{Z^P}\frac{{{(\lambda}^P)}^k e^{-{\lambda}^P}}{k!}, \, & \text{if} \, x = \overline{x} - k \frac{\overline{x}-\underline{x}}{K}\\ 0, & \text{otherwise} \end{cases}$$ where $K \in {\mathbb{Z}_{>0}}$ and $k \in \{0, \, \dots, \, K\}$ denote the maximum and the actual number of possible delays, respectively, ${\lambda}^P>0$, and $Z^P = \sum_{k=0}^{K} \frac{{{(\lambda}^P)}^k e^{-{\lambda}^P}}{k!} > 0$ is the normalization constant. The truncated Poisson distribution reflects the number of possible delays due to accommodating new passengers during the ride. Each additional delay is assumed to result in the same marginal increase in travel times, hence the support of $f_X^P(x)$ consists of $(K+1)$ disjoint points uniformly spaced in $[\underline{x}, \overline{x}]$. The values of $K$ and ${\lambda}^P$ are specified in Section \[experiments\].
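The support points and probabilities in (\[define\_Poisson\]) can be tabulated directly; in the sketch below the interval endpoints, $\lambda^P$, and $K$ are illustrative inputs:

```python
import math

def truncated_poisson_pmf(x_lo, x_hi, lam_P, K):
    """Support and probabilities of the truncated Poisson PMF f_X^P.

    Outcome x = x_hi - k*(x_hi - x_lo)/K for k = 0..K delays; the Poisson
    masses are truncated at K and renormalized by Z^P.
    """
    mass = [lam_P ** k * math.exp(-lam_P) / math.factorial(k) for k in range(K + 1)]
    Z_P = sum(mass)
    xs = [x_hi - k * (x_hi - x_lo) / K for k in range(K + 1)]
    ps = [m / Z_P for m in mass]
    return xs, ps

# K = 5 possible delays over an illustrative objective-utility interval.
xs, ps = truncated_poisson_pmf(-3.47, -3.07, 4.0, 5)
```

The first support point ($k = 0$, no delay) is the best case $\overline{x}$ and the last ($k = K$) is the worst case $\underline{x}$, with equal spacing in between.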
With the objective probability distributions $f_X(x)$ defined in (\[define\_normal\])-(\[define\_Poisson\]) and the reference $R$ specified, the subjective probability weighting can be derived using (\[weights\]) and (\[pi\_function\])-(\[CPT\_calculate\]). In turn, $U^s_R$ and $p^s_R$ can be derived using (\[CPT\_calculate\_discrete\]), (\[CPT\_calculate\]) and (\[prob\_a\_binary\_subjective\]), which completely specify the behavioral model of an SMoDS passenger.

Key Properties of CPT based Behavioral Model {#properties}
--------------------------------------------

With the subjective utilities, risk attitudes, reference points, and subjective weighting of probability distributions delineated as above, we now derive four properties of the overall passenger behavioral model. The first two properties are related to static and dynamic references, and are stated in Properties \[monotonic\_static\] and \[monotonic\_dynamic\] below. These are helpful in determining the dynamic tariff $\gamma$ that allows $p^s_R$ to reach $p^*$, the desired probability of acceptance.

\[monotonic\_static\] Given any static reference point $R \in \mathbb{R}$, $p^s_R$ strictly decreases with $\gamma$.

\[monotonic\_dynamic\] Given any dynamic reference point of the form $R = \tilde{x}+b\gamma, \tilde{x} \in \mathbb{R}$, $p^s_R$ strictly decreases with $\gamma$.

Let $\bar{U} = \mathbb{E}_{f_U}(U)$ and $\bar{X} = {\mathbb{E}}_{f_X}(X)$; the third and fourth properties, stated in Properties \[existence\_lambda\] and \[prob\_a\_for\_mixed\], are related to $U^s_{\bar{U}}$ and $p^s_{\bar{U}}$, respectively.

\[existence\_lambda\] Given any uncertain prospect, there exists a ${\lambda}^*$ such that $\forall \lambda > {\lambda}^*$, $U^s_{\bar{U}} < 0$.
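The monotonicity asserted in Properties \[monotonic\_static\] and \[monotonic\_dynamic\] can be probed numerically. The self-contained sketch below evaluates $p^s_R$ of (\[prob\_a\_binary\_subjective\]) for a two-outcome prospect with a static reference $R = A^o$ over a grid of tariffs; the tariff weight $b = -0.2$, the outcome probabilities, and the CPT constants are illustrative, not the Table 1 estimates:

```python
import math

def V(u, R, bp=0.88, bm=0.88, lam=2.25):
    # Value function of eq. (V_function), centered at the reference R.
    return (u - R) ** bp if u >= R else -lam * (R - u) ** bm

def prelec(p, alpha=0.7):
    # Probability weighting of eq. (pi_function); pi(0) := 0.
    return math.exp(-((-math.log(p)) ** alpha)) if p > 0 else 0.0

def p_subjective(gamma, xs, ps, R, b=-0.2, A_o=-5.17):
    """p^s_R of eq. (prob_a_binary_subjective) for a binary choice.

    SMoDS outcomes are u_i = x_i + b*gamma; the certain alternative has
    subjective utility V(A^o, R).
    """
    outcomes = sorted(zip([x + b * gamma for x in xs], ps))
    F, c = [], 0.0
    for _, p in outcomes:
        c += p
        F.append(c)
    U_s = 0.0
    for i, (u, _) in enumerate(outcomes):
        F_prev = F[i - 1] if i else 0.0
        if u < R:
            w = prelec(F[i]) - prelec(F_prev)          # loss branch of (weights)
        else:
            w = prelec(1 - F_prev) - prelec(1 - F[i])  # non-loss branch
        U_s += w * V(u, R)
    A_s = V(A_o, R)
    return math.exp(U_s) / (math.exp(U_s) + math.exp(A_s))

# p^s_R over integer tariffs 0..29 with the static reference R = A^o.
probs = [p_subjective(g, [-3.47, -3.07], [0.8, 0.2], R=-5.17) for g in range(30)]
```

On this grid `probs` is strictly decreasing, even as the outcomes migrate from the gain regime into the loss regime, consistent with Property \[monotonic\_static\].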
\[prob\_a\_for\_mixed\] For any uncertain prospect, given that $\lambda$ is sufficiently large such that $U^s_{\bar{U}} < 0$, within the price range $\gamma \in [\underline{\gamma}, \overline{\gamma})$, where $\underline{\gamma}$ satisfies $\bar{X} + b \underline{\gamma} = A^o$, and $\overline{\gamma}$ satisfies ${\big[A^o - (\bar{X} + b \overline{\gamma})\big]} ^ {{\beta}^+} - U^s_{\bar{U}} = A^o - (\bar{X} + b \overline{\gamma})$, we have $p^s_{\bar{U}} < p^o$.

Implications of CPT using Computational Experiments {#experiments}
===================================================

In this section, three different implications are drawn using computational experiments in order to illustrate the subjective decision making of passengers, and how it can be utilized to develop the dynamic pricing strategy for the SMoDS.

Determination of Parameters {#parameterization}
---------------------------

The discussions in Sections \[background\] through \[formulation\] show that a number of parameters related to the CPT framework have to be determined. These include $\alpha, {\beta}^+, {\beta}^-, \lambda$ defined in $V(\cdot)$ and $\pi(\cdot)$; the utility coefficients $a_1, a_2, a_3, b, c$ defined in (\[objective\_utility\]); the travel times $t_{\text{walk}}, t_{\text{wait}}, t_{\text{ride}}$ of both the SMoDS and the alternative; and the tariff of the alternative.

![image](table.pdf){width="90.00000%"}

Table 1 summarizes the values of the parameters that we used in order to carry out the studies reported in this section. In particular, $\alpha$ was estimated from a recent survey study on passenger preferences under risk regarding transportation options conducted in Singapore involving $1,142$ participants with various demographics [@wang2018risk], and $\beta^+, \beta^-, \lambda$ are from [@tversky1992advances]. In what follows, UberX is regarded as the alternative.
The utility coefficients $[a_1, a_2, a_3, b, c]$ of both the SMoDS and UberX were estimated from the same survey study in [@wang2018risk]. A dynamic routing problem of sixteen passengers using real request data from San Francisco was considered (see [@annaswamy2018transactive] for details), and the request from the $6^{\text{th}}$ passenger was used for the computational experiments in this section. The AltMin algorithm developed in [@guan2019dynamicrouting] was applied to derive the route and therefore the corresponding travel times of the SMoDS. The constraints on the possible delay were set to at most 4 minutes each of extra waiting and riding time. For the same request, the travel times and price of UberX were retrieved from [@UberEarn32:online]. Using the utility coefficients, travel times, and price listed in Table 1, the objective utility of UberX $A^o = -5.17$, and $\underline{x} = -3.47$, $\overline{x}=-3.07$ of the SMoDS are calculated using (\[objective\_utility\]) and (\[calculate\_X\]). Note that $A^o, \underline{x}, \overline{x}$ are negative as they represent travel costs. With the above numerical values in place, we explore the three implications: (i) the fourfold pattern of risk attitudes, (ii) strong aversion of mixed prospects, and (iii) self reference.

Fourfold Pattern of Risk Attitudes {#fourfoldpattern}
----------------------------------

The fourfold pattern of risk attitudes is regarded as “the most distinctive implication of prospect theory” by Tversky and Kahneman [@tversky1992advances]. It states that when facing an uncertain prospect, the risk attitudes of individuals can be grouped into four categories:

1. Risk averse over high probability gains.

2. Risk seeking over high probability losses.

3. Risk seeking over low probability gains.

4. Risk averse over low probability losses.
These risk attitudes are often used to explain the subjective decision making of individuals for problems such as settlements of civil lawsuits, desperate treatments of terminal illnesses, playing lotteries, and purchasing insurance coverage. We now illustrate the fourfold pattern in the SMoDS context using the following scenario, which corresponds to the classic setup for the analysis of the fourfold pattern [@tversky1992advances]: individuals decide between two options, a certain prospect and an uncertain prospect with two outcomes. The uncertain prospect is the SMoDS, which we assume obeys a truncated Poisson distribution with $K = 1$, i.e., the passenger is subject to at most one delay. Therefore, the two possible outcomes of the SMoDS are $(\underline{x} + b\gamma)$ and $(\overline{x} + b \gamma)$. The corresponding probabilities can be determined using (\[define\_Poisson\]) as $$\label{fourfold} f_X^P(\underline{x}) = \frac{{\lambda}^P}{{\lambda}^P+1}, \quad f_X^P(\overline{x}) = \frac{1}{{\lambda}^P+1}$$ The four scenarios above are realized through suitable choices of $R$ and ${\lambda}^P$ as follows. A dynamic reference point $R$ is chosen to be either $(\underline{x} + b\gamma)$ or $(\overline{x} + b\gamma)$; the SMoDS is a gain if $R = \underline{x} + b\gamma$ and a loss if $R = \overline{x} + b \gamma$. The SMoDS is considered high probability or low probability when the outcome that is not regarded as the reference can be realized with a probability of $p_{\text{NR}}$ or $(1-p_{\text{NR}})$, respectively, where $p_{\text{NR}}$ is close to 1. In the computational experiments presented in Fig. \[fig:fourfold\], $p_{\text{NR}}=0.95$.
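For the two-outcome setup above, the probabilities in (\[fourfold\]) and their Prelec-distorted weights can be computed directly. In this sketch, $\lambda^P = 19$ is the value that makes the delay a $p_{\text{NR}} = 0.95$ event, and $\alpha = 0.7$ is illustrative:

```python
import math

def two_outcome_probs(lam_P):
    """Outcome probabilities of eq. (fourfold) for K = 1 (at most one delay)."""
    p_delay = lam_P / (lam_P + 1.0)      # one delay: outcome x_lo + b*gamma
    p_no_delay = 1.0 / (lam_P + 1.0)     # no delay:  outcome x_hi + b*gamma
    return p_delay, p_no_delay

def prelec(p, alpha=0.7):
    # Probability weighting of eq. (pi_function); pi(0) := 0.
    return math.exp(-((-math.log(p)) ** alpha)) if p > 0 else 0.0

# lam_P = 19 gives p_delay = 0.95, i.e., the delay is a high probability event.
p_delay, p_no_delay = two_outcome_probs(19.0)
```

Note that `prelec(0.05) > 0.05` while `prelec(0.95) < 0.95`: the rare outcome is overweighted and the common one underweighted, which is exactly the distortion driving the risk seeking quadrants of the fourfold pattern.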
Moreover, the range of the tariff is chosen as follows $$\label{price_in_fourfold} \begin{cases} \underline{x} + b\gamma < A^o & \text{if} \; R = \underline{x} + b\gamma \\ \overline{x} + b\gamma > A^o & \text{if} \; R = \overline{x} + b\gamma \end{cases}$$ such that the objective utility of the certain prospect, $A^o$, lies in the same gain or loss regime as the SMoDS and therefore represents a reasonable alternative to the SMoDS. With the uncertain and the certain prospect defined in the SMoDS context above, we illustrate the fourfold pattern in Fig. \[fig:fourfold\] using four quadrants. According to the fourfold pattern (a)-(d), the diagonal quadrants should correspond to risk averse behavior while the off-diagonal ones are risk seeking. In each quadrant, we plot a metric defined as $\text{RA}=(U^o - A^o) - (U^s_R- A^s_R)$ with respect to the tariff $\gamma$. This metric captures the Relative Attractiveness that the uncertain prospect has over the certain prospect for rational individuals versus individuals modeled with CPT. This follows since, according to (\[prob\_a\_binary\_objective\]) and (\[prob\_a\_binary\_subjective\]), $\text{RA} >0 \Rightarrow p^o > p^s_R$. In Fig. \[fig:fourfold\], we note that $\text{RA}>0$ corresponds to all regions where the blue curve is above zero and indicates risk averse attitudes, as rational individuals have a higher probability of accepting the uncertain prospect than irrational ones. Similarly, $\text{RA} < 0$ corresponds to the blue curve being below zero and denotes risk seeking attitudes. In each quadrant, two subplots are provided, where the subplot on the right corresponds to a specific set of parameters $\beta^+ = \beta^- = \lambda = 1$ which completely removes the role of $V(\cdot)$, while the subplot on the left corresponds to all CPT parameters chosen as in Table 1, and therefore a general CPT model.
As explained before, each quadrant corresponds to a specific choice of $R$ and ${\lambda}^P$, which together determine whether an outcome is a gain or a loss, and whether it is high or low probability.

![Illustration of the fourfold pattern of risk attitudes in the SMoDS context.[]{data-label="fig:fourfold"}](fourfold.pdf){width="90.00000%"}

The most important observation from Fig. \[fig:fourfold\] comes from the differences between the left and right subplots in each of the four quadrants. For example, in Fig. \[fig:fourfold\](a), all risk attitudes in the right subplot correspond to $\text{RA}>0$ and are therefore risk averse, while those on the left are only risk averse for a certain price range. That is, the fourfold pattern is violated in the left subplot. The same trend is exhibited in all four quadrants. This is because the fourfold pattern is due to the interplay between $\pi(\cdot)$ and $V(\cdot)$ and is valid only when the magnitude of $\pi(\cdot)$ is sufficiently large relative to that of $V(\cdot)$, such that probability distortion dominates [@harbaugh2009fourfold]. This corresponds to the right subplots[^4] as well as the left subplots within certain price ranges. The implication that we obtain from the analysis of the fourfold pattern of risk attitudes is that the resulting four categories can suitably inform the dynamic pricing strategy in the SMoDS, through the left subplots. That is, it allows a quantification of two qualitative statements: (1) the presence of risk seeking passengers gives flexibility in increasing tariffs, and (2) the presence of risk averse passengers requires additional constraints on tariffs.

Strong Risk Aversion over Mixed Prospects {#mixed}
-----------------------------------------

The second implication of the CPT framework is strong risk aversion over mixed prospects. A mixed prospect is defined as an uncertain prospect whose portfolio of possible outcomes involves both gains and losses [@kahneman2013prospect; @abdellaoui2008tractable].
Clearly, the uncertain prospect is always mixed when $R$ corresponds to its expectation. The strong risk aversion over mixed prospects stems from loss aversion, as the impact of the loss component often dominates its gain counterpart. This implication is illustrated below in the SMoDS context using two different interpretations. The first interpretation follows from Property \[existence\_lambda\], which essentially states that when $R = \bar{U}$, the subjective utility is strictly negative for a sufficiently large $\lambda$. Therefore, with $R = \bar{U}$ and such a $\lambda$, the uncertain prospect is subjectively perceived as a strict loss. This has been verified numerically using the distributions stated in Section \[distribution\] with $\lambda = 2.25$ as chosen in Table 1. Since the objective utility relative to the expectation is neutral, strong aversion is exhibited. The second interpretation follows from Property \[prob\_a\_for\_mixed\], which essentially states that when Property \[existence\_lambda\] holds, within the tariff range $[\underline{\gamma}, \overline{\gamma})$, the uncertain prospect is less likely to be accepted by the CPT inclined passengers compared with the rational ones, as $p^s_{\bar{U}} < p^o$.

![Comparison of $p^s_{\bar{U}}$ and $p^o$. For fair comparison, the tariff range of $\gamma \geq \frac{A^o - \bar{X}}{b}$ is plotted, where the alternative is non-loss.[]{data-label="fig:mixed"}](mixed.pdf){width="90.00000%"}

Fig. \[fig:mixed\] illustrates Property \[prob\_a\_for\_mixed\] with $f_X(x)$ obeying a Normal distribution. With the numerical values in Table 1, we can compute $\underline{\gamma} \approx \$11$ and $\overline{\gamma} \approx \$20$. It is clear from the left subplot that within this price range, passengers exhibit strong risk aversion over the SMoDS, as the orange curve is strictly above the blue one.
It is interesting to note that when ${\beta}^+ = 1$, which corresponds to the case where passengers are risk neutral in the gain regime, $\overline{\gamma} \rightarrow \infty$ (see Fig. \[fig:mixed\](b)). The implication regarding strong risk aversion over mixed prospects is as follows: since the SMoDS has significant uncertainty, passengers who regard the expected service quality as the reference exhibit strong risk aversion whenever the alternative is a relatively non-loss prospect. Hence the SMoDS is strictly less attractive to these passengers than to rational ones. Therefore, the dynamic tariffs may need to be suitably designed by the SMoDS server so as to compensate for these perceived losses; rebates and subsidies are typical examples. Self Reference {#self} -------------- In this section, we compare $p^s_{\bar{U}}$ with $p^s_{A^o}$. The four different distributions defined in (\[define\_normal\])-(\[define\_Poisson\]) are all considered. In each case, we evaluated how these two probabilities vary with the tariff $\gamma$. The results are shown in Fig. \[fig:comparison\]. ![Comparison of $p^s_{\bar{U}}$ with $p^s_{A^o}$ using the four different $f_X(x)$ in Section \[distribution\]. For the truncated Poisson distribution, the parameters are set as ${\lambda}^P = 4$ and $K = 5$.[]{data-label="fig:comparison"}](comparison.pdf){width="90.00000%"} Fig. \[fig:comparison\] illustrates that for all four distributions, $p^s_{\bar{U}} \geq p^s_{A^o}, \forall \gamma$, which implies that the SMoDS is always more attractive when the reference is its own expectation rather than the alternative. $p^s_{\bar{U}} = p^s_{A^o}$ when $\gamma = \frac{A^o - \bar{X}}{b}$, since then $\bar{U} = A^o$ and the two reference points coincide. The following summarizes the third implication inferred from Fig. \[fig:comparison\]: $\bar{U}$ is essentially the rational counterpart of the uncertain prospect.
Therefore, it could be argued that, when deciding between two prospects, the probability of accepting one prospect is always higher if that prospect itself is regarded as the reference rather than the alternative. This is due to loss aversion, i.e., $\lambda > 1$, and can be explained as follows: when one prospect is regarded as the reference, by definition it is never perceived as a loss and therefore never suffers the magnified perception of losses, whereas the alternative may be regarded as a loss and is therefore subject to this skewed perception. In contrast, if the alternative is chosen as the reference, the roles are reversed[^5]. Moreover, the statement is in fact intuitive, as passengers who regard the expectation as the reference have in some sense already subscribed to the SMoDS, and hence are naturally inclined to exhibit a higher probability of acceptance and a higher willingness to pay. This partially explains why converting customers from competitors is typically more difficult than maintaining the current customer base. The last observation from Fig. \[fig:comparison\] is that the comparison is invariant to the underlying probability distribution, which implies that the above implication on self reference is fairly general. Dynamic Tariff Design {#design} --------------------- With the above analytical properties of the CPT based passenger behavioral model, we propose the following algorithm for determining the dynamic tariff. As mentioned at the beginning of Section \[background\], the goal is for the actual probability of acceptance $p^s_R$ to reach the desired value $p^*$. We note from (\[prob\_a\_binary\_objective\]) that $p^s_R$ is a function of $U^s_R$ and $A^s_R$, which in turn are functions of $U$ following (\[CPT\_calculate\_discrete\]) through (\[CPT\_calculate\]). Finally, (\[define\_X\]) shows that $U$ is a function of $\gamma$.
By combining these equations, we can derive the relationship between $p^s_R$ and $\gamma$ as $p^s_R = f(\gamma)$. According to Properties \[monotonic\_static\] and \[monotonic\_dynamic\], $f(\cdot)$ is strictly monotonic. This in turn implies that the desired dynamic tariff that leads to $p^*$ is given by $\gamma = f^{-1}(p^*)$. Other Remarks {#remarks} ------------- Throughout this section, the survey data collected from passengers in Singapore [@wang2018risk] has been used for the utility coefficients in Table 1. The generality of the above observations and implications can be quantified using the parameter Value of Time (VOT), which equals $\frac{a_2}{b}$. In the Singapore survey, $\text{VOT} = 0.22$ and $0.77$ \[\$/min\] for the SMoDS and UberX respectively, while VOT $= 0.40$ \[\$/min\] for business travelers in the US [@Rogoff2014VOT]; these are of the same order of magnitude. This implies that the construction of our synthetic data, which combines two different sources, one from Singapore and one from the US, is a reasonable exercise. Another point worth noting is that the CPT based passenger behavioral model we have examined depends on relative prices rather than absolute values. This property helps in applying the CPT framework we have proposed, and the corresponding observations and implications obtained in this paper, to a broader set of problems in the SMoDS. Concluding Remarks {#conclusions} ================== In this paper, we have proposed a dynamic pricing strategy for a SMoDS using Cumulative Prospect Theory, which builds on our previous work in [@guan2019dynamicrouting] and [@annaswamy2018transactive]. The proposed dynamic pricing strategy, together with dynamic routing via the AltMin algorithm [@guan2019dynamicrouting], provides a complete solution to shared mobility on demand that corresponds to an ideal combination of flexibility, convenience, and affordability.
The basic principles of CPT and the derivation of the passenger behavioral model in the SMoDS context were described in detail. The three implications of CPT, namely the fourfold pattern of risk attitudes, strong risk aversion over mixed prospects, and self reference, on the dynamic pricing strategy of the SMoDS were delineated via computational experiments. The observations and implications obtained in this paper provide a quantitative framework to analyze the subjective decision making of passengers in the SMoDS context and can be generalized to a broader set of socio-technical systems. Future work will concentrate on the development of $H(\cdot, \cdot | \cdot)$ that achieves robust regulation of $\text{EWT}(t)$ around ${\text{EWT}}^*$, and on the investigation of suitable ${\text{EWT}}^*$ scalings that result in an optimal combination of revenue and ridership for the SMoDS platform. The integration of dynamic pricing directly into dynamic routing, and the extension to the case where the server has little information regarding $f_X(x)$, are topics for future investigation as well. Acknowledgments {#acknowledgments .unnumbered} =============== The authors are grateful to Prof. Jinhua Zhao and Dr. Shenhao Wang from the MIT Urban Mobility Lab for valuable suggestions and discussions. This work was supported by the Ford-MIT Alliance. Appendix: Proofs of Properties {#appendix-proofs-of-properties .unnumbered} ============================== We prove the property directly from the definitions. $\forall {\gamma}_1, {\gamma}_2 \in {\mathbb{R}}$ with ${\gamma}_1 < {\gamma}_2$, and $R \in \mathbb{R}$, we first compare $U^s_R({\gamma}_1)$ with $U^s_R({\gamma}_2)$. $\forall u(\gamma)$ such that $u({\gamma}_1) = x+b{\gamma}_1 < R$ or $u({\gamma}_2) = x+b{\gamma}_2 \geq R$, the contribution to $U^s_R(\gamma)$ strictly decreases since $V[u(\gamma)]$ strictly decreases and the weighting remains the same.
$\forall u(\gamma)$ such that $u({\gamma}_1) = x+b{\gamma}_1 \geq R$ and $u({\gamma}_2) = x+b{\gamma}_2 < R$, the contribution to $U^s_R(\gamma)$ strictly decreases since $V[u(\gamma)]$ turns from nonnegative to negative and the weighting is positive. Hence $U^s_R({\gamma}_1) > U^s_R({\gamma}_2)$ and $A^s_R({\gamma}_1) = A^s_R({\gamma}_2)$, therefore $p^s_R({\gamma}_1) > p^s_R({\gamma}_2)$. Since ${\gamma}_1$ and ${\gamma}_2$ are arbitrarily chosen, $p^s_R(\gamma)$ strictly decreases with $\gamma$. We again prove directly from the definitions. $\forall {\gamma}_1, {\gamma}_2 \in {\mathbb{R}}$ with ${\gamma}_1 < {\gamma}_2$, and $R = \tilde{x}+b\gamma, \tilde{x} \in \mathbb{R}$, according to (\[V\_function\]), $A^s_R({\gamma}_1) < A^s_R({\gamma}_2)$. To calculate $U^s_R(\gamma)$, note that all possible outcomes $u(\gamma) = x+b\gamma$ shift by the same amount as $R$ does, so the contributions to $U^s_R(\gamma)$ remain the same, as both the weighting and $V[u(\gamma)]$ remain the same; therefore $U^s_R({\gamma}_1) = U^s_R({\gamma}_2)$, hence $p^s_R({\gamma}_1) > p^s_R({\gamma}_2)$. Since ${\gamma}_1$ and ${\gamma}_2$ are arbitrarily chosen, $p^s_R(\gamma)$ strictly decreases with $\gamma$. We prove the case where $U$ is discrete. According to (\[CPT\_calculate\_discrete\])-(\[V\_function\]), $U^s_{\bar{U}}(\lambda) = -\Big[\sum_{i=1}^k w_i {(\bar{U}-u_i)}^{{\beta}^-}\Big] \lambda + \sum_{i=k+1}^n w_i {(u_i - \bar{U})}^{{\beta}^+}$. Since $U$ is uncertain, $ -\Big[\sum_{i=1}^k w_i {(\bar{U}-u_i)}^{{\beta}^-}\Big] < 0$, so one can simply choose ${\lambda}^* = \frac{\sum_{i=k+1}^n w_i {(u_i - \bar{U})}^{{\beta}^+}}{\sum_{i=1}^k w_i {(\bar{U}-u_i)}^{{\beta}^-}} $. The proof of the continuous case follows the same procedure. Denote $\overline{{\Delta}^o} = A^o - [\bar{X} + b \overline{\gamma}]$ for ease of notation.
We first prove that there exists a unique $\overline{\gamma}$ such that $\overline{\gamma} > \underline{\gamma}$ and ${\overline{{\Delta}^o}} ^ {{\beta}^+} - U^s_{\bar{U}} = \overline{{\Delta}^o}$, and moreover that $\forall \gamma \in [\underline{\gamma}, \overline{\gamma})$, ${\overline{{\Delta}^o}} ^ {{\beta}^+} - U^s_{\bar{U}} > \overline{{\Delta}^o}$. Since $U^s_{\bar{U}} < 0$ and $\overline{\gamma} > \underline{\gamma}$, we have $\overline{{\Delta}^o} > 1$. Within the range $\overline{{\Delta}^o} \in (1, \infty)$, $\Big(\overline{{\Delta}^o} - { \overline{{\Delta}^o}} ^ {{\beta}^+}\Big)$ strictly increases, hence there exists a unique $\overline{{\Delta}^o}$, and therefore a unique $\overline{\gamma}$, such that ${\overline{{\Delta}^o}} ^ {{\beta}^+} - U^s_{\bar{U}} = \overline{{\Delta}^o}$. In addition, $\forall \gamma \in [\underline{\gamma}, \overline{\gamma})$, ${\overline{{\Delta}^o}} ^ {{\beta}^+} - U^s_{\bar{U}} > \overline{{\Delta}^o}$, again because $\overline{{\Delta}^o} - { \overline{{\Delta}^o}} ^ {{\beta}^+}$ strictly increases. Secondly, since $p^s_{\bar{U}} = \frac{e ^ {U^s_{\bar{U}}}}{e^{U^s_{\bar{U}}} + e^{A^s_{\bar{U}}}} = \frac{1}{1 + e^{A^s_{\bar{U}} - U^s_{\bar{U}}}}$ and $p^o = \frac{e ^ {U^o}}{e^{U^o} + e^{A^o}} = \frac{1}{1 + e^{A^o - U^o}}$, we have $p^s_{\bar{U}} < p^o \iff (A^s_{\bar{U}} - U^s_{\bar{U}}) > A^o - U^o$. Since $\gamma \geq \underline{\gamma}$ and $R={\bar{U}}$, it follows that $A^s_{\bar{U}} = {[A^o - {\bar{U}}]}^{{\beta}^+}$. Since $U^s_{\bar{U}} < 0$ and $U^o = {\bar{U}}$ by definition, $(A^s_{\bar{U}} - U^s_{\bar{U}}) > A^o - U^o \iff {[A^o - {\bar{U}}]}^{{\beta}^+} - U^s_{\bar{U}} > A^o - {\bar{U}}$. Since $\gamma \in [\underline{\gamma}, \overline{\gamma})$, the inequality holds. [^1]: Corresponding author. Email: guany@mit.edu. [^2]: The parametrization of $R$ is marked explicitly in the subscript as the remaining discussions are heavily related to the impact of reference points.
[^3]: Sources of uncertainty such as unexpected traffic jams are small compared with that of the SMoDS, and are hence assumed to be negligible. [^4]: The subplot on the right in each quadrant corresponds to the case where individuals are risk neutral in the gain and loss regimes separately, and loss neutral; then $\pi(\cdot)$ alone is sufficient to generate the fourfold pattern. [^5]: Other effects of CPT due to $\alpha, {\beta}^+, {\beta}^- < 1$ may result in complicated nonlinearities which might alleviate loss aversion. Therefore, this statement is valid when $\lambda$ is sufficiently large, such that loss aversion dominates, which is the case with the CPT parameters listed in Table 1.
--- author: - | [Loïc Marrec$^{1,2}$ and Sarika Jalan $^{1,3}$ ]{}\ [*$^1 $ Complex Systems Lab, Indian Institute of Technology Indore, Simrol Campus, Khandwa Road, Simrol, Indore 453552, India\ $^2$ Université Paris-Sud, 91405 Orsay Cedex, France\ $^3$ Centre for Biosciences and Biomedical Engineering, Indian Institute of Technology Indore, Simrol Campus, Khandwa Road, Simrol, Indore 453552, India*]{} title: | Supplementary material for\ ”Analysing degeneracies in networks spectra” --- This Supplementary Material is meant solely to provide the reader with an understanding of “Analysing degeneracies in complex networks” through examples. We then propose a derivation of the relation which links nodes belonging to the same $K*S$ structure, namely: $$\left\{ \begin{array}{ll} \sum_{i\in K_p} v_{i}=0 \mbox{ with } v_{i}\neq0 \mbox{ and }p=1,2,...,n_{K*S} \\ v_{j \in V\setminus \{K_1\cup K_2 \cup ... \cup K_{n_{K*S}}\}}=0 \end{array} \right. \label{Relation}$$ We also focus on the relation satisfied by nodes belonging to a linear combination, namely: $$\left\{ \begin{array}{ll} \sum_{i\in (L.C)_p} v_{i}=0 \mbox{ with } v_{i}\neq0 \mbox{ and }p=1,2,...,n_{L.C}\\ v_{j \in V\setminus \{(L.C)_1\cup (L.C)_2 \cup ... \cup (L.C)_{n_{L.C}}\}}=0 \end{array} \right. \label{Relation2}$$ Useful examples of graphs ========================= ![`(a)` has degeneracy at $0$ but no stars and `(b)` has degeneracy at $-1$ but no cliques.[]{data-label="Stars_cliques"}](diagram1.eps){width="40.00000%"} We mentioned in the manuscript that stars and cliques are not sufficient to explain the occurrence of the $0$ and $-1$ eigenvalues, respectively. The graphs in Figure \[Stars\_cliques\] illustrate this point. To go further, we focus on cliques: structures in which every pair of nodes is connected by an edge. A few examples are shown in Figure \[Cliques\] for different numbers of nodes.
Typically, the spectra of such graphs exhibit the $-1$ eigenvalue with multiplicity $N-1$, where $N$ is the number of nodes. However, if we take the third graph of Figure \[Cliques\] and cut one of its edges, we see that two $-1$ eigenvalues are retained even though the globally connected structure is destroyed (Figure \[Cut\_edge\]). This kind of observation motivated the search for other reasons behind the occurrence of degeneracies in network spectra. ![Complete graphs for different sizes. `(a)`, `(b)` and `(c)` have two, three and four $-1$ eigenvalues, respectively.[]{data-label="Cliques"}](diagram4.eps){width="50.00000%"} ![Complete graph of five nodes for which we cut one edge. The resulting graph exhibits two $-1$ eigenvalues.[]{data-label="Cut_edge"}](diagram5.eps){width="50.00000%"} Derivation of Eq (\[Relation\]) =============================== In order to prove equation (\[Relation\]), we consider the eigen-equation: $$(A+I)\textbf{v}=0$$ In the following, we take $R_{1}=R_{2}$; the extension to $R_{1}=R_{2}=...=R_{n}$ is straightforward. $$\begin{pmatrix} 1 & 1 & \cdots & a_{1,N} \\ 1 & 1 & \cdots & a_{1,N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1,N} & a_{1,N} & \cdots & 1 \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \\ \vdots \\ v_{N} \end{pmatrix} =0 \label{EQ2}$$ From the first two rows, we can write: $$v_{1}+v_{2}+\sum_{i=3}^{N}a_{1i}v_{i}=0 \label{A}$$ Extending to the general case, we obtain: $$\sum_{i\in K}v_{i}+\sum_{j\in S}v_{j}=0$$ Note that if **v** is an eigenvector associated with the $0$ eigenvalue of the matrix $A+I$, then **v** is also an eigenvector associated with the $0$ eigenvalue of $(A+I)^{2}$, $(A+I)^{3}$, etc.
So: $$(A+I)^{2}\textbf{v}=0$$ $$\begin{pmatrix} 2+\sum_{i=3}^{N}a_{1i}^{2} & 2+\sum_{i=3}^{N}a_{1i}^{2} & \cdots & 3a_{1N}+\sum_{i=3}^{N-1} a_{1i}a_{iN} \\ 2+\sum_{i=3}^{N}a_{1i}^{2} & 2+\sum_{i=3}^{N}a_{1i}^{2} & \cdots & 3a_{1N}+\sum_{i=3}^{N-1} a_{1i}a_{iN} \\ \vdots & \vdots & \ddots & \vdots \\ 3a_{1N}+\sum_{i=3}^{N-1} a_{1i}a_{iN} & 3a_{1N}+\sum_{i=3}^{N-1} a_{1i}a_{iN} & \cdots & 1+a_{1N}^2+\sum_{i=3}^{N-1}a_{iN}^{2} \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \\ \vdots \\ v_{N} \end{pmatrix} =0$$ The first two rows give: $$(2+\sum_{i=3}^{N}a_{1i}^{2})(v_{1}+v_{2})+\sum_{i=3}^{N}((3a_{1i}+\sum_{j=3,j\neq i}^{N} a_{1j}a_{ji})v_{i})=0 \label{A2}$$ In the trivial case $a_{13}=a_{14}=...=a_{1N}=0$, which corresponds to an isolated complete subgraph, equation (\[Relation\]) follows immediately. By using equations (\[A\]) and (\[A2\]), we obtain the following relations: $$\left\{ \begin{array}{ll} v_{1}+v_{2}=(2+\sum_{i=3}^{N}a_{1i}^{2})(v_{1}+v_{2}) \\ \sum_{i=3}^{N}a_{1i}v_{i}=\sum_{i=3}^{N}((3a_{1i}+\sum_{j=3,j\neq i}^{N} a_{1j}a_{ji})v_{i}) \\ \end{array} \right. \label{SYST1}$$ For these to hold with $v_{1}+v_{2}\neq 0$ and nonzero $v_{i}$, matching coefficients would require: $$\left\{ \begin{array}{ll} 1+\sum_{i=3}^{N}a_{1i}^{2}=0 \\ 2a_{1i}+\sum_{j=3,j\neq i}^{N} a_{1j}a_{ji}=0 \\ \end{array} \right. \label{SYST2}$$ However, in the non-trivial case, $1+\sum_{i=3}^{N}a_{1i}^{2}\neq 0$ and $2a_{1i}+\sum_{j=3,j\neq i}^{N} a_{1j}a_{ji}\neq 0$, so the system (\[SYST2\]) is inconsistent. This leads to: $$\left\{ \begin{array}{ll} v_{1}+v_{2}=0 \\ v_{3}=v_{4}=...=v_{N}=0 \\ \end{array} \right. \label{SYST3}$$ As a result, equation (\[Relation\]) is verified. Eq (\[Relation\]) and (\[Relation2\]) through row equivalence and Gaussian elimination ====================================================================================== As we have written in the manuscript, the conditions (ii) and (iii) make the rank of $A-\lambda I$ decrease.
More precisely, if $r$ is the number of linear combinations of rows and $N$ the size of the matrix, one obtains rank$(A-\lambda I)$=$N-r$. Mathematically, this results from the rank-nullity theorem, according to which: $$\mbox{rank}(A-\lambda I)+\mbox{null}(A-\lambda I)=N \\$$ where null denotes the dimension of the null space of $A-\lambda I$. Knowing this, we can take advantage of the row echelon form obtained by Gaussian elimination. Indeed, we will see that it sheds light on Eqs. \[Relation\] and \[Relation2\]. Before starting, we recall that the three elementary row operations (swapping the positions of two rows, multiplying a row by a nonzero scalar, and adding to one row a scalar multiple of another) do not affect the solution set of the associated system of linear equations. First, we consider condition (ii) through a simple but typical example. We take a duplicate node structure such that two nodes have the same neighbor. This kind of graph gives rise to one $0$ eigenvalue. In this case, we work on $A-\lambda I=A-0 \cdot I=A$, where $R_{1}=R_{2}$. $$A= \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}$$ Let us apply $L_{1} \leftrightarrow L_{3}$ and $L'_{2} \leftarrow L_{2}-L_{3}$: $$A \sim \begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \label{A+I}$$ The previous matrix is, up to a row permutation, in row echelon form. Since one row has all its entries equal to $0$, it clearly shows that rank$(A)=2$ whereas $N=3$. Let us come back to the eigen-equation with this equivalent form: $$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \\ v_{3} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$ It follows that $v_{1}+v_{2}=0$ and $v_{3}=0$, so Eq. \[Relation\] is verified. Now, we focus on condition (iii) using the example reported in Figure 1 (b) of the manuscript, for which $R_{1}+R_{4}=R_{2}+R_{5}$. This results in one $-1$ eigenvalue, which is why we work on $A+I$.
$$A+I= \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0\\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ \end{pmatrix}$$ Applying Gaussian elimination once more, one obtains: $$A+I \sim \begin{pmatrix} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix}$$ As in the previous section, this row equivalent matrix shows that rank$(A+I)=4$ whereas $N=5$. Let us consider the eigen-equation with this equivalent form: $$\begin{pmatrix} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \\ v_{3} \\ v_{4} \\ v_{5} \\ \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ \end{pmatrix}$$ It follows that $v_{1}+v_{2}+v_{4}+v_{5}=0$ and $v_{3}=0$. We recover Eq. \[Relation2\]. Application examples of Eqs. (\[Relation\]) and (\[Relation2\]) =============================================================== Having understood why Eqs. (\[Relation\]) and (\[Relation2\]) hold, let us apply them to other examples.\ First, consider an example illustrating relation (\[Relation\]): only two rows of an adjacency matrix satisfy condition (ii), say $R_{1}=R_{2}$, and all the other rows are linearly independent. Then, Eq. \[Relation\] tells us that $v_{1,2} \neq 0$, $v_{1}+v_{2}=0$ and $v_{i}=0$ for $i \neq 1,2$ in every eigenvector associated with the studied degenerate eigenvalue.\ Next, consider an example for relation (\[Relation2\]): assume that an adjacency matrix satisfies the relation $R_{1}+R_{2}=R_{3}+R_{4}$ and all the other rows are linearly independent. Then, Eq. \[Relation2\] shows that $v_{1,2,3,4} \neq 0$, $v_{1}+v_{2}+v_{3}+v_{4}=0$ and $v_{i}=0$ for $i \neq 1,2,3,4$ in every eigenvector associated with the studied degenerate eigenvalue.
In other words, this means that the rows $i$ with $v_i \neq 0$ are linearly dependent if and only if they belong to the same structure, namely the same linear combination of rows.\ In the case where there are two different linear combinations of rows, such as $R_{1}+R_{2}=R_{3}$ and $R_{4}+R_{5}=R_{6}$, we have $v_{1,2,3,4,5,6} \neq 0$, $v_{1}+v_{2}+v_{3}=0$, $v_{4}+v_{5}+v_{6}=0$ and $v_{i}=0$ for $i \neq 1,2,...,6$.
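The relations above are straightforward to verify numerically. The following sketch uses numpy to check three of the examples discussed in this Supplementary Material: the duplicate-node graph with $R_{1}=R_{2}$, the complete graph of five nodes with one edge cut, and the rank deficiency of $A+I$ for the five-node path graph:

```python
import numpy as np

# (1) Duplicate-node graph: two nodes share the same neighbour (R1 = R2).
A = np.array([[0., 0., 1.],
              [0., 0., 1.],
              [1., 1., 0.]])
vals, vecs = np.linalg.eigh(A)
v = vecs[:, np.argmin(np.abs(vals))]  # eigenvector of the 0 eigenvalue
# Eq. (1): v1 + v2 = 0 and v3 = 0
print(np.isclose(v[0] + v[1], 0.0), np.isclose(v[2], 0.0))

# (2) Complete graph K5 with one edge cut: two -1 eigenvalues survive.
K5 = np.ones((5, 5)) - np.eye(5)
K5[0, 1] = K5[1, 0] = 0.               # cut the edge (0, 1)
vals = np.linalg.eigvalsh(K5)
print(np.sum(np.isclose(vals, -1.0)))  # -> 2

# (3) Five-node path graph: R1 + R4 = R2 + R5 holds for A + I,
# so rank(A + I) = N - 1 = 4 and -1 is an eigenvalue of A.
P = np.diag(np.ones(4), 1)
P = P + P.T                            # adjacency matrix of the path
print(np.linalg.matrix_rank(P + np.eye(5)))  # -> 4
```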
--- abstract: 'Human actions comprise joint motions of articulated body parts or “gestures”. The human skeleton is intuitively represented as a sparse graph with joints as nodes and natural connections between them as edges. Graph convolutional networks have been used to recognize actions from skeletal videos. We introduce a part-based graph convolutional network (PB-GCN) for this task, inspired by Deformable Part-based Models (DPMs). We divide the skeleton graph into four subgraphs with joints shared across them and learn a recognition model using a part-based graph convolutional network. We show that such a model improves recognition performance compared to a model using the entire skeleton graph. Instead of using 3D joint coordinates as node features, we show that using relative coordinates and temporal displacements boosts performance. Our model achieves state-of-the-art performance on two challenging benchmark datasets, NTURGB+D and HDM05, for skeletal action recognition.' bibliography: - 'egbib.bib' title: 'Part-based Graph Convolutional Network for Action Recognition' --- Introduction {#sec:intro} ============ Recognizing human actions in videos is necessary for understanding them. Video modalities such as RGB, depth and skeleton provide different types of information for understanding human actions. The S-video (or skeletal) modality provides 3D joint locations, which is relatively high level information compared to RGB or depth. With the release of several multi-modal datasets [@Shahroudy_2016_CVPR; @liu2017pku; @7350781], action recognition from S-videos has gained significant traction recently [@liu2016spatio; @song2017end; @liu2017global; @zhang2017geometric; @ke2017new]. Graph convolutions [@niepert2016learning; @defferrard2016convolutional; @kipf2016semi] have been used to learn high level features from arbitrary graph structures.
State-of-the-art methods for action recognition from S-videos [@yan2018spatial; @li2018spatio] use graph convolutions, wherein the whole skeleton is treated as a single graph. It is, however, natural to think of the human skeleton as a combination of multiple body parts. A body-part based representation can learn the importance of each part and their relations across space and time. We present a model using a part-based graph convolutional network for recognizing actions from S-videos, using a novel part-based graph convolution scheme. The model attains better recognition performance than a model that treats the entire skeleton as a single graph. Current models for skeletal action recognition [@yan2018spatial; @li2018spatio] use 3D coordinates as features at each vertex. Geometric features such as relative joint coordinates and motion features such as temporal displacements can be more informative for action recognition. Optical flow helps in action recognition from RGB videos [@wang2016temporal] and the Manhattan line map helps in generating a 3D layout from a single image [@zou2018layoutnet]. Geometric [@zhang2017geometric] and kinematic [@zanfir2013moving] features have been used for skeletal action recognition before. Inspired by these observations, we use a geometric feature that encodes relative joint coordinates and a motion feature that encodes temporal displacements at each vertex in our part-based graph convolution model, to significant effect. The major contributions of this paper are: (i) formulation of a general part-based graph convolutional network (PB-GCN) which can be learned for any graph with well-known properties, and its application to recognize actions from S-videos, (ii) use of geometric and motion features in place of 3D joint locations at each vertex to boost recognition performance, and (iii) exceeding the state-of-the-art on the challenging benchmark datasets NTURGB+D and HDM05. The overview of our representation and signals is shown in Figure \[fig:overview\].
\[Figure \[fig:overview\]: overview of our representation and signals — relative joint coordinates and temporal displacements at each vertex, and the division of the skeleton into axial and appendicular part subgraphs (four-part and six-part divisions).\]
Related Work {#sec:relatedwork} ============ Non graph-based methods {#sec:2_1} ----------------------- Skeletal action recognition has been approached using techniques such as handcrafted feature encodings, complex LSTM networks, image encodings with pretrained CNNs, and non-Euclidean methods based on manifolds. Non-deep-learning methods worked well initially and proved the usefulness of several kinds of information extracted from S-videos, such as joint angles [@ofli2014sequence], distances [@xia2012view] and kinematic features [@zanfir2013moving]. These methods learn from hand-designed features using shallow models, which do not model the spatio-temporal properties of actions well and constrain learning capacity.
\[Figure \[fig:gconv\]: illustration of spatial graph convolution.\] On the other hand, LSTM-based methods have been used because S-videos can be thought of as time sequences of features. Spatio-temporal LSTMs [@liu2016spatio; @liu2017global], attention-based LSTMs [@song2017end] and simple LSTM networks with a part-based skeleton representation [@tao2015moving; @7298714] have been used. These methods either use complex LSTM models that must be trained very carefully, or use a part-based representation with a simple LSTM model. We propose a part-based graph convolutional network that has good learning capacity and uses a part-based representation, inheriting the good qualities of both types of aforementioned approaches. Image encodings of skeletons were proposed to facilitate the use of ImageNet-pretrained CNNs for extracting spatio-temporal features. Ke et al. [@ke2017new] generate images using relative coordinates, while Du et al. [@7486569] and Li et al. [@li20183d] propose body part-based image encodings. Due to the inherent differences between such image encodings and RGB images, it is almost impossible to interpret the learned filters. In contrast, our method is intuitive as it uses a graph-based representation for the human skeleton.
Manifold learning techniques have been used for skeletal action recognition, where actions are represented as curves on Lie groups [@vemulapalli2014human] and on a Riemannian manifold [@devanne20153]. Deep learning on these manifolds is difficult [@huang2017deep], while deep learning on graphs (another non-Euclidean domain) has developed recently [@defferrard2016convolutional; @kipf2016semi]. Our method uses a human skeleton graph and learns a model using a part-based graph convolutional network, exploiting the benefits of deep learning on graphs. Graph-based methods {#sec:2_2} ------------------- Representing S-videos as skeleton graph sequences for recognizing actions had not been explored until recently. Li and Leung [@li2017graph] construct graphs using a statistical variance measure dependent on joint distances and match them for recognition. Recently, Yan et al. [@yan2018spatial] and Li et al. [@li2018spatio] proposed spatio-temporal graph convolutional networks for action recognition from S-videos. Both methods construct graphs in which the human skeleton is treated as a single graph. Our formulation explores a partitioned skeleton graph with a part-based graph convolutional network, and we show that this improves recognition performance. We also use relative coordinates and temporal displacements as features at each vertex instead of 3D joint coordinates (see Figure \[fig:overview\](a)), which improves action recognition performance. Background {#sec:background} ========== A graph is defined as $\mathcal{G} = (\mathcal{V}, \hspace{0.1cm} \mathcal{E})$ where $\mathcal{V}$ is the set of vertices and $\mathcal{E} \subseteq (\mathcal{V} \times \mathcal{V})$ is the set of edges. $\mathbf{A}$ is the graph adjacency matrix, with $\mathbf{A}(i,j) = w, \hspace{0.1cm} w \in \mathbb{R} \setminus \{0\}$ if $(v_i, v_j) \in \mathcal{E}$ and $\mathbf{A}(i,j) = 0$ otherwise.
$\mathcal{N}_k: v \rightarrow \mathcal{V}$ defines the set of vertices in the $k$-neighborhood of $v$, i.e. the neighbors with shortest path length at most $k$ from vertex $v$. A labeling function $\mathbf{L}: \mathcal{V} \rightarrow \{0,1,\ldots,\mathcal{L}-1\}$ assigns a label to each vertex in a vertex set $\mathcal{V}$, where $\mathcal{L}$ is the number of unique labels. The adjacency matrix is normalized using the degree matrix as: $$\begin{aligned} \mathcal{D}(i,i) = \sum_{j} \mathbf{A}(i,j); \hspace{0.2cm} \mathbf{A}^{\mathbf{norm}} = \mathcal{D}^{-1/2}\mathbf{A}\mathcal{D}^{-1/2} \label{eq:normadj}\end{aligned}$$ Graph convolutions can be formulated using spectral graph theory [@defferrard2016convolutional] or spatial convolution [@niepert2016learning] on graphs. We focus on spatial convolutions in this paper as they resemble convolutions on regular grid graphs like RGB images [@niepert2016learning]. A graph CNN can then be formed by stacking multiple graph convolution units. Graph convolution (shown in Figure \[fig:gconv\]) can be defined as [@niepert2016learning]: $$\begin{aligned} \mathbf{Y}(v_i) &= \sum_{v_j \in \mathcal{N}_k(v_i)} \mathbf{W}(\mathbf{L}(v_j)) \mathbf{X}(v_j) \label{eq:npconv}\end{aligned}$$ where $v_i$ is the root vertex at which the convolution is centered (like the center pixel in an image convolution), $\mathbf{W}(\cdot)$ is a filter weight vector of size $\mathcal{L}$ indexed by the label assigned to neighbor $v_j$ in the $k$-neighborhood $\mathcal{N}_k(v_i)$, $\mathbf{X}(v_j)$ is the input feature at $v_j$, and $\mathbf{Y}(v_i)$ is the convolved output feature at root vertex $v_i$.
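As a concrete illustration of Eq. \[eq:npconv\], here is a minimal numpy sketch on a toy path graph; the graph, neighborhoods, labels, features and weights below are invented for illustration and are not part of the paper's model.

```python
import numpy as np

# Toy 4-vertex path graph; neighborhoods, labels, features and weights
# are illustrative only.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # N_1(v_i), k = 1
labels = {0: 0, 1: 0, 2: 0, 3: 0}                        # L(v_j), one label
X = np.array([[1., 0.], [0., 1.], [1., 1.], [0., 0.]])   # input features
W = {0: np.array([[0.5, -0.5], [0.5, 0.5]])}             # one weight per label

# Eq. (npconv): Y(v_i) = sum_{v_j in N_k(v_i)} W(L(v_j)) X(v_j)
Y = np.stack([sum(W[labels[j]] @ X[j] for j in neighbors[i])
              for i in range(4)])
print(Y.shape)  # (4, 2)
```

With a single label, every neighbor shares one weight matrix, which is the configuration the paper later uses spatially.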
Equation \[eq:npconv\] can be written in terms of the adjacency matrix as: $$\begin{aligned} \mathbf{Y}(v_i) &= \sum_{j} \hspace{0.1cm} \mathbf{A}^{\mathbf{norm}}(i,j) \hspace{0.1cm} \mathbf{W}(\mathbf{L}(v_j)) \hspace{0.1cm} \mathbf{X}(v_j) \label{eq:npconvadj}\end{aligned}$$ $\mathbf{A}^{\mathbf{norm}}(i,j)$ defines the neighbors at distance $1$; hence, Equation \[eq:npconv\] captures a more general form of convolution by using the $k$-order neighborhood $\mathcal{N}_k(v_i)$.

![[]{data-label="fig:pstgcn"}](images/camera_ready/sneigh.pdf "fig:"){width="12.00000%"} ![[]{data-label="fig:pstgcn"}](images/camera_ready/tneigh.pdf "fig:"){width="12.00000%"} ![[]{data-label="fig:pstgcn"}](images/camera_ready/neigh_st.pdf "fig:"){width="15.00000%"} ![[]{data-label="fig:pstgcn"}](images/camera_ready/headntorso.pdf "fig:"){width="15.00000%"} ![[]{data-label="fig:pstgcn"}](images/camera_ready/sconv.pdf "fig:"){width="15.00000%"} ![[]{data-label="fig:pstgcn"}](images/camera_ready/fagg.pdf "fig:"){width="15.00000%"} ![[]{data-label="fig:pstgcn"}](images/camera_ready/tconv.pdf "fig:"){width="15.00000%"}

Part-based Graph {#sec:3_1}
----------------

Graphs representing real-world manifolds can often be thought of as being made up of several parts. For instance, a graph representing a complex molecule consists of several simpler structures; a protein biomolecule, for example, can be divided into the polypeptide chains that make up the complex.
Similarly, the human body can be visualized as a set of connected rigid parts, much like a deformable part-based model [@Felzenszwalb:2005]. The graph of the human skeleton can thus be divided into parts, where each subgraph represents a part of the body. In general, a part-based graph can be constructed as a combination of subgraphs, where each subgraph has certain properties that define it. Consider a graph $\mathcal{G}$ that has been divided into $n$ partitions. Formally: $$\begin{aligned} \mathcal{G} = \bigcup_{p \in \{1,\ldots,n\}} \mathcal{P}_p \hspace{0.1cm} | \hspace{0.1cm} \mathcal{P}_p = (\mathcal{V}_p, \mathcal{E}_p) \label{eq:partgraph}\end{aligned}$$ $\mathcal{P}_p$ is partition (or subgraph) $p$ of the graph $\mathcal{G}$. We consider scenarios in which the partitions can share vertices or have edges connecting them. We next explain how convolution is defined on such a part-based graph.

Part-based Graph Convolutions {#sec:3_2}
-----------------------------

In essence, graph convolutions over parts aim to capture high-level properties of parts and to learn the relations between them. In a deformable part-based model, different parts are identified and the relations between them are learned through the deformation of their connections. Similarly, a graph convolution over a part identifies the properties of that subgraph, and an aggregation across subgraphs learns the relations between them. For a part-based graph, convolutions for each part are performed separately and the results are combined using an aggregation function $\mathcal{F}_{agg}$.
Using $\mathcal{F}_{agg}$ over edges across partitions: $$\begin{aligned} \mathbf{Y}_p(v_i) &= \sum_{v_j \in \mathcal{N}_{kp}(v_i)} \mathbf{W}_p(\mathbf{L}_p(v_j)) \mathbf{X}_p(v_j), \hspace{0.1cm} p \in \{1,\ldots,n\} \label{eq:pconv}\\ \mathbf{Y}(v_i) &= \mathcal{F}_{agg}(\mathbf{Y}_{p1}(v_i), \mathbf{Y}_{p2}(v_j)) \hspace{0.1cm} | \hspace{0.1cm} (v_i, v_j) \in \mathcal{E}_{(p1, p2)}, \hspace{0.1cm} (p1, p2) \in \{1,\ldots,n\} \times \{1,\ldots,n\} \label{eq:peagg}\end{aligned}$$ Using $\mathcal{F}_{agg}$ for common vertices across partitions: $$\begin{aligned} \mathbf{Y}(v_i) &= \mathcal{F}_{agg}(\mathbf{Y}_{p1}(v_i), \mathbf{Y}_{p2}(v_i)) \hspace{0.1cm} | \hspace{0.1cm} (p1, p2) \in \{1,\ldots,n\} \times \{1,\ldots,n\} \label{eq:pvagg}\end{aligned}$$ The convolution parameters $\mathbf{W}_p$ can be shared across parts or kept separate, while the neighbors of $v_i$ only in that part $(\mathcal{N}_{kp}(v_i))$ are considered. In order to combine the information across parts, the function $\mathcal{F}_{agg}$ combines information at shared vertices (equation \[eq:pvagg\]) or shares information through edges crossing parts (equation \[eq:peagg\], $\mathcal{E}_{(p1,p2)}$ contains all edges connecting parts p1 and p2), according to the partition configuration. A sophisticated $\mathcal{F}_{agg}$ can be employed to make the model powerful. Using graph convolutions, part-based graph models can learn rich representations and we demonstrate the strength of this model through application to action recognition from S-videos. Spatio-temporal Part-based Graph Convolutions {#sec:stmodel} ============================================= The S-videos are represented as spatio-temporal graphs. In order to include the temporal dimension, corresponding joints in each part are connected temporally. Figure \[fig:pstgcn\](b) shows the spatio-temporal graph for *torso* over *five* frames. 
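Returning to the part-wise convolution of Eq. \[eq:pconv\] and the shared-vertex aggregation of Eq. \[eq:pvagg\], the mechanics can be sketched for two toy parts that share one vertex; the part sizes, adjacencies and weights here are invented, and $\mathcal{F}_{agg}$ is taken to be a plain sum for simplicity.

```python
import numpy as np

np.random.seed(0)
C_in, C_out = 3, 4
# Two toy parts sharing vertex 2 (a joint on the part boundary).
parts = {1: [0, 1, 2], 2: [2, 3, 4]}
A = {p: np.ones((3, 3)) / 3.0 for p in parts}          # toy normalized adjacency
W = {p: np.random.randn(C_out, C_in) for p in parts}   # per-part weights W_p
X = np.random.randn(5, C_in)                           # features for 5 vertices

# Eq. (pconv): convolve each part separately
Y_part = {p: A[p] @ (X[idx] @ W[p].T) for p, idx in parts.items()}

# Eq. (pvagg): aggregate at shared vertices; F_agg here is a plain sum
Y = np.zeros((5, C_out))
for p, idx in parts.items():
    for row, v in enumerate(idx):
        Y[v] += Y_part[p][row]
print(Y.shape)  # (5, 4)
```

Only the shared vertex receives contributions from both parts; all other vertices keep their single-part output.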
Adapting the select-assemble-normalize (<span style="font-variant:small-caps;">patchy-san</span>) approach proposed by Niepert et al. [@niepert2016learning], we present an overview of the convolution formulation for our spatio-temporal graph by extending the ideas of section \[sec:3\_2\]. For an in-depth understanding, we refer the reader to [@niepert2016learning]. We perform a spatial convolution on each partition following equation \[eq:pconv\], combine the convolved partitions using $\mathcal{F}_{agg}$, and perform a temporal convolution on the graph obtained by aggregating the partitions. In effect, we spatially convolve each partition independently within each frame, aggregate the partitions at each frame, and then convolve along the temporal dimension of the aggregated graph. For one possible partitioning of the human skeleton, this process is shown in Figure \[fig:pstgcn\]: (c) spatial convolution at a vertex common to torso and head, (d) spatial convolutions in different frames, (e) applying $\mathcal{F}_{agg}$ on head + torso, and (f) convolution along the temporal dimension of the combined graph. We first define the spatial and temporal neighborhoods of a vertex in the spatio-temporal graph and assign labels to the vertices in these neighborhoods, as required to perform convolutions. For each vertex, we use the 1-neighborhood $(k = 1)$ in the spatial dimension $(\mathcal{N}_1)$, as the skeleton graph is not very large, and a $\tau$-neighborhood $(k = \tau)$ in the temporal dimension $(\mathcal{N}_{\tau})$. Figure \[fig:pstgcn\](a) (dashed polygons) shows the spatial & temporal neighborhood for a **root** vertex.
The different neighborhood sets for our model are defined as ($\mathbf{d}(v_i, v_j)$ = length of the shortest path between $v_i$ and $v_j$): $$\begin{aligned} \mathcal{N}_{1p}(v_i) &= \{v_j \hspace{0.1cm} | \hspace{0.1cm} \mathbf{d}(v_i, v_j) \hspace{0.1cm} \leq \hspace{0.1cm} 1, \hspace{0.1cm} v_i, v_j \in \mathcal{V}_p\} \label{eq:sneigh} \\ \mathcal{N}_{\tau}(v_{i{t_a}}) &= \{v_{i{t_b}} \hspace{0.1cm} | \hspace{0.1cm} \mathbf{d}(v_{i{t_a}}, v_{i{t_b}}) \hspace{0.1cm} \leq \hspace{0.1cm} \left\lfloor \frac{\tau}{2} \right\rfloor\} \label{eq:tneigh}\end{aligned}$$ where $t_a$ and $t_b$ represent two time instants and $p \in \{1,\ldots,n\}$ is the partition index. The set of vertices $\mathcal{V}_p$ differs for each part, with some vertices shared between parts (Figure \[fig:overview\](c)). As the temporal convolution is performed on the aggregated spatio-temporal graph, $\mathcal{N}_{\tau}$ is not part-specific. Figure \[fig:pstgcn\](a) shows the spatial and temporal neighborhoods for a **root** vertex in *torso*. For ordering vertices in the receptive fields (or neighborhoods), we use a single label spatially $(\mathbf{L}_S: \mathcal{V} \rightarrow \{0\})$ to weight vertices in $\mathcal{N}_{1p}$ of each vertex equally, and $\tau$ labels temporally $(\mathbf{L}_T: \mathcal{V} \rightarrow \{0,\ldots,\tau-1\})$ to weight vertices across frames in $\mathcal{N}_{\tau}$ differently.
The labeling functions are defined as: $$\begin{aligned} \mathbf{L}_{S}(v_{jt}) &= \{0 \hspace{0.1cm} | \hspace{0.1cm} v_{jt} \in \mathcal{N}_{1p}(v_{it})\} \label{eq:slabel} \\ \mathbf{L}_{T}(v_{i{t_b}}) &= \{((t_b - t_a) + \left\lfloor \frac{\tau}{2} \right\rfloor) \hspace{0.1cm} | \hspace{0.1cm} v_{i{t_b}} \in \mathcal{N}_{\tau}(v_{i{t_a}})\} \label{eq:tlabel}\end{aligned}$$ Using the labeled spatial and temporal receptive fields, we define the spatial and temporal convolutions as (adapted from [@kipf2016semi]): $$\begin{aligned} \mathbf{Y}_p(v_{it}) &= \sum_{v_{jt} \hspace{0.02cm} \in \hspace{0.02cm} \mathcal{N}_{1p}(v_{it})}\mathbf{A}_p(i, j) \hspace{0.05cm} \mathbf{Z}_p(v_{jt}) \hspace{0.1cm} | \hspace{0.1cm} p \in \{1,\ldots,n\} \label{eq:sconvpart} \\ \mathbf{Z}_p(v_{jt}) &= \mathbf{W}_{p}(\mathbf{L}_{S}(v_{jt})) \hspace{0.1cm} \mathbf{X}_p(v_{jt}) \label{eq:xtoz} \\ \mathbf{Y}_{S}(v_{it}) &= \mathcal{F}_{agg}(\{\mathbf{Y}_1(v_{it}), \ldots, \mathbf{Y}_n(v_{it})\}) \label{eq:sconvfagg} \\ \mathbf{Y}_{T}(v_{i{t_a}}) &= \sum_{v_{j{t_b}} \hspace{0.02cm} \in \hspace{0.02cm} \mathcal{N}_{\tau}(v_{i{t_a}})} \mathbf{W}_{T}(\mathbf{L}_{T}(v_{i{t_b}})) \hspace{0.05cm} \mathbf{Y}_{S}(v_{i{t_b}}) \label{eq:tconv}\end{aligned}$$ where $\mathbf{A}_p$ is the normalized adjacency matrix (as explained in section \[sec:background\]) for part $p$. $\mathbf{L}_{S}$ is the same for each part, but $\mathcal{N}_{1p}$ is part-specific. $\mathbf{W}_{p} \in \mathbb{R}^{C^{\prime} \times C \times 1 \times 1}$ is a part-specific channel transform kernel (a pointwise operation) and $\mathbf{W}_{T} \in \mathbb{R}^{C^{\prime} \times C^{\prime} \times \tau \times 1}$ is the temporal convolution kernel. $\mathbf{Z}_p$ is the output of applying $\mathbf{W}_p$ to the input features $\mathbf{X}_p$ at each vertex.
$\mathbf{Y}_{S}$ is the output obtained after aggregating all partition graphs at one frame, and $\mathbf{Y}_{T}$ is the output of applying the temporal convolution to the $\mathbf{Y}_{S}$ outputs of $\tau$ frames. We use a weighted sum fusion as our $\mathcal{F}_{agg}$: $$\begin{aligned} \mathcal{F}_{agg}(\{\mathbf{Y}_1,\ldots,\mathbf{Y}_n\}) &= \sum_{i} \mathbf{W}_{agg}(i) \hspace{0.05cm} \mathbf{Y}_i \label{eq:fagg}\end{aligned}$$ The human skeleton can be divided into two major components: (1) the axial skeleton and (2) the appendicular skeleton. The body parts included in these two components are shown in Figure \[fig:overview\](b). The human skeleton can be divided into parts based on these components. Different division schemes are shown in Figures \[fig:overview\](b), \[fig:overview\](c) and \[fig:overview\](d), and we use these schemes in our experiments to test our PB-GCN. For the final representation, we divide the human skeleton into *four* parts: **head**, **hands**, **torso** and **legs**, corresponding to a division scheme in which each of the axial and appendicular skeletons is divided into upper and lower components, as illustrated in Figure \[fig:overview\](c). We consider the left and right parts of hands and legs together in order to be agnostic to the *laterality* [@lateral] (handedness / footedness) of the human performing an action. To show how being agnostic to laterality is helpful, we also divide the upper and lower components of the appendicular skeleton into left and right (shown in Figure \[fig:overview\](d)), resulting in six parts, and report results for this scheme. To cover all natural connections between joints in the skeleton graph, we include an overlap of at least one joint between two adjacent parts. For example, in Figure \[fig:overview\](c), the shoulder joints are common between the head and hands. For the lower appendicular skeleton (viz. legs), we also include the joint at the base of the spine to get a good overlap with the lower axial skeleton.
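The spatial-then-temporal pipeline of Eqs. \[eq:sconvpart\]–\[eq:tconv\], with the weighted-sum aggregation of Eq. \[eq:fagg\], can be sketched in numpy as below; the shapes, partitions and random weights are toy values for illustration, not the trained model.

```python
import numpy as np

np.random.seed(1)
T, V, C, Cp, tau = 7, 5, 3, 4, 3      # frames, vertices, channels, kernel size
parts = {1: [0, 1, 2], 2: [2, 3, 4]}  # toy parts sharing vertex 2
A = {p: np.ones((3, 3)) / 3.0 for p in parts}       # normalized adjacencies A_p
Wp = {p: np.random.randn(Cp, C) for p in parts}     # pointwise kernels, Eq. (xtoz)
w_agg = {1: 0.5, 2: 0.5}                            # W_agg weights, Eq. (fagg)
WT = np.random.randn(Cp, Cp, tau)                   # temporal kernel W_T

X = np.random.randn(T, V, C)
# Spatial step per frame: Eqs. (sconvpart) + (sconvfagg)/(fagg)
YS = np.zeros((T, V, Cp))
for t in range(T):
    for p, idx in parts.items():
        Z = X[t, idx] @ Wp[p].T       # Eq. (xtoz)
        Yp = A[p] @ Z                 # Eq. (sconvpart)
        YS[t, idx] += w_agg[p] * Yp   # weighted-sum aggregation

# Temporal step, Eq. (tconv): tau-tap convolution over frames per vertex
pad = tau // 2
YT = np.zeros((T, V, Cp))
for t in range(T):
    for dt in range(-pad, pad + 1):
        if 0 <= t + dt < T:
            YT[t] += YS[t + dt] @ WT[:, :, dt + pad].T
print(YT.shape)  # (7, 5, 4)
```

Note the order of operations: partitions are convolved and aggregated within each frame first, and only the aggregated features are convolved in time, mirroring the description above.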
#### Architecture and Implementation

We represent each subgraph by its adjacency matrix, normalized by the corresponding degree matrix $\mathcal{D}$. Our model takes as input a tensor of features for each vertex in the spatio-temporal graph of an S-video and outputs a vector of class scores for the video. The architecture of the graph convolutional network is similar to Yan et al. [@yan2018spatial] and consists of $9$ spatio-temporal graph convolution units (each unit with the *four* $\mathbf{W}_{p}$ kernels, *one* $\mathbf{W}_{T}$ kernel and a residual connection) with an initial spatio-temporal head unit, based on a Resnet-like model [@he2016deep]. The first three layers have 64 output channels, the next three have 128, and the last three have 256. We also use a learnable edge weight mask for learning edge weights in each subgraph [@yan2018spatial]. We use the Pytorch framework [@paszke2017automatic] for our implementation. The code and models are made publicly available: [`https://github.com/dracarys983/pb-gcn`](https://github.com/dracarys983/pb-gcn).

Geometric & Kinematic Signals {#sec:signals}
=============================

Yan et al. [@yan2018spatial] use the 3D coordinates of each joint directly as the signal at each graph node. Relative coordinates [@zhang2017geometric; @ke2017new] and temporal displacements [@zanfir2013moving] of joints have been used earlier for action recognition. Derived information like optical flow and Manhattan line maps has also proven useful for RGB images [@wang2016temporal; @zou2018layoutnet]. Even a CNN framework can be more effective and efficient if relevant derived information is supplied as input to the network. We use a signal at each node that combines temporal displacements across time and relative coordinates with respect to the shoulders and hips [@ke2017new]. This provides translation invariance [@verma2018feastnet] and significantly improves skeletal action recognition performance.
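A minimal sketch of computing the two signals from raw joint locations follows; the reference-joint indices `refs` are placeholders, since the actual shoulder and hip indices depend on the dataset's skeleton layout.

```python
import numpy as np

np.random.seed(2)
T, V = 10, 25                 # frames, joints (25 joints in NTURGB+D)
J = np.random.randn(T, V, 3)  # raw 3D joint locations J_loc
# Placeholder anchor joints standing in for the shoulders and hips.
refs = [4, 8, 12, 16]

# Geometric signal D_R: coordinates relative to each reference joint
D_R = np.concatenate([J - J[:, [r], :] for r in refs], axis=-1)  # (T, V, 12)

# Kinematic signal D_T: temporal displacements between consecutive frames
D_T = np.zeros_like(J)
D_T[1:] = J[1:] - J[:-1]                                         # (T, V, 3)

# Per-vertex input feature: concatenation D_R || D_T
feat = np.concatenate([D_R, D_T], axis=-1)
print(feat.shape)  # (10, 25, 15)
```

Shifting every joint by the same offset leaves `D_R` unchanged, which is the translation invariance claimed above.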
Figure \[fig:overview\](a) illustrates the computation of the two signals for a single skeleton video frame. We show the effect of relative joint coordinates (geometric signal) and temporal displacements (kinematic signal) individually, and the performance improvement obtained by combining these signals, for a baseline one-part model as well as our four-part model in Table \[tab:ablation\](b). The improvement in performance obtained using the geometric and kinematic signals is noteworthy.

Experimental Setup and Results {#sec:exp_res}
==============================

We use SGD as the optimizer and train for 80 epochs (NTURGB+D) / 120 epochs (HDM05). We set the initial learning rate to 0.1, and all experiments are run on a cluster with $4$ Nvidia GTX 1080Ti GPUs. The batch size is set to 64. The learning rate decay schedule (decay by a factor of 0.1 at epochs 20, 50 and 70 for NTURGB+D, and at epoch 80 for HDM05) is finalized using a validation set. No augmentation is performed for any of the experiments, consistent with the graph-based method [@yan2018spatial]. We perform ablation studies on the large-scale NTURGB+D dataset (Table \[tab:ablation\]) and then compare with the state of the art on both HDM05 and NTURGB+D using the best configuration of our model (Table \[tab:sota\]).
(a) Performance (%) with different numbers of parts:

  Parts        CS         CV
  ------ ---------- ----------
  One          79.4       87.9
  Two          80.2       88.4
  Four    **82.8**   **90.3**
  Six          81.4       89.1

  : []{data-label="tab:ablation"}

(b) Performance (%) with various signals for the best (Four) and worst (One) number of parts:

  Signal                               One: CS     One: CV    Four: CS    Four: CV
  ----------------------------------- ---------- ---------- ---------- ----------
  $J_{loc}$                                 79.4       87.9       82.8       90.3
  $\mathbf{D}_{R}$                          83.6       87.7       84.6       88.4
  $\mathbf{D}_{T}$                          84.3       91.6       85.4       92.6
  $\mathbf{D}_{R} || \mathbf{D}_{T}$    **85.6**   **91.8**   **87.5**   **93.2**

  : []{data-label="tab:ablation"}

Datasets {#sec:5_1}
--------

#### NTURGB+D [@Shahroudy_2016_CVPR]

This is currently the largest RGBD dataset for action recognition to the best of our knowledge. It has 56,880 video sequences shot with three Microsoft Kinect v2 cameras from different viewing angles. There are 60 classes among the action sequences, and 3D coordinates of 25 joints are provided for each tracked human skeleton. The large variation in viewpoint, intra-class subjects and sequence lengths makes this dataset challenging. We remove 302 of the captured samples having missing or incomplete skeleton data. The protocol of Shahroudy et al. [@Shahroudy_2016_CVPR] is followed for comparisons with previous methods.

#### HDM05 [@Muller07documentationmocap]

This dataset was captured using an optical marker-based Vicon system. It contains 2337 action sequences ranging across 130 motion classes performed by *five* actors, and currently has the largest number of motion classes. The actors are named "bd", "bk", "dg", "mm" and "tr", and 31 joints are annotated for each skeleton. The dataset is challenging due to intra-class variations induced by multiple realizations of the same action and the large number of motion classes. We follow the protocol given in [@huang2017riemannian], which is used by recent deep learning methods.
Discussion {#sec:5_2}
----------

#### <span style="font-variant:small-caps;">Part-based graph model</span>:

Our motivation for a part-based graph model derives primarily from the fact that human actions are made up of "gestures", each representing the motion of a body part. The seminal success of DPMs [@Felzenszwalb:2005] in detecting humans in images reinforces this motivation. We discuss the effect of the proposed spatio-temporal part-based graph model below.

#### (a) How many parts to have?

We start with a coarse-grained scheme in which the entire skeleton is a single part and progress towards finer representations. The different partitions are: *two parts*, dividing the skeleton into axial and appendicular skeletons; *four parts*, as explained in section \[sec:stmodel\]; and *six parts*, separating left and right in hands and legs. The input feature at each vertex is the 3D coordinate of the corresponding joint. From Table \[tab:ablation\](a), we can see that using two parts improves over one, and four improves over two. This shows that partitioning the skeleton graph into subgraphs with useful properties helps. However, dividing the upper and lower appendicular skeleton of the four-part scheme into left and right does not improve performance, in line with our intuition about *laterality* mentioned in section \[sec:stmodel\]. This experiment suggests that a part-based model improves performance over a single part, and that being agnostic to laterality is helpful. Our final model uses the *four*-part division of the human skeleton.

#### (b) Comparison to graph-based models

From Table \[tab:sota\](a) and Table \[tab:ablation\](b), it can be seen that our part-based model performs better than the graph-based model of Yan et al. [@yan2018spatial] even when using $J_{loc}$ as the feature at each vertex.
The graph construction in [@yan2018spatial] uses a spatial partitioning scheme for their final model, which divides the skeleton graph edge set into several partitions, while the vertex set has no partitions and contains all the joints. In contrast, we divide the *entire* skeleton into *smaller* parts resembling human body parts, and hence use a different edge set and vertex set for each part. Compared to the graph-based model of Li et al. [@li2018spatio], our model performs significantly better on NTURGB+D as well as HDM05. However, it is possible that this is partly because the network in [@li2018spatio] has far fewer layers than ours (2 vs. 9). Our model outperforms both previous graph-based models proposed for skeleton action recognition on the two datasets.

#### <span style="font-variant:small-caps;">Geometric + Kinematic signals</span>: {#geometric-kinematic-signals}

Providing a convolutional network with an explicit cue that is significant for the task at hand, such as optical flow for action recognition from RGB videos [@NIPS2014_5353], helps it learn a richer representation by focusing on that cue. This motivates the use of geometric and kinematic features for skeletal action recognition. For the final configuration of our model, we concatenate the geometric and kinematic signals.

#### (a) Kinematic: temporal displacements

Temporal displacements provide information about the amount of motion occurring between two frames. This information is synonymous with the 3D scene flow of a very sparse set of points. We hypothesize that these displacements provide explicit motion information (like optical flow), which makes the model treat displacements as strong features and learn from them. The improvement in performance from this signal can be seen in Table \[tab:ablation\](b), for both the four-part and the one-part model, across both splits of NTURGB+D.
#### (b) Geometric: relative coordinates

These provide translation-invariant features, as explained in [@verma2018feastnet], and have been used effectively by Ke et al. [@ke2017new] to encode skeletons into images. Also, Zhang et al. [@zhang2017geometric] used relative coordinates as a geometric feature that performs much better than 3D joint locations with a simple stacked LSTM network. We can see the improvements in performance provided by relative coordinates in Table \[tab:ablation\](b) for both the global (one-part) and four-part models, which are the worst- and best-performing models according to Table \[tab:ablation\](a).

(a) NTURGB+D (%):

  Method                            CS         CV
  ----------------------------- ---------- ----------
  ST Attention [@song2017end]         73.4       81.2
  GCA-LSTM [@liu2017global]           74.4       82.8
  TCN [@kim2017interpretable]         74.3       83.1
  VA-LSTM [@zhang2017view]            79.4       87.6
  CNN + MTLN [@ke2017new]             79.6       84.8
  Deep STGC [@li2018spatio]           74.9       86.3
  STGCN [@yan2018spatial]             81.5       88.3
  PB-GCN                          **87.5**   **93.2**

  : []{data-label="tab:sota"}

(b) HDM05 (%):

  Method                              Accuracy
  ----------------------------------- --------------------------
  SPDNet [@huang2017riemannian]       61.45 $\pm$ 1.12
  Lie Group [@vemulapalli2014human]   70.26 $\pm$ 2.89
  LieNet [@huang2017deep]             75.78 $\pm$ 2.26
  P-LSTM [@Shahroudy_2016_CVPR]       73.42 $\pm$ 2.05
  Deep STGC [@li2018spatio]           85.29 $\pm$ 1.33
  STGCN [@yan2018spatial]             82.13 $\pm$ 2.39
  PB-GCN                              **88.17** $\pm$ **0.99**

  : []{data-label="tab:sota"}

Comparison to state of the art {#sec:5_3}
------------------------------

#### <span style="font-variant:small-caps;">NTURGB+D</span>:

For this dataset, we outperform all previous state-of-the-art methods by a large margin. Even without using the signals introduced in section \[sec:signals\], we outperform the previous methods, as can be seen in Table \[tab:ablation\](b) ($J_{loc}$ results).
We outperform the previous state-of-the-art graph-based method of Yan et al. [@yan2018spatial] (STGCN), which is also, to the best of our knowledge, the state of the art for skeleton-based action recognition, by margins of \~6% and \~5% on the two protocols.

#### <span style="font-variant:small-caps;">HDM05</span>:

This dataset is \~20x smaller than NTURGB+D but contains more than twice as many classes. Its sequences are longer, and some action classes have only one sequence [@Cho2014ClassifyingAV]. The protocol of [@huang2017riemannian] is therefore very challenging; we obtain state-of-the-art results on it with our model. We outperform the previous state of the art, Deep STGC [@li2018spatio], a network based on spectral graph convolutions for skeleton action recognition, by \~3% in mean accuracy.

Conclusion {#sec:conclusion}
==========

In this paper, we define a partition of the skeleton graph on which spatio-temporal convolutions are formalized through a part-based GCN for the task of action recognition. Such a part-based GCN learns the relations between parts and the importance of each part in human actions more effectively than a model that considers the entire body as a single graph. We also demonstrate the benefit of giving the convolutional model explicit cues that are significant for the task at hand, such as relative coordinates and temporal displacements for skeletal action recognition. As a result, our model achieves state-of-the-art performance on two challenging action recognition datasets. As future work, we would like to explore the use of part-based graph models for tasks other than action recognition, such as object detection and measuring image similarity.
--- abstract: 'In this Supporting Information document, we present further information and calculations supporting the conclusions of the main Letter. In particular, we discuss the level structure of the NV$^{-}$ center, and give an explicit derivation of the XY spin chain Hamiltonian used in the main text. Furthermore, we show that fault-tolerant two-qubit gates require unrealistically long $T_2$ times. Finally, we present our results for $T_1$ processes in longer chains and the effects of coupling strength disorder.' author: - Yuting Ping - 'Brendon W. Lovett' - 'Simon C. Benjamin' - 'Erik M. Gauger' title: 'Supporting Information for Practicality of spin chain ‘wiring’ in diamond quantum technologies' --- Electronic spin qubit of NV$^-$ defects ======================================= The nitrogen-vacancy colour defect in diamond consists of a substitutional nitrogen atom and an adjacent vacancy. In its negatively charged state, the NV$^-$ center traps an excess electron and possesses a paramagnetic ground state ($S=1$) with extraordinarily long spin lifetime. For each NV$^-$ center, the spin-triplet ground state $^3$A consists of the $m_s = 0$ and the degenerate (by C$_{3v}$ symmetry) $m_s = \pm 1$ sublevels, split by  [@oort88; @redman91]. An external magnetic field lifts the degeneracy between the $m_s = + 1$ and the $m_s = -1$ sublevels. The optical transitions between the ground and the excited $^{3}$E triplet states are predominantly spin-conserving [@davies76; @tamarat08], but an intersystem crossing (ISC) occurs via the singlet $^1$A state [@manson06; @rogers08] (see Fig. \[fig:levels\]). ![Simplified electronic structure of the NV$^{-}$ colour center. The zero-field splitting between the $m_s = 0$ and the $m_s = \pm 1$ ground triplet is , and transitions between different spin levels can be effected through resonant microwave pulses [@oort88; @redman91; @jelezko04]. 
The $^{1}$A singlet level is metastable and provides a route for intersystem crossing relaxation (red arrows) from the excited triplet to the ground state: Thick red lines have associated ISC rates that are several orders of magnitude faster than the thinner red line, and dashed transitions can be considered to be negligible [@manson06; @rogers08]. []{data-label="fig:levels"}](FigureS1.eps){width="2.3in"} To use the electron spin of an NV$^-$ center as a quantum bit, one can, for example, encode the states $\ket{0}$ and $\ket{1}$ into the $m_s = 0$ and $m_s = +1$ sublevels of the ground triplet, respectively. Optical pumping then polarizes the qubit into the $\ket{0}$ state, and resonant microwave pulses enable single qubit operations, such as the creation of a coherent superposition state $(\ket{0} + \ket{1}) / \sqrt{2}$ [@jelezko04; @maurer10]. The spin can also be read out optically with a fluorescence technique [@jelezko04; @maurer10]. Effective System Hamiltonian ============================ Following Ref. [@yao12] we derive the effective system Hamiltonian for the chain shown in Fig. 1 of the main text. We assume the presence of a constant magnetic field of strength $B$ in the $z$-direction which is defined by the symmetry axis of the NV$^-$ center, i.e. the \[111\] crystal axis. 
The Hamiltonians for individual NV$^-$ and N defects are given by [@childress06; @hanson06], respectively, $$\begin{aligned} H_{\text{NV}} & = g_e \mu_B B S^{\text{NV}}_z - g_n \mu_n B I_z \nonumber \\ & + D (S^{\text{NV}}_z)^2 + A_{\text{NV}} {\mathbf I} \cdot {\mathbf S^{\text{NV}}}, \label{eq:NVraw} \\ H_{\text{N}} & = g_e \mu_B B S_z - g_n \mu_n B I_z + {\mathbf S} \cdot{\mathbf A}_{\text{N}} \cdot {\mathbf I}~, \label{eq:Nraw}\end{aligned}$$ where $g_e$ ($g_n$) is the electron (nuclear) g-factor, $\mu_B$ ($\mu_n$) is the Bohr (nuclear) magneton, $A_{\text{NV}}$ (${\mathbf A}_{\text{N}}$) denotes the hyperfine coupling constant (tensor), $D$ is the zero-field splitting for the NV$^-$ center, and ${\mathbf S}^{(\text{NV})}$ (${\mathbf I}$) is the full electronic (nuclear) spin operator. For each NV$^-$ center, we encode the qubit basis states $\ket{0}$ and $\ket{1}$ in the $m_s = 0$ and $m_s = +1$ sublevels of the ground triplet (see Fig. \[fig:levels\]), respectively. Expressed in the computational basis of the qubit, we can approximate Eq. (\[eq:NVraw\]) as: $$H^{\text{qubit}}_{\text{NV}} = \frac{\omega^{\text{NV}}_0}{2} {\ensuremath{\mathbf{\sigma}_z}}+ \frac{A_{\text{NV}}}{2} I_z {\ensuremath{\mathbf{\sigma}_z}}~, \label{eq:HNV}$$ where the electronic Zeeman energy $\omega^{\text{NV}}_0 := D + g_e \mu_B B $, and ${\ensuremath{\mathbf{\sigma}_z}}$ is the usual Pauli $z$-operator acting on the qubit. The effective Hamiltonian above does not include the heavily suppressed hyperfine spin-flip terms, nor the negligible nuclear Zeeman term ($A_{\text{NV}}, g_n \mu_n B \ll \omega^{\text{NV}}_0$) [@childress06]. For the $i^{\text{th}}$ nitrogen defect with $S=1/2$, Eq. 
(\[eq:Nraw\]) can be similarly approximated as $$H^{i}_{\text{N}} = \frac{\omega_0}{2} {\ensuremath{\mathbf{\sigma}_z}}^i + \frac{A^i_{\text{N}\parallel}}{2} I^i_z {\ensuremath{\mathbf{\sigma}_z}}^i~, \label{eq:HN}$$ where $\omega_0 := g_e \mu_B B \approx 10\,\text{GHz}$, and $\sigma^i_z$ acts on the spin qubit of the $i^{\text{th}}$ N impurity. The hyperfine couplings $A^i_{\text{N}\parallel}$ depend on the [*Jahn-Teller*]{} orientation of each N defect, and can take two possible values, $-118.9$ MHz and $-159.7$ MHz [@cox94; @kedkaew08; @yao12]. The magnetic dipole-dipole interaction between two electron spins $i$ and $j$ (both N defects, or one N and one NV$^-$ center) is generically given by $$\begin{aligned} H^{ij}_{\text{dip}} &= \frac{\mu_0 g^2_e \mu^2_B}{4 \pi r^3} \left( {\mathbf S^i} \cdot {\mathbf S^j} - 3 ({\mathbf S^i} \cdot \mathbf{\hat{r}})({\mathbf S^j} \cdot \mathbf{ \hat{r}}) \right) \nonumber \\ &= \frac{\mu_0 g^2_e \mu^2_B}{4 \pi r^3} \left(S_x^iS_x^j + S_y^iS_y^j - 2 S_z^iS_z^j\ \right) \nonumber \\ & \simeq - \frac{\mu_0 g^2_e \mu^2_B}{2 \pi r^3} S_z^iS_z^j~, \label{eq:dipole}\end{aligned}$$ where $\mu_0$ is the vacuum magnetic permeability, $r$ is the separation between the spins, and $\mathbf{\hat{r}}$ denotes the unit vector connecting the two spins, here assumed to be parallel to the $z$-direction (as is the case in Fig. 1 of the main text). In the last step, we have neglected the spin-flip terms as before [^1], since $\mu_0 g^2_e \mu^2_B / (4 \pi r^3) \simeq 52\,\text{kHz} \ll A_{\text{NV}}, A^i_{\text{N}\parallel}$ for a spacing of $r = 10\,\text{nm}$ [@childress06; @yao12].
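The quoted dipolar coupling scale can be checked numerically; the short calculation below, using standard SI constants, reproduces $\mu_0 g_e^2 \mu_B^2 / (4 \pi r^3 h) \approx 52$ kHz at $r = 10$ nm.

```python
import math

# Standard SI constants
mu0 = 4e-7 * math.pi   # vacuum permeability, T m / A
g_e = 2.0023           # electron g-factor
mu_B = 9.274e-24       # Bohr magneton, J / T
h = 6.626e-34          # Planck constant, J s

r = 10e-9              # spin separation, m
# Dipolar prefactor mu0 g_e^2 mu_B^2 / (4 pi r^3), expressed as a frequency
E = mu0 * g_e**2 * mu_B**2 / (4 * math.pi * r**3)
f_kHz = E / h / 1e3
print(round(f_kHz, 1))  # ~52 kHz, matching the estimate in the text
```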
We can therefore write the Hamiltonian for a pair of spins as [@cordes87] $$\begin{aligned} H_{\text{N},\text{N}} & = \kappa\ \sigma_z^1 \sigma_z^2 + \sum_{i=1,2} \frac{( \omega_0 + \delta_i )}{2}\ \sigma_z^i , \label{eq:HNN} \\ H_{\text{N},\text{NV}} & = g\ \sigma_z^0 \sigma_z^1 + \frac{( \omega^{\text{NV}}_0 + \delta )}{2}\ \sigma_z^0 + \frac{(\omega_0 + \delta_1)}{2}\ \sigma_z^1~, \label{eq:HNNV}\end{aligned}$$ where the hyperfine terms $\delta := \pm A_{\text{NV}}/2$ and $\delta_i := \pm A^i_{\text{N}\parallel}/2$ ($i= 1, 2, ..., N$); the dipolar coupling strengths are $\kappa := - \mu_0 g^2_e \mu^2_B/ (8 \pi r_{\text{N},\text{N}}^3)$ and $g := - \mu_0 g^2_e \mu^2_B/(8 \pi r_{\text{N},\text{NV}}^3)$. Eqns. (\[eq:HNN\]) and (\[eq:HNNV\]) are easily generalised to a nearest-neighbor coupled chain. We assume that the entire chain is driven by the following resonant global fields $$\begin{aligned} H_{\text{drive}} & = \sum_{i=1}^{N} \Omega\ {\ensuremath{\mathbf{\sigma}_x}}^i \cos \omega_i t \nonumber \\ & + \sum_{j=0, N+1} \Omega_0\ {\ensuremath{\mathbf{\sigma}_x}}^j \cos (\omega^{\text{NV}}_0 + \delta) t ~, \label{eq:driving}\end{aligned}$$ where all four possible frequencies $\omega_i = \omega_0 + \delta_i$ \[see Eqns. (\[eq:HN\]), (\[eq:HNN\]) and (\[eq:HNNV\])\] are applied to address the nitrogen impurities [@yao12]. The field intensities $\Omega_0, \Omega$ are chosen to fit in the hierarchy $\vert \kappa \vert \ll \Omega, \Omega_0 \ll \omega_i, \omega^{\text{NV}}_0 + \delta$. 
Making a rotating wave approximation in the usual rotating frame (see Appendix) and adopting the rotated basis $(x, y ,z) \rightarrow (z, -y, x)$ [@yao12] then yields an effective XY interaction model for the chain: $$\begin{aligned} \hspace{-2.5mm} H_{\text{eff}} & = \sum_{i=1}^{N-1} \kappa (\sigma_+^i \sigma_-^{i+1} + \sigma_-^i \sigma_+^{i+1}) \nonumber \\ & +\sum_{j=0, N} g (\sigma_+^j \sigma_-^{j+1} + \sigma_-^j \sigma_+^{j+1}) ~, \label{eq:eff}\end{aligned}$$ where $\sigma^i_{\pm} = ({\ensuremath{\mathbf{\sigma}_x}}^i \pm i {\ensuremath{\mathbf{\sigma}_y}}^i)/2$. Eq. ([\[eq:eff\]]{}) is the effective system Hamiltonian given as Eq. (1) of the main text. Controlling the magnitude $\Omega_0$ can also effectively tune the coupling $g$ between the NV$^-$ center spins and the N defect spin chain [@yao12]. Note that $\kappa$ and $g$ in Eq. (\[eq:eff\]) are negative; however, we shall take absolute values for both couplings, since any global phase acquired by the entire spin chain due to the sign of the interaction is irrelevant.

Decoherence model
=================

We model the time evolution of our system with a standard Lindblad master equation [@breuer02]: $$\begin{aligned} \dot{\rho} = &-i \left[ H_{\text{eff}}, \rho \right] \nonumber \\ &+ \sum_{i=1}^{N} \gamma_i \left( L_i \rho L_i^{\dagger} - \frac{1}{2} \left( L_i^{\dagger} L_i \rho + \rho L_i^{\dagger} L_i \right) \right)~, \label{eq:me}\end{aligned}$$ where $\rho$ is the density matrix of the channel including the NV$^{-}$ center register spins, $H_{\text{eff}}$ is the effective system Hamiltonian (\[eq:eff\]), the $\gamma_i$ are the noise rates and the $L_i$ the noise operators. For spin-flip noise (i.e. $T_1$-like processes) all noise rates are $\gamma \equiv \gamma_i = 1 / T_1$ and we use noise operators $L_i = \sigma_z^i$ (since the computational basis is rotated from the physical basis according to $x \to z$).
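For concreteness, the effective Hamiltonian of Eq. (\[eq:eff\]) for a short chain can be assembled numerically (a hedged sketch; the function names and parameter values are illustrative, not the simulation code used here):

```python
import numpy as np
from functools import reduce

sp = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # sigma_+
sm = sp.conj().T                                          # sigma_-
I2 = np.eye(2, dtype=complex)

def embed(ops_at, n):
    """Tensor product over n sites; ops_at maps site index -> 2x2 operator."""
    return reduce(np.kron, [ops_at.get(i, I2) for i in range(n)])

def h_eff(N, g, kappa):
    """Eq. (eff): sites 0 and N+1 are the NV qubits, sites 1..N the N spins."""
    n = N + 2
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        J = g if i in (0, n - 2) else kappa   # end bonds g, interior bonds kappa
        H += J * (embed({i: sp, i + 1: sm}, n) + embed({i: sm, i + 1: sp}, n))
    return H

H = h_eff(3, g=2.6e3, kappa=26e3)            # N = 3 chain (32 x 32 matrix)
n_sites = 5
# total excitation number operator; the XY model commutes with it
N_op = sum(embed({i: sp @ sm}, n_sites) for i in range(n_sites))
```

The conservation of the total excitation number under $H_{\text{eff}}$ is what makes the single-excitation restriction used later in this supplement exact.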
There is one operator for each N impurity acting independently on its spin, a choice corresponding to the case where the source of the noise is predominantly local to each spin. This reflects the spatial extent of the spin chain, in which each channel spin can be considered as interacting with its own environment, e.g. the nuclear spin bath, nearby defect sites and local phonons due to lattice distortion. Similarly, for pure dephasing noise, we use $L_i = \sigma_x^i$ with associated rates $\gamma = 1 / T_2$. Interestingly, Eq. (\[eq:me\]) involves only the three parameters $g$, $\kappa$ and $\gamma$, and the dynamics of the system is invariant under a suitable simultaneous rescaling of the coupling strengths, the unit of time and the noise rate. Therefore, coherence-time thresholds can easily be obtained for coupling strengths different from the ones presented in this paper. For example, if one is interested in $g$ and $\kappa$ that are only half as large, the noise rate simply needs to be halved and the corresponding coherence time doubled. This rescaling then gives rise to the same entangling capacity $E_F$ of the channel, and also correctly captures the quantum-to-classical transition. The $T_2$ time for the N defect spins is unlikely to substantially exceed that of the NV$^{-}$ centers, since the coherence time of both types of spin is ultimately limited by the same physical processes: interaction with additional electronic defect spins and with the nuclear spin bath [@benjamin09]. Experimental evidence for the NV$^{-}$ and N spin $T_2$ being limited by the same spin bath has been reported in Ref. [@takahashi08]. Optimal control strategies can mitigate the loss of coherence due to a small set of interacting quantum systems; e.g., small environments of up to six additional spins were addressed in Ref. [@grace07].
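A one-qubit toy version of Eq. (\[eq:me\]) makes the noise rates concrete (a sketch, not the full chain simulation; with $H=0$ and $L=\sigma_z$ the coherence decays analytically as $e^{-2\gamma t}$):

```python
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)

def lindblad_rhs(rho, H, L, gamma):
    """Right-hand side of the master equation for a single Lindblad operator."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

def evolve(rho, H, L, gamma, t, steps=4000):
    """Fixed-step 4th-order Runge-Kutta integration of the master equation."""
    dt = t / steps
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, L, gamma)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, L, gamma)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, L, gamma)
        k4 = lindblad_rhs(rho + dt * k3, H, L, gamma)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

rho0 = 0.5 * np.ones((2, 2), dtype=complex)   # |+><+|
gamma, t = 0.3, 1.0
rho_t = evolve(rho0, np.zeros((2, 2), dtype=complex), sz, gamma, t)
# off-diagonal element decays as exp(-2 gamma t); note that halving gamma
# while doubling t (and the couplings in H) leaves the result unchanged,
# illustrating the rescaling invariance discussed above
```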
However, as the complexity and size of the environment increase, these techniques become more difficult to implement and the coherence time will necessarily begin to decrease [@grace07]. Optimal control can thus help to combat one source of decoherence but will not be able to overcome other unavoidable (Markovian) decoherence channels supported by the relatively large diamond crystal required for the envisaged architecture.

$T_2$ thresholds for fault tolerance
====================================

We simulate the quantum state transfer (QST) for spin chains with odd $N$ in the weak coupling regime, $g=\kappa / (10\sqrt{N})$, and for a fixed intra-chain spacing. After the transfer, the acquired phase $\pm 1$ of the target state is corrected to enable a direct comparison with the initial state (in practice the phase would be cancelled by employing a two-round protocol [@yao12; @markiewicz09; @yao11]). To evaluate the fidelity of the transfer process, we use the measure $F^2(\rho, \sigma) = \text{Tr} \left(\sqrt{\sqrt{\rho} \sigma \sqrt{\rho}} \right)^2$ [@jozsa94]. The N defect spins are subjected to independent physical $T_2$ processes, realised as spin flips in the computational basis.

![Fidelity $F^2$ of the transferred state $\rho$ with respect to the input $\sigma = \ket{+}\bra{+}$, through the $N=3$ chain as a function of the transfer time $\tau$. The initial state is $\ket{+}_0\ket{000}\ket{0}_{N+1}$, and we operate in the weak coupling regime $g=\kappa/(10\sqrt{N})$ for $\kappa = \unit{26}{\kilo\hertz}$. The nitrogen spins experience independent dephasing at a rate $\gamma = 1/T_2$. Whilst the curves seem to coincide in the main plot, the inset shows a zoomed-in view near the maximum on the scale relevant for fault tolerance.[]{data-label="fig:ftqc"}](FigureS2.eps){width="3.4in"}

Fig. \[fig:ftqc\] shows that meeting a fault-tolerance threshold of order $F^2 \geq 99\%$ requires unrealistically long coherence times of the defect spins.
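The fidelity measure used above can be implemented in a few lines (a sketch; the eigendecomposition-based matrix square root assumes Hermitian positive semidefinite density matrices):

```python
import numpy as np

def psd_sqrt(a):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def fidelity_sq(rho, sigma):
    """Uhlmann fidelity F^2(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s)))) ** 2

# sanity check: for pure states F^2 reduces to |<psi|phi>|^2
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)   # |+>
phi = np.array([1.0, 0.0])                  # |0>
F2 = fidelity_sq(np.outer(psi, psi.conj()), np.outer(phi, phi.conj()))
```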
More specifically, the shortest non-trivial $N=3$ chain achieves a sufficiently high fidelity only for $T_2 \geq \unit{54}{\milli\second}$, whereas this number increases to $T_2 \geq \unit{88}{\milli\second}$ for $N=5$. As described in the main text, in the weak coupling regime NV$^-$ center excitations tunnel through the (single) zero-energy mode of the chain, and off-resonant coupling to other modes is negligible. This enables high-fidelity quantum state transfer for long enough coherence times. However, because the transfer time $\tau \sim 1 / g$ is longer for weaker $g$, the state transfer is more susceptible to decoherence, and it may thus be advantageous overall to use stronger $g$ at the cost of coupling to several modes and sacrificing some theoretical fidelity. In the following table we list the $T_2$ times in milliseconds required for achieving an error rate below $1\%$ for different $g / \kappa$ ratios with $\kappa = \unit{26}{\kilo\hertz}$:

  $g/\kappa$   $0.1 / \sqrt{N}$   0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9   1
  ------------ ------------------ ----- ----- ----- ----- ----- ----- ----- ----- ----- ---
  $N=3$        54                 31    77    –     19    –     –     31    6.8   10    –
  $N=5$        88                 43    30    25    –     –     –     28    16    –     –

where ‘–’ denotes that the desired fidelity is never achieved for the initial states $\ket{+}_0\ket{000}\ket{0}_{N+1}$ and $\ket{+}_0\ket{00000}\ket{0}_{N+1}$, respectively. The fluctuating behavior seen in this table is consistent with the results reported in Ref. [@yao11]. We conclude that for the studied chains fault-tolerant quantum computation demands coherence times of several milliseconds for the $N=3$ chain and a few tens of milliseconds for the more interesting $N=5$ chain, even when dropping the weak coupling constraint and in the absence of any other imperfections.

$T_1$ Process in Longer Chains
==============================

In the main text, we show the effects of $T_1$ and $T_2$ processes for chains of length $N=3$ and $5$.
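The (single) zero-energy mode invoked in the weak-coupling picture above can be verified directly: restricted to one excitation, $H_{\text{eff}}$ reduces to a tridiagonal hopping matrix (a sketch with illustrative parameter values):

```python
import numpy as np

def h_single_excitation(N, g, kappa):
    """Eq. (eff) restricted to the one-excitation sector: an (N+2)x(N+2)
    tridiagonal hopping matrix over the sites (NV, N_1, ..., N_N, NV)."""
    J = np.array([g] + [kappa] * (N - 1) + [g])
    return np.diag(J, 1) + np.diag(J, -1)

evals = np.linalg.eigvalsh(h_single_excitation(3, g=2.6e3, kappa=26e3))
# a hopping chain of odd total length always hosts exactly one zero eigenvalue
```

It is through this zero-energy eigenmode that the NV$^-$ excitation tunnels resonantly, which is why odd $N$ (odd total chain length plus the two NV sites would be even; here the N+2-site chain is odd for odd $N$) is used throughout.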
Considering only $T_1$ processes and restricting ourselves to the zero and single excitation (computational) subspace [^2] significantly reduces the numerical complexity, allowing us to study longer chains.

$ \begin{array}{cc} \hspace{-1.1mm} \subfigure[\ $g=$]{\includegraphics[width=0.495\linewidth]{FigureS3a.eps}} & \hspace{-0.9mm} \subfigure[\ $g=$]{\includegraphics[width=0.495\linewidth]{FigureS3b.eps}} \end{array}$

Fig. \[fig:long\] illustrates that a sizeable amount of entanglement can be transmitted through an $N=15$ chain for $T_1$ times as short as a millisecond, both for strong and weak coupling of the NV$^-$ centers to the chain. When the bit flip rate gets small, $T_1 \gtrsim \unit{0.1}{\second}$, the weak coupling approach possesses a much higher entangling power, despite taking substantially longer. For $T_1 \sim \unit{10}{\milli\second}$ [@maurer12], however, both approaches are comparable. Taking into account the increased robustness of the strong coupling case against $T_2$ processes (see the main text) suggests that overall $g \approx \kappa$ is likely the better choice for tackling decoherence.

![Plot of the transfer time $\tau$ (the first peaks in plots like Fig. \[fig:long\](a)) and the resulting $E_F$ values between the distant spin and the $(N+1)^{\text{th}}$ NV$^-$ qubit as a function of the channel length $N$, with fixed intra-chain spacing $r_{\text{N,N}} = 10$nm ($g=\kappa = \unit{26}{\kilo\hertz}$), under an independent phase-flip model (in the rotated basis $x \leftrightarrow z$) on each nitrogen spin with the practically relevant rate $\gamma = 1/T_1 = \unit{100}{\hertz}$ ($T_1= \unit{10}{\milli\second}$). The channel spins are initially all in the state $\ket{0}$, and interact with their nearest neighbors only.[]{data-label="fig:trend"}](FigureS4.eps){width="2.6in"}

Fig.
\[fig:trend\] considers different chain lengths with $g=\kappa$ coupling, showing the optimal transfer duration $\tau$ and the maximally achievable $E_F$ for each case under an independent bit flip model with $T_1 = \unit{10}{\milli\second}$. Unsurprisingly, the performance of the chain is more affected for longer chains, although the reduction is much less drastic than for $T_2$ noise, and a finite amount of entanglement can be transferred even for long chains.

Coupling-strength disorder
==========================

In this section, we simulate disorder in the intra-chain coupling strength $\kappa$, as would arise from imprecision when implanting the nitrogen impurities. For numerical convenience, we once more restrict our calculations to the single excitation subspace. We assume the spacings between the neighboring spins obey a Gaussian distribution around the mean value $r_{\text{N,N}} = \unit{10}{\nano\meter}$ ($\kappa = \unit{26}{\kilo\hertz}$). Our results are averages over a hundred independent runs for each data point.

![Maximally achievable $E_F$ between the ancilla and the remote NV$^-$ qubit for different chain lengths $N$. The intra-chain spacings are assumed to follow a Gaussian distribution with mean $r_{\text{N,N}} = \unit{10}{\nano\meter}$ and a standard deviation corresponding to $5\%$ disorder ($\sim$ 15$\%$ disorder in the intra-chain coupling strength $\kappa$). The channel spins are initialised in the state $\ket{0}$, and each data point is the result of an average of 100 independent runs. The error bars indicate a $95\%$ confidence interval. The ‘nearest neighbor’ case uses the more practically relevant strong coupling regime $g=\kappa_{\text{mean}}$. For ‘beyond nearest neighbor’ coupling, all pairwise spin couplings are included in accordance with the dipolar $1/r^3$ distance dependence.[]{data-label="fig:kappadisorder"}](FigureS5.eps){width="2.65in"}

Fig.
\[fig:kappadisorder\] shows that even a sizeable 15% spread in the $\kappa$ distribution does not have a catastrophic impact on the entangling capacity of the chains. In fact, the reduction of achievable $E_F$ is rather similar to that obtained from a $T_1$ time around . While the difference is not huge, the nearest-neighbor coupled chain proves consistently more robust. However, for the short chains considered in the main text, a small amount of disorder in $\kappa$ is unlikely to be the limiting factor preventing entanglement distribution. Ultimately, we expect the major challenge for this protocol is attaining sufficiently long $T_2$ times to overcome the limitations discussed in the main Letter. Appendix: Rotating Wave Approximations {#appendix-rotating-wave-approximations .unnumbered} ====================================== Under the additional driving field of Eq. (\[eq:driving\]), the total Hamiltonian for two nitrogen electronic spin qubits reads $$\begin{aligned} &H^{\text{tot}}_{\text{N},\text{N}} = \kappa\ {\ensuremath{\mathbf{\sigma}_z}}^1 {\ensuremath{\mathbf{\sigma}_z}}^2 + \sum_{i=1,2} \frac{\omega_i}{2} {\ensuremath{\mathbf{\sigma}_z}}^i + \sum_{i=1,2} \Omega\ {\ensuremath{\mathbf{\sigma}_x}}^i \cos \omega_i t \label{eq:HNNtot} \\ \nonumber & = \left( {\begin{array}{cccc} \kappa + \frac{\omega_1 + \omega_2}{2} & \Omega \cos \omega_2 t & \Omega \cos \omega_1 t & 0 \\ \Omega \cos \omega_2 t & - \kappa + \frac{\omega_1 - \omega_2}{2} & 0 & \Omega \cos \omega_1 t \\ \Omega \cos \omega_1 t & 0 & - \kappa + \frac{\omega_2 - \omega_1}{2} & \Omega \cos \omega_2 t \\ 0 & \Omega \cos \omega_1 t & \Omega \cos \omega_2 t & \kappa - \frac{\omega_1 + \omega_2}{2} \\ \end{array}}\right)~.\end{aligned}$$ The unitary transformation for moving into the rotating frame is given by $$U = \left( {\begin{array}{cccc} e^{i \theta_1 t} & 0 & 0 & 0 \\ 0 & e^{i \theta_2 t} & 0 & 0 \\ 0 & 0 & e^{i \theta_3 t} & 0 \\ 0 & 0 & 0 & e^{i \theta_4 t} \\ \end{array}}\right)~, 
\label{eq:unitary}$$ where $\theta_{1 (4)} = (-) \frac{\omega_1 + \omega_2}{2}$ and $\theta_{2 (3)} = (-) \frac{\omega_1 - \omega_2}{2}$. Applying the transformation $H_{\text{RF}} = i \dot{U} U^{\dagger} + U H U^{\dagger} $ and making the usual rotating wave approximation (RWA), justified since $\Omega, |\kappa| \ll \omega_i$, yields: $$\begin{aligned} H_{\text{RWA}} & = \left( {\begin{array}{cccc} \kappa & \frac{\Omega}{2} & \frac{\Omega}{2} & 0 \\ \frac{\Omega}{2} & - \kappa & 0 & \frac{\Omega}{2} \\ \frac{\Omega}{2} & 0 & - \kappa & \frac{\Omega}{2}\\ 0 & \frac{\Omega}{2} & \frac{\Omega}{2} & \kappa \\ \end{array}}\right) \nonumber \\ & = \kappa\ {\ensuremath{\mathbf{\sigma}_z}}^1 {\ensuremath{\mathbf{\sigma}_z}}^2 + \sum_{i=1,2} \frac{\Omega}{2} {\ensuremath{\mathbf{\sigma}_x}}^i~. \label{eq:rf}\end{aligned}$$ In the rotated basis with $(x, y, z) \rightarrow (z, -y, x)$ (i.e. $\ket{0 (1)} \rightarrow \ket{+ (-)} =(\ket{0} \pm \ket{1})/\sqrt{2}$), the Hamiltonian (\[eq:rf\]) then becomes $$\begin{aligned} H_{\text{RF}} & = \kappa\ {\ensuremath{\mathbf{\sigma}_x}}^1 {\ensuremath{\mathbf{\sigma}_x}}^2 + \sum_{i=1,2} \frac{\Omega}{2} {\ensuremath{\mathbf{\sigma}_z}}^i \nonumber \\ & = \kappa\ (\sigma_+^1 + \sigma_-^1) (\sigma_+^2 + \sigma_-^2) + \sum_{i=1,2} \frac{\Omega}{2} {\ensuremath{\mathbf{\sigma}_z}}^i \nonumber \\ & \simeq \kappa\ (\sigma_+^1 \sigma_-^2 + \sigma_-^1 \sigma_+^2) + \sum_{i=1,2} \frac{\Omega}{2} {\ensuremath{\mathbf{\sigma}_z}}^i~. \label{eq:eff1}\end{aligned}$$ Here, a second RWA was made in the last line by neglecting the non-spin-conserving terms, which is valid when $|\kappa| \ll \Omega$. The same procedure can be generalised to a longer chain straightforwardly, allowing us to arrive at the desired nearest-neighbor interaction Hamiltonian (\[eq:eff\]). [99]{} E. van Oort, N. B. Manson, and M. Glasbeek, J. Phys. C: Solid State Phys. [**21**]{}, 4385 (1988). D. A. Redman [*et al.*]{}, Phys. Rev. Lett. [**67**]{}, 3420 (1991). G. 
Davies and M. F. Hamer, Proc. R. Soc. London A [**348**]{}, 285 (1976). Ph. Tamarat [*et al.*]{}, New J. Phys. [**10**]{}, 045004 (2008). N. B. Manson, J. P. Harrison, and M. J. Sellars, Phys. Rev. B [**74**]{}, 104303 (2006). L. J. Rogers [*et al.*]{}, New J. Phys. [**10**]{}, 103024 (2008). F. Jelezko [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 076401 (2004). P. C. Maurer [*et al.*]{}, Nature Phys. [**6**]{}, 912 (2010). N. Y. Yao [*et al.*]{}, Nature Commun. [**3**]{}, 800 (2012). L. Childress [*et al.*]{}, Science [**314**]{}, 281 (2006). R. Hanson [*et al.*]{}, Phys. Rev. Lett. [**97**]{}, 087601 (2006). A. Cox, M. E. Newton, and J. M. Baker, J. Phys.: Condens. Matter [**6**]{}, 551 (1994). C. Kedkaew [*et al.*]{}, Int. J. Mod. Phys. B [**22**]{}, 4740 (2008). J. C. Cordes, J. Phys. B: At. Mol. Phys. [**20**]{}, 1433 (1987). H.-P. Breuer and F. Petruccione, *The Theory of Open Quantum Systems* (Oxford, 2002). S. C. Benjamin, B. W. Lovett, and J. M. Smith, Laser Photonics Rev. [**3**]{}, 556 (2009). S. Takahashi [*et al.*]{}, Phys. Rev. Lett. [**101**]{}, 047601 (2008). M. Grace [*et al.*]{}, J. Phys. B [**40**]{}, S103 (2007). M. Markiewicz and M. Wiesniak, Phys. Rev. A [**79**]{}, 054304 (2009). N. Y. Yao [*et al.*]{}, Phys. Rev. Lett. [**106**]{}, 040505 (2011). R. Jozsa, J. Mod. Opt. [**41**]{}, 2315 (1994). P. C. Maurer [*et al.*]{}, Science [**336**]{}, 1283 (2012): a $T_1$ time of 7.5 ms was observed for the NV$^-$ center electron spin at room temperature. [^1]: In this study we consider a chain that is aligned with the external magnetic field (the $z$-direction). This leads to an effective dipole coupling strength that is increased by a factor of two compared to Ref. [@yao12]. However, to support register architectures with the desired two- or three-dimensional lattices [@yao12], connections perpendicular to the applied field will also be required. [^2]: i.e.
no more than one spin would be found in the $\ket{1}$ state, while all others are in the $\ket{0}$ state, if a measurement were performed
---
author:
- Sarmistha Banik and Debades Bandyopadhyay
title: 'Dense Matter in Neutron Star: Lessons from GW170817'
---

Introduction {#sec:1}
============

S. Chandrasekhar predicted the mass limit for the first family of compact astrophysical objects, known as White Dwarfs [@chandra31]. Next, L.D. Landau should be credited for his idea of the second family of compact objects as a ‘giant nucleus’ [@land32]. After the discovery of the neutron, it was realised that the second family might be neutron stars [@bz34]. The first pulsar was discovered in 1967 [@hewis68]. In 2017 we are celebrating 50 years of the discovery of the first pulsar. What could be a better celebration than the detection of the neutron star merger event GW170817 [@abbott]? This stands out as a very important discovery in the history of mankind. The neutron star merger event GW170817 was observed both in gravitational waves and in light. The gravitational wave signal was observed in the LIGO detectors [@abbott]. A short Gamma Ray Burst (sGRB) was recorded 1.7 s after the merger by the Fermi-GBM [@abbott2]. This, for the first time, established a link between a neutron star merger event and an sGRB. Later, electromagnetic signals in the visible, ultraviolet and infrared bands were detected from the ejected matter, which formed a ‘kilonova’. GW170817 is a boon to the nuclear astrophysics community because it allows us to probe the composition and EoS of neutron stars and the r-process nucleosynthesis in the ejected neutron-rich matter. The merger event provides crucial information about the remnant and the neutron stars in the binary. The chirp mass is estimated to be 1.188$^{+0.004}_{-0.002}$ M$_{\odot}$. Assuming low spins, as found from observations of neutron stars in our Galaxy, the individual neutron star masses in the binary range over 1.17-1.60 M$_{\odot}$. The massive remnant formed in the merger has a mass of 2.74$^{+0.04}_{-0.01}$ M$_{\odot}$ [@abbott]. The outstanding question is what happened to the massive remnant formed in GW170817.
Its prompt collapse to a black hole is ruled out because a large amount of matter was ejected. In this situation, either the remnant is a long-lived massive neutron star or it collapsed to a black hole with some delay. Recent x-ray observations using the Chandra observatory indicate that the massive remnant might be a black hole [@pool]. It is possible to estimate the upper limit on the maximum mass ($M_{max}^{TOV}$) of the non-rotating neutron star if the remnant became a black hole through delayed collapse. Different groups have determined the upper limit on $M_{max}^{TOV}$ from the multimessenger observations of GW170817 as well as from numerical relativity [@metz; @rezo; @shap]. All these estimates converge to the same value of $\sim 2.16$ M$_{\odot}$ for the upper limit on $M_{max}^{TOV}$. It is already known from observations that the most massive neutron star has a mass of 2.01 M$_{\odot}$, which sets the lower limit on $M_{max}^{TOV}$ [@anto]. All this information tells us that the maximum mass of non-rotating neutron stars should lie in the range $2.01 < M_{max}^{TOV} < 2.16$ M$_{\odot}$. This constraint on $M_{max}^{TOV}$ might severely restrict EoS models. This motivates us to carry out a comparative study of EoS models involving the Banik, Hempel and Bandyopadhyay (BHB) EoS with hyperons in the density dependent relativistic hadron (DDRH) field theory [@typ; @apj14]. We organise the article in the following way. We introduce the density dependent hadron field theory and the BHB$\Lambda \phi$ EoS in Section 2. Results are discussed in Section 3. We conclude in Section 4.

Equation of State for Neutron Star Matter {#sec:2}
=========================================

The equation of state is an important microphysical input for the study of core-collapse supernovae (CCSN), neutron stars and neutron star mergers [@char15; @rad17]. For CCSN and neutron star merger simulations, an EoS is a function of three parameters: density, temperature and proton fraction.
These parameters vary over a wide range of values. For example, the density varies from $10^2 - 10^{15}$ g/cm$^3$, the temperature from 0 to 150 MeV and the proton fraction from 0 to 0.6. In this study, we focus on neutron star EoSs which are derived from the EoSs constructed for CCSN and neutron star merger simulations. In particular, we describe here the BHB EoS and adopt the same for our calculation [@apj14]. The composition of matter in CCSN and neutron stars changes with density, temperature and proton fraction. Below the saturation density (2.7$\times 10^{14}$ g/cm$^3$) and at low temperatures, nuclei and nuclear clusters are present and make the matter inhomogeneous. In this case, the non-uniform matter is made of light and heavy nuclei, nucleons and leptons in thermodynamic equilibrium. Matter above the saturation density is uniform. Several novel phases of matter, such as hyperons, kaon condensates or quarks, might appear at higher densities. We discuss both non-uniform and uniform matter in the following subsections.

Non-uniform matter {#subsec:2}
------------------

Here the inhomogeneous matter is described by an extended version of the Nuclear Statistical Equilibrium (NSE) model that was developed by Hempel and Schaffner-Bielich (HS) [@hs]. The extended NSE model takes into account interactions among nucleons as well as the interaction of nuclei or nuclear clusters with the surrounding medium. Furthermore, the Coulomb interaction is considered. Interactions among unbound nucleons are treated in the relativistic mean field (RMF) approximation using a density dependent relativistic hadron field theory. Nuclei are considered as classical particles described by Maxwell-Boltzmann statistics. Binding energies of the thousands of nuclei entering the calculation are obtained from the nuclear mass data table [@audi03]. When experimental values are not available, theoretically calculated values are exploited [@moller95].
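A deliberately simplified Maxwell-Boltzmann abundance formula illustrates how binding energies enter such an NSE treatment (a toy sketch in natural units $\hbar = c = k_B = 1$ with all energies in MeV; Coulomb, medium and excluded-volume corrections of the full model are ignored, and all parameter values are illustrative):

```python
import numpy as np

def n_nucleus(A, Z, B, mu_n, mu_p, T, g=1.0, m_nuc=939.0):
    """Toy Maxwell-Boltzmann number density of a nucleus (A, Z) with binding
    energy B: n = g (M T / 2 pi)^{3/2} exp(((A-Z) mu_n + Z mu_p - M) / T),
    where M = A m_nuc - B is a crude nuclear mass (density in MeV^3)."""
    M = A * m_nuc - B
    pref = g * (M * T / (2.0 * np.pi)) ** 1.5
    return pref * np.exp(((A - Z) * mu_n + Z * mu_p - M) / T)

# at low T, more strongly bound species are exponentially favored:
n_he4 = n_nucleus(4, 2, B=28.3, mu_n=939.0, mu_p=939.0, T=1.0)
n_weak = n_nucleus(4, 2, B=20.0, mu_n=939.0, mu_p=939.0, T=1.0)
```

The modified Saha equation of the full model below has the same structure, supplemented by the Coulomb and excluded-volume terms.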
Medium modifications of nuclei or nuclear clusters due to the screening of the Coulomb energies by background electrons, as well as corrections due to excited states and excluded volume effects, are taken into account in this calculation. The total canonical partition function of the inhomogeneous matter is given by $$\begin{aligned} Z(T,V,\{N_i\})=Z_{nuc}~\prod_{A,Z}Z_{A,Z}~Z_{Coul} .\end{aligned}$$ Here $Z_{nuc}$, $Z_{A,Z}$, $Z_{Coul}$ represent the partition functions corresponding to the contributions of unbound nucleons, nuclei and the Coulomb interaction, respectively. The free energy density is defined as $$\begin{aligned} f&=&\sum_{A,Z} f_{A,Z}^0(T,n_{A,Z})+f_{Coul}(n_e,n_{A,Z})+\xi f_{nuc}^0(T,n'_n,n'_p)-T\sum_{A,Z} n_{A,Z} \mathrm{ln}{\kappa}\; , \label{fe}\end{aligned}$$ where the first term gives the contribution of non-interacting nuclei, $f_{Coul}$ corresponds to the Coulomb energy, the contribution of interacting nucleons $f_{nuc}^0$ is multiplied by the available volume fraction of nucleons $\xi$, $n_n^{'}$ and $n_p^{'}$ are the local neutron and proton number densities, and the last term diverges when the available volume fraction of nuclei ($\kappa$) goes to zero near the saturation density. The number density of nuclei is given by the modified Saha equation [@apj14; @hs], $$\begin{aligned} &&n_{A,Z}=\kappa~g_{A,Z}(T)\left(\frac{M_{A,Z} T}{2\pi}\right)^{3/2}\exp\left(\frac{(A-Z)\mu_{n}^0+Z\mu_{p}^0-M_{A,Z}-E^{Coul}_{A,Z}-P^0_{nuc}V_{A,Z}}T\right) \; , \label{eq_naz}\end{aligned}$$ where the meaning of the different quantities in this equation can be found in Refs. [@apj14; @hs]. Finally, the pressure is calculated as described in Refs. [@apj14; @hs].

Density dependent field theory for dense matter
-----------------------------------------------

We calculate the EoS of uniform matter above the saturation density at finite temperature within the framework of a density dependent relativistic hadron field theory [@typ; @apj14].
In this case, the dense matter is made of neutrons, protons, hyperons and electrons. Being the lightest hyperons, $\Lambda$ hyperons populate the dense matter first. Heavier hyperons such as $\Sigma$ and $\Xi$ are excluded from this calculation because very little is known experimentally about their interactions in the nuclear medium. The starting point here is the Lagrangian density for the baryon-baryon interaction mediated by the exchange of $\sigma$, $\omega$ and $\rho$ mesons. The interaction among $\Lambda$ hyperons is taken into account by the exchange of $\phi$ mesons [@apj14], as described by the Lagrangian density $$\begin{aligned} \label{eq_lag_b} {\cal L}_B &=& \sum_B \bar\psi_{B}\left(i\gamma_\mu{\partial^\mu} - m_B + g_{\sigma B} \sigma - g_{\omega B} \gamma_\mu \omega^\mu - g_{\phi B} \gamma_\mu \phi^\mu - g_{\rho B} \gamma_\mu{\mbox{\boldmath $\tau$}}_B \cdot {\mbox{\boldmath $\rho$}}^\mu \right)\psi_B\nonumber\\ && + \frac{1}{2}\left( \partial_\mu \sigma\partial^\mu \sigma - m_\sigma^2 \sigma^2\right) -\frac{1}{4} \omega_{\mu\nu}\omega^{\mu\nu}\nonumber\\ &&+\frac{1}{2}m_\omega^2 \omega_\mu \omega^\mu -\frac{1}{4} \phi_{\mu\nu}\phi^{\mu\nu} +\frac{1}{2}m_\phi^2 \phi_\mu \phi^\mu \nonumber\\ &&- \frac{1}{4}{\mbox {\boldmath $\rho$}}_{\mu\nu} \cdot {\mbox {\boldmath $\rho$}}^{\mu\nu} + \frac{1}{2}m_\rho^2 {\mbox {\boldmath $\rho$}}_\mu \cdot {\mbox {\boldmath $\rho$}}^\mu.\end{aligned}$$ Here $\psi_B$ denotes the baryon octet, ${\mbox{\boldmath $\tau_{B}$}}$ is the isospin operator and the $g$s are density dependent meson-baryon couplings. Note that $\phi$ mesons are exchanged among $\Lambda$ hyperons only.
The pressure is given by [@apj14], $$\begin{aligned} P &=& -\frac{1}{2}m_\sigma^2 \sigma^2 + \frac{1}{2} m_\omega^2 \omega_0^2 + \frac{1}{2} m_\rho^2 \rho_{03}^2 + \frac{1}{2} m_\phi^2 \phi_0^2 + \Sigma^r \sum_{B=n,p,\Lambda} n_B \nonumber \\ && + 2T \sum_{i=n,p,\Lambda} \int \frac{d^3 k}{(2\pi)^3} [ln(1 + e^{-\beta(E^* - \nu_i)}) + ln(1 + e^{-\beta(E^* + \nu_i)})] ~, \end{aligned}$$ where the temperature is defined as $\beta = 1/T$ and $E^* = \sqrt{(k^2 + m_i^{*2})}$. This involves the rearrangement term $\Sigma^{r}$ [@apj14; @hof] due to many-body correlations which is given by $$\label{eq_rear} \Sigma^{r}=\sum_B[-g_{\sigma B}' \sigma n^{s}_B + g_{\omega B}' \omega_0 n_B + g_{\rho B}'\tau_{3B} \rho_{03} n_B + g_{\phi B}' \phi_0 n_B ]~,$$ where $'$ denotes derivative with respect to baryon density of species B. The energy density is $$\begin{aligned} \epsilon &=& \frac{1}{2}m_\sigma^2 \sigma^2 + \frac{1}{2} m_\omega^2 \omega_0^2 + \frac{1}{2} m_\rho^2 \rho_{03}^2 + \frac{1}{2} m_\phi^2 \phi_0^2 \nonumber \\ && + 2 \sum_{i=n,p,\Lambda} \int \frac{d^3 k}{(2\pi)^3} E^* \left({\frac{1}{e^{\beta(E^*-\nu_i)} + 1}} + {\frac{1}{e^{\beta(E^*+\nu_i)} + 1}}\right)~. \end{aligned}$$ Parameters of the Lagrangian density are computed using available experimental data at the saturation density. Meson-nucleon couplings are determined by fitting the properties of finite nuclei using some functional forms of density dependent couplings [@typ]. This parameter set is known as the DD2. For vector meson couplings of $\Lambda$ hyperons, we exploit the SU(6) symmetry relations whereas the scalar coupling is obtained from $\Lambda$ hypernuclei data with a potential depth of $-30$ MeV at the saturation density [@sch96]. The descriptions of non-uniform and uniform matter are matched at the crust-core boundary in a thermodynamically consistent manner [@apj14]. Charge neutrality and $\beta$-equilibrium conditions are imposed for neutron star matter. 
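The thermal integrals in the pressure above can be sketched numerically; in the degenerate massless limit ($T \to 0$, $m^* \to 0$) the kinetic part must approach $\nu^4/12\pi^2$. The following quadrature sketch (natural units, omitting the meson-field and rearrangement terms) checks that limit:

```python
import numpy as np

def fermion_kinetic_pressure(nu, T, m=0.0, npts=200000):
    """Kinetic pressure of a spin-1/2 fermion gas (degeneracy 2), natural units:
    P = 2 T int d^3k/(2 pi)^3 [ln(1+e^{-(E-nu)/T}) + ln(1+e^{-(E+nu)/T})],
    with E = sqrt(k^2 + m^2); evaluated by a simple Riemann sum."""
    kmax = nu + 30.0 * T + 5.0 * m + 1e-6
    k = np.linspace(0.0, kmax, npts)
    E = np.sqrt(k * k + m * m)
    # logaddexp(0, x) = ln(1 + e^x), numerically stable for large |x|
    integrand = k * k * (np.logaddexp(0.0, -(E - nu) / T)
                         + np.logaddexp(0.0, -(E + nu) / T))
    return 2.0 * T * np.sum(integrand) * (k[1] - k[0]) / (2.0 * np.pi ** 2)

P = fermion_kinetic_pressure(nu=1.0, T=0.01)
P_degenerate = 1.0 / (12.0 * np.pi ** 2)   # analytic T -> 0, m = 0 limit
```

The small positive excess of $P$ over the degenerate value is the leading thermal correction.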
![Pressure versus energy density (EoS) is shown for the DD2, BHB$\Lambda\phi$ and SFHo EoS models.[]{data-label="fig1"}](eos.eps){width="6.0cm"}

![Mass-radius relationship is shown for the DD2 EoS in the left panel and BHB$\Lambda\phi$ in the right panel. In both panels, the bottom curve represents the non-rotating sequence and the upper curve corresponds to the sequence of neutron stars uniformly rotating at their Keplerian frequencies.[]{data-label="fig2"}](rotstarf.eps "fig:"){width="5cm"}![Mass-radius relationship is shown for the DD2 EoS in the left panel and BHB$\Lambda\phi$ in the right panel. In both panels, the bottom curve represents the non-rotating sequence and the upper curve corresponds to the sequence of neutron stars uniformly rotating at their Keplerian frequencies.[]{data-label="fig2"}](rotstar.eps "fig:"){width="5cm"}

Maximum Mass of Neutron Star {#sec:3}
============================

Here we discuss the results of our calculation. As discussed in the preceding section, we consider neutron star matter made of neutrons, protons, $\Lambda$ hyperons and electrons in the DDRH model. The EoS corresponding to nucleons-only matter is denoted as the DD2, whereas the EoS of dense matter involving $\Lambda$ hyperons is known as the BHB$\Lambda\phi$. We also include the SFHo nuclear EoS of Steiner et al. in this discussion [@stein]. Figure 1 displays the EoSs (pressure versus energy density) corresponding to the DD2, BHB$\Lambda\phi$ and SFHo models. It shows that the DD2 EoS is the stiffest among the three. Further, we note that the SFHo EoS is softer over a certain region of energy density but becomes stiffer at higher densities than the BHB$\Lambda\phi$. However, it follows from the structure calculation using the Tolman-Oppenheimer-Volkoff (TOV) equation that the overall SFHo EoS is softer compared with the BHB$\Lambda\phi$ EoS.
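The TOV structure calculation referred to above can be sketched with a simple polytrope standing in for the tabulated EoSs of this work (geometrized units $G=c=M_\odot=1$; the $\Gamma=2$, $K=100$ polytrope is a standard illustrative test case from the numerical-relativity literature, not the DD2 or BHB$\Lambda\phi$ EoS):

```python
import numpy as np

K, Gamma = 100.0, 2.0   # polytrope P = K rho^Gamma (illustrative only)

def energy_density(P):
    rho = (P / K) ** (1.0 / Gamma)   # rest-mass density
    return rho + P / (Gamma - 1.0)   # total energy density (ideal-fluid form)

def tov_rhs(r, y):
    P, m = y
    if P <= 0.0:
        return np.zeros(2)
    eps = energy_density(P)
    dP = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dm = 4.0 * np.pi * r**2 * eps
    return np.array([dP, dm])

def solve_tov(rho_c, dr=1e-3):
    """Integrate the TOV equations outward with RK4 until the pressure
    vanishes; returns (radius in km, gravitational mass in Msun)."""
    Pc = K * rho_c**Gamma
    r, y = dr, np.array([Pc, 0.0])
    while y[0] > 1e-10 * Pc:
        k1 = tov_rhs(r, y)
        k2 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k1)
        k3 = tov_rhs(r + 0.5 * dr, y + 0.5 * dr * k2)
        k4 = tov_rhs(r + dr, y + dr * k3)
        y = y + (dr / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        r += dr
    return r * 1.477, y[1]   # 1 geometrized length unit = G Msun / c^2 = 1.477 km

R_km, M = solve_tov(1.28e-3)   # a central density giving a ~1.4 Msun star
```

Repeating the integration over a range of central densities and locating the mass maximum yields $M_{max}^{TOV}$ for a given EoS, which is how the maximum masses quoted below are obtained for the tabulated models.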
Maximum masses of non-rotating neutron stars are 2.42, 2.11 and 2.06 M$_{\odot}$, corresponding to the DD2, BHB$\Lambda\phi$ and SFHo EoS, respectively. All these EoSs are compatible with the observed 2 M$_{\odot}$ neutron star [@anto]. We also compute the structures of rotating neutron stars using the LORENE library [@eric; @lorene]. Mass-radius relationships of (non)-rotating neutron stars are exhibited in Fig. 2. The sequences of non-rotating neutron stars (bottom curve) and uniformly rotating neutron stars (upper curve) at Keplerian frequencies are plotted for the DD2 EoS in the left panel and for the hyperon EoS BHB$\Lambda\phi$ in the right panel. Horizontal lines in both panels are fixed rest-mass sequences, denoted as normal and supramassive sequences. Rotating neutron stars evolve along these sequences keeping the total baryon mass conserved. A star on the normal sequence spins down to a counterpart on the non-rotating branch, whereas neutron stars following the supramassive sequence would finally collapse into black holes. Any evolutionary sequence above the maximum-mass rotating neutron star is known as the hypermassive sequence; a neutron star in this sequence would be stabilised only by differential rotation before collapsing into a black hole within a few tens of milliseconds. Recently, it was demonstrated that the maximum mass ($M_{max}^{Rot}$) of a neutron star rotating at the Keplerian frequency and the maximum mass ($M_{max}^{TOV}$) of a non-rotating neutron star satisfy a universal relation [@breu; @ssl]. This relation is given by [@breu] $$M_{max}^{Rot} = (1.203 \pm 0.022)\, M_{max}^{TOV}~.$$ With this understanding of different evolutionary sequences respecting total baryon mass conservation, we discuss the fate of the massive remnant formed in the merger event GW170817.
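The universal relation above applies directly to the non-rotating maximum masses just quoted, and it also yields the GW170817 mass-budget argument discussed below in the text. A small sketch of the arithmetic (coefficients from [@breu]; function names are ours):

```python
def keplerian_max_mass(m_tov, c=1.203, dc=0.022):
    """Rotating (Keplerian) maximum-mass band from the universal relation
    M_max^Rot = (1.203 +/- 0.022) * M_max^TOV of Breu & Rezzolla."""
    return ((c - dc) * m_tov, (c + dc) * m_tov)

def tov_upper_limit(m_binary=2.74, m_loss=0.15, c=1.203):
    """GW170817 argument: the remnant (binary mass minus losses, ~2.6 M_sun),
    identified with the Keplerian maximum mass, bounds M_max^TOV from above."""
    return (m_binary - m_loss) / c              # ~2.15-2.16 M_sun

for name, m in [("DD2", 2.42), ("BHB-Lambda-phi", 2.11), ("SFHo", 2.06)]:
    lo, hi = keplerian_max_mass(m)
    print(f"{name}: M_max^Rot = {lo:.2f}-{hi:.2f} M_sun")
```

For the DD2 value of 2.42 M$_{\odot}$ this gives a Keplerian maximum mass of about 2.86-2.96 M$_{\odot}$, well above the inferred remnant mass.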
The remnant could not have been a hypermassive neutron star undergoing a prompt collapse to a black hole, because a large amount of ejected matter was observed in the event [@metz]. This implies that the massive remnant existed for some duration. However, a long-lived massive remnant is ruled out because of the sGRB sighted 1.7 s after the merger. It is inferred that the massive remnant collapsed to a black hole close to the maximum mass of a uniformly rotating sequence [@rezo; @shap]. This description might be intimately tied to the maximum mass of the non-rotating neutron star. It is estimated from the observation of the neutron star merger event GW170817, assuming low dimensionless spins for the neutron stars in the binary, that the total binary mass was $\sim$ 2.74 M$_{\odot}$. The mass loss from the merged object due to emissions of gravitational waves and neutrinos and ejected neutron-rich matter amounts to $\sim$ 0.15 $\pm 0.03$ M$_{\odot}$ [@shib]. Consequently, the mass of the remnant was reduced to $\sim$ 2.6 M$_{\odot}$. If we identify this remnant mass, which might have collapsed into a black hole, with the maximum mass of the uniformly rotating neutron star at the Keplerian frequency, i.e. M$_{max}^{Rot}$ of Eq. (8), an upper limit on the maximum mass of non-rotating neutron stars is obtained [@rezo]. It follows from Eq. (8) that the upper limit is $\sim$ 2.16 M$_{\odot}$. It is already known from the observations of galactic pulsars that the lower limit on the maximum mass of non-rotating neutron stars is 2.01 M$_{\odot}$. All this information taken together leads to $$\begin{aligned} 2.01\, M_{\odot} \leq M_{max}^{TOV} < 2.16\, M_{\odot}~.\end{aligned}$$ Different groups converged to almost the same value of the upper limit from different analyses of GW170817 [@metz; @rezo; @shap; @shib]. ![Mass-radius relationships of non-rotating neutron stars are shown for the DD2, BHB$\Lambda\phi$ and SFHo EoSs.
Horizontal lines denote the lower bound and upper bound on the maximum mass as given by Eq. (9).[]{data-label="fig3"}](mrgw.eps){width="5.5cm"} Constraint on EoS ----------------- We discuss the implications of the lower and upper limits of the maximum mass on EoSs. Mass-radius relationships corresponding to the DD2, BHB$\Lambda\phi$ and SFHo EoSs are plotted in Fig. 3. The lower and upper limits on the maximum mass are also indicated by two horizontal lines. It is evident from the figure that the BHB$\Lambda\phi$ and SFHo EoSs are consistent with both limits of the maximum mass. But this is not the case with the DD2 EoS, because it fails to satisfy the upper limit. It is to be noted that the DD2, BHB$\Lambda\phi$ and SFHo EoSs are being used for neutron star merger simulations by various groups [@rad17; @shib]. Next we perform a comparative study of different EoSs. In particular, we look at the nuclear matter properties of all EoSs, such as the saturation density ($n_0$), binding energy (E$_{0}$), incompressibility (K), symmetry energy (S) and its density slope (L). The nuclear matter properties of eight EoSs are recorded in Table 1. The last row of the table gives experimental values of nuclear matter properties [@hem]. The first five of these EoSs, namely Lattimer-Swesty (LS220) [@ls], Skyrme Lyon (SLy) [@han], Müller-Serot 1 (MS1) [@ms], Akmal-Pandharipande-Ravenhall 4 (APR4) [@apr] and the hyperon EoS H4 [@owe], were used in the analysis of GW170817 [@abbott] because all of them satisfy the lower limit on the maximum mass. It is to be noted that all are nucleons-only EoSs except the H4 EoS. A closer look at the nuclear matter properties of the first five EoSs at the saturation density reveals important information about their behaviour at higher densities. It is evident from the Table that one or more nuclear matter observables in the case of the LS220, MS1, APR4 and H4 EoSs are not consistent with the experimental values.
This leads to a very soft or very stiff EoS in those cases. For example, high values of incompressibility (K) for the APR4 and H4 make them stiffer EoSs. The threshold for the appearance of hyperons is shifted to a lower density for a very stiff EoS, leading to a large population of hyperons in dense matter and resulting in a lower maximum-mass neutron star, as happens in the case of the H4 EoS. For the LS220 and MS1 EoSs, the density slope (L) of the symmetry energy is much higher than the experimental range. As a result, the maximum mass for the MS1 EoS is higher than the upper limit of 2.16 M$_{\odot}$. However, the interplay between a lower value of K and a higher value of L for the LS220 EoS determines the maximum mass, which falls well within the limits of Eq. (9). Though the SLy EoS is consistent with the experimental values and observational limits on the maximum mass, it is a non-relativistic EoS and superluminal behaviour could be a problem in this case at very high density (5-8 $n_0$) [@apr]. We have already discussed the last three EoSs of the Table. The nuclear matter properties of the DD2, SFHo and BHB$\Lambda\phi$ EoSs are in good agreement with the experimental values. However, it is concluded that the DD2 EoS is ruled out by Eq. (9). It is possible to further constrain EoSs using the measured tidal deformability from GW170817. Based on the tidal deformability of GW170817, the H4, APR4 and LS220 EoSs are excluded whereas the BHB$\Lambda\phi$ EoS is consistent with GW170817 data [@rad2].
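The comparison carried out above can be summarized programmatically. The sketch below encodes Table 1 and flags, for each EoS, the properties falling outside the experimental bands of the last row, together with the maximum-mass window of Eq. (9) (interpreting the $240\pm10$ MeV entry as the band 230-250 MeV is our assumption; the dictionary keys are our shorthand):

```python
# (n0 [fm^-3], E0, K, S, L [MeV], M_max [M_sun]) from Table 1
EOS = {
    "LS220": (0.1550, 16.00, 220, 28.61,  73.82, 2.06),
    "SLy":   (0.160,  15.97, 230, 32.00,  45.94, 2.05),
    "MS1":   (0.1484, 15.75, 250, 35.00, 110.00, 2.77),
    "APR4":  (0.160,  16.00, 266, 32.59,  58.46, 2.19),
    "H4":    (0.153,  16.3,  300, 32.5,   94.02, 2.02),
    "DD2":   (0.1491, 16.02, 243, 31.67,  55.04, 2.42),
    "SFHo":  (0.1583, 16.19, 245, 31.57,  47.10, 2.06),
    "BHB":   (0.1491, 16.02, 243, 31.67,  55.04, 2.11),
}
# Experimental bands (last row of Table 1) plus the mass window of Eq. (9)
BANDS = [("K", 2, 230, 250), ("S", 3, 29.0, 32.7), ("L", 4, 40.5, 61.9),
         ("M_max", 5, 2.01, 2.16)]

def violations(name):
    """Quantities of an EoS that lie outside the allowed bands."""
    return [q for q, i, lo, hi in BANDS if not lo <= EOS[name][i] <= hi]
```

Running `violations` over the table reproduces the discussion above: SLy, SFHo and BHB$\Lambda\phi$ pass every band, DD2 fails only the maximum-mass window, and LS220, MS1, APR4 and H4 each violate one or more saturation properties.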
| EoS | $n_0$ \[fm$^{-3}$\] | E$_{0}$ \[MeV\] | K \[MeV\] | S \[MeV\] | L \[MeV\] | M$_{max}$ \[M$_{\odot}$\] |
|------|------|------|------|------|------|------|
| LS220 | 0.1550 | 16.00 | 220 | 28.61 | 73.82 | 2.06 |
| SLy | 0.160 | 15.97 | 230 | 32.00 | 45.94 | 2.05 |
| MS1 | 0.1484 | 15.75 | 250 | 35.00 | 110.00 | 2.77 |
| APR4 | 0.160 | 16.00 | 266 | 32.59 | 58.46 | 2.19 |
| H4 | 0.153 | 16.3 | 300 | 32.5 | 94.02 | 2.02 |
| DD2 | 0.1491 | 16.02 | 243 | 31.67 | 55.04 | 2.42 |
| SFHo | 0.1583 | 16.19 | 245 | 31.57 | 47.10 | 2.06 |
| BHB$\Lambda\phi$ | 0.1491 | 16.02 | 243 | 31.67 | 55.04 | 2.11 |
| Exp. | $\sim 0.15$ | $\sim 16$ | $240\pm10$ | $29.0-32.7$ | $40.5-61.9$ | 2.01$\pm$0.04 |

Summary and conclusion
======================

We have investigated the equations of state of dense matter within the framework of the density-dependent relativistic hadron field theory. The nucleons-only EoS is denoted as the DD2, whereas the $\Lambda$ hyperon EoS is known as the BHB$\Lambda\phi$. The neutron star merger event GW170817 gives an upper limit on the maximum mass of non-rotating neutron stars, whereas the lower limit is already known from the observations of pulsars. The upper and lower limits severely constrain the EoS, as we have found through a comparative study of eight EoSs and their nuclear matter properties. It is found that the BHB$\Lambda\phi$ EoS is consistent with both limits of the maximum mass and the tidal deformability of GW170817. S.B. and D.B. gratefully remember the support and encouragement that they received always from Professor Walter Greiner. [99.]{} S. Chandrasekhar, Astrophys. J., [**74,**]{} 81 (1931) L. D. Landau, Phys. Zs. Sowjet., [**1,**]{} 285 (1932) W. Baade and F. Zwicky, Phys. Rev., [**45,**]{} 138 (1934) A. Hewish, S. J. Bell, J. D. H. Pilkington, P. F. Scott and R. A. Collins, Nature, [**217,**]{} 709 (1968) B. P. Abbott et al., Phys. Rev. Lett., [**119,**]{} 161101 (2017) B. P. Abbott et al., Astrophys. J. Lett.
, [**848,**]{} L13 (2017) D. Pooley, P. Kumar and J. C. Wheeler, [**arXiv:1712.03240**]{} B. Margalit and B. D. Metzger, [**arXiv:1710.05938**]{} L. Rezzolla, E. R. Most and L. R. Weih, [**arXiv:1711.00314**]{} M. Ruiz, S. L. Shapiro and A. Tsokaros, [**arXiv:1711.00473**]{} J. Antoniadis et al., Science, [**340,**]{} 448 (2013) S. Typel, G. Röpke, T. Klähn, D. Blaschke and H. Wolter, Phys. Rev., [**C81,**]{} 015803 (2010) S. Banik, M. Hempel and D. Bandyopadhyay, Astrophys. J. Suppl., [**214,**]{} 22 (2014) P. Char, S. Banik and D. Bandyopadhyay, Astrophys. J., [**809,**]{} 116 (2015) D. Radice, S. Bernuzzi, W. Del Pozzo, L. F. Roberts and C. Ott, Astrophys. J., [**842,**]{} L10 (2017) M. Hempel and J. Schaffner-Bielich, Nucl. Phys., [**A837,**]{} 210 (2010) G. Audi, A. H. Wapstra and C. Thibault, Nucl. Phys. [**A729,**]{} 337 (2003). P. Moller, J. R. Nix, W. D. Myers and W. J. Swiatecki, At. Data Nucl. Data Tables [**59,**]{} 185 (1995). F. Hofmann, C. M. Keil and H. Lenske, Phys. Rev. [**C64,**]{} 025804 (2001). J. Schaffner and I. N. Mishustin, Phys. Rev., [**C53,**]{} 1416 (1996) A.  W. Steiner, M. Hempel and T. Fischer, Astrophys. J., [**774,**]{} 17 (2013) E. Gourgoulhon, P. Grandclément, J. -A. Marck, J. Novak and K. Taniguchi, “LORENE spectral methods differential equation solver”, Astrophysics Source Code Library **ascl:1608.018**, (2016). http://www.lorene.obspm.fr/ C. Breu and L. Rezzolla, MNRAS [**459,**]{} 646 (2016). S. S. Lenka, P. Char and S. Banik, Int. J. Mod. Phys. [**26,**]{} 1750127 (2017). M. Shibata, [**arXiv:1710.07579**]{} T. Fischer et al., Euro. Phys. J, [**A50,**]{} 46 (2014) J.  M. Lattimer and F. D. Swesty, Lattimer-Swesty eos web site, http://www.astro.sunysb.edu/dswesty/ lseos.html (1991–2012). F. Douchin and P. Haensel, Astron. Astrophys. [**380,**]{} 151 (2001) . H. Müller and B. D. Serot, Nucl. Phys. [**A606,**]{} 508 (1996). A. Akmal, V.  R. Pandharipande and D. G. Ravenhall, Phys. Rev. [**C73,**]{} 1804 (1998). B. D. Lackey, M. 
Nayyar and B. J. Owen, Phys. Rev. [**D73,**]{} 024021 (2006). D. Radice, A. Perego and F. Zappa, [**arXiv:1711.03647**]{}
--- abstract: 'We show that electron recombination using positively charged excitons in single quantum dots provides an efficient method to transfer entanglement from electron spins onto photon polarizations. We propose a scheme for the production of entangled four-photon states of GHZ type. From the GHZ state, two fully entangled photons can be obtained by a measurement of two photons in the linear polarization basis, even for quantum dots with observable fine structure splitting for neutral excitons and significant exciton spin decoherence. Because of the interplay of quantum mechanical selection rules and interference, maximally entangled electron pairs are converted into maximally entangled photon pairs with unity fidelity for a continuous set of observation directions. We describe the dynamics of the conversion process using a master-equation approach and show that the implementation of our scheme is feasible with current experimental techniques.' author: - 'Veronica Cerletti, Oliver Gywat, and Daniel Loss' title: 'Entanglement transfer from electron spins to photons in spin light-emitting diodes containing quantum dots' --- Introduction \[sec:Intro\] ========================== Spin light-emitting diodes (spin-LEDs), [@fiederling:1999a; @ohno:1999a; @awschalom:02; @pryor:03; @guendogdu:04; @seufert; @book:02; @kroutvar:2004a] in which electron recombination is accompanied by the emission of a photon with well-defined circular polarization, provide an efficient interface between electron spins and photons. The operation of such devices at the single-photon level would allow one to convert the quantum state of an electron encoded in its spin state into that of a photon with a wide range of possible applications. 
In view of quantum information schemes, converting spin into photon quantum states corresponds to a conversion of localized into flying qubits, which can be transmitted over long distances and could overcome limitations caused by the short-range nature of the electron exchange interaction. [@book:02] On a more fundamental level, the photon polarization can be readily measured experimentally such that an interface between spins and photons will allow one to measure quantum properties of the spin system via the photons generated on recombination. More specifically, entanglement of electron spins could be demonstrated not only in current noise [@burkard:2000; @egues:2002] but also via photon polarizations which allows one to test Bell’s inequalities. [@bell:65] In this work, we show that nonlocal spin-entangled electron pairs that recombine in single quantum dots contained in spatially separated spin-LEDs are converted into polarization-entangled photon states. In addition to its applications in quantum communication, this transfer can be used to characterize the output of an electron spin entangler [@andreev:01; @lesovik:01; @recher:02; @bena:02; @bouchiat:03; @saraga:03; @recher:03; @saraga2:04] in a setup as shown in Fig. \[fig:setup\]. Furthermore, such a setup acts as a deterministic source of polarization-entangled photon pairs. Recently, the decay of biexcitons in single quantum dots has been proposed for the production of entangled photons. [@benson:00; @moreau:01] However, several experiments [@kiraz:02; @santori:02; @stevenson:2002a; @zwiller:02; @ulrich:2003a] have only shown polarization correlation but not entanglement of the photons. The fine structure splitting $\delta_{\mathrm{ehx}}$ of the bright exciton ground state [@takagahara:00] has been identified to be crucial for the lack of entanglement: Firstly, the polarization-entangled photons are also entangled in energy if $\delta_{\mathrm{ehx}}$ is larger than the exciton linewidth. 
[@stace:2003a] Secondly, for $\delta_{\mathrm{ehx}}\neq 0$ the exciton spin relaxation rate $1/T_{1,X}$ due to phonons is enhanced [@tsitsishvili:2003a] and leads to an increased decoherence rate $1/T_{2,X} = 1/(2T_{1,X}) + 1/T_{\varphi,X}$, where $1/T_{\varphi,X}$ is the pure decoherence rate. To overcome these difficulties we propose to use positively charged excitons ($X^+$), for which $\delta_{\mathrm{ehx}}= 0$ up to small corrections. Moreover, we demonstrate that the antisymmetric hole ground state of the $X^+$ enables the production of entangled four-photon states. We study the transfer of entanglement for different photon emission directions by calculating the von Neumann entropy. Due to quantum mechanical interference, the fidelity of this process approaches unity not only for photon emission along the spin quantization axis, but for a continuous set of observation directions. The relaxation and decoherence of the electron spins in the leads are modeled using a master equation and quantified by the fidelity of the entangled state. ![(Color online) Schematic setup for the transfer of entanglement between electrons and photons. An electron entangler (gray box) injects a pair of spin-entangled electrons into two current leads. The electrons recombine individually in one quantum dot located in the left (L) and one in the right (R) spin-LED and give rise to the emission of two photons.[]{data-label="fig:setup"}](setup_fat_fig1.eps){width="7.5cm"} This work is organized as follows. In Sec. \[sec:Dynamics\] we describe the dynamics of the conversion process. In Sec. \[sec:Optic\] we focus on the microscopic expressions for the involved optical transitions, leading to entangled four-photon and two-photon states. In Sec. \[sec:Entanglement\] we quantify the entanglement of the two-photon state as a function of the emission angles. We conclude in Sec. \[sec:Concl\].
Dynamics of the conversion process \[sec:Dynamics\] =================================================== The effective Hamiltonian of the system is given by $$H=H_{L}+H_{R}+H_{\mathrm{rad}}+H_{\mathrm{int}},$$ where $H_{\mathrm{\alpha}}=\mathbf{p}^{2}/2m+V_{\mathrm{qd}}(\mathbf{r})$ is the Hamiltonian of the quantum dot $\alpha=L,R$ with confinement potential $V_{\mathrm{qd}}(\mathbf{r})$. The Hamiltonian of the radiation field is $H_{\mathrm{rad}}=\sum_{\mathbf{k},\lambda}\hbar\omega_{k}a_{\mathbf{k}\lambda}^{\dagger}a_{\mathbf{k}\lambda}$ and $H_{\mathrm{int}}=-e\mathbf{A\cdot p}/m_{0}c=H_{\mathrm{em}}+H.c.$ is the optical interaction term, which is linear in both the vector potential $\mathbf{A}$ and the electron momentum $\mathbf{p}$ and can be decomposed into a photon emission term $H_{\mathrm{em}}$ and its Hermitian conjugate. For simplicity, we assume that the dots $L$ and $R$ are identical, with cubic crystal structure and with aligned main crystal axes. We choose the $z$ axis parallel to the quantum dot growth direction (e.g., \[001\]). If the quantum dot confinement is stronger in the $z$ direction than in the $xy$ plane, $z$ defines the spin quantization axis and heavy-hole (hh) and light-hole (lh) states are energetically split by $\Delta_{\mathrm{hh-lh}}$ (typically $\Delta_{\mathrm{hh-lh}}\sim 10\, \mathrm{meV}$). We consider a hh ground state, with angular momentum projection $\pm 3/2$ in terms of electron quantum numbers. We further focus on the strong-confinement regime, where the dot radius is smaller than the exciton Bohr radius. The quantum dots in both spin-LEDs are prepared in a state ${|\chi_{\alpha}\rangle}$, where two excess holes occupy the lowest hh level in each dot. This initial state, which can be generated by applying an appropriate bias voltage across the LED, has several advantages. Firstly, electrons with arbitrary spin states can recombine optically, as demonstrated for electron spin detection in a recent experiment. 
[@guendogdu:04] Secondly, the $z$ component of the total hole spin vanishes. This is a consequence of the fact that in quantum dots the hh-lh exciton mixing due to the electron-hole exchange interaction $\Delta_{\mathrm{ehx}}$ is determined by a small parameter $\Delta_{\mathrm{ehx}}/\Delta_{\mathrm{hh-lh}}\sim 0.01$. Thus, injected spin-polarized electrons give rise to circularly polarized $X^+$ luminescence. This remains true for dots with asymmetric confinement in the $xy$ plane, in stark contrast to the case with an electron and only one hole in the dot, [@takagahara:00] where the good exciton eigenstates are horizontally polarized and are split in energy typically by $\delta_{\mathrm{ehx}}\sim 0.1\,\mathrm{meV}$. Thus, the electron-hole exchange interaction can be neutralized by initially providing [*two*]{} holes. Interband mixing (e.g., hh and lh states) in strongly anisotropic dots reduces the maximum circular polarization of photons emitted from spin-polarized electrons [@pryor:03] and reduces the fidelity of our scheme. However, because the interband transition probability for lh states is three times smaller than that for hh states, and hh-lh mixing is typically controlled by some small parameter in slightly elliptical dots, [@takagahara:00] we neglect lh transitions. Electron injection and photon emission -------------------------------------- We first describe the dynamics of the electron injection and recombination in the two dots using a master equation. The rate for the injection and the subsequent relaxation of electrons into the conduction band ground state in the dot $\alpha$ is denoted by $W_{e\alpha}$. It has been demonstrated that this entire process is spin conserving and occurs much faster than the optical recombination [@seufert; @guendogdu:04], which is described by the rates $W_{p\alpha}$. Typically, $W_{p\alpha}\sim 1\:(\mathrm{ns})^{-1}$ and $W_{e\alpha}\sim 0.1\:(\mathrm{ps})^{-1}$ for the incoherent transition rates. 
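Given these rate scales, the two-photon emission probability $P_{2p}$ obtained below from the master equation can be evaluated directly. A minimal sketch (function names are ours; the rates are the typical orders of magnitude just quoted, expressed in ns$^{-1}$):

```python
import math

def p_two_photon(t, We=(100.0, 100.0), Wp=(1.0, 1.0)):
    """Probability that both dots (alpha = L, R) have emitted a photon by
    time t (ns), for injection rates We ~ 0.1/ps = 100/ns and recombination
    rates Wp ~ 1/ns, from the master-equation solution given in the text."""
    p = 1.0
    for we, wp in zip(We, Wp):
        p *= (we * (1.0 - math.exp(-t * wp))
              - wp * (1.0 - math.exp(-t * we))) / (we - wp)
    return p

# Fast-injection limit W_p << W_e: P_2p ~ prod_alpha (1 - e^{-t W_p})
t = 2.0  # ns
approx = (1.0 - math.exp(-t)) ** 2
```

For the quoted rate hierarchy the exact and limiting expressions agree to within $O(W_{p}/W_{e}) \sim 1\%$.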
We solve the master equation for the classical occupation probabilities and obtain the probability that two photons are emitted after the injection of two electrons into the dots at $t=0$, $$P_{2p} = \prod_{\alpha = L,R}\frac{W_{e\alpha}(1-e^{-tW_{p\alpha}})- W_{p\alpha}(1-e^{-tW_{e\alpha}})}{W_{e\alpha}-W_{p\alpha}}.$$ For $W_{p\alpha} \ll W_{e\alpha}$, $P_{2p}\approx \prod_{\alpha = L,R}(1-e^{-tW_{p\alpha}})$. After photon emission, bipartite photon entanglement is achieved by a measurement of the hole spins, as we describe below, and the initial state is finally restored by injection of two holes into each of the two dots. We estimate the production rate of entangled photons in a setup to test some of the proposed electron entanglers.[@andreev:01; @lesovik:01; @recher:02; @bena:02; @bouchiat:03; @saraga:03; @recher:03; @saraga2:04] For example, electron spin singlets $|\Psi^-\rangle =({{|\! \uparrow\downarrow\rangle}}-{{|\! \downarrow\uparrow\rangle}})/\sqrt{2}$ are produced by the Andreev entangler [@andreev:01] with an average time separation $\Delta t \sim 10^{-5}\mathrm{s}$, while for the entangler based on three quantum dots, [@saraga:03] $\Delta t \sim 10^{-8}\mathrm{s}$. The two electrons of a singlet are typically injected into the current leads with a relative time delay $\tau \simeq 10^{-13}\mathrm{s}$ for both of these entanglers. Because $\tau, W_{p\alpha}^{-1} \ll \Delta t$, photons originating from a single pair of entangled electrons can be identified with high reliability. In the steady state, the generation rate of entangled photons is determined by the rate at which entangled electron pairs leave the entangler, $1/\Delta t$. Electron spin dynamics ---------------------- Relaxation and decoherence are taken into account for the two spins by the single-spin Bloch equation. [@burkard:2003] Given that the electrons are in different leads, they interact with different environments (during times $t$ and $t'$, respectively).
Therefore, we consider different magnetic fields $\mathbf{h}$ and $\mathbf{h'}$, enclosing an angle $\beta$, each acting on an individual spin. We calculate the two-spin density matrix $\chi(t,t')$ and obtain for the singlet fidelity $f=4\langle\Psi^-\left|\chi(t,t')\right|\Psi^-\rangle$ (given in Ref. [@burkard:2003] for $t=t'$ and $\beta =0$), $$\begin{aligned} \nonumber f & = & 1-\mbox{cos}\beta\, a a'P P'+ e_1\left[e'_2\mbox{sin}^2\beta\,\mbox{cos}(h't') +e'_1\mbox{cos}^2\beta \right]\\ \nonumber & & + e_2e'_1\mbox{sin}^2\beta\,\mbox{cos}(ht) + e_2e'_2\left[2\,\mbox{cos}\beta\,\mbox{sin}(ht)\,\mbox{sin}(h't') \right.\\ & & + \left.\left(\mbox{cos}^2\beta\,+1\right)\mbox{cos}(ht)\,\mbox{cos}(h't') \right] ,\end{aligned}$$ where for the first (second) spin $e_i=e^{-t/T_i}$ ($e'_i=e^{-t'/T'_i}$), $a=1-e_1$ ( $a'=1-e'_1$), $P$ ($P'$) is the equilibrium polarization, and $T_2$ and $T_1$ ($T'_2$ and $T'_1$) are the spin decoherence and relaxation times, respectively. For $t \ll T_1,T_2$ and $t' \ll T'_1,T'_2$ (in bulk GaAs $T_2\sim 100\,\mathrm{ns}$ has been measured [@kikkawa:1998a] and, typically, $T_1\gg T_2$), the electrons form a nonlocal spin-entangled state after their injection into the dots $L$ and $R$ and after their subsequent relaxation to the single-electron orbital ground states $\phi_{c\alpha}(\mathbf{r}_{c\alpha},\sigma)$. A local rotation of one of the two spins in the leads (for $\mathbf{h} \neq \mathbf{h}'$) enables a transformation of $|\Psi^-\rangle$ into another (maximally entangled) Bell state $|\Psi^+\rangle =({{|\! \uparrow\downarrow\rangle}}+ {{|\! \downarrow\uparrow\rangle}})/\sqrt{2}$ or ${|\Phi^{\pm}\rangle}=({{|\! \uparrow\uparrow\rangle}}\pm{{|\! \downarrow\downarrow\rangle}})/\sqrt{2}$. This can be achieved, e.g., by controlling the local Rashba spin-orbit interaction in the current leads. 
[@egues:2002; @burkard:2003] Optical transitions \[sec:Optic\] ================================= The optical recombination processes of the two electrons occur independently, except for the entanglement of the spin wave functions. We consider a single branch $\alpha=L,R$ of the apparatus and omit the index $\alpha$. The state of the single quantum dot which is charged with two hhs in the orbital ground state and into which a single electron with spin $\sigma$ has been injected is given by $${|e,\sigma\rangle} = \int\mathrm{d}^{3}r_{c} \phi_{c}^{*}(\mathbf{r}_{c},\sigma) b_{c\sigma}^{\dagger}(\mathbf{r}_{c}){|\chi\rangle}. \label{eq:exstate}$$ Here, $b_{c\sigma}^{\dagger}(\mathbf{r}_{c})$ creates an electron with spin $S_{z}=\sigma/2=\pm1/2$ at $\mathbf{r}_c$ in the ground state of the dot, ${|\chi\rangle}=\sum_{\tau\neq\tau'}\int \mathrm{d}^{3}r_{v1} \mathrm{d}^{3}r_{v2}\phi_{v}(\mathbf{r}_{v1},\tau;\mathbf{r}_{v2},\tau') b_{v\tau}(\mathbf{r}_{v1})b_{v\tau'}(\mathbf{r}_{v2}){|g\rangle}$, where ${|g\rangle}$ is the electrostatically neutral ground state of the quantum dot, and $\phi_{v}(\mathbf{r}_{v1},\tau;\mathbf{r}_{v2},\tau')$ is the orbital part of the two-hole wave function. In the strong-confinement regime where Coulomb correlations are negligible, $\phi_{v}$ is a product of the single-particle valence band states. The labels $\tau,\,\tau'$ denote the hh pseudospin components $S_{z} = \tau /2 = \pm1/2$, corresponding to the angular momentum $J_{z}=\pm 3/2$. We now calculate the emission matrix element ${\langle f|}H_{\mathrm{em}}{|i\rangle}$ with initial state ${|i\rangle}={|e,\sigma\rangle}\otimes{|\dots,n_{\mathbf{k}\lambda},\dots\rangle}$ and final state ${|f\rangle}=b_{v\tau'}(\mathbf{r}_{v2}){|g\rangle}\otimes{|\dots,n_{\mathbf{k}\lambda}+1,\dots\rangle}$, where ${|\dots,n_{\mathbf{k}\lambda},\dots\rangle}$ is a Fock state of the electromagnetic field, typically the photon vacuum.
Because of quantum mechanical selection rules, the optical transitions connect only states with the same spin such that $\tau'\neq\sigma$. In the envelope-function and dipole approximations, [@biexcitons] $$|{\langle f|}H_{\mathrm{em}}{|i\rangle}| = \frac{e}{m_{0}c}\, A_{0}(\omega_{k})\sqrt{n_{\mathbf{k}\lambda}+1}\, \left|\mathbf{e}_{\mathbf{k}\lambda}^{*}\cdot\mathbf{p}_{cv}^{*} C_{eh}\right|, \label{eq:emmatrixelement}$$ where $\mathbf{p}_{cv}^{*}=\mathbf{p}_{vc}$ is the inter-band momentum matrix element, $\mathbf{e}_{\mathbf{k}\lambda}$ is the unit polarization vector with $\lambda=\pm1$ for circular polarization $|\sigma_{\pm}\rangle$, $A_{0}(\omega_{k})=(\hbar/2\epsilon\epsilon_{0}\omega_{k}V)^{1/2}$, and $C_{eh}=\int\mathrm{d}^{3}r\,\psi_{c}^{*}(\mathbf{r},\sigma)\psi_{v}(\mathbf{r},\sigma)$, where $\psi_{n}$ is the envelope function of a carrier in the band $n=c,v$. For cubic symmetry, $\mathbf{e}^{*}_{\mathbf{k}\lambda}\cdot\mathbf{p}_{cv}^{*} = p_{cv}(\cos\theta-\sigma\lambda)e^{-i\sigma\phi}/2 \equiv p_{cv}m_{\sigma\lambda}(\theta,\phi)$, where $\theta$ and $\phi$ are the polar and the azimuthal angle of the photon emission direction, respectively. With the transition ${|e,\sigma\rangle}\rightarrow b_{v-\sigma}(\mathbf{r}_{v2}){|g\rangle}$, a photon $$|\sigma,\theta,\phi\rangle=N(\theta)(m_{\sigma,+1}(\theta,\phi)|\sigma_{+}\rangle+m_{\sigma,-1}(\theta,\phi)|\sigma_{-}\rangle) \label{eq:photonstate}$$ is emitted into the direction $(\theta , \phi)$. Here, $N(\theta)=[2/(1+\cos^{2}\theta)]^{1/2}$ is a normalization factor. Eq. (\[eq:photonstate\]) shows that for $\theta=0$, a spin-up ($\sigma=+1$) electron generates a $|\sigma_{-}\rangle$ photon, whereas a $|\sigma_{+}\rangle$ photon is obtained from a spin-down ($\sigma=-1$) electron. The admixture of the opposite circular polarization increases with $\theta$, leading to linear polarization for $\theta=\pi/2$.
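The single-photon state of Eq. (\[eq:photonstate\]) is easy to verify numerically: it is normalized for every emission direction, reduces to pure circular polarization at $\theta=0$, and becomes an equal-weight (linear) superposition at $\theta=\pi/2$. A sketch (our own illustration; amplitudes are returned in the $\{|\sigma_+\rangle, |\sigma_-\rangle\}$ basis):

```python
import cmath, math

def photon_state(sigma, theta, phi):
    """Amplitudes (c_plus, c_minus) of |sigma, theta, phi> with
    m_{sigma,lambda} = (cos(theta) - sigma*lambda) e^{-i sigma phi} / 2
    and N(theta) = sqrt(2 / (1 + cos^2(theta)))."""
    N = math.sqrt(2.0 / (1.0 + math.cos(theta) ** 2))
    m = lambda lam: (math.cos(theta) - sigma * lam) * cmath.exp(-1j * sigma * phi) / 2.0
    return N * m(+1), N * m(-1)

cp, cm = photon_state(+1, 0.0, 0.0)   # spin-up electron, emission along z
```

At $\theta=0$ the call above returns amplitude 1 on $|\sigma_-\rangle$ and 0 on $|\sigma_+\rangle$, exactly as stated in the text.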
For $\theta\neq0$, the spin-inverted states $|+1,\theta,\phi\rangle$ and $|-1,\theta,\phi\rangle$ have interchanged coefficients for $|\sigma_{+}\rangle$ and $|\sigma_{-}\rangle$, up to a relative phase determined by the (global) phase factors $\exp{(-i\sigma\phi)}$. Note that in two-photon states the azimuthal angles thus can provide a [*relative*]{} phase, as we exploit below. Entangled four-photon state --------------------------- The two photons produced at recombination are entangled with the two holes which remain in the dots, due to the antisymmetric hole ground state. By injecting a pair of electrons with spins polarized in the $xy$ plane into the dots,[@twopairs] a four-photon state of the Greenberger-Horne-Zeilinger (GHZ) type [@peres:1998a] can be produced if $T_{1,X}$ and $T_{2,X}$ exceed the exciton lifetime $\tau_X$. For the two polarized electrons, only the electron spin orientation in $z$ direction which satisfies the optical selection rules contributes to the optical transition, respectively. For circularly polarized photons emitted along $z$, the electron Bell states give rise to the photon states $$\begin{aligned} {|\Psi^{\pm}\rangle} & \rightarrow & |\sigma_+ \sigma_- \sigma_- \sigma_+ \rangle \pm |\sigma_- \sigma_+ \sigma_+ \sigma_- \rangle,\label{eq:ghz1}\\ {|\Phi^{\pm}\rangle} & \rightarrow & |\sigma_- \sigma_- \sigma_+ \sigma_+ \rangle \pm |\sigma_+ \sigma_+ \sigma_- \sigma_- \rangle,\label{eq:ghz2}\end{aligned}$$ where the first two entries indicate the first photon pair (L,R) and the third and fourth entry the second photon pair (L,R), respectively. Normalization has been omitted for simplicity. Yet, the second photon pair is generated by neutral excitons and is thus exposed to the same problems as the biexciton decay cascade in asymmetric quantum dots. 
Here, a cavity can be used to maintain the GHZ state since the energy entanglement of the second photon pair can be erased, [@stace:2003a] and $\tau_X$ can be shortened due to the Purcell effect to reduce exciton polarization decoherence. Entangled two-photon state -------------------------- Full [*bipartite*]{} photon entanglement of the first photon pair is obtained, e.g., by directing the second photon pair via secondary optical paths to a linear polarization measurement which is performed [*before*]{} the first photon pair is measured, [@imamoglupc] see Fig. \[fig:entropy\] (a). Even different bases $\{|H\rangle ,\, |V\rangle\}$ and $\{|H'\rangle ,\, |V'\rangle\}$ can be chosen for the two photons of the second pair. Note that the electron-hole exchange interaction in elliptical dots assists this projection into linearly polarized eigenstates (along the major and the minor axis of the dots, respectively) already during the lifetime of the remaining two excitons. While the loss of (linear) polarization coherence is tolerable for these excitons, $T_{1,X}>\tau_X$ is required for entanglement of the first photon pair. This suggests that the scheme presented here can be realized with typical quantum dots, see Ref. [@tsitsishvili:2003a] and references therein. 
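The measurement-based conversion from the GHZ state to a bipartite entangled pair described above can be simulated directly for emission along $z$. A minimal sketch (basis ordering and function names are ours; the linear polarizations are taken as equal-weight superpositions of $\sigma_\pm$, as in the text):

```python
import math

def measure_34_linear(psi, o3, o4):
    """Project photons 3 and 4 of a 4-photon state (16 real amplitudes,
    basis |s1 s2 s3 s4>, s = 0 for sigma_+ and 1 for sigma_-) onto linear
    polarizations o3, o4 in {'H','V'}; returns the normalized pair (1,2)."""
    w = {'H': (1 / math.sqrt(2), 1 / math.sqrt(2)),
         'V': (1 / math.sqrt(2), -1 / math.sqrt(2))}
    out = [0.0] * 4
    for i in range(16):
        s1, s2, s3, s4 = (i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1
        out[2 * s1 + s2] += w[o3][s3] * w[o4][s4] * psi[i]
    n = math.sqrt(sum(a * a for a in out))
    return [a / n for a in out]

def entanglement_entropy(pair):
    """Von Neumann entropy of photon 2 after tracing out photon 1."""
    r00 = pair[0] * pair[0] + pair[2] * pair[2]
    r11 = pair[1] * pair[1] + pair[3] * pair[3]
    r01 = pair[0] * pair[1] + pair[2] * pair[3]
    tr, det = r00 + r11, r00 * r11 - r01 * r01
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    lams = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    return -sum(l * math.log(l, 2) for l in lams if l > 1e-12)

# GHZ state from |Psi^->, Eq. (5): |sig+ sig- sig- sig+> - |sig- sig+ sig+ sig->
ghz = [0.0] * 16
ghz[0b0110] = 1 / math.sqrt(2)
ghz[0b1001] = -1 / math.sqrt(2)
pair = measure_34_linear(ghz, 'H', 'H')
```

Projecting photons 3 and 4 onto $|HH'\rangle$ leaves photons 1 and 2 in a $|\Psi^-\rangle$-type state with unit entanglement entropy, and the $|VH'\rangle$ outcome flips the relative sign while preserving maximal entanglement, consistent with the measurement rules stated in the next paragraph of the text.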
If the second photon pair is measured in the state $|HH'\rangle$ or $|VV'\rangle$, the electron Bell states have given rise to the two-photon states $$\begin{aligned} {|\Psi^{\pm}\rangle} & \rightarrow & |\!+\!\!1,\theta_{1},\phi_{1}\rangle_{L}\,|\!-\!\!1,\theta_{2},\phi_{2}\rangle_{R} \nonumber\\ & & \pm\,|\!-\!\!1,\theta_{1},\phi_{1}\rangle_{L}\,|\!+\!\!1,\theta_{2},\phi_{2}\rangle_{R},\label{eq:2photonstate1}\\ {|\Phi^{\pm}\rangle} & \rightarrow & |\!+\!\!1,\theta_{1},\phi_{1}\rangle_{L}\,|\!+\!\!1,\theta_{2},\phi_{2}\rangle_{R} \nonumber\\ & & \pm\,|\!-\!\!1,\theta_{1},\phi_{1}\rangle_{L}\,|\!-\!\!1,\theta_{2},\phi_{2}\rangle_{R}.\label{eq:2photonstate2}\end{aligned}$$ Here, normalization has been omitted for simplicity. If the second photon pair is measured as $|HV'\rangle$ or $|VH'\rangle$, $\pm$ is replaced by $\mp$ on the right-hand side of Eqs. (\[eq:2photonstate1\]) and (\[eq:2photonstate2\]). Obviously, above two-photon states (\[eq:2photonstate1\]) and (\[eq:2photonstate2\]) are maximally entangled for $\theta_{1}=\theta_{2}=0$. For $\theta_{1}=\theta_{2}\in(0,\pi/2)$, the total relative phase factor between the two-photon states in Eq. (\[eq:2photonstate1\]) is $\exp(i\gamma+2i\Delta\phi)$. Here, $\Delta\phi=\phi_{1}-\phi_{2}$, and the relative phase of the two-electron states is $\gamma=\pi$ for ${|\Psi^{-}\rangle}$ and $\gamma=0$ for ${|\Psi^{+}\rangle}$. For Eq. (\[eq:2photonstate2\]), the relative phase factor is $\exp[i\gamma+2i(\phi_{1}+\phi_{2})]$, with $\gamma=\pi$ for ${|\Phi^{-}\rangle}$ and $\gamma=0$ for ${|\Phi^{+}\rangle}$. By tuning the relative phase factors in Eqs. 
(\[eq:2photonstate1\]) and (\[eq:2photonstate2\]) to $-1$, two circularly polarized photons can be recovered for $\theta_1=\theta_2 \in (0,\pi/2)$ from the elliptically polarized single-photon states due to quantum mechanical interference.[@ghzent] Thus, maximal entanglement is transferred from two electron spins to the polarizations of two photons for certain ideal emission angles. For ${|\Psi^{-}\rangle}$ (${|\Psi^{+}\rangle}$), $\Delta\phi=0$ ($\Delta\phi=\pi/2$) needs to be satisfied $\mathrm{mod}\pi$, whereas the condition for ${|\Phi^{-}\rangle}$ (${|\Phi^{+}\rangle}$) is $\phi_{1}+\phi_{2}=0$ ($\phi_{1}+\phi_{2}=\pi/2$) $\mathrm{mod}\pi$. For $\theta_1=\theta_2 = \pi /2$ these two-photon states vanish completely due to destructive interference. ![(Color online) (a) Schematic setup to obtain bipartite entanglement of photons 1 and 2 by measuring the photons 3 and 4 of the GHZ state in bases of linear polarizations $H,V$ and $H',V'$, respectively (see the text). In (b) and (c), we show the von Neumann entropy (b) $E = E_{\mathrm{min}}$ and (c) $E = E_{\mathrm{max}}$ as a function of the polar angles $\theta_1$ and $\theta_2$ for photon emission. $E$ oscillates between (b) and (c) as a function of $\phi_1$ and $\phi_2$, as explained in the text. The photon-polarization entanglement is maximal for $\theta_1 = \theta_2 = 0$, whereas for $\theta_i = \pi /2$ entanglement is absent. In (c), $ E_{\mathrm{max}}=1$ for the continuous set of directions $\theta_{1}=\theta_{2}\in[0,\pi/2)$. []{data-label="fig:entropy"}](bell_and_entropy.eps){width="8cm"} Photon entanglement as a function of emission directions \[sec:Entanglement\] ============================================================================= For arbitrary emission directions of the two photons, the degree of polarization entanglement can be quantified by the von Neumann entropy $E=-\text{tr}_{2}(\tilde{\rho}\log_{2}\tilde{\rho})$. 
Here, $\tilde{\rho}=\text{tr}_1\rho$ is the reduced density matrix of the two-photon state $\rho$ with the trace $\text{tr}_1$ taken over photon 1. For a maximally entangled two-photon state $E=1$, while $E=0$ represents a pure state $\tilde{\rho}$ (which implies the absence of bipartite entanglement). If the two electrons recombine after times much shorter than the spin lifetimes $T_1,\,T'_1,\,T_2,\,T'_2$, $E$ oscillates for Eq. (\[eq:2photonstate1\]) as a function of $\Delta\phi$ of the two emitted photons between a minimal value, $$\begin{aligned} E_{\mathrm{min}} &=& \log_{2}(1+x_{1}x_{2}) - \frac{x_{1}x_{2}\log_{2}(x_{1}x_{2})}{1+x_{1}x_{2}},\end{aligned}$$ and a maximal value, $$E_{\mathrm{max}}=\log_{2}(x_{1}+x_{2})-\frac{x_{1}\log_{2}(x_{1})}{x_{1}+x_{2}}-\frac{x_{2}\log_{2}(x_{2})}{x_{1}+x_{2}},\label{eq:emax}$$ where $x_{i}=\mbox{cos}^2\theta_{i}$, which is (only) obtained for the ideal angles $\phi_1$ and $\phi_2$ mentioned above; see Fig. \[fig:entropy\] (b) and (c). For Eq. (\[eq:2photonstate2\]), $E$ oscillates between $E_{\mathrm{min}}$ and $E_{\mathrm{max}}$ as a function of $\phi_1 + \phi_2$. As expected, $E_{\mathrm{max}}=1$ for all $\theta_{1}=\theta_{2}\in[0,\pi/2)$. The discontinuity in $E_{\mathrm{max}}$ for $\theta_1=\theta_2=\pi /2$ is due to the vanishing two-photon state. Conclusions \[sec:Concl\] ========================= We have studied the transfer of entanglement from electron spins to photon polarizations. We have discussed the generation of entangled four-photon and two-photon states via the injection of spin-entangled electrons into quantum dots charged with two excess holes. We have proposed a scheme to achieve complete entanglement transfer from two electron spins to two photons. We have shown that this scheme can even be realized with quantum dots exhibiting an exciton exchange splitting. We have shown the dependence of the photon entanglement on the emission angles and identified the conditions for maximal entanglement. 
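The closed-form expressions for $E_{\mathrm{min}}$ and $E_{\mathrm{max}}$ above are easy to check numerically. The following sketch (plain Python; the function names are ours, not from the text) evaluates them with $x_i=\cos^2\theta_i$ and confirms the limiting behaviour quoted above: both equal $1$ at $\theta_1=\theta_2=0$, and $E_{\mathrm{max}}=1$ along the whole diagonal $\theta_1=\theta_2\in[0,\pi/2)$, while $E_{\mathrm{min}}$ drops below $1$ away from the ideal azimuthal angles.

```python
import math

def xlog2(t):
    # t * log2(t), extended continuously by 0 at t = 0
    return 0.0 if t == 0.0 else t * math.log2(t)

def E_min(theta1, theta2):
    # E_min = log2(1 + x1*x2) - x1*x2*log2(x1*x2) / (1 + x1*x2)
    x1, x2 = math.cos(theta1) ** 2, math.cos(theta2) ** 2
    p = x1 * x2
    return math.log2(1.0 + p) - xlog2(p) / (1.0 + p)

def E_max(theta1, theta2):
    # E_max = log2(x1 + x2) - [x1*log2(x1) + x2*log2(x2)] / (x1 + x2)
    x1, x2 = math.cos(theta1) ** 2, math.cos(theta2) ** 2
    s = x1 + x2
    return math.log2(s) - (xlog2(x1) + xlog2(x2)) / s

assert abs(E_min(0.0, 0.0) - 1.0) < 1e-12   # maximal entanglement at theta = 0
assert abs(E_max(0.0, 0.0) - 1.0) < 1e-12
for t in (0.3, 0.7, 1.2):                   # along the diagonal theta1 = theta2
    assert abs(E_max(t, t) - 1.0) < 1e-12
    assert E_min(t, t) < 1.0                # away from the ideal phases, E drops
```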
This offers the possibility to efficiently test Bell’s inequalities for electron spins. In addition, our results show that a continuous set of directions exists along which entanglement is maximal. Finally, similar schemes to produce entangled photons can be realized using two tunnel-coupled dots [@gywat] instead of two isolated dots. In such a setup, it is essential that tunnel coupling is provided for the conduction-band electrons, whereas the valence-band holes are not tunnel coupled and thus localized in the individual dots. After a positively charged exciton is created in each of the two dots, the spin entanglement is provided by the singlet ground state of the delocalized electrons and can be transferred to the photons, similarly as described in this work. We thank A. Imamoglu, G. Burkard, F. Meier, P. Recher, D. S. Saraga, V. N. Golovach, and D. V. Bulaev for discussions. We acknowledge support from DARPA, ARO, ONR, NCCR Nanoscience, and the Swiss NSF. [99]{} R. Fiederling, M. Keim, G. Reuscher, W. Ossau, G. Schmidt, A. Waag, and L. W. Molenkamp, Nature [**402**]{}, 787 (1999). Y. Ohno, D. K. Young, B. Beschoten, F. Matsukura, H. Ohno, and D. D. Awschalom, Nature [**402**]{}, 790 (1999). Y. Chye, M. E. White, E. Johnston-Halperin, B. D. Gerardot, D. D. Awschalom, and P. M. Petroff, Phys. Rev. B [**66**]{}, 201301(R) (2002). C. E. Pryor and M. E. Flatté, Phys. Rev. Lett. **91**, 257901 (2003). K. Gündogdu, K. C. Hall, T. F. Boggess, D. G. Deppe, and O. B. Shchekin, Appl. Phys. Lett. [**84**]{}, 2793 (2004). J. Seufert, G. Bacher, H. Schömig, A. Forchel, L. Hansen, G. Schmidt, and L. W. Molenkamp, Phys. Rev. B **69**, 035311 (2004). , edited by D. D. Awschalom, D. Loss, and N. Samarth (Springer-Verlag, Berlin, 2002). M. Kroutvar, Y. Ducommun, D. Heiss, M. Bichler, D. Schuh, G. Abstreiter, and J. J. Finley, Nature [**432**]{}, 81 (2004). G. Burkard, D. Loss, and E. V. Sukhorukov, Phys. Rev. B **61**, R16 303 (2000). J. C. Egues, G. Burkard, and D. Loss, Phys.
Rev. Lett. **89**, 176401 (2002). J. Bell, Physics [**1**]{}, 195 (1965). P. Recher, E. V. Sukhorukov, and D. Loss, Phys. Rev. B [**63**]{}, 165314 (2001). G. B. Lesovik, T. Martin, and G. Blatter, Eur. Phys. J. B, [**24**]{}, 287 (2001). P. Recher and D. Loss, Phys. Rev. B [**65**]{}, 165327 (2002). C. Bena, S. Vishveshwara, L. Balents, and M. P. A. Fisher, Phys. Rev. Lett. **89**, 037901 (2002). V. Bouchiat, N. Chtchelkatchev, D. Feinberg, G. B. Lesovik, T. Martin, and J. Torrés, Nanotechnology [**14**]{}, 77 (2003). D. S. Saraga and D. Loss, Phys. Rev. Lett. [**90**]{}, 166803 (2003). P. Recher and D. Loss, Phys. Rev. Lett. [**91**]{}, 267003 (2003). D. S. Saraga, B. L. Altshuler, D. Loss, and R. M. Westervelt, Phys. Rev. Lett. **92**, 246803 (2004). O. Benson, C. Santori, M. Pelton, and Y. Yamamoto, Phys. Rev. Lett. **84**, 2513 (2000). E. Moreau, I. Robert, L. Manin, V. Thierry-Mieg, J. M. Gérard, and I. Abram, Phys. Rev. Lett. **87**, 183601 (2001). A. Kiraz, S. Fälth, C. Becher, B. Gayral, W. V. Schoenfeld, P. M. Petroff, L. Zhang, E. Hu, and A. Imamoglu, Phys. Rev. B **65**, 161303(R) (2002). C. Santori, D. Fattal, M. Pelton, G. S. Solomon, and Y. Yamamoto Phys. Rev. B **66**, 045308 (2002). R. M. Stevenson, R. M. Thompson, A. J. Shields, I. Farrer, B. E. Kardynal, D. A. Ritchie, and M. Pepper, Phys. Rev. B **66**, 081302(R) (2002). V. Zwiller, P. Jonsson, H. Blom, S. Jeppesen, M.-E. Pistol, L. Samuelson, A. A. Katznelson, E. Yu. Kotelnikov, V. Evtikhiev, and G. Björk, Phys. Rev. A **66**, 053814 (2002). S. M. Ulrich, S. Strauf, P. Michler, G. Bacher, and A. Forchel, Appl. Phys. Lett. **83**, 1848 (2003). T. Takagahara, Phys. Rev. B **62**, 16 840 (2000). T. M. Stace, G. J. Milburn, and C. H. W. Barnes, Phys. Rev. B **67**, 085317 (2003). E. Tsitsishvili, R. v. Baltz, and H. Kalt, Phys. Rev. B **67**, 205330 (2003). G. Burkard and D. Loss, Phys. Rev. Lett. **91**, 087903 (2003). J. M. Kikkawa and D. D. Awschalom, Phys. Rev. Lett. [**80**]{}, 4313 (1998). O. 
Gywat, G. Burkard, and D. Loss, Phys. Rev. B **65**, 205329 (2002). To switch between the production of entangled and polarized electron pairs, a double quantum dot can be used with tunable exchange splitting $J$ and to which an in-plane magnetic field $B_\perp$ is applied [@burkard:2000]. For $J$ smaller (larger) than the Zeeman energy, the two-electron ground state is a triplet with spins along $B_\perp$ (a singlet). Alternatively, for subsequent injection of [*two*]{} entangled electron pairs, $\pm \rightarrow +$ on the right-hand side of Eqs. (\[eq:ghz1\])–(\[eq:2photonstate2\]). A. Peres, [*Quantum Theory: Concepts and Methods*]{}, (Kluwer, Dordrecht, 1998). An alternative suggestion by A. Imamoglu (private communication) is to perform a Hadamard operation on the hh states which are left in the dots after emission of the first photon pair \[e.g., via an optical Raman transition, see A. Imamoglu, D. D. Awschalom, G. Burkard, D. P. DiVincenzo, D. Loss, M. Sherwin, and A. Small, Phys. Rev. Lett. **83**, 4204 (1999)\], followed by a hole-spin measurement along $z$, e.g., via state-selective absorption of circularly polarized photons. Such ideal angles can analogously be found for the four-photon GHZ states. O. Gywat, Ph.D. thesis, University of Basel, February 2005.
--- author: - 'P. Exner$^{a,b}$ and K. Yoshitomi$^{c}$' title: 'Eigenvalue asymptotics for the Schr[ö]{}dinger operator with a $\delta$-interaction on a punctured surface' --- > [*a) Department of Theoretical Physics, Nuclear Physics Institute,\ > Academy of Sciences, 25068 Řež, Czech Republic\ > b) Doppler Institute, Czech Technical University, Břehov[á]{} 7,\ > 11519 Prague, Czech Republic\ > c) Department of Mathematics, Tokyo Metropolitan University,\ > Minami-Ohsawa 1-1, Hachioji-shi, Tokyo 192-0397, Japan\ > exner@ujf.cas.cz, yositomi@comp.metro-u.ac.jp*]{} > > MSC numbers: 35J10, 81V99\ > KEYWORDS: Schrödinger operators, singular interaction, discrete spectrum, manifolds, perturbation > > [ Given $n\geq 2$, we put $r=\min\{\,i\in\mathbb{N};\: i>n/2\,\}$. Let $\Sigma$ be a compact, $C^{r}$-smooth surface in $\mathbb{R}^{n}$ which contains the origin. Let further $\{S_{\epsilon}\}_{0\le\epsilon<\eta}$ be a family of measurable subsets of $\Sigma$ such that $\sup_{x\in > S_{\epsilon}}|x|= {\mathcal O}(\epsilon)$ as $\epsilon\to 0$. We derive an asymptotic expansion for the discrete spectrum of the Schr[ö]{}dinger operator $-\Delta -\beta\delta(\cdot-\Sigma > \setminus S_{\epsilon})$ in $L^{2}(\mathbb{R}^{n})$, where $\beta$ is a positive constant, as $\epsilon\to 0$. An analogous result is given also for geometrically induced bound states due to a $\delta$ interaction supported by an infinite planar curve. ]{} Introduction ============ Schr[ö]{}dinger operators with $\delta$-interactions supported by subsets of a lower dimension in the configuration space have been studied by numerous authors – see, e.g., [@AGHH]–[@BEKS] and references therein. Recently such systems attracted a new attention as models of “leaky” quantum wires and similar structures; new results have been derived about a curvature-induced discrete spectrum [@EI; @EK1] and the strong-coupling asymptotics [@Ex; @EK2; @EY1; @EY2; @EY3]. 
The purpose of this paper is to discuss another question, namely how the discrete spectra of such operators behave with respect to a perturbation of the interaction support. Since the argument we are going to use can be formulated in any dimension, we consider here generally $n$-dimensional Schr[ö]{}dinger operators, $n\ge 2$, with a $\delta$-interaction supported by a punctured surface. On the other hand, we restrict our attention to the situation when the surface codimension is one and the Schr[ö]{}dinger operator in question is defined naturally by means of the appropriate quadratic form. Formally speaking, our result says that up to an error term the eigenvalue shift resulting from removing an $\epsilon$-neighbourhood of a surface point is the same as that of adding a repulsive $\delta$ interaction at this point with the coupling constant proportional to the puncture “area”. We will formulate this claim precisely in Theorem \[main\] below for any sufficiently smooth compact surface in $\mathbb{R}^n$ and prove it in Section 3. Furthermore, the compactness requirement is not essential in the argument; in Section 4 we will derive an analogous asymptotic formula for an infinite planar curve which is not a straight line but it is asymptotically straight in a suitable sense. The main result =============== Put $r:=\min\{\,i\in\mathbb{N};\: i>n/2\,\}$. Let $\Sigma$ be a compact, $C^{r}$-smooth surface in $\mathbb{R}^{n}$ which contains the origin, $0\in\Sigma$. Let further $\{S_{\epsilon}\}_{0\leq \epsilon<\eta}$ be a family of subsets of $\Sigma$ which obeys the following hypotheses: [(H.1)]{} The set $S_{\epsilon}$ is measurable with respect to the $(n\!-\!1)$-dimensional Lebesgue measure on $\Sigma$ for any $\epsilon\in [0,\eta)$. [(H.2)]{} $\:{\displaystyle \sup_{x\in S_{\epsilon}}}|x|= {\mathcal O}(\epsilon)$ as $\,\epsilon\to 0$. 
Next we fix $\beta>0$ and define for $0\leq\epsilon<\eta$ the quadratic form $q_{\epsilon}$ by $$q_{\epsilon}[u,v]:=(\nabla u,\nabla v) _{L^{2}(\mathbb{R}^{n})}-\beta \int_{\Sigma\setminus S_{\epsilon}} u(x)\overline{v(x)}\,dS\,, \quad u,v\in H^{1}(\mathbb{R}^{n})\,;$$ it is easily seen to be closed and bounded from below. Let $H_{\epsilon}$ be the self-adjoint operator associated with $q_{\epsilon}$. Since $\Sigma\setminus S_{\epsilon}$ is bounded, we have $$\sigma_\mathrm{ess}(H_{\epsilon})=[0,\infty) \quad \mathrm{ and}\quad \sharp \sigma_\mathrm{disc}(H_{0})<\infty\,.$$ By the min-max principle, there exists a unique $\beta^{*}\geq 0$ such that $\sigma_\mathrm{disc}(H_{0})$ is non-empty if $\beta>\beta^{*}$ while $\sigma_\mathrm{disc}(H_{0})=\emptyset$ for $\beta\leq\beta^{*}$. The critical coupling is dimension-dependent: a straightforward modification of the usual Birman-Schwinger argument using [@BEKS Lemma 2.3] shows that $\beta^{*}=0$ when $n=2$, while for $n\ge 3$ we have $\beta^{*}>0$ by [@BEKS Thm 4.2(iii)]. Since our aim is to derive asymptotic properties of the discrete spectrum, we will assume throughout that [(H.3)]{} $\:\beta>\beta^{*}.$ Let $N$ be the number of negative eigenvalues of $H_{0}$. Since $$0\leq q_{\epsilon}[u,u]-q_{0}[u,u]\to 0 \quad \mathrm{as} \quad \epsilon\to 0 \quad\mathrm{for}\quad u\in H^{1}(\mathbb{R}^{n})\,,$$ there exists $\eta^{\prime}\in (0,\eta)$ such that for $\epsilon\in (0,\eta^{\prime})$ the operator $H_{\epsilon}$ has exactly $N$ negative eigenvalues denoted by $\lambda_{1}(\epsilon)< \lambda_{2}(\epsilon) \leq\cdots\leq\lambda_{N}(\epsilon)$, and moreover $$\lambda_{j}(\epsilon)\to\lambda_{j}(0)\quad \mathrm{as} \quad\epsilon\to 0\quad\mathrm{for}\quad 1\leq j\leq N$$ (see [@Ka Chap. VIII, Thm 3.15]). Let $\{\varphi_{j} (x)\}^{N}_{j=1}$ be an orthonormal system of eigenfunctions of $H_{0}$ such that $H_{0}\varphi_{j}=\lambda_{j}(0)\varphi_{j}$ for $1\leq j\leq N$. 
Pick a sufficiently small $a>0$ so that the set $\{\,x\in\mathbb{R}^{n}:\: |x|<a\,\}\setminus\Sigma$ consists of two connected components, which we denote by $B_{\pm}$. We have $\varphi_{j}\in H^{r}(B_{\pm})$ by the elliptic regularity theorem (see [@A Sec. 10]), because the form domain $H^{1}(\mathbb{R}^{n})$ of $q_{0}$ is locally invariant under tangential translations along the surface $\Sigma$. Since $r>n/2$ by assumption, the Sobolev trace theorem implies that the function $\varphi_{j}$ is continuous on a $\Sigma$-neighbourhood of the origin. We also note that one can suppose without loss of generality that $\varphi_{1}(x)>0\,$ in $\mathbb{R}^{n}$. For a given $\mu\in\sigma_\mathrm{disc}(H_{0})$ we define $$\begin{aligned} m(\mu) &\!:=\!& \min\{1\leq j\leq N;\: \mu=\lambda_{j}(0)\}\,, \\ n(\mu) &\!:=\!& \max\{1\leq j\leq N;\: \mu=\lambda_{j}(0)\}\,, \\ C(\mu) &\!:=\!& \left(\,\varphi_{i}(0) \overline{\varphi_{j}(0)}\,\right)_{m(\mu)\leq i,j\leq n(\mu)}\,. \end{aligned}$$ Let $s_{m(\mu)}\leq s_{m(\mu)+1}\leq\cdots\leq s_{n(\mu)}$ be the eigenvalues of the matrix $C(\mu)$. In particular, if $\mu=\lambda_j(0)$ is a simple eigenvalue of $H_0$, we have $m(\mu)=n(\mu)=j$ and $s_j= |\varphi_{j}(0)|^2$. Our main result can be then stated as follows. \[main\] Adopt the assumptions (H.1)–(H.3). Let $\mu\in \sigma_\mathrm{disc}(H_{0})$, then the asymptotic formula $$\lambda_{j}(\epsilon)=\mu +\beta\, \mathrm{meas}_{\Sigma}(S_{\epsilon}) s_{j} +o(\epsilon^{n-1}) \quad\mathit{as}\quad \epsilon\to 0$$ holds for $m(\mu)\leq j\leq n(\mu)$, where $\mathrm{meas}_{\Sigma}(\cdot)$ stands for the $(n\!-\!1)$-dimensional Lebesgue measure on $\Sigma$. It should be stressed that our problem involves a singular perturbation and thus it cannot be reduced to the general asymptotic perturbation theory of quadratic forms described in [@Ka Sec. VIII.4]. 
Indeed, we have $$q_{\epsilon}[u,u]=q_{0}[u,u] +\beta\, \mathrm{meas}_{\Sigma}(S_{\epsilon}) |u(0)|^{2}+{\mathcal O} (\epsilon^{n}) \quad\mathrm{as}\quad\epsilon\to 0$$ for $u\in C^{\infty}_{0}(\mathbb{R}^{n})$ and the quadratic form $C^{\infty}_{0}(\mathbb{R}^{n}) \owns u\mapsto |u(0)|^{2}\in \mathbb{R}$ does not extend to a bounded form on $H^{1}(\mathbb{R}^{n})$, because the set $$\left\{\,u\in C^{\infty}_{0}(\mathbb{R}^{n});\: u=0\quad \mathrm{in\; a\; neighbourhood\, of\; the\; origin}\, \right\}$$ is dense in $H^{1}(\mathbb{R}^{n})$. We eliminate this difficulty by using the compactness of the map $H^{1}(\mathbb{R}^{n})\owns f\mapsto f|_{\Sigma}\in L^{2}(\Sigma)$, which will enable us to prove Theorem \[main\] along the lines of the asymptotic-perturbation theorem proof. Let us remark that our functional-analytic argument has a distinctive advantage over another technique employed in such situations, usually called the matching of asymptotic expansions – see [@Il] for a thorough review – since the latter typically requires a sort of self-similarity for the perturbation domains. Our technique needs no assumption of this type. Proof of Theorem \[main\] ========================= We denote $R(\zeta,\epsilon)= (H_{\epsilon}-\zeta)^{-1}$ for $\zeta\in\rho(H_{\epsilon})$ and $R(\zeta)=(H_{0}-\zeta)^{-1}$ for $\zeta\in\rho(H_{0})$. Put $\kappa:= {1\over 2}\,\mathrm{dist} (\{\mu\}, \sigma(H_{0})\setminus\{\mu\})$. Since $\lambda_{j}(\cdot)$ is continuous at the origin for $1\leq j\leq N$, there is an $\eta_{0}\in (0,\eta^{\prime})$ such that $$\begin{aligned} \lefteqn{ \sigma(H_{\epsilon})\cap[\mu-\kappa,\mu+\kappa] = \sigma(H_{\epsilon})\cap(\mu-\kappa/2,\mu+\kappa/2)}\\ && \phantom{AA} = \{\lambda_{m(\mu)}(\epsilon), \lambda_{m(\mu)+1}(\epsilon),\ldots, \lambda_{n(\mu)}(\epsilon)\}\end{aligned}$$ holds if $0<\epsilon \leq\eta_{0}$. 
Choosing the circle $C:=\{\,z\in\mathbb{C};\: |z-\mu|={3\over 4} \kappa\,\}$ we put $$w_{j}(\zeta,\epsilon):= R(\zeta,\epsilon)\varphi_{j}-R(\zeta)\varphi_{j} \quad \mathrm{for}\quad 0<\epsilon \leq\eta_{0}\,,\; \zeta\in C\,.$$ Our first aim is to check that $$\label{wdecay} \| w_{j}(\zeta,\epsilon)\|_ {H^{1}(\mathbb{R}^{n})}=\mathcal{O}(\epsilon^{(n-1)/2})\quad \mathrm{as}\quad\epsilon\to 0$$ holds uniformly with respect to $\zeta\in C$ for $m(\mu)\leq j\leq n(\mu)$. Notice that there exists a $K_{0}>0$ such that $$\Vert u\Vert^{2}_{H^{1}(\mathbb{R}^{n})}\leq 2\left|(q_{\epsilon}-\zeta)[u,u]\right|+K_{0}\Vert u\Vert^{2}_{L^{2}(\mathbb{R}^{n})}$$ for $\zeta\in C$, $u\in H^{1}(\mathbb{R}^{n})$, and $0<\epsilon\leq\eta_{0}$. This implies that there exists a $K_{1}>0$ such that $$\Vert R(\zeta,\epsilon)u\Vert_{H^{1}(\mathbb{R}^{n})}\leq K_{1} \Vert u\Vert_{L^{2}(\mathbb{R}^{n})}$$ for $\zeta\in C$, $u\in H^{1}(\mathbb{R}^{n})$, and $0<\epsilon\leq\eta_{0}$. Moreover, by the Sobolev trace theorem, there exists a constant $K_{2}>0$ such that $$\| u\|_{L^{2}(\Sigma)}\leq K_{2}\| u\|_{H^{1}(\mathbb{R}^{n})} \quad\mathrm{for}\quad u\in H^{1}(\mathbb{R}^{n})\,.$$ Combining these three estimates we get $$\begin{aligned} {}&{}&\| w_{j}(\zeta,\epsilon)\|^{2}_{H^{1}( \mathbb{R}^{n})}\nonumber\\ &\!\leq\!&2\left|(q_{\epsilon}-\zeta) [w_{j}(\zeta,\epsilon), w_{j}(\zeta,\epsilon)]\right|+K_{0}(w_{j}(\zeta,\epsilon), w_{j}(\zeta,\epsilon))_{L^{2}( \mathbb{R}^{n})}\nonumber\\ &\!=\!&2\left|-\beta\int_{S_{\epsilon}} R(\zeta)\varphi_{j} \overline{w_{j}(\zeta,\epsilon)}\,dS\right| +K_{0}(q_{0}-q_{\epsilon})[R(\zeta)\varphi_{j},R(\overline{\zeta},\epsilon) w_{j}(\zeta,\epsilon)]\nonumber\\ &\!\leq\!& \beta\Vert R(\zeta)\varphi_{j}\Vert_{L^{2}(S_{\epsilon})}( 2\Vert w_{j}(\zeta,\epsilon)\Vert _{L^{2}(S_{\epsilon})}+ K_{0}\Vert R(\overline{\zeta},\epsilon)w_{j}(\zeta,\epsilon)\Vert _{L^{2}(S_{\epsilon})})\nonumber\\ &\!=\!& 
\frac{4\beta}{3\kappa}\Vert\varphi_{j}\Vert_{L^{2}(S_{\epsilon})}( 2\Vert w_{j}(\zeta,\epsilon)\Vert _{L^{2}(S_{\epsilon})}+ K_{0}\Vert R(\overline{\zeta},\epsilon)w_{j}(\zeta,\epsilon)\Vert _{L^{2}(S_{\epsilon})})\nonumber\\ &\!\leq\!& \frac{4\beta}{3\kappa} K_{2}(2+K_{0}K_{1}) \Vert\varphi_{j}\Vert_{L^{2}(S_{\epsilon})} \| w_{j}(\zeta,\epsilon)\| _{ H^{1}(\mathbb{R}^{n})}\,.\label{west}\end{aligned}$$ Since $\| \varphi_{j}\|_{ L^{2}(S_{\epsilon})}=\mathcal{O} (\epsilon^{(n-1)/2})$ as $\epsilon\to 0$ by the assumptions (H.1), (H.2) and the continuity of $\varphi_{j}|_{\Sigma}$ at the origin, we arrive at the relation (\[wdecay\]). In the next step we are going to demonstrate that the convergence is in fact slightly faster, namely $$\label{wdecay2} \sup_{\zeta\in C} \| w_{j}(\zeta,\epsilon)\|_{ H^{1} (\mathbb{R}^{n})} =o(\epsilon^{(n-1)/2})\quad \mathrm{as}\quad\epsilon\to 0\,.$$ We will proceed by contradiction. Suppose that (\[wdecay2\]) does not hold; then there would exist a constant $\delta>0$, a sequence $\{\epsilon_{i}\}^{\infty}_{i=1}\subset (0,\eta_{0})$ which tends to zero, and $\{\zeta_{i}\}^{\infty}_{i=1} \subset C$ such that $$\label{contr} \epsilon_{i}^{-(n-1)/2}\| w_{j}(\zeta_{i},\epsilon_{i}) \|_{H^{1}(\mathbb{R}^{n})}\geq \delta\quad{\rm for\,\,all} \quad i\in\mathbb{N}\,.$$ Notice that the map $H^{1}(\mathbb{R}^{n})\owns f\mapsto f|_{\Sigma}\in L^{2}(\Sigma)$ is compact due to the boundedness of the map $H^{1}(\mathbb{R}^{n})\owns g\mapsto g|_{\Sigma}\in H^{1/2}(\Sigma)$ and the compactness of the imbedding $H^{1/2}(\Sigma)\owns h\mapsto h\in L^{2}(\Sigma)$ – cf. \[15, Chap. 1, Thms 8.3 and 16.1\]. 
Since the two sequences $$\left\{\epsilon_{i}^{-(n-1)/2}w_{j}(\zeta_{i},\epsilon_{i}) \right\}^{\infty}_{i=1}\quad{\rm and}\quad \left\{\epsilon_{i}^{-(n-1)/2}R(\overline{\zeta_{i}},\epsilon_{i}) w_{j}(\zeta_{i},\epsilon_{i}) \right\}^{\infty}_{i=1}$$ are bounded in $H^{1}(\mathbb{R}^{n})$, there is a subsequence $\{i(k)\}^{\infty}_{k=1}$ of $\{i\}^{\infty}_{i=1}$ such that $$\left\{ \epsilon_{i(k)}^{-(n-1)/2} w_{j}(\zeta_{i(k)}, \epsilon_{i(k)})\right\}^{\infty}_{k=1} \quad{\rm and}\quad \left\{ \epsilon_{i(k)}^{-(n-1)/2} R(\overline{\zeta_{i(k)}},\epsilon_{i(k)}) w_{j}(\zeta_{i(k)}, \epsilon_{i(k)})\right\}^{\infty}_{k=1}$$ converge in $L^{2} (\Sigma)$. Let us denote $$g:=\lim_{k\to\infty} \epsilon_{i(k)}^{-(n-1)/2}w_{j}(\zeta_{i(k)}, \epsilon_{i(k)})\in L^{2} (\Sigma)\,;$$ then we have $$\begin{aligned} \lefteqn{ \left\| \epsilon_{i(k)}^{-(n-1)/2}w_{j}(\zeta_{i(k)}, \epsilon_{i(k)})\right\|_{ L^{2}(S_{\epsilon_{i(k)}})} } \\ && \leq \left\| \epsilon_{i(k)}^{-(n-1)/2}w_{j}(\zeta_{i(k)}, \epsilon_{i(k)})-g \right\|_{L^{2}(\Sigma)} + \left( \int_{S_{\epsilon_{i(k)}}} |g(x)|^{2}\,dS \right)^{1/2}\to 0\end{aligned}$$ as $k\to\infty$. Similarly we obtain $$\left\| \epsilon_{i(k)}^{-(n-1)/2} R(\overline{\zeta_{i(k)}},\epsilon_{i(k)}) w_{j}(\zeta_{i(k)}, \epsilon_{i(k)})\right\|_{ L^{2}(S_{\epsilon_{i(k)}})} \to 0\quad{\rm as}\quad k\to\infty.$$ Combining these results with the inequalities (\[west\]) we infer that $$\epsilon_{i(k)}^{-(n-1)/2}\| w_{j}(\zeta_{i(k)}, \epsilon_{i(k)})\| _{H^{1}(\mathbb{R}^{n})}\to 0\quad\mathrm{as} \quad k\to\infty\,,$$ which violates the relation (\[contr\]); in this way we have proved (\[wdecay2\]). Now we denote by $P_{\epsilon}$ the spectral projection of $H_{\epsilon}$ associated with the interval $(\mu-3\kappa/4, \mu+3\kappa/4)$.
It follows from (\[wdecay2\]) that $$\begin{aligned} P_{\epsilon}\varphi_{j}-\varphi_{j} &\!=\!& \frac{\sqrt{-1}}{2\pi} \oint_{|\zeta-\mu|=3\kappa/4}w_{j}(\zeta,\epsilon)\,d\zeta \\ &\!=\!& o(\epsilon^{(n-1)/2}) \quad\mathrm{in} \quad H^{1}(\mathbb{R}^{n}) \quad\mathrm{as}\quad\epsilon\to 0\end{aligned}$$ holds for $m(\mu)\leq j\leq n(\mu)$. Consequently, we have $$\begin{aligned} \lefteqn{(H_{\epsilon}P_{\epsilon}\varphi_{i},P_{\epsilon} \varphi_{j})_{L^{2}(\mathbb{R}^{n})}-\mu\delta_{i,j} -\beta\varphi_{i}(0) \overline{\varphi_{j}(0)}\, \mathrm{meas}_{\Sigma} (S_{\epsilon})} \nonumber \\ && = q_{\epsilon}[P_{\epsilon}\varphi_{i}, P_{\epsilon}\varphi_{j}]-q_{0}[\varphi_{i},\varphi_{j}] -\beta\varphi_{i}(0)\overline{\varphi_{j}(0)}\, \mathrm{meas}_{\Sigma}(S_{\epsilon}) \nonumber\\ && = q_{\epsilon}[\varphi_{i},\varphi_{j}] -q_{0}[\varphi_{i},\varphi_{j}]-q_{\epsilon}[ (I\!-\!P_{\epsilon})\varphi_{i},(I\!-\!P_{\epsilon})\varphi_{j}] \nonumber \\ && \phantom{A} -\beta\varphi_{i}(0)\overline{\varphi_{j}(0)} \,\mathrm{meas}_{\Sigma}(S_{\epsilon})\nonumber\\ && = -q_{\epsilon}[(I\!-\!P_{\epsilon})\varphi_{i}, (I\!-\!P_{\epsilon})\varphi_{j}] +\beta\int_{S_{\epsilon}}\varphi_{i}(x)\overline{ \varphi_{j}(x)}\,dS \nonumber \\ && \phantom{A} -\beta\varphi_{i}(0)\overline{\varphi_{j}(0)} \,\mathrm{meas}_{\Sigma}(S_{\epsilon})\nonumber\\ && = o(\epsilon^{n-1}) \label{qconv}\end{aligned}$$ and $$\label{Pconv} (P_{\epsilon}\varphi_{i},P_{\epsilon}\varphi_{j}) _{L^{2}(\mathbb{R}^{n})} =\delta_{i,j} +o(\epsilon^{n-1})$$ as $\epsilon\to 0$ for $m(\mu)\leq i,j\leq n(\mu)$, where we have used, in the last step of (\[qconv\]), the assumptions (H.1), (H.2), the continuity of the restrictions $\varphi_{i}|_{\Sigma}$ and $\varphi_{j}|_{\Sigma}$ at the origin, and the uniform boundedness of $q_{\epsilon}$ on $H^{1}(\mathbb{R}^{n})$ with respect to $0<\epsilon\leq\eta_{0}$. 
Let us now introduce the matrices $$\begin{aligned} L(\epsilon) &\!:=\!& ((H_{\epsilon}P_{\epsilon}\varphi_{i}, P_{\epsilon}\varphi_{j})_{L^{2}(\mathbb{R}^{n})}) _{m(\mu)\leq i,j\leq n(\mu)}\,, \\ M(\epsilon) &\!:=\!& ((P_{\epsilon}\varphi_{i}, P_{\epsilon}\varphi_{j}) _{L^{2}(\mathbb{R}^{n})}) _{m(\mu)\leq i,j\leq n(\mu)}\,.\end{aligned}$$ Since $\{P_{\epsilon}\varphi_{j}\}_{m(\mu)\leq j\leq n(\mu)}$ is a basis of the spectral subspace ${\rm Ran}\, P_{\epsilon}$, we see that $\lambda_{m(\mu)}(\epsilon), \lambda_{m(\mu)+1}(\epsilon),\ldots, \lambda_{n(\mu)}(\epsilon)$ are the eigenvalues of the matrix $L(\epsilon)M(\epsilon)^{-1}$, which by (\[qconv\]), (\[Pconv\]) is equal to $$L(\epsilon)M(\epsilon)^{-1}=\mu I+\beta\, \mathrm{meas}_{\Sigma}(S_{\epsilon})\, C(\mu)+o(\epsilon^{n-1}),$$ where $I$ stands for the identity matrix. This concludes the argument. Perturbation of an infinite curve ================================= As we have mentioned, the compactness of $\Sigma$ did not play an essential role in the above argument, and we can use the same technique for punctured noncompact manifolds of unit codimension as well, as long as the corresponding Hamiltonian has a discrete spectrum. At present this is known to be true in the case $n=2$ without restriction to the coupling constant $\beta$, see [@EI], and for $n=3$ and $\beta$ large enough [@EK2]. We shall thus consider “puncture" perturbations of infinite asymptotically straight curves. Let $\Lambda:\, \mathbb{R} \to\mathbb{R}^{2}$ be a $C^2$-smooth curve parameterized by its arc length. Fix $\beta>0$ and assume that $\Lambda(0)=0$. Given $\epsilon\geq 0$, we define $$t_{\epsilon}[u,v]:= (\nabla u,\nabla v)_{L^{2}(\mathbb{R}^{2})}-\beta \int_{\Lambda(\mathbb{R}\setminus (-\epsilon,\epsilon))} u(x)\overline{v(x)}\,dS\,, \quad u,v\in H^{1}(\mathbb{R}^{2})\,.$$ Let $T_{\epsilon}$ be the self-adjoint operator associated with the quadratic form $t_{\epsilon}$. 
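For orientation: the threshold $-\beta^2/4$ appearing in $\sigma_{\rm ess}(T_0)$ below comes from the transverse mode of a straight leaky wire, i.e. it is the single bound-state energy of the one-dimensional operator $-{\rm d}^2/{\rm d}x^2-\beta\delta(x)$. A quick finite-difference check of this standard fact (a sketch, not from the paper; grid parameters are ad hoc):

```python
import numpy as np

beta, half_len, h = 1.0, 12.0, 0.05       # coupling, interval half-length, grid step
n = int(round(2 * half_len / h)) - 1      # interior points; Dirichlet walls at +-half_len
x = -half_len + h * np.arange(1, n + 1)

# Discrete -d^2/dx^2; the attractive well -beta*delta(x) is approximated by a
# potential of depth beta/h concentrated on the single grid point at x = 0.
H = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
i0 = int(np.argmin(np.abs(x)))            # index of the grid point at x = 0
H[i0, i0] -= beta / h

E0 = np.linalg.eigvalsh(H)[0]             # lowest eigenvalue
assert abs(E0 - (-beta**2 / 4.0)) < 1e-3  # ground-state energy ~ -beta^2/4
```

For this symmetric scheme the discrete bound state can even be computed in closed form, and its energy agrees with $-\beta^2/4$ up to $O(h^2)$, which is why the coarse grid above already reproduces the threshold to high accuracy.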
We adopt the following assumptions about the curve $\Lambda$. [(H.4)]{} The curve $\Lambda$ is not a straight line. [(H.5)]{} There exists $c\in (0,1)$ such that $|\Lambda(s)-\Lambda (t)|\geq c|t-s|$ for $s,t\in \mathbb{R}$. [(H.6)]{} There exist $d>0$, $\rho>1/2$, and $w\in (0,1)$ such that the inequality $$1-\frac {|\Lambda(s)-\Lambda(s^{\prime})|} {|s-s^{\prime}|} \leq d\, \left\lbrack 1+|s+s^{\prime}|^{2\rho} \right\rbrack^{-1/2}$$ holds in the sector $\left\{\, (s,s^{\prime})\in\mathbb{R}^{2};\: w<\frac{s}{s^{\prime}}<w^{-1} \,\right\}$. From [@EI Prop 5.1 and Thm 5.2] we know that under these conditions $$\sigma_{\rm ess}(T_{0}) =[-\beta^{2}/4,\infty)\quad{\rm and}\quad 1\leq\sharp\sigma_\mathrm{disc}(T_{0})\leq\infty.$$ Let $K:=\{j\in\mathbb{N};\: j\leq\sharp\sigma_\mathrm{disc}(T_{0})\}$. For $j\in K$, we denote by $\kappa_{j}(\epsilon)$ the $j$-th eigenvalue of $T_{\epsilon}$ counted with multiplicity. The function $\kappa_{j}(\cdot)$ is a monotone non-decreasing, continuous function in a neighbourhood of the origin. Let $\{\psi_{j}(x)\}_{j\in K}$ be an orthonormal system of eigenfunctions of $T_{0}$ such that $T_{0}\psi_{j} =\kappa_{j}(0)\psi_{j}$ for $j\in K$. Each function $\psi_{j}$ is continuous on $\Lambda$. For $\mu\in\sigma_\mathrm{disc}(T_{0})$, we define $$\begin{aligned} p(\mu) &\!:=\!& \min\left\{\,j\in K;\: \mu=\kappa_{j}(0) \,\right\}\,, \\ r(\mu) &\!:=\!& \max\left\{\,j\in K;\: \mu=\kappa_{j}(0)\,\right\}\,, \\ D(\mu) &\!:=\!& \left(\psi_{i}(0) \overline{\psi_{j}(0)} \right)_{p(\mu)\leq i,j\leq r(\mu)}\,. \end{aligned}$$ Let $e_{p(\mu)}\leq e_{p(\mu)+1}\leq\cdots\leq e_{r(\mu)}$ be the eigenvalues of the matrix $D(\mu)$. As in the compact case, if $\mu=\kappa_j(0)$ is a simple eigenvalue of $T_{0}$, we have $p(\mu)=r(\mu)=j$ and $e_j= |\psi_{j}(0)|^2$. The asymptotic behaviour now looks as follows. \[curve\] Assume that (H.4)–(H.6) hold and take $\mu\in \sigma_\mathrm{disc}(T_{0})$.
Then $$\kappa_{j}(\epsilon)=\mu +2\beta e_{j}\epsilon +o(\epsilon)\quad\mathit{as}\quad\epsilon\to 0$$ holds for $p(\mu)\leq j\leq r(\mu)$. [*Proof*]{} is analogous to that of Theorem \[main\]. Let us mention in conclusion that the results derived here raise some interesting questions, for instance, what the next term in the expansion is, what the asymptotic behaviour looks like for non-smooth surfaces, and whether similar formulae are valid in the case of $\mathrm{codim\,}\Sigma=2,3$ when the corresponding generalized Schrödinger operator has to be defined by means of appropriate boundary conditions. Acknowledgments {#acknowledments .unnumbered} -------------- The authors are grateful for the hospitality extended to them, P.E. in the Department of Mathematics, Tokyo Metropolitan University, and K.Y. in the Nuclear Physics Institute, AS CR; during these visits a part of this work was done. The research has been partially supported by GAAS and the Czech Ministry of Education within the projects A1048101 and ME482. Useful comments by the referees are also appreciated. [99]{} S. Agmon: [*Lectures on Elliptic Boundary Value Problems*]{}, Van Nostrand, Princeton 1965. S. Albeverio, F. Gesztesy, R. Høegh-Krohn, H. Holden: [*Solvable Models in Quantum Mechanics*]{}, Springer, Heidelberg 1988. S. Albeverio, P. Kurasov: [*Singular Perturbations of Differential Operators*]{}, London Mathematical Society Lecture Note Series 271, Cambridge Univ. Press 1999. J.F. Brasche, A. Teta: Spectral analysis and scattering theory for Schrödinger operators with an interaction supported by a regular curve, in [*Ideas and Methods in Quantum and Statistical Physics*]{}, Cambridge Univ. Press 1992; pp. 197-211. J.F. Brasche, P. Exner, Yu.A. Kuperin, P. Šeba: Schrödinger operators with singular interactions, [*J. Math. Anal. Appl.*]{} [**184**]{} (1994), 112-139. P.
Exner: Spectral properties of Schrödinger operators with a strongly attractive $\delta$ interaction supported by a surface, in *Proceedings of the NSF Summer Research Conference (Mt. Holyoke 2002)*; AMS “Contemporary Mathematics” Series, Providence, R.I., 2003; to appear. P. Exner, T. Ichinose: Geometrically induced spectrum in curved leaky wires, [*J. Phys.*]{} [**A34**]{} (2001), 1439-1450. P. Exner, S. Kondej: Curvature-induced bound states for a $\delta$ interaction supported by a curve in $\mathbb{R}^3$, [*Ann. H. Poincaré*]{} [**3**]{} (2002), 967-981. P. Exner, S. Kondej: Bound states due to a strong $\delta$ interaction supported by a curved surface, [*J. Phys.*]{} [**A36**]{} (2003), 443-457. P. Exner, K. Yoshitomi: Band gap of the Schrödinger operator with a strong $\delta$-interaction on a periodic curve, [*Ann. Henri Poincaré*]{} [**2**]{} (2001), 1139-1158. P. Exner, K. Yoshitomi: Asymptotics of eigenvalues of the Schrödinger operator with a strong $\delta$-interaction on a loop, [*J. Geom. Phys.*]{} [**41**]{} (2002), 344-358. P. Exner, K. Yoshitomi: Persistent currents for the 2D Schrödinger operator with a strong $\delta$-interaction on a loop, [*J. Phys.*]{} [**A35**]{} (2002), 3479-3487. A. Il’in: [*Matching of Asymptotic Expansions of Solutions of Boundary Value Problems*]{}, Translations of Mathematical Monographs, Vol. 102, American Mathematical Society, Providence, R.I., 1992. T. Kato: [*Perturbation Theory for Linear Operators*]{}, 2nd edition, Springer, Heidelberg 1976. J.L. Lions, E. Magenes: [*Non-Homogeneous Boundary Value Problems and Applications*]{}, Vol. I, Springer, Heidelberg 1972.
--- abstract: 'In the present paper, we study skew cyclic codes over the ring $F_{q}+vF_{q}+v^2F_{q}$, where $v^3=v,~q=p^m$ and $p$ is an odd prime. We investigate the structural properties of skew cyclic codes over $F_{q}+vF_{q}+v^2F_{q}$ using the decomposition method. By defining a Gray map from $F_{q}+vF_{q}+v^2F_{q}$ to $F_{q}^3$, it is proved that the Gray image of a skew cyclic code of length $n$ over $F_{q}+vF_{q}+v^2F_{q}$ is a skew $3$-quasi cyclic code of length $3n$ over $F_{q}$. Further, it is shown that the skew cyclic codes over $F_{q}+vF_{q}+v^2F_{q}$ are principally generated. Finally, the idempotent generators of skew cyclic codes over $F_{q}+vF_{q}+v^2F_{q}$ are also obtained.' --- \[section\] \[thm\][Corollary]{} \[thm\][Lemma]{} \[thm\][Proposition]{} \[thm\][Example]{} \[thm\][Definition]{} \[thm\][Remark]{} [****On skew cyclic codes over $F_q+vF_q+v^2F_q$****]{}\ \ Department of Mathematics\ Aligarh Muslim University\ Aligarh -202002(India)\ [*E-mails*]{} : [mashraf80@hotmail.com; mohdghulam202@gmail.com]{} Introduction ============ During the last decades of the twentieth century a great deal of attention has been given to the study of linear codes over finite rings because of their new role in algebraic coding theory and their successful applications. The class of cyclic codes is a very important class of linear codes from both the theoretical and practical points of view; such codes are easier to implement due to their rich algebraic structure. Cyclic codes have been studied for the last six decades and have become one of the most important classes in coding theory. A landmark paper by Hammons et al. [@13] showed that some good nonlinear codes over $\mathbb{Z}_2$ can be viewed as binary images under a Gray map of linear cyclic codes over $\mathbb{Z}_4$. However, all of this work was restricted to codes defined over commutative rings.\ Boucher et al.
[@31], [@32] and [@33] studied the structure of skew cyclic codes over a non-commutative ring $F[x, \theta]$, called the skew polynomial ring, where $F$ is a finite field and $\theta$ is a field automorphism of $F$. They generalized the class of linear and cyclic codes to the class of skew cyclic codes by using the ring $F[x, \theta]$, where the generator polynomials of skew cyclic codes come from the ring $F[x, \theta]$. They also gave some examples of skew cyclic codes with Hamming distances larger than those of the best known linear codes with the same parameters. Later on, Abualrub et al. [@27] and Bhaintwal [@30] defined skew quasi cyclic codes over these classes of rings. The main motivation for studying codes in this setting is that polynomials in skew polynomial rings admit many factorizations, and hence there are more ideals in a skew polynomial ring than in a commutative one. However, all of this work was restricted to the condition that the order of the automorphism must be a factor of the length of the code. In [@38], Siap et al. removed this condition and studied the structural properties of skew cyclic codes of arbitrary length over finite fields. A lot of work has been done in this direction (see references [@28; @29; @34]).\ Recently, Jitman et al. [@37] defined skew constacyclic codes by defining the skew polynomial ring with coefficients from finite chain rings, especially the ring $F_{p^{m}}+uF_{p^{m}}$ where $u^2=0$. Further, Gursoy et al. [@36] investigated the structural properties of skew cyclic codes through the decomposition method over $F_q+vF_q$, where $v^2=v$ and $q=p^m$. Very recently, the authors [@29] studied the structural properties of skew cyclic codes over the ring $F_3+vF_3$ with $v^2=1$ by considering the automorphism $\theta~:~v~\mapsto -v$. They proved that skew cyclic codes over $F_3+vF_3$ are equivalent to either cyclic codes or quasi cyclic codes.
In the present paper, we study skew cyclic codes over the ring $F_{q}+vF_{q}+v^2F_{q}$, where $v^3=v,~q=p^m$ and $p$ is an odd prime, by using the same technique as used by Gursoy et al. [@36] for the ring $F_q+vF_q$, where $v^2=v$ and $q=p^m$.\ Throughout the paper $R$ will denote the ring $F_{q}+vF_{q}+v^2F_{q}$ with $v^3=v,~q=p^m$ and $p$ an odd prime. Consider the automorphism $\theta_t:R\longrightarrow R$ such that $\theta_t(a+vb+v^2c)=a^{p^t}+vb^{p^t}+v^2c^{p^t}$. Note that $\theta_1$ is the Frobenius automorphism of $F_{q}$ and $\theta_t=\theta_{1}^{t}$. In this paper, we will use the automorphism $\theta_t$ instead of the automorphism $v\mapsto 1-v$ which was used by Gao in [@34]. Preliminaries ============= Let $R=F_{q}+vF_{q}+v^2F_{q},$ where $q=p^m$ and $p$ is an odd prime. $R$ is a commutative non-chain ring of characteristic $p$ which contains $q^3$ elements. The ring is endowed with the natural addition and multiplication subject to $v^3=v$, and it can be viewed as the quotient ring $F_q[v]/\langle v^3-v\rangle$. The elements of $R$ can be uniquely written as $a+vb+v^2c,$ where $a,~b,~c\in F_{q}$. It is a semi-local ring having three maximal ideals $\langle v\rangle,~\langle v-1\rangle$ and $\langle v+1\rangle.$\ Define a mapping $\theta_t:R\longrightarrow R$ such that $\theta_t(a+vb+v^2c)=a^{p^t}+vb^{p^t}+v^2c^{p^t},~\mbox{for all}~a,~b,~c\in F_{q}$. One can verify that $\theta_t$ is an automorphism on $R$ and $\theta_t=\theta_{1}^{t}$.
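The semi-local structure of $R$ can be illustrated with a small computational sketch (not part of the paper; it takes $m=1$, so $q=p$, with $p=5$ chosen arbitrarily). Elements $a+vb+v^2c$ are stored as coefficient triples and multiplied modulo $v^3-v$, and the three pairwise orthogonal idempotents $1-v^2$, $\frac{p+1}{2}(v^2+v)$ and $\frac{p+1}{2}(v^2-v)$, which underlie the decomposition of codes used later in the paper, are verified directly:

```python
# Sketch only: verify that e1 = 1 - v^2, e2 = ((p+1)/2)(v^2 + v) and
# e3 = ((p+1)/2)(v^2 - v) are pairwise orthogonal idempotents of
# R = F_p[v]/<v^3 - v> summing to 1.  We take q = p = 5 for simplicity.
p = 5

def rmul(x, y):
    """Multiply triples (a, b, c) <-> a + b*v + c*v^2 modulo v^3 - v and p."""
    a1, b1, c1 = x
    a2, b2, c2 = y
    # raw coefficients of 1, v, v^2, v^3, v^4 in the product
    d0 = a1 * a2
    d1 = a1 * b2 + b1 * a2
    d2 = a1 * c2 + b1 * b2 + c1 * a2
    d3 = b1 * c2 + c1 * b2
    d4 = c1 * c2
    # reduce with v^3 = v and v^4 = v^2
    return (d0 % p, (d1 + d3) % p, (d2 + d4) % p)

def radd(x, y):
    return tuple((u + w) % p for u, w in zip(x, y))

inv2 = (p + 1) // 2                # (p+1)/2 is the inverse of 2 modulo p
e1 = (1 % p, 0, (-1) % p)          # 1 - v^2
e2 = (0, inv2 % p, inv2 % p)       # ((p+1)/2)(v^2 + v)
e3 = (0, (-inv2) % p, inv2 % p)    # ((p+1)/2)(v^2 - v)
```

Since $p$ is odd, $\frac{p+1}{2}$ is indeed the inverse of $2$ modulo $p$, and the checks confirm the Chinese-remainder splitting of $R$ corresponding to the three maximal ideals.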
This automorphism acts on $F_{q}$ as follows: $$\theta_t:F_{q}\longrightarrow F_{q}$$ $$a\mapsto a^{p^t}.$$ It may be noted that the order of this automorphism is $|\langle\theta_t\rangle|=m/t$ and the subring $F_{p^t}+vF_{p^t}+v^2F_{p^t}$ of $R$ is invariant under $\theta_t$.\ For a given automorphism $\theta_t$ of $R$, the set $R[x, \theta_t]=\{a_0+a_1x+a_2x^2+\cdots+a_nx^n|~a_i\in R, n\geq0\}$ of formal polynomials forms a ring under the usual addition of polynomials and the multiplication defined by the rule $(ax^i)(bx^j)=a\theta_{t}^{i}(b)x^{i+j}$. The ring $R[x, \theta_t]$ is called the skew polynomial ring over $R$. It can be easily seen that the ring $R[x, \theta_t]$ is non-commutative unless $\theta_t$ is the identity automorphism on $R$. Therefore, when an ideal of $R[x, \theta_t]$ is considered, one should specify whether it is a right ideal or a left ideal. The skew polynomial ring $R[x, \theta_t]$ is neither left nor right Euclidean. However, the division algorithm holds for polynomials whose leading coefficients are invertible (for details see [@32] and [@37]). Gray map and linear codes over $R$ ================================== Gao [@35] studied linear codes over the ring $F_p+uF_p+u^2F_p,$ where $u^3=u$ and $p$ is an odd prime. Here, we generalize his study to linear codes over the ring $R$. Let $R^n$ be the set of all $n$-tuples over $R$; a nonempty subset $C$ of $R^n$ is called a code of length $n$ over $R$. $C$ is called a linear code of length $n$ over $R$ if it is an $R$-submodule of $R^n$. Elements of $C$ are called codewords; each codeword is an $n$-tuple $x=(x_0, x_1, \cdots, x_{n-1})\in R^n.$\ The Hamming weight $w_H(x)$ of a codeword $x=(x_0, x_1, \cdots, x_{n-1})\in R^n$ is the number of nonzero components. The minimum weight $w_H(C)$ of a code $C$ is the smallest weight among all its nonzero codewords.
For $x=(x_0, x_1, \cdots, x_{n-1}),~y=(y_0, y_1, \cdots, y_{n-1})\in R^n$,\ $d_H(x, y)=|\{i~|~x_i\neq y_i\}|$ is called the Hamming distance between $x$ and $y$; it satisfies $$d_H(x, y)=w_H(x-y).$$ The minimum Hamming distance between distinct pairs of codewords of a code $C$ is called the minimum distance of $C$ and is denoted by $d_H(C)$ or shortly $d_H$.\ Now, we define the Lee weight of an element $r=a+vb+v^2c\in R$ as follows: $$w_L(r)=w_H(a, a+b+c, a-b+c),$$ where $w_H$ denotes the usual Hamming weight on $F_q.$ Let $x=(x_0, x_1, \cdots, x_{n-1})$ be a vector in $R^n.$ Then the Lee weight of $x$ is the sum of the Lee weights of its components, that is, $w_L(x)=\sum\limits_{i=0}^{n-1}w_L(x_i).$ For any elements $x, y\in R^n,$ the Lee distance is given by $d_L(x, y)=w_L(x-y).$ The minimum Lee distance of a code $C$ is the smallest nonzero Lee distance between all pairs of distinct codewords. The minimum Lee weight of $C$ is the smallest nonzero Lee weight among all codewords. If $C$ is linear, then the minimum Lee distance is the same as the minimum Lee weight.\ The Gray map $\phi$ from $R$ to $F_{q}^3$ is defined as $\phi(a+vb+v^2c)=(a, a+b+c, a-b+c)$. It can be easily seen that $\phi$ is linear. The Gray map $\phi$ can be extended to $R^n$ in a natural way, that is, $\phi:R^n\longrightarrow F_{q}^{3n}$ such that $\phi(x_0, x_1, \cdots, x_{n-1})=(a_0, a_0+b_0+c_0, a_0-b_0+c_0, \cdots, a_{n-1}, a_{n-1}+b_{n-1}+c_{n-1}, a_{n-1}-b_{n-1}+c_{n-1})$, where $x_i=a_i+vb_i+v^2c_i$ for $i=0, 1, \cdots, n-1$.\ The following property is immediate from the definition of the Gray map: The Gray map $\phi$ is a distance-preserving map (isometry) from $R^n$ (with the Lee distance) to $F_{q}^{3n}$ (with the Hamming distance) and it is also $F_{q}$-linear.
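A minimal sketch of the Gray map and of the isometry property just stated (illustrative only; it takes $q=p=5$ and represents vectors over $R$ as lists of coefficient triples $(a,b,c)$):

```python
# Sketch: the Gray map phi(a + v b + v^2 c) = (a, a+b+c, a-b+c) applied
# componentwise, and the Lee/Hamming isometry d_L(x, y) = d_H(phi(x), phi(y)).
# q = p = 5 is an arbitrary choice for illustration.
p = 5

def gray(x):
    """Map a vector over R (list of (a, b, c) triples) into F_p^{3n}."""
    out = []
    for a, b, c in x:
        out += [a % p, (a + b + c) % p, (a - b + c) % p]
    return out

def hamming(u, w):
    """Hamming distance between two vectors over F_p."""
    return sum(1 for s, t in zip(u, w) if s != t)

def lee_weight(x):
    # By definition, w_L(r) = w_H(a, a+b+c, a-b+c), summed over components.
    return sum(1 for s in gray(x) if s != 0)

def sub(x, y):
    """Componentwise difference of two vectors over R."""
    return [tuple((s - t) % p for s, t in zip(u, w)) for u, w in zip(x, y)]

# two sample vectors of length n = 3 over R
x = [(1, 2, 3), (0, 0, 0), (4, 1, 0)]
y = [(1, 2, 3), (2, 0, 1), (4, 1, 0)]
```

Because $\phi$ is $F_q$-linear, `lee_weight(sub(x, y))`, which is $d_L(x,y)$, coincides with `hamming(gray(x), gray(y))`, which is $d_H(\phi(x),\phi(y))$.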
For a code $C$ over $R,$ define $$C_1=\{a\in F_{q}^n~|~a+vb+v^2c\in C~\mbox{for some}~b, c\in F_{q}^n \},$$ $$C_2=\{a+b+c\in F_{q}^n~|~a+vb+v^2c\in C \},$$ and $$C_3=\{a-b+c\in F_{q}^n~|~a+vb+v^2c\in C \}.$$ If $C$ is a linear code of length $n$ over $R$, then $C_1,~C_2$ and $C_3$ are all linear codes of length $n$ over $F_q.$ Moreover, the linear code $C$ of length $n$ over $R$ can be uniquely expressed as $$C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3.$$\ A generator matrix of $C$ is a matrix whose rows generate $C$. Let $$C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$$ be a linear code of length $n$ over $R$ with generator matrix $G.$ Then $G$ can be written as $$G=\left( \begin{array}{ccccc} (1-v^2)G_1 \\ \\ \frac{p+1}{2}(v^2+v)G_2\\ \\ \frac{p+1}{2}(v^2-v)G_3 \end{array}\right),$$ where $G_1,~G_2$ and $G_3$ are the generator matrices of $C_1,~C_2$ and $C_3$ respectively.\ Let $x=(x_0, x_1, \cdots, x_{n-1})$ and $y=(y_0, y_1, \cdots, y_{n-1})$ be two elements of $R^n$. Then the Euclidean inner product of $x$ and $y$ in $R^n$ is defined as $$x\cdot y=x_0y_0+x_1y_1+\cdots+x_{n-1}y_{n-1}.$$ The dual code $C^\perp$ of $C$ is defined as $$C^\perp=\{x\in R^n|~x\cdot y=0,~\mbox{for~all}~y\in C\}.$$ A code $C$ is called self-orthogonal if $C\subseteq C^\perp$ and self-dual if $C=C^\perp$.\ Now we give some results on linear codes over $R$, which generalize results on linear codes over $F_p+vF_p+v^2F_p$; we therefore omit the proofs. If $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$ is a linear code of length $n$ over $R$, then $\phi(C)=C_1\otimes C_2\otimes C_3$ and $|C|=|C_1||C_2||C_3|$. Let $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$ be a linear code of length $n$ over $R$, where $C_i$ is a linear code with dimension $k_i$ and minimum Hamming distance $d(C_i)$ for $i=1, 2, 3$.
Then $\phi(C)$ is a linear code with parameters $[3n, k_1+k_2+k_3,~\min\{d(C_1), d(C_2), d(C_3)\}]$ over $F_{q}$. One of the properties of the Gray map we defined is that it preserves duality, as given in the following lemma: Let $C^\perp$ be the dual code of $C$ over $R$. Then $\phi(C^\perp)=\phi(C)^\perp$. In particular, if $C$ is self-dual, then so is $\phi(C)$. [***[Proof.]{}***]{} Let $x_1=a_1+vb_1+v^2c_1\in C$ and $x_2=a_2+vb_2+v^2c_2\in C^\perp$, where $a_1, b_1, c_1, a_2, b_2, c_2\in F_{q}^n$. Computing the Euclidean inner product of $x_1$ and $x_2$, we have $$\begin{split} x_1\cdot x_2 &=(a_1+vb_1+v^2c_1)\cdot(a_2+vb_2+v^2c_2)\\ &=a_1a_2+v(a_1b_2+a_2b_1+b_1c_2+b_2c_1)+v^2(a_1c_2+a_2c_1+b_1b_2+c_1c_2).\\ \end{split}$$ Since $x_1\cdot x_2=0$, we find that $a_1a_2=a_1b_2+a_2b_1+b_1c_2+b_2c_1=a_1c_2+a_2c_1+b_1b_2+c_1c_2=0$. Now $$\phi(x_1)\phi(x_2)=(a_1, a_1+b_1+c_1, a_1-b_1+c_1)(a_2, a_2+b_2+c_2, a_2-b_2+c_2)=0.$$ Thus $\phi(C^\perp)\subseteq \phi(C)^\perp$. On the other hand, suppose $|C|=q^{k_1+k_2+k_3}$ and $C$ has length $n$. Then $\phi(C)$ has the parameters $[3n, k_1+k_2+k_3]$. Since $|\phi(C)|=|C|$, $|\phi(C)^\perp|=q^{3n-(k_1+k_2+k_3)}$. Further, $|\phi(C^\perp)|=|C^\perp|=q^{3n}/|C|=q^{3n-(k_1+k_2+k_3)}$. Hence $\phi(C^\perp)=\phi(C)^\perp$.\ In view of the previous lemma, the following theorem can be easily obtained: Let $C$ be a linear code of length $n$ over $R$ and let $\phi(C)=C_1\otimes C_2\otimes C_3$. Then $C$ can be uniquely expressed as $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$. Furthermore, if $\phi(C^\perp)=C_1^\perp\otimes C_2^\perp\otimes C_3^\perp$, then $C^\perp=(1-v^2)C_1^\perp\oplus\frac{p+1}{2}(v^2+v)C_2^\perp\oplus\frac{p+1}{2}(v^2-v)C_3^\perp$. Skew cyclic codes over $R$ ========================== In the present section, we study skew cyclic codes over $R$. Let $\theta_t$ be an automorphism on $R$ given by $\theta_t(a+vb+v^2c)=a^{p^t}+vb^{p^t}+v^2c^{p^t}$.
Then a linear code $C$ of length $n$ over $R$ is called a skew cyclic code or $\theta_t$-cyclic code if it satisfies the property $c=(c_0, c_1, \cdots, c_{n-1})\in C~\mbox{implies}~\sigma(c)=(\theta_t(c_{n-1}), \theta_t(c_0), \cdots, \theta_t(c_{n-2}))\in C$, where $\sigma(c)$ denotes the skew cyclic shift of $c$.\ In [@38], it was shown that a linear code $C$ of length $n$ over $F_{q}$ is a skew cyclic code with respect to an automorphism $\theta$ if and only if it is a left $F_{q}[x, \theta]$-submodule of $F_{q}[x, \theta]/\langle x^n-1 \rangle$. Moreover, if $C$ is a left submodule of $F_{q}[x, \theta]/\langle x^n-1 \rangle$, then $C$ is generated by a monic polynomial $g(x)$ which is a right divisor of $x^n-1$ in $F_{q}[x, \theta]$.\ The method which we use in this section is the same as that used by Gao in [@35] over the ring $F_p+vF_p+v^2F_p$ with $v^3=v$. The main difference in our case is that the ring $R[x, \theta_t]$ is non-commutative. Let $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$ be a linear code of length $n$ over $R$. Then $C$ is a skew cyclic code over $R$ with respect to the automorphism $\theta_t$ if and only if $C_{1}, C_{2}$ and $C_{3}$ are skew cyclic codes of length $n$ over $F_{q}$ with respect to the same automorphism $\theta_t$. [***[Proof.]{}***]{} For any $r=(r_0, r_1, \cdots, r_{n-1})\in C$, we can write its components as $r_i=(1-v^2)a_i+\frac{p+1}{2}(v^2+v)b_i+\frac{p+1}{2}(v^2-v)c_i$, where $a_i,~b_i$, $c_i\in F_{q},~0\leq i\leq n-1$. Let $a=(a_0, a_1, \cdots, a_{n-1}),~b=(b_0, b_1, \cdots, b_{n-1})$ and $c=(c_0, c_1, \cdots, c_{n-1})$. Then $a\in C_{1},~b\in C_2$ and $c\in C_{3}$. Now suppose $C_{1},~C_2$ and $C_{3}$ are skew cyclic codes over $F_{q}$ with respect to the automorphism $\theta_t$.
This means that $\sigma(a)=(\theta_t(a_{n-1}), \theta_t(a_0), \cdots, \theta_t(a_{n-2}))=(a_{n-1}^{p^t}, a_0^{p^t}, \cdots, a_{n-2}^{p^t})\in C_{1},~ \sigma(b)=(\theta_t(b_{n-1}), \theta_t(b_0), \cdots, \theta_t(b_{n-2}))=(b_{n-1}^{p^t}, b_0^{p^t}, \cdots, b_{n-2}^{p^t})\in C_{2}$ and $\sigma(c)=(\theta_t(c_{n-1}), \theta_t(c_0), \cdots, \theta_t(c_{n-2}))=(c_{n-1}^{p^t}, c_0^{p^t}, \cdots, c_{n-2}^{p^t})\in C_{3}$. Thus $(1-v^2)\sigma(a)+(v^2+v)\frac{p+1}{2}\sigma(b)+(v^2-v)\frac{p+1}{2}\sigma(c)\in C$. It can be easily seen that $(1-v^2)\sigma(a)+(v^2+v)\frac{p+1}{2}\sigma(b)+(v^2-v)\frac{p+1}{2}\sigma(c)=\sigma(r)$. Hence $\sigma(r)\in C$, which means that $C$ is a skew cyclic code over $R$ with respect to automorphism $\theta_t$.\ Conversely, suppose $C$ is a skew cyclic code over $R$ with respect to automorphism $\theta_{t}$. Let $r_i=(1-v^2)a_i+\frac{p+1}{2}(v^2+v)b_i+\frac{p+1}{2}(v^2-v)c_i$, for any $a=(a_0, a_1, \cdots, a_{n-1})\in C_{1},~b=(b_0, b_1, \cdots, b_{n-1})\in C_{2}$ and $c=(c_0, c_1, \cdots, c_{n-1})\in C_3$. Then $r=(r_0, r_1, ..., r_{n-1})\in C$. By the hypothesis $\sigma(r)\in C$. Since $(1-v^2)\sigma(a)+(v^2+v)\frac{p+1}{2}\sigma(b)+(v^2-v)\frac{p+1}{2}\sigma(c)=\sigma(r)$, $(1-v^2)\sigma(a)+(v^2+v)\frac{p+1}{2}\sigma(b)+(v^2-v)\frac{p+1}{2}\sigma(c)\in C$. Thus $\sigma(a)\in C_{1},~\sigma(b)\in C_{2}$ and $\sigma(c)\in C_3$, which implies that $C_{1},~C_2$ and $C_{3}$ are skew cyclic codes of length $n$ over $F_{q}$ with respect to automorphism $\theta_t$. Let $C$ be a skew cyclic code of length $n$ over $R$. Then the dual code $C^\perp$ is also a skew cyclic code of length $n$ over $R$. [***[Proof.]{}***]{} In view of Theorem 3.5, we know that $C^\perp=(1-v^2)C_1^\perp\oplus\frac{p+1}{2}(v^2+v)C_2^\perp\oplus\frac{p+1}{2}(v^2-v)C_3^\perp$. Since the dual code of every skew cyclic code over $F_{q}$ is also skew cyclic ([@33], Corollary 18), by Theorem 4.1, $C^\perp$ is a skew cyclic code over $R$. 
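The skew cyclic shift $\sigma$ used in this proof can be made concrete with a small sketch (not from the paper): we realize $F_9=F_3[\alpha]$ with $\alpha^2=-1$, under which the Frobenius map $\theta(a)=a^3$ becomes simply $x+y\alpha\mapsto x-y\alpha$. Since this $\theta$ has order $2$, applying $\sigma$ exactly $n$ times to a vector of even length $n$ returns the original vector:

```python
# Sketch: the skew cyclic shift sigma over F_9 with the Frobenius
# automorphism theta(a) = a^3.  F_9 = F_3[alpha]/(alpha^2 + 1) is
# represented as pairs (x, y) <-> x + y*alpha.
def gmul(a, b):
    """Multiply in F_9, using alpha^2 = -1."""
    x1, y1 = a
    x2, y2 = b
    return ((x1 * x2 - y1 * y2) % 3, (x1 * y2 + x2 * y1) % 3)

def theta(a):
    """Frobenius a -> a^3: (x + y*alpha)^3 = x - y*alpha in characteristic 3."""
    x, y = a
    return (x % 3, (-y) % 3)

def sigma(c):
    """Skew cyclic shift: (theta(c_{n-1}), theta(c_0), ..., theta(c_{n-2}))."""
    return [theta(c[-1])] + [theta(ci) for ci in c[:-1]]
```

The checks below confirm that `theta` is multiplicative on all of $F_9$ (so it is a ring automorphism) and that $\sigma^4$ is the identity on vectors of length $4$, since each application of $\sigma$ shifts the positions by one and applies $\theta$ once to every component.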
A code $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$ of length $n$ over $R$ is self-dual skew cyclic if and only if $C_1,~C_2$ and $C_3$ are self-dual skew cyclic codes of length $n$ over $F_{q}$. Let $C^\prime$ be a linear code of length $n$ over $F_{q}$ and $c=(c^1|c^2|\cdots |c^s)$ be a codeword in $C^\prime$ divided into $s$ equal parts of length $r$, where $n=rs$. If $(\sigma(c^1)|\sigma(c^2)|\cdots |\sigma(c^s))\in C^\prime$, then the linear code $C$ which is permutation equivalent to $C^\prime$ is called a skew quasi-cyclic code of index $s$ or skew $s$-quasi cyclic code (for details see [@27]). Let $C$ be a skew cyclic code of length $n$ over $R$. Then $\phi(C)$ is a skew $3$-quasi cyclic code of length $3n$ over $F_{q}$. [***[Proof.]{}***]{} In view of Theorem 3.2 and the definition of skew quasi-cyclic codes, we obtain the required result. Let $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$ be a skew cyclic code of length $n$ over $R$. Then $C=\langle(1-v^2)g_1(x), \frac{p+1}{2}(v^2+v)g_2(x), \frac{p+1}{2}(v^2-v)g_3(x)\rangle$ and $|C|={q}^{3n-deg(g_1(x))-deg(g_2(x))-deg(g_3(x))}$, where $g_1(x),~g_2(x)$ and $g_3(x)$ are the generator polynomials of $C_{1},~C_2$ and $C_{3}$ respectively.
[***[Proof.]{}***]{} Since $C_{1}=\langle g_1(x)\rangle\subseteq{F_{q}[x, \theta_t]}/{\langle x^n-1\rangle},~C_{2}=\langle g_2(x)\rangle\subseteq{F_{q}[x, \theta_t]}/{\langle x^n-1\rangle},~C_3=\langle g_3(x)\rangle\subseteq{F_{q}[x, \theta_t]}/{\langle x^n-1\rangle}$ and $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$, we find that $C=\{c(x)~|~c(x)=(1-v^2)f_1(x)+\frac{p+1}{2}(v^2+v)f_2(x)+\frac{p+1}{2}(v^2-v)f_3(x),~f_1(x)\in C_{1},~f_2(x)\in C_{2},~f_3(x)\in C_3\}.$ Therefore $$C\subseteq\langle(1-v^2)g_1(x), \frac{p+1}{2}(v^2+v)g_2(x), \frac{p+1}{2}(v^2-v)g_3(x)\rangle\subseteq R[x, \theta_t]/\langle x^n-1\rangle.$$ For any $(1-v^2)k_1(x)g_1(x)+\frac{p+1}{2}(v^2+v)k_2(x)g_2(x)+\frac{p+1}{2}(v^2-v)k_3(x)g_3(x)\in\linebreak\langle(1-v^2)g_1(x), \frac{p+1}{2}(v^2+v)g_2(x), \frac{p+1}{2}(v^2-v)g_3(x)\rangle\subseteq R[x, \theta_t]/\langle x^n-1\rangle,$ where $k_1(x), k_2(x), k_3(x)\in R[x, \theta_t]/\langle x^n-1\rangle$, there are $r_1(x), r_2(x), r_3(x)\in F_{q}[x, \theta_t]$ such that $$(1-v^2)k_1(x)=(1-v^2)r_1(x),$$ $$\frac{p+1}{2}(v^2+v)k_2(x)=\frac{p+1}{2}(v^2+v)r_2(x)$$ and $$\frac{p+1}{2}(v^2-v)k_3(x)=\frac{p+1}{2}(v^2-v)r_3(x).$$ This means that $$\langle(1-v^2)g_1(x), \frac{p+1}{2}(v^2+v)g_2(x), \frac{p+1}{2}(v^2-v)g_3(x)\rangle\subseteq C.$$ Hence $\langle(1-v^2)g_1(x), \frac{p+1}{2}(v^2+v)g_2(x), \frac{p+1}{2}(v^2-v)g_3(x)\rangle=C$. Since $|C|=|C_{1}||C_{2}||C_3|$, $|C|={q}^{3n-deg(g_1(x))-deg(g_2(x))-deg(g_3(x))}$. Let $C_1,~C_2$ and $C_3$ be skew cyclic codes over $F_{q}$ with monic generator polynomials $g_1(x),~g_2(x)$ and $g_3(x)$ respectively. If $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$ is a skew cyclic code of length $n$ over $R$, then there is a unique polynomial $g(x)\in R[x, \theta_t]$ such that $C=\langle g(x)\rangle$ and $g(x)$ is a right divisor of $x^n-1$, where $g(x)=(1-v^2)g_1(x)+\frac{p+1}{2}(v^2+v)g_2(x)+ \frac{p+1}{2}(v^2-v)g_3(x)$.
[***[Proof.]{}***]{} By Theorem 4.5, we may assume that $C=\langle(1-v^2)g_1(x), \frac{p+1}{2}(v^2+v)g_2(x), \frac{p+1}{2}(v^2-v)g_3(x)\rangle$, where $g_1(x),~g_2(x)$ and $g_3(x)$ are the monic generator polynomials of $C_{1},~C_2$ and $C_{3}$ respectively. Let $g(x)=(1-v^2)g_1(x)+\frac{p+1}{2}(v^2+v)g_2(x)+ \frac{p+1}{2}(v^2-v)g_3(x)$. Clearly, $\langle g(x)\rangle\subseteq C $. Note that $$(1-v^2)g_1(x)=(1-v^2)g(x),$$ $$\frac{p+1}{2}(v^2+v)g_2(x)=\frac{p+1}{2}(v^2+v)g(x)$$ and $$\frac{p+1}{2}(v^2-v)g_3(x)=\frac{p+1}{2}(v^2-v)g(x),$$ so $C\subseteq \langle g(x)\rangle$. Hence $C=\langle g(x)\rangle$. Since $g_1(x),~g_2(x)$ and $g_3(x)$ are monic right divisors of $x^n-1$, there are $r_1(x), r_2(x), r_3(x)\in F_{q}[x, \theta_t]$ such that $$x^n-1=r_1(x)g_1(x)=r_2(x)g_2(x)=r_3(x)g_3(x).$$ This implies that $$x^n-1=[(1-v^2)r_1(x)+\frac{p+1}{2}(v^2+v)r_2(x)+\frac{p+1}{2}(v^2-v)r_3(x)]g(x).$$ Hence, $g(x)|x^n-1$. The uniqueness of $g(x)$ follows from that of $g_1(x), g_2(x)$ and $g_3(x)$.\ The following corollary is an immediate consequence of the above theorem: Every left submodule of $R[x, \theta_t]/\langle x^n-1\rangle$ is principally generated. In order to study the generator polynomials of the dual code of a skew cyclic code over $R$, we need the following definition, which can be found in [@33].\ Let $g(x)=g_0+g_1x+\cdots+g_rx^r$ and $h(x)=h_0+h_1x+\cdots+h_{n-r}x^{n-r}$ be polynomials in $F_{q}[x, \theta_t]$ such that $x^n-1=h(x)g(x)$ and $C^\prime$ be the skew cyclic code generated by $g(x)$ in $F_{q}[x, \theta_t]/\langle x^n-1 \rangle$.
Then the dual code of $C^\prime$ is a skew cyclic code generated by the polynomial $\bar{h}(x)=h_{n-r}+\theta_t(h_{n-r-1})x+\cdots+\theta_{t}^{n-r}(h_0)x^{n-r}$.\ In view of Theorems 3.5 $\&$ 4.6, we have the following corollary: Let $C_1,~C_2$ and $C_3$ be skew cyclic codes over $F_{q}$ and $g_1(x),~g_2(x)$ and $g_3(x)$ be their generator polynomials such that $$x^n-1=h_1(x)g_1(x)=h_2(x)g_2(x)=h_3(x)g_3(x) \in F_{q}[x, \theta_t].$$ If $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$, then $$C^\perp=\langle(1-v^2)\bar{h_1}(x)+\frac{p+1}{2}(v^2+v)\bar{h_2}(x)+\frac{p+1}{2}(v^2-v)\bar{h_3}(x)\rangle$$ and $|C^\perp|={q}^{deg(g_1(x))+deg(g_2(x))+deg(g_3(x))}$. Idempotent generators of skew cyclic codes over $R$ =================================================== The idempotent generators of skew cyclic codes over $F_q$ were studied by Gursoy et al. [@36] under some restrictions. In fact, they proved the following results: [[@36 Lemma 2]]{} Let $g(x)\in F_{q}[x, \theta_t]$ be a monic right divisor of $x^n-1$. If $g.c.d.(n, m_t)=1$, then $g(x)\in F_{p^t}[x]$, where $m_t=m/t$ denotes the order of the automorphism $\theta_t$. [[@36 Theorem 6]]{} Let $g(x)\in F_{q}[x, \theta_t]$ be a monic right divisor of $x^n-1$ and $C=\langle g(x)\rangle$. If $g.c.d.(n, m_t)=1$ and $g.c.d.(n, q)=1$, then there exists an idempotent polynomial $e(x)\in F_{q}[x, \theta_t]/\langle x^n-1\rangle$ such that $C=\langle e(x)\rangle$. Now, we give the idempotent generators of skew cyclic codes over $R$. Let $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$ be a skew cyclic code of length $n$ over $R$ and $g.c.d.(n, m_t)=1,~g.c.d.(n, q)=1$. Then $C_i$ has an idempotent generator, say $e_i(x)$, for $i=1, 2, 3$. Moreover, $e(x)=(1-v^2)e_1(x)+\frac{p+1}{2}(v^2+v)e_2(x)+\frac{p+1}{2}(v^2-v)e_3(x)$ is an idempotent generator of $C$, that is, $C=\langle e(x)\rangle$.
[***[Proof.]{}***]{} The proof follows in the light of Theorem 4.6 and Lemma 5.2.\ The following theorem gives the number of skew cyclic codes of length $n$ over $R$. Let $g.c.d.(n, m_t)=1$ and $x^n-1=\prod\limits_{i=1}^{r}{g_i^{s_i}(x)}$, where $g_i(x)\in F_{q}[x, \theta_t]$ is irreducible. Then the number of skew cyclic codes of length $n$ over $R$ is $\prod\limits_{i=1}^{r}{(s_i+1)^3}$. [***[Proof.]{}***]{} In view of Lemma 5.1, if $g.c.d.(n, m_t)=1$, then $g_i(x)\in F_{p^t}[x]$. In this case the number of skew cyclic codes of length $n$ over $F_{q}$ is $\prod\limits_{i=1}^{r}(s_i+1)$. Since $C=(1-v^2)C_1\oplus\frac{p+1}{2}(v^2+v)C_2\oplus\frac{p+1}{2}(v^2-v)C_3$, $\prod\limits_{i=1}^{r}{(s_i+1)^3}$ is the number of skew cyclic codes of length $n$ over $R$. When $g.c.d.(n, m_t)\neq1$, the factorization of $x^n-1$ is not unique in $F_{q}[x, \theta_t]$, and therefore we cannot say anything certain about the number of skew cyclic codes in this case.\ Now, we close our discussion with the following examples:\ **Example 4.11** Let $R=F_9+vF_9+v^2F_9$ be the ring with $v^3=v$ and $\theta$ be the Frobenius automorphism over $F_9$, that is, $\theta(r)=r^3$ for any $r\in F_9$, where $F_9=F_3[2\alpha+1],~\alpha^2=-1$. Then $$x^4-1=(x+1)(x+2)(x+\alpha)(x+2\alpha)\in F_9[x, \theta].$$ If $g_1(x)=g_2(x)=g_3(x)=x+2\alpha$, then $C_1=\langle g_1(x)\rangle,~C_2=\langle g_2(x)\rangle$ and $C_3=\langle g_3(x)\rangle$ are the skew cyclic codes over $F_9$ with parameters $[4, 3, 2]$. Therefore, the code $C=\langle (1-v^2)g_1(x)+\frac{p+1}{2}(v^2+v)g_2(x)+\frac{p+1}{2}(v^2-v)g_3(x)\rangle=\langle x+2\alpha\rangle$ is a skew cyclic code of length $4$ over $R$. Further, the Gray image $\phi(C)$ of $C$ is a skew 3-quasi cyclic code over $F_9$ with parameters $[12, 9, 2]$, which is an optimal code.\ **Example 4.12** Let $R=F_9+vF_9+v^2F_9$ be the ring with $v^3=v$ and $\theta$ be the Frobenius automorphism over $F_9$, that is, $\theta(r)=r^3$ for any $r\in F_9$, where $F_9=F_3[2\alpha+1],~\alpha^2=-1$.
Then $$x^5-1=(x+2)(x^4+x^3+x^2+x+1)\in F_9[x, \theta].$$ Since $g.c.d.(5, 2)=1$, there exist $63$ nonzero skew cyclic codes of length $5$ over $R$.\ Let $g_1(x)=g_2(x)=g_3(x)=x+2$. Then $C_1=\langle g_1(x)\rangle,~C_2=\langle g_2(x)\rangle$ and $C_3=\langle g_3(x)\rangle$ are the skew cyclic codes of length $5$ over $F_9$. Therefore, the code $C=\langle(1-v^2)g_1(x)+\frac{p+1}{2}(v^2+v)g_2(x)+\frac{p+1}{2}(v^2-v)g_3(x)\rangle=\langle x+2\rangle$ is a skew cyclic code of length $5$ over $R$. Also, the Gray image $\phi(C)$ of $C$ is a skew 3-quasi cyclic code of length $15$ over $F_9$.\ **Example 4.13** Let $R=F_9+vF_9+v^2F_9$ be the ring with $v^3=v$ and $\theta$ be the Frobenius automorphism over $F_9$, that is, $\theta(r)=r^3$ for any $r\in F_9$, where $F_9=F_3[2\alpha+1],~\alpha^2=-1$. Then $$\begin{split} x^6-1&=(2+\alpha x+2\alpha x^3+x^4)(1+\alpha x+x^2)\\ &=(2+x+(1+2\alpha)x^2+x^3)(1+x+(2\alpha+2)x^2+x^3)\\ &\in F_9[x, \theta]. \end{split}$$ If $g_1(x)=g_2(x)=2+\alpha x+2\alpha x^3+x^4$ and $g_3(x)=2+x+(1+2\alpha)x^2+x^3$, then $C_1=\langle g_1(x)\rangle,~C_2=\langle g_2(x)\rangle$ and $C_3=\langle g_3(x)\rangle$ are the skew cyclic codes of length $6$ over $F_9$ with dimensions $2,~2$ and $3$ respectively. Thus the code $$C=\langle(1-v^2)g_1(x)+\frac{p+1}{2}(v^2+v)g_2(x)+\frac{p+1}{2}(v^2-v)g_3(x)\rangle$$ is a skew cyclic code of length $6$ over $R$. Also, the Gray image $\phi(C)$ of $C$ is a skew 3-quasi cyclic code over $F_9$ with parameters $[18, 7, 4]$. Conclusion ========== In this paper, we have studied the structural properties of skew cyclic codes over the ring $F_{q}+vF_{q}+v^2F_{q}$ by taking the automorphism $\theta_t:a+vb+v^2c\mapsto a^{p^t}+vb^{p^t}+v^2c^{p^t}$. We have proved that the Gray image of a skew cyclic code of length $n$ over $F_{q}+vF_{q}+v^2F_{q}$ is a skew $3$-quasi cyclic code of length $3n$ over $F_{q}$. It has also been shown that skew cyclic codes over $F_{q}+vF_{q}+v^2F_{q}$ are principally generated.
Further, we have obtained idempotent generators of skew cyclic codes over $F_{q}+vF_{q}+v^2F_{q}$. [99]{} T. Abualrub, A. Ghrayeb, N. Aydin and I. Siap, *On the construction of skew quasi cyclic codes*, IEEE Trans. Inform. Theory, [**[56]{}**]{}(2010), 2081-2090. T. Abualrub, N. Aydin and P. Seneviratne, *On $\theta$-cyclic codes over $F_2+vF_2$*, Australian Journal of Combinatorics, [**[54]{}**]{}(2012), 115-126. M. Ashraf and G. Mohammad, *On skew cyclic codes over $F_3+vF_3$*, Int. J. Information and Coding Theory, [**[2]{}**]{}(4)(2014), 218-225. M. Bhaintwal, *Skew quasi cyclic codes over Galois rings*, Designs, Codes and Cryptography, [**[62]{}**]{}(1)(2012), 85-101. D. Boucher, W. Geiselmann and F. Ulmer, *Skew cyclic codes*, Appl. Algebra Eng. Commun. Comput., [**[18]{}**]{}(4)(2007), 379-389. D. Boucher, P. Solé and F. Ulmer, *Skew constacyclic codes over Galois rings*, Adv. Math. Commun., [**[2]{}**]{}(3)(2008), 273-292. D. Boucher and F. Ulmer, *Coding with skew polynomial rings*, J. Symb. Comput., [**[44]{}**]{}(2009), 1644-1656. J. Gao, *Skew cyclic codes over $F_p+vF_p$*, J. Appl. Math. and Informatics, [**[31]{}**]{}(2013), 337-342. J. Gao, *Some results on linear codes over $F_p+uF_p+u^2F_p$*, J. Appl. Math. Comput., [**[47]{}**]{}(2015), 473-485. F. Gursoy, I. Siap and B. Yildiz, *Construction of skew cyclic codes over $F_q+vF_q$*, Adv. Math. Commun., [**[8]{}**]{}(2014), 313-322. A. R. Hammons Jr., P. V. Kumar, A. R. Calderbank, N. J. A. Sloane and P. Solé, *The $\mathbb{Z}_4$-linearity of Kerdock, Preparata, Goethals and related codes*, IEEE Trans. Inform. Theory, [**[40]{}**]{}(1994), 301-319. S. Jitman, S. Ling and P. Udomkavanich, *Skew constacyclic codes over finite chain rings*, Adv. Math. Commun., [**[6]{}**]{}(2012), 29-63. I. Siap, T. Abualrub, N. Aydin and P. Seneviratne, *Skew cyclic codes of arbitrary length*, Int. J. Information and Coding Theory, [**[2]{}**]{}(2011), 10-20.
--- abstract: 'Type I X-ray bursts from low-mass X-ray binaries result from a thermonuclear runaway in the material accreted onto the neutron star. Although typical recurrence times are a few hours, consistent with theoretical ignition model predictions, there are also observations of bursts occurring as promptly as ten minutes or less after the previous event. We present a comprehensive assessment of this phenomenon using a catalog of 3387 bursts observed with the BeppoSAX/WFCs and RXTE/PCA X-ray instruments. This catalog contains 136 bursts with recurrence times of less than one hour, that come in groups of up to four events, from 15 sources. Short recurrence times are not observed from so-called ultra-compact binaries, indicating that hydrogen burning processes play a crucial role. Where the neutron star spin frequency is known, these sources all spin rapidly, at over 500 Hz; the rotationally induced mixing may explain burst recurrence times of the order of 10 min. Short recurrence time bursts generally occur at all mass accretion rates where normal bursts are observed, but for individual sources the short recurrence times may be restricted to a smaller interval of accretion rate. The fraction of such bursts is roughly 30%. We also report the shortest known recurrence time of 3.8 minutes.' author: - 'L. Keek, D.K. Galloway, J.J.M. in ’t Zand, A. Heger' bibliography: - 'apj-jour.bib' - 'keek0518.bib' title: 'Multi-Instrument X-ray Observations of Thermonuclear Bursts with Short Recurrence Times' --- Introduction ============ Type I X-ray bursts are thought to result from thermonuclear flashes of hydrogen and/or helium in the envelope of neutron stars (@Woosley1976 [@Maraschi1977; @Lamb1978]). This material is accreted through Roche-lobe overflow from a lower-mass companion star (low-mass X-ray binary, LMXB).
Current one-dimensional models successfully explain burst features such as the peak flux, fluence, decay time and recurrence time (e.g., @Woosley2004 [@Heger2007]; see @Wallace1981 [@Fujimoto1981; @Fushiki1987ApJ] for earlier work). During the flash, over $90\%$ of the accreted hydrogen and helium is expected to burn to carbon and heavier elements (e.g., @Woosley2004). For the next flash to occur, a fresh layer of hydrogen/helium must first be accreted. At typical accretion rates of up to approximately $10^{-8}\,\mathrm{M_{\odot}yr^{-1}}$ this takes at least a few hours. X-ray bursts have been observed since the 1970s (@Grindlay1976 [@1976Belian]) from approximately $90$ sources in our Galaxy, with recurrence times of hours up to days (e.g., @Lewin1993 [@Strohmayer2006]). @lewin76mnras reported the detection with *SAS-3* of three bursts that were separated by only $17$ and $4$ minutes. These bursts originated from a crowded region and, therefore, source confusion cannot be ruled out. In the 1980s, recurrence times as short as $10$ minutes were observed from both 4U 1608-522 with *Hakucho* (@1608:murakami80pasj) and from EXO 0748-676 with *EXOSAT* (@0748:gottwald86apj [@0748:gottwald87apj]). This rare phenomenon implies that hydrogen and helium are left over somewhere on the star after the initial flash, because the recurrence time is too short to accrete enough fuel for the subsequent burst(s). This is at odds with the current models, which predict an almost complete burning of the available hydrogen and helium on the entire star surface. @Boirin2007 analyzed 158 hours of *XMM-Newton* observations of EXO 0748-676, which revealed short recurrence time bursts in groups of two (doubles) and three (triples). This relatively large burst sample revealed that on average bursts with a short recurrence time ($8$ to $20$ minutes) are less bright and energetic than bursts with ‘normal’ recurrence times (over $2$ hours).
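The few-hour accretion timescale quoted earlier in this section can be checked with a back-of-the-envelope estimate (a sketch with assumed fiducial values, none of which are taken from this paper: an ignition column depth of order $3\times10^{8}\,\mathrm{g\,cm^{-2}}$ and a neutron star radius of $10$ km):

```python
# Rough estimate of the burst recurrence time: the time needed to accrete
# an ignition layer over the whole neutron star surface.
# All numerical values below are assumed fiducial values, not from the text.
M_SUN = 1.989e33              # g, solar mass
YEAR = 3.156e7                # s
PI = 3.141592653589793

y_ign = 3e8                   # g/cm^2, assumed ignition column depth
radius = 1.0e6                # cm, i.e. 10 km
mdot = 1e-8 * M_SUN / YEAR    # accretion rate in g/s

fuel_mass = y_ign * 4.0 * PI * radius**2   # g needed for one burst
t_recur_hr = fuel_mass / mdot / 3600.0     # recurrence time in hours
```

The result is of the order of hours, consistent with the typical recurrence times quoted above; a ten-minute recurrence would correspond to only a small fraction of this column, which is why short recurrence times point to leftover, unburned fuel.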
The fit of a black body model to the burst spectrum shows a lower peak temperature, while the emitting area is the same. The profiles of short recurrence time bursts seemingly lack the long $50\,\mathrm{s}$ to $100\,\mathrm{s}$ tail caused by *rp*-process burning, which indicates that the burst fuel contains less hydrogen. After a double or triple it takes on average more time before another burst occurs, suggesting a more complete burning of the available fuel. @Galloway2008catalog showed that there are more sources that show this behavior, with bursts occurring in groups of up to four bursts, and with recurrence times as short as $6.4$ minutes. The short recurrence times were observed predominantly when the persistent flux is between approximately $2\%$ and $4\%$ of the Eddington-limited flux. Furthermore, indications were found for an association between short recurrence times and the accretion of hydrogen-rich material. The shortest recurrence time previously reported is $5.4$ minutes (@Linares2009ATel). Different ideas have been put forward to explain this rare bursting behavior. As most of the models only resolve the neutron star envelope in the radial direction, it is possible that short recurrence time bursts are due to multi-dimensional effects, such as the confinement of accreted material on different parts of the surface, possibly as the result of a magnetic field (e.g., @Melatos2005 [@Lamb2009]). @Boirin2007, however, found that the different bursts originate from an emitting area of similar size. Furthermore, the indication of a different fuel composition for the bursts with short recurrence times argues against any scenario where accreted material of the same composition burns on different parts of the surface. The idea of a burning layer with an unburned layer on top has been investigated (@1636:fujimoto87apj). After the first layer flashes, the second layer could be mixed down to the depth where a thermonuclear runaway occurs.
Mixing may be driven by rotational hydrodynamic instabilities or by instabilities due to a rotationally induced magnetic field (@Piro2007 [@Keek2009]). The mixing processes take place on the correct time scale of approximately ten minutes. Although this scenario is able to explain many of the observed aspects of short recurrence time bursts, it has not been reproduced with a multi-zone stellar evolution code that includes a full nuclear burning network. @Taam1993 created models that exhibit ‘erratic’ bursting behavior, reminiscent of short recurrence time bursts. Later versions of the employed code, however, no longer produce this, most likely because of the inclusion of a more extensive nuclear network (@Woosley2004). As a different explanation for the reignition, @Boirin2007 suggested a waiting point in the chain of nuclear reactions: there may be a point in the chain where a decay reaction with a half-life similar to the short recurrence times stalls nuclear burning before it continues. We use an improved version of the burst catalog compiled by @Galloway2008catalog, which is extended with the X-ray burst observations of the WFCs on board *BeppoSAX* (e.g., @Cornelisse2003). This is the largest collection of X-ray bursts used in any study to date, and it allows us to study the short recurrence time phenomenon in much more detail.

Observations and Analysis Methods
=================================

Nomenclature
------------

In this paper we name the different kinds of bursts using the conventions of @Boirin2007. Bursts with recurrence (waiting) times shorter than one hour are referred to as *short waiting time (SWT)* bursts, while longer recurrence time bursts are *long waiting time (LWT)* bursts. The one-hour boundary is chosen to discriminate between the two distinct groups of bursts we observe (Sect. \[sub:Recurrence-times\]). A burst *event* is defined as a series of bursts, where any two or more subsequent bursts are separated by a short waiting time.
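As an illustration, this grouping of bursts into events can be sketched with a small helper (a hypothetical function, not from the MINBAR pipeline; burst start times in hours, split at the one-hour boundary defined above):

```python
def group_events(burst_times, max_wait=1.0):
    """Group sorted burst start times (hours) into events: a new event
    starts whenever the waiting time reaches the one-hour boundary."""
    events = [[burst_times[0]]]
    for prev, cur in zip(burst_times, burst_times[1:]):
        if cur - prev < max_wait:
            events[-1].append(cur)   # SWT burst: belongs to the same event
        else:
            events.append([cur])     # LWT burst: starts a new event
    return events

# Hypothetical burst times: a triple, a single, and a double
times = [0.0, 0.2, 0.5, 5.0, 12.0, 12.3]
print([len(e) for e in group_events(times)])  # [3, 1, 2]
```

With this convention, an event of length one is a single, and events of length two to four are the multiple-burst events counted in the tables below.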
We refer to an event with one, two, three, or four bursts as a *single*, *double*, *triple*, or *quadruple* burst, respectively. Furthermore, we call a double, triple, or quadruple burst a *multiple-burst event*. For the bursts within a multiple-burst event, we use the terms *first burst* and *follow-up burst*, where the former refers to the LWT burst and the latter to any SWT burst in the event. Burst catalog ------------- Because SWT bursts are rare, we need a large sample of bursts to study them well. We use a preliminary version of the Multi-INstrument Burst ARchive (MINBAR), which is a collection of Type I bursts that are observed with different X-ray instruments, and that are all analyzed in a uniform way. MINBAR is a continuation of the effort which started with the *RXTE* PCA burst catalog by @Galloway2008catalog, combined with the many X-ray bursts observed with the WFCs on *BeppoSAX* (e.g., @Cornelisse2003). The catalog will be presented in full in a forthcoming paper. Currently it contains information on 3402 bursts from 65 sources, among which are 136 SWT bursts (MINBAR version 0.4). In comparison, @Galloway2008catalog report 1187 bursts from 48 sources, among which are 84 bursts with a short recurrence time, in that paper defined as $<30\,\mathrm{min}$ (MINBAR contains 110 SWT bursts using this criterion). Instruments\[sub:Instruments\] ------------------------------ The *Rossi X-ray Timing Explorer* (*RXTE*) was launched on December 30, 1995. One of the instruments on-board is the Proportional Counter Array (PCA; @Jahoda2006), consisting of five proportional counter units (PCUs) which observe in the $1$ to $60\,\mathrm{keV}$ energy range. The PCA has a large collecting area of $8000\,\mathrm{cm^{2}}$ ($1600\,\mathrm{cm^{2}}$ for each PCU). The PCUs are co-aligned and have a collimator that gives a $1^{\circ}$ FWHM field of view. 
As part of the primary mission objective of *RXTE*, the PCA gathered a large amount of exposure time on the galactic LMXB population. The observations we use have an average duration of $78\,\mathrm{min}$. We use all data that was publicly available in July 2008 (compared to June 2007 for @Galloway2008catalog). The High-Energy X-ray Timing Experiment (HEXTE; @Gruber1996) on *RXTE* consists of two clusters of NaI/CsI scintillation detectors that allow for observations in the $15$ to $250\,\mathrm{keV}$ energy range. If available, we combine PCA and HEXTE observations to obtain a broad-band X-ray spectrum when studying the persistent flux of bursting sources. The All-Sky Monitor (ASM) on *RXTE* consists of three scanning shadow cameras (SSCs) on a rotating beam that image a large part of the sky each satellite orbit, in a series of short $90\,\mathrm{s}$ observations. The SSCs observe in the $1.5$ to $12\,\mathrm{keV}$ energy range. A few months after *RXTE*, the *BeppoSAX* observatory was launched in April 1996 (@1997Boella), and it was operational until April 30, 2002. On board were two Wide-Field Cameras (WFCs), which faced in opposite directions (@1997Jager). The WFCs were sensitive in the 2 to 28 keV band-pass, and each camera imaged $40^{\circ}\times40^{\circ}$ of the sky at any time using a coded mask aperture. Twice-yearly observations of the Galactic Center were performed, resulting in a large exposure time for many of the LMXBs in the Galaxy (@2004ZandWFC). The WFC observations have a mean duration of $255\,\mathrm{min}$. We use all WFC data. For the majority of the bursters, which are located near the Galactic Center, we gather a total net exposure time of approximately $50$ days. Both observatories were placed in a low Earth orbit of approximately $96$ minutes. During the observation of a particular source, the source is obscured by the Earth for a typical duration of approximately $36$ minutes per orbit.
Furthermore, when the satellites passed through the South Atlantic Anomaly (SAA), the detectors were turned off to prevent damage. This introduces data gaps of $13$ to $26$ minutes. The precise length of the different data gaps depends on the latitude of the source with respect to the satellite orbit, which was different for *BeppoSAX* and *RXTE*.

Burst analysis
--------------

We briefly discuss the process of the burst analysis. The generation of data products for the different instruments is handled identically to the studies by @Galloway2008catalog and @Cornelisse2003. Our method of burst analysis is in principle the same as for those studies, but extra care was taken to ensure that results from different instruments are directly comparable. For more details of the analysis of PCA and WFC data, we refer to @Galloway2008catalog and @Cornelisse2003, respectively. To find burst occurrences, we generate light curves for each instrument, for each known bursting source. In the light curves we locate all events that rise significantly above the background, e.g., exceeding the mean flux level by at least four times the standard deviation. These events are checked by eye for the characteristic profile of a fast rise and an exponential-like decay. Next, time-resolved spectroscopy is performed on the candidate bursts. We divide the individual bursts into time intervals, such that we obtain for each interval a burst spectrum with similar and sufficient statistics. The net burst spectrum is obtained by subtracting the spectrum of the persistent emission observed during the entire observation, excluding the burst (e.g., @2002Kuulkers). We fit the burst spectra with a black body model, taking into account the effects of interstellar absorption (@1983Morrison; solar abundances from @Anders1982). From this we find a black body temperature and radius, as well as the unabsorbed flux.
By extrapolating the fitted black body model beyond the observed energy range, we obtain the bolometric unabsorbed flux. Integrating the flux over the burst yields the burst fluence. The decay time is determined by fits to the burst light curve. We obtain the net burst light curve by subtracting the persistent flux, as measured in the entire observation, excluding the burst. We start fitting the decay when the flux drops below $90\%$ of the peak flux. This has the advantage that we are less sensitive to the Poisson noise at the peak or to the effects of radius expansion. The decay of many bursts is well fit by the sum of two exponentials, yielding two exponential decay times. For some bursts two exponentials did not yield a statistically better fit than a single exponential; for those bursts we report only a single decay time. Note that a good fit was not obtained for all bursts, but the ‘best’ fit still provides a qualitative description of the burst decay. The determination of the burst decay time is currently not done uniformly for the different instruments: for the PCA the bolometric flux is used, while for the WFCs the flux in counts per second is used. We compared the decay times obtained for $15$ bursts that have been observed by both instruments. On average, the decay times for the WFC bursts are $(22\pm7)\%$ longer than for the PCA bursts. Although the decay time scales are not fully compatible across the different instruments, we can still use them to differentiate between short bursts and the longer bursts with the *rp*-process tail.

Persistent emission and mass accretion rate\[sub:Persistent-emission-and\]
--------------------------------------------------------------------------

We obtain the persistent flux for each burst by fitting the spectrum from the entire observation, excluding the burst.
By extending the fitted spectral model (typically a comptonized spectrum or a black body + power law; see @Galloway2008catalog for details) outside the observed energy range, we determine the bolometric correction. The uncertainty in the bolometric correction can be as small as $10\%$ when we combine *RXTE* PCA and HEXTE observations to obtain a broadband spectrum. The *BeppoSAX* WFCs, however, do not have such broad energy coverage, and some PCA observations suffer from source confusion, which leads to an increased uncertainty in the bolometric correction of up to $30\%$. To convert flux to luminosity and fluence to energy, we multiply by $4\pi d^{2}$, with $d$ the distance to the source. We use the distances from @Kuulkers2003 for globular cluster sources, and from @Galloway2008catalog (distances from photospheric-radius expansion bursts observed with the PCA) and @Liu2007 for the other sources. Most measurements of the distance have an uncertainty of the order of $30\%$. Combined with a $30\%$ error in the bolometric correction, this leads to an uncertainty in the luminosity and the fluence of up to approximately $70\%$. The persistent X-ray emission from an LMXB mainly originates from the inner part of the accretion disk and, therefore, is a measure of the mass accretion rate (e.g., @1826:galloway04apj). We express the mass accretion rate $\dot{M}$ in terms of the Eddington-limited accretion rate $\dot{M}_{\mathrm{Edd}}$ by equating their ratio to the ratio of the persistent luminosity $L$ and the Eddington luminosity for hydrogen-accreting sources $L_{\mathrm{Edd}}=2\cdot10^{38}\,\mathrm{erg\, s^{-1}}$: $\dot{M}/\dot{M}_{\mathrm{Edd}}=L/L_{\mathrm{Edd}}$. $L_{\mathrm{Edd}}$ depends on the neutron star mass and the hydrogen fraction of the accreted material (e.g., @Bildsten1998), neither of which is known to great precision for most LMXBs.
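As a minimal sketch of these conversions (the flux and distance below are hypothetical; $L_{\mathrm{Edd}}=2\cdot10^{38}\,\mathrm{erg\,s^{-1}}$ as above):

```python
from math import pi

KPC = 3.086e21    # cm per kiloparsec
L_EDD = 2e38      # erg/s, Eddington luminosity assumed in the text

def persistent_luminosity(flux, distance_kpc):
    """Convert a bolometric persistent flux (erg cm^-2 s^-1) to a
    luminosity via L = 4 pi d^2 F, assuming isotropic emission."""
    d = distance_kpc * KPC
    return 4.0 * pi * d**2 * flux

# Hypothetical source: F_pers = 1e-9 erg/cm^2/s at d = 8 kpc
L = persistent_luminosity(1e-9, 8.0)
mdot_ratio = L / L_EDD   # Mdot/Mdot_Edd, equated to L/L_Edd
print(f"L = {L:.2e} erg/s, Mdot/Mdot_Edd = {mdot_ratio:.3f}")
# about 3.8% of Eddington for these assumed values
```

Note that the $30\%$ distance uncertainty enters the luminosity quadratically through $d^{2}$, which is why the combined uncertainty can reach $\sim70\%$.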
The current observational constraints on the mass (@Lattimer2007) introduce an uncertainty of several tens of percent. Furthermore, we assume that the accretion process has an efficiency of $100\%$. It is possible that part of the matter leaves the system in a jet, as observed in black-hole binaries (e.g., @Fender2005). In this paper, however, we neglect this possibility and assume the luminosity is a good measure of the mass accretion rate. We also neglect any anisotropy factors that may arise from the inclination of the disk with respect to the line of sight; because the inclination is ill-constrained for most LMXBs, we assume isotropic emission. The combined uncertainties are quite large, but they only play a role when we compare different sources; they are of no consequence when we compare the bursts of any single source. We will still compare different sources, but one must be careful to keep these uncertainties in mind.

Results
=======

  ------------------------------------ ----------------------- ------------------------- -------- -------- -------- -------- ----------- ----------------------------------------
  Name                                 $\nu_{\mathrm{spin}}$   $t_{\mathrm{exposure}}$   MINBAR   Single   Double   Triple   Quadruple   Remarks
                                       (Hz)                    (days)                    bursts
  EXO 0748-676                         $552$$^{\mathrm{a}}$    83.1                      269      251      9        0                    Triples: @Boirin2007; Appendix
                                                               30.9                      17       17       0                            Double: @aoki92pasj
                                                               53.9                      41       33       4
  4U 1608-52                           $620$$^{\mathrm{b}}$    52.4                      67       60       2        1
  4U 1636-536                          $581$$^{\mathrm{c}}$    60.6                      241      212      11       1        1
  MXB 1658-298                         $567$$^{\mathrm{d}}$    49.8                      27       25       1
                                                               60.1                      125      108      5        1        1
                                                               47.9                      18       16       1
                                                               54.7                      50       38       6
  2S 1742-294                                                  56.3                      269      240      10       3
  EXO 1745-248 (Tz5)$^{\mathrm{f}}$                            48.3                      24       15       1        1        1           Type II bursts? (@Galloway2008catalog)
  4U 1746-37 (NGC 6441)$^{\mathrm{f}}$                         55.1                      31       15       8                             Two sources? (@Galloway2004AIPC)
                                                               71.5                      63       57       3
  Aql X-1                              $549$$^{\mathrm{e}}$    34.3                      60       49       4        1
  Cyg X-2$^{\mathrm{f}}$                                       55.7                      55       45       5
  ------------------------------------ ----------------------- ------------------------- -------- -------- -------- -------- ----------- ----------------------------------------

$^{\mathrm{a}}$ @Galloway2009; $^{\mathrm{b}}$ @Muno2002; $^{\mathrm{c}}$ @Strohmayer1998; $^{\mathrm{d}}$ @Wijnands2001; $^{\mathrm{e}}$ @Zhang1998; $^{\mathrm{f}}$ Source excluded from our analysis. See Sect. \[sub:Source-selection\] for details.

Source selection\[sub:Source-selection\]
----------------------------------------

The MINBAR catalog contains X-ray bursts from $65$ sources, $15$ of which exhibit SWT bursts (Table \[tab:Overview-of-16\]), i.e., bursts with a recurrence time shorter than one hour. We exclude the Rapid Burster, which is known to exhibit Type II bursts that are not of thermonuclear origin (e.g., @Lewin1993). Interestingly, no multiple-burst events are detected from candidate and confirmed ultra-compact binaries (UCXBs). The companion star in a UCXB is thought to be an evolved star, donating hydrogen-poor matter to the neutron star (@Zand2005). For confirmed UCXBs the binary period has been measured, while candidates are identified by tentative measurements of the period, by a low optical to X-ray flux, or by stable mass transfer at rates below $1\%$ of the Eddington-limited rate. We employ the list of (candidate) UCXBs from @intZand2007, which omits candidates proposed on the basis of their X-ray spectrum, as that is likely a less reliable method. From these sources we find $229$ bursts, none of which are SWT bursts. The frequent burster 4U 1728-34 (GX 354-0) is suspected of being a UCXB based on its bursting behavior (@Galloway2008catalog). $543$ bursts of this source are present in MINBAR, but no multiple-burst events.
As this strongly suggests that SWT bursts are limited to hydrogen-rich accretors, we exclude from our studies the (candidate) UCXBs on the list of @intZand2007, with the addition of 4U 1728-34. GX 17+2 and Cyg X-2 exhibit bursts at accretion rates close to the Eddington limit (e.g., @2002Kuulkers), while most LMXBs do not show bursts above approximately $10\%$ of Eddington (e.g., @Paradijs1988 [@Cornelisse2003]). Of these two sources, Cyg X-2 exhibits SWT bursts. Due to the high accretion rate, however, the amount of matter accreted in between the bursts is enough to account for the amount of fuel burned in the bursts. Furthermore, @Galloway2008catalog find indications that (some of) the bursts of GX 17+2 and Cyg X-2 could be Type II bursts. For these reasons we exclude these two sources as well. The bursts from EXO 1745-248, located in the globular cluster Terzan 5, exhibit only weak evidence for cooling, which means that their thermonuclear origin is not firmly established and that they possibly are Type II bursts. Another sign that the bursting behavior is anomalous is the fact that most sources in Table \[tab:Overview-of-16\] exhibit more double bursts than triples and quadruples, while EXO 1745-248 has one of each. We exclude this source from our studies. 4U 1746-37, located in the globular cluster NGC 6441, exhibits both faint and bright bursts. @Galloway2004AIPC found in PCA observations that the faint bursts occur at very regular intervals, unaffected by the occurrence of bright bursts, and vice versa. This led to the speculation that the faint bursts originate from a different LMXB that is also located in NGC 6441. We, therefore, exclude the bursts from this source. After excluding the mentioned sources, we consider $44$ hydrogen-rich accretors from which we observe $2274$ single, $56$ double, $7$ triple, and $2$ quadruple events.

Source confusion
----------------

While the WFCs are imaging instruments, the PCA is not.
The PCA’s collimators restrict the field of view to $1^{\circ}$ FWHM. Especially in crowded regions, such as near the Galactic Center, multiple X-ray sources may be in the field of view. For the sources in Table \[tab:Overview-of-16\] we check the *RXTE* ASM light curves for any nearby bright X-ray sources that were active at the time of PCA burst observations. This is the case for 2S 1742-294 and SAX J1747.0-2853. Bursts for which we cannot reliably measure the persistent flux are excluded from our studies. The problem is small for SAX J1747.0-2853, because most of its bursts were observed with the WFCs, and we have a reliable measurement of the persistent flux for all three SWT bursts. For 2S 1742-294 the problem of source confusion plays a role for $61$ bursts, including $12$ out of $16$ SWT bursts. Most of these bursts, including all SWT bursts, occur in the time interval MJD 52175–52195. The persistent flux as measured with the WFCs for bursts from that period varies by less than $10\%$; we assign the mean WFC persistent flux to those PCA bursts.

Data gaps and recurrence time\[sub:Data-gaps-and\]
--------------------------------------------------

We define the recurrence time as the time since the previous burst from the same source in the catalog. Due to the frequent data gaps (Sect. \[sub:Instruments\]), we must keep in mind that these are upper limits. Performing Monte Carlo simulations, we investigate the effect of the data gaps on the number of observed SWT and LWT bursts. We generate a series of $10^{6}$ burst occurrence times and check which bursts fall in data gaps.
We use EXO 0748-676 as a template: we generate LWT and SWT bursts with an SWT fraction of $30\%$; we use the mean LWT and SWT recurrence times of $3.0$ hours and $12.7$ minutes, respectively (@Boirin2007); and we position the bursts in time following a Gaussian distribution around the LWT or SWT $t_{\mathrm{recur}}$, with a width of $16\%$ of either $t_{\mathrm{recur}}$ (the mean variability in persistent flux of a series of persistent sources; @kee06), to model variations in the mass accretion rate. To check which of these bursts would be observed in the presence of data gaps, we assume a $96$ min. satellite orbit, containing a $36$ min. data gap due to Earth occultation. The presence and duration of data gaps due to the South-Atlantic Anomaly (SAA) depends on the position of the source and the satellite at the time of the observation. We model SAA data gaps by placing a $20$ min. gap at a random phase of each orbit. Only $49\%$ of the generated bursts are ‘observed’; the rest coincide with a data gap. When a burst is missed, the recurrence time of the next burst, as seen from the previously detected burst, is incorrectly found to be longer: the distribution of observed recurrence times has a long tail at larger $t_{\mathrm{recur}}$ than is present in the intrinsic distribution (Fig. \[fig:data gaps\]).

![\[fig:data gaps\]Histogram of simulated recurrence times $t_{\mathrm{recur}}$. The *intrinsic* distribution is from a Monte Carlo simulation of $10^{6}$ bursts with the SWT and LWT recurrence times as well as the SWT fraction of EXO 0748-676 (@Boirin2007). The *observed* distribution includes the effect of data gaps due to Earth occultation and the South-Atlantic Anomaly. All bursts with $t_{\mathrm{recur}}<60\,$min. (dotted line) are considered SWT bursts.](f1)

There are also bursts in between the Gaussian peaks of SWT and LWT bursts, but their number is less than $10\%$ of the total number of SWT bursts.
Because most of these bursts have $t_{\mathrm{recur}}<60$ min., we still consider them SWT bursts. The fraction of bursts that are SWT bursts is reduced when one or more bursts in a multiple-burst event occur during a data gap. While we start our simulations with an SWT fraction of $30\%$, the observed distribution has a fraction of $20\%$. We repeat the simulations with different values of the SWT $t_{\mathrm{recur}}$. The total fraction of detected bursts remains the same, but the SWT fraction drops from $30\%$ to $10\%$ for increasing $t_{\mathrm{recur}}$, until $t_{\mathrm{recur}}$ exceeds the duration of the Earth-occultation data gap (Fig. \[fig:swtfrac\_data gaps\]).

![\[fig:swtfrac\_data gaps\]SWT fraction as a function of the SWT recurrence time $t_{\mathrm{recur}}$. The solid line allows for variation of $t_{\mathrm{recur}}$ following a Gaussian distribution, while the dashed line does not. The latter drops to $0$ at $t_{\mathrm{recur}}=60\,\mathrm{min}.$, which is the largest SWT $t_{\mathrm{recur}}$ we consider.](f2)

Repeating the simulations with different durations of the Earth-occultation data gap, the SWT fraction drops by only a few percent for longer data gaps, until the fraction quickly goes to $0$ when an SWT recurrence time no longer fits in the observed part of the orbit (Fig. \[fig:swtfrac\_data gaps-1\]).

![\[fig:swtfrac\_data gaps-1\]SWT fraction as a function of the duration of the Earth-occultation data gaps. The SWT fraction drops to $0$ when the data gaps approach the $96$ min. duration of the satellite orbit.](f3)

A similar decrease of the SWT fraction is expected if the duration of the observation is less than the SWT recurrence time. While the average exposure times of both the PCA and the WFCs exceed one hour, a substantial part of the observations had a shorter duration: $19\%$ of the WFC exposures and $66\%$ of the PCA exposures were shorter than one hour.
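The data-gap simulation described above can be sketched as follows. This is a simplified reimplementation under the EXO 0748-676 template assumptions stated in the text (30% SWT fraction, mean recurrence times of 3.0 h and 12.7 min, 16% Gaussian spread, a 96-min orbit with a 36-min occultation gap and a 20-min SAA gap at a random orbital phase), not the code used for the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

n_bursts = 100_000
t_lwt, t_swt = 180.0, 12.7          # mean waiting times in minutes
swt_frac, spread = 0.30, 0.16
orbit, occ, saa = 96.0, 36.0, 20.0  # minutes

# Generate burst times: each waiting time is drawn around the LWT or SWT mean
mean_wait = np.where(rng.random(n_bursts) < swt_frac, t_swt, t_lwt)
times = np.cumsum(np.abs(rng.normal(mean_wait, spread * mean_wait)))

# Earth occultation occupies a fixed phase window of each orbit;
# the SAA gap starts at a random phase of each orbit
phase = times % orbit
orbit_idx = (times // orbit).astype(int)
saa_start = rng.random(orbit_idx.max() + 1) * (orbit - saa)
in_saa = (phase >= saa_start[orbit_idx]) & (phase < saa_start[orbit_idx] + saa)
observed = (phase >= occ) & ~in_saa

# Recurrence times as seen from the previously *detected* burst
t_recur = np.diff(times[observed])
print(f"detected fraction:     {observed.mean():.2f}")   # close to the 49% above
print(f"observed SWT fraction: {(t_recur < 60).mean():.2f}")
```

Missed bursts lengthen the apparent recurrence time of the next detected burst, which reproduces both the long tail in the observed distribution and the drop of the SWT fraction below its intrinsic value.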
Detection limits ---------------- ![\[fig:instruments\]Histogram of observed peak burst flux $F_{\mathrm{peak}}$ for the WFCs and the PCA. Additionally we show the distributions for the short recurrence time bursts.](f4) The instruments we use have different detection limits. The WFCs had a substantially larger field of view than the PCA, which results in a higher background level. Furthermore, the PCA has a $46$ times larger collecting area than each WFC. Consequently, we are able to find fainter bursts in PCA data than in WFC data (Fig. \[fig:instruments\]). This is especially important for the SWT bursts, as they have been found to be on average fainter than the LWT bursts (e.g., @Boirin2007). In the PCA data we find SWT bursts with peak flux $F_{\mathrm{peak}}$ as low as $1.6\cdot10^{-10}\,\mathrm{erg\, cm^{-2}\, s^{-1}}$, while in the WFC data the faintest SWT burst has $F_{\mathrm{peak}}=3.6\cdot10^{-9}\,\mathrm{erg\, cm^{-2}\, s^{-1}}$. As a result, we find many more SWT bursts with the PCA: for the PCA we find 76 SWT bursts out of a total of 910 bursts, and for the WFCs we find 14 SWT bursts out of 1560 bursts. \[sub:Recurrence-times\]Recurrence times ---------------------------------------- We plot for all bursts the recurrence time $t_{\mathrm{recur}}$ as a function of the persistent luminosity $L_{\mathrm{pers}}$ (Fig. \[fig:twait\_raw\]). While most bursts have a recurrence time of at least several hours, there is also a group of SWT bursts with $t_{\mathrm{recur}}<1$ hour. There is an intrinsic spread in $t_{\mathrm{recur}}$ due to, for example, variations in the mass accretion rate, or variations in the temperature in the neutron star envelope. We investigate whether the short recurrence times can be explained as the tail of the distribution of the long recurrence times. For this distribution we assume a Gaussian, even though it is not certain whether this is correct far from the mean. 
The data gaps modify the observed distribution, especially towards longer $t_{\mathrm{recur}}$ (Sect. \[sub:Data-gaps-and\]). The leading part of the Gaussian, however, is not modified substantially, apart from the overall lower number of observed bursts due to the lower net exposure time (Fig. \[fig:data gaps\]). We fit a Gaussian to the distribution of recurrence times between $1$ and $3$ hours with the center fixed at $3$ hours. Extrapolating the best fit towards shorter $t_{\mathrm{recur}}$, we predict $5.6$ SWT bursts. The Poisson probability of the observed number of $76$ SWT bursts is negligibly small ($P\lesssim10^{-10}$). Therefore, the short recurrence times follow a separate distribution. There is a separation between bursts with short ($\lesssim0.5$ hour) and long ($\gtrsim1$ hour) recurrence times. Above $L_{\mathrm{pers}}\gtrsim6\cdot10^{36}\,\mathrm{erg\, s^{-1}}$, however, there are some bursts that have recurrence times of $30$ to $60$ minutes. Comparing the recurrence time distributions of SWT bursts ($t_{\mathrm{recur}}<1$ hour) at persistent luminosities below and above $L_{\mathrm{pers}}=6\cdot10^{36}\,\mathrm{erg\, s^{-1}}$, a KS-test yields $P=0.16$, which means we can exclude at $84\%$ confidence that both distributions are the same. This is not a strong constraint, mainly due to the small number of bursts with $L_{\mathrm{pers}}<6\cdot10^{36}\,\mathrm{erg\, s^{-1}}$. It is, however, consistent with the behavior of EXO 0748-676 during EXOSAT, XMM-Newton, and Chandra observations, which resulted in relatively large data sets, where SWT bursts with recurrence times exceeding $30$ minutes only occurred when the persistent flux was larger (@Gold1968 [@Gottwald1987; @Boirin2007]; Appendix).
There are SWT bursts at all values of $L_{\mathrm{pers}}$ where LWT bursts are observed, with the possible exception of $L_{\mathrm{pers}}\gtrsim3\cdot10^{37}\,\mathrm{erg\, s^{-1}}$, although this may be a statistical effect due to the lower number of bursts. The $\alpha$-parameter is defined as the ratio of the persistent fluence between bursts and the burst fluence. Assuming a burst fluence of $2.2\cdot10^{39}\,\mathrm{erg}$ — the average fluence of MINBAR single bursts from hydrogen-rich accretors — we draw lines of constant $\alpha$; $\alpha\simeq40$ is a typical value for many bursters, while $\alpha\simeq1000$ is observed for superbursters (e.g., @Zand2003). The effective $\alpha$-value for the SWT bursts is far below $\alpha=40$, which is the expected value for thermonuclear ignition of mixed hydrogen/helium fuel, highlighting the requirement for ignition of unburned fuel left over from the previous burst. We find the shortest recurrence time reported so far[^1]: $3.8$ minutes for a double burst from 4U 1705-44, detected with the WFCs at MJD 51233.89.

![image](f5)

Accretion rate dependence\[sub:Accretion-rate-dependence\]
----------------------------------------------------------

We use the persistent luminosity $L_{\mathrm{pers}}$ as a measure of the mass accretion rate (Sect. \[sub:Persistent-emission-and\]). A Kolmogorov-Smirnov (KS) test finds that the distributions of all SWT and LWT bursts as a function of $L_{\mathrm{pers}}$ are compatible ($P=0.10$; Fig. \[fig:fpers-histo\]): there are multiple-burst events at all mass accretion rates where normal bursts occur. This does not hold, however, for all individual sources. We investigate a few frequent bursters more closely. For EXO 0748-676 the distributions of single and multiple bursts are compatible ($P=0.73$); 4U 1636-53 and 2S 1742-294 exhibit multiple bursts only in a small $L_{\mathrm{pers}}$ interval, while single bursts occur in a wider range of $L_{\mathrm{pers}}$.
The small $L_{\mathrm{pers}}$ intervals for these two sources do not seem to coincide. The uncertainty in the luminosity, however, is large (Sect. \[sub:Persistent-emission-and\]), so the intervals may still be consistent. We find SWT bursts for only $15$ out of $44$ hydrogen-rich accretors. For most of the sources we can attribute the lack of SWT bursts to the low number of bursts detected per source. There are, however, two bursters, KS 1731-260 and GS 1826-24, from which we have detected over $300$ LWT bursts per source, but no SWT bursts. The position in so-called color-color diagrams, $S_{\mathrm{Z}}$, is regarded as a tracer of the mass accretion rate (@1989Hasinger). We compared the distribution of $S_{\mathrm{Z}}$ for LWT and SWT bursts observed with the PCA from eight sources for which $S_{\mathrm{Z}}$ is well defined: 4U 1608-522, 4U 1636-536, 4U 1702-429, 4U 1705-44, 4U 1728-34, KS 1731-260, Aql X-1, and XTE J2123-058, omitting 4U 1746-37 as explained in Sect. \[sub:Source-selection\] (@Galloway2008catalog). While LWT bursts have associated $S_{\mathrm{Z}}$ values of up to $2.8$, SWT bursts all have $S_{\mathrm{Z}}\lesssim2$. Therefore, we find that SWT bursts are restricted to the so-called island state, while LWT bursts also occur in the ‘banana’ branch. A KS-test yields $P\simeq10^{-3}$, confirming that the $S_{\mathrm{Z}}$ distributions for LWT and SWT bursts are different. Most of the SWT bursts from the frequent burster 4U 1636-53 occurred with $1.5\lesssim S_{\mathrm{Z}}\lesssim2.0$. KS 1731-260 exhibits only a few LWT bursts in that range. This suggests that SWT bursts occur mainly in a small $S_{\mathrm{Z}}$ interval, and that the lack of SWT bursts from the latter source is caused by the low number of observed bursts in that range. Note that we observe SWT bursts with $S_{\mathrm{Z}}$ as low as $0.8$, so the interval is not the same for all sources.

![image](f6a)![image](f6b) ![image](f6c)![image](f6d) ![image](f6e)![image](f6f)

Frequency of short vs.
long recurrence times\[sub:Frequency-of-short\]
----------------------------------------------------------------------

![image](f7a)![image](f7b) ![image](f7c)![image](f7d)

We consider $2415$ bursts from hydrogen-accreting sources. $76$ have a short recurrence time: the overall SWT fraction is $(3.1\pm0.4)\%$. The 1-$\sigma$ uncertainty is derived from the Poisson uncertainties in the number of (SWT) bursts. As a function of persistent luminosity there is some variation in the SWT fraction, but there is no clear trend (Fig. \[fig:multiplefrac\]). The weighted mean of the SWT fraction in the range of $L_{\mathrm{pers}}$ where SWT bursts are observed is $(2.3\pm0.3)\%$ for all hydrogen-rich accretors, $(13\pm4)\%$ for 4U 1636-536, $(16\pm4)\%$ for 2S 1742-294 and $(6\pm2)\%$ for EXO 0748-676. We compare these fractions to *XMM-Newton* and *Chandra* observations of EXO 0748-676. In 2003 the *XMM-Newton* EPIC PN observed double and triple bursts from EXO 0748-676 with an SWT fraction of $(32\pm7)\%$ (@Boirin2007). @Homan2003 find $4$ single and $4$ double bursts in *XMM-Newton* EPIC PN and MOS observations from 2000 and 2001, which results in an SWT fraction of $(33\pm19)\%$. *Chandra* ACIS-S observations from 2001 and 2003 exhibit $41$ bursts with an SWT fraction of $(27\pm9)\%$ (see Appendix). The weighted mean of these rates is $(30\pm5)\%$. An important difference between the *XMM-Newton* and *Chandra* data on the one hand, and the *RXTE* and *BeppoSAX* data on the other, is the frequent data gaps in the observations of the latter two observatories. From Monte Carlo simulations we found that data gaps reduce an SWT fraction of $30\%$ to $20\%$ (Sect. \[sub:Data-gaps-and\]). Even this reduced fraction is significantly higher than the $(6\pm2)\%$ we obtain from MINBAR for EXO 0748-676. We repeat the simulations for 4U 1636-536 and 2S 1742-294, using the mean SWT recurrence times from MINBAR: $16.2$ min and $20.5$ min, respectively.
The obtained SWT fractions, respectively $18\%$ and $16\%$, are consistent within $1.3\sigma$ with the fractions from MINBAR. The discrepancy for EXO 0748-676 may arise because the WFCs are less sensitive to fainter bursts than the PCA or the instruments on *XMM-Newton* and *Chandra*. Taking only the PCA bursts into account, we find SWT fractions of $(11\pm6)\%$ for EXO 0748-676, $(13\pm4)\%$ for 4U 1636-53, and $(23\pm6)\%$ for 2S 1742-294, all of which are within $1.5\sigma$ of the fractions found from the Monte Carlo simulations. Two frequent bursters exhibit no SWT bursts: KS 1731-260 and GS 1826-24. Combining the number of LWT bursts observed from these sources, we derive an upper limit to the SWT fraction of $0.14\%$. This is over $20$ times smaller than the SWT fraction we find for all hydrogen-accreting sources combined.

Temperature and energetics
--------------------------

From time-resolved spectral analysis of the bursts, we obtain the black-body temperature and the burst energetics. The peak temperature of the first bursts in multiple-burst events is on average higher than the peak temperature of the SWT bursts (Fig. \[fig:peak temp\]). A KS test shows that the temperature distributions for single bursts and first bursts are not compatible ($P\simeq10^{-12}$). We compare the peak luminosity and the fluence of single bursts to those in multiple-burst events (Fig. \[fig:peak luminosity\], Fig. \[fig:fluence\]). The SWT bursts have on average a lower peak luminosity and a lower fluence. The energetics of the first bursts do not follow the same distributions as those of the single bursts ($P\lesssim10^{-2}$). By adding the fluence of all bursts in a multiple-burst event, we calculate the total event fluence (Fig. \[fig:event fluence\]). Multiple-burst events are on average more energetic than single burst events, but do not have a higher fluence than the most energetic single bursts.
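The counting statistics used above reduce to simple Poisson bookkeeping: fractions carry errors from the $\sqrt{N}$ uncertainties on the burst counts, and a non-detection constrains the fraction through the probability of observing zero events. A minimal sketch (assuming symmetric 1-$\sigma$ errors and independent counts; this is not the MINBAR pipeline):

```python
from math import sqrt, exp

def swt_fraction(n_swt, n_total):
    """SWT fraction with a 1-sigma uncertainty propagated from the
    Poisson (sqrt-N) errors on both burst counts."""
    f = n_swt / n_total
    err = f * sqrt(1.0 / n_swt + 1.0 / n_total)
    return f, err

def prob_no_swt(expected):
    """Poisson probability of detecting zero SWT bursts when `expected`
    SWT bursts are expected (relevant for sources without detections)."""
    return exp(-expected)

frac, err = swt_fraction(76, 2415)  # overall sample: (3.1 +/- 0.4)%
```

For the full hydrogen-accretor sample, `swt_fraction(76, 2415)` reproduces the quoted $(3.1\pm0.4)\%$.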
Summarizing, SWT bursts are on average weaker and cooler than LWT bursts, but the combined fluence in multiple-burst events is on average $8\%$ higher than the fluence of single-burst events.

![\[fig:peak temp\]Histogram of peak black-body temperature of single bursts, the first bursts in multiple events, and the SWT bursts.](f8)

![\[fig:peak luminosity\]Histogram of bolometric peak luminosity of single bursts, the first bursts in multiple events, and the SWT bursts.](f9)

![\[fig:fluence\]Histogram of bolometric fluence of single bursts, the first bursts in multiple events, and the SWT bursts.](f10)

![\[fig:event fluence\]Histogram of the summed bolometric fluence of all bursts in the different multiple-burst events.](f11)

Decay time scale
----------------

The two-component exponential provides a good fit to many bursts. If the two-component decay does not provide a significantly better fit than the one-component fit, we use the latter. We compare the longest decay time scales of all bursts in Fig. \[fig:decay time\]. On average, the SWT bursts decay faster than the other bursts. KS tests show that all distributions are incompatible ($P\lesssim10^{-2}$).

![\[fig:decay time\]Histogram of the exponential decay time for single bursts of hydrogen-rich accretors, as well as the first and SWT bursts in multiple events.](f12)

Neutron stars with SWT bursts spin fast
---------------------------------------

The spin frequency $\nu_{\mathrm{spin}}$ of an accreting neutron star is measured in observations of accretion-powered pulsations or burst oscillations. Both mechanisms are thought to arise from hotter and, hence, brighter spots on the surface rotating in and out of view. Currently $\nu_{\mathrm{spin}}$ is known for $25$ accreting neutron stars, $19$ of which are bursters (e.g., @Galloway2008). Among them are $5$ sources that exhibit SWT bursts (Table \[tab:Overview-of-16\]).
They are concentrated towards the high-frequency part of the distribution for all bursting sources (Fig. \[fig:ns spin\]). Since we only have five multiple-bursting sources with a known spin frequency, we cannot exclude that this bias is the result of the small sample. There could be a selection effect: sources with a higher mass accretion rate accrete more angular momentum, causing them to spin up faster. Furthermore, their burst rate is higher, making it easier to detect a rare multiple-burst event. We check the MINBAR catalog for the number of bursts from these sources as a function of the spin frequency (Fig. \[fig:ns spin\]). There is roughly an equal number of bursts observed at higher and at lower $\nu_{\mathrm{spin}}$, so the selection effect is not present. There are a few hydrogen-accreting sources from which we detected a large number of bursts, but no SWT bursts (Sect. \[sub:Accretion-rate-dependence\]). One of these sources, KS 1731-260, is known to have a large spin frequency of $\nu_{\mathrm{spin}}=524\,\mathrm{Hz}$ (@Smith1997).

![\[fig:ns spin\]Histogram of the neutron star spin $\nu_{\mathrm{spin}}$ determined from X-ray observations, for all LMXBs with known spin, for the known bursters and for the sources that exhibit multiple-bursts (Table \[tab:Overview-of-16\]). The latter are concentrated at the high end of the distribution of known spins. The dotted line indicates the number of bursts in the MINBAR catalog for the bursters in each bin (right-hand axis).](f13)

Quadruple burst from 4U 1636-53
-------------------------------

![\[fig:quad\]Quadruple burst from 4U 1636-53 as observed with the *RXTE* PCA on MJD 52286. Light curve at $2\,\mathrm{s}$ time resolution (*top*) and zoomed in on each burst at $1\,\mathrm{s}$ time resolution (*bottom*). For each burst we indicate the (longest) exponential decay time $\tau$.](f14a) ![\[fig:quad\]Quadruple burst from 4U 1636-53 as observed with the *RXTE* PCA on MJD 52286.
Light curve at $2\,\mathrm{s}$ time resolution (*top*) and zoomed in on each burst at $1\,\mathrm{s}$ time resolution (*bottom*). For each burst we indicate the (longest) exponential decay time $\tau$.](f14b)

To illustrate the properties of LWT and SWT bursts, of which we have shown the distributions using a large number of bursts from the catalog, we consider the quadruple burst from 4U 1636-53 (Fig. \[fig:quad\]). The four bursts occurred within $54\,\mathrm{minutes}$, and the times between successive burst onsets are $18.2\,\mathrm{min}$, $17.9\,\mathrm{min}$, and $16.8\,\mathrm{min}$. As indicated in Fig. \[fig:quad\], the first burst has by far the longest decay time $\tau$. The first burst has a peak flux and fluence that are over seven times larger than those of any of the three SWT bursts. The peak black-body temperature ($\mathrm{k}T$) of the four bursts is, respectively, $(1.67\pm0.02)\,\mathrm{keV}$, $(1.06\pm0.06)\,\mathrm{keV}$, $(1.51\pm0.06)\,\mathrm{keV}$, and $(1.50\pm0.04)\,\mathrm{keV}$: again the first burst has the highest value. The combined net burst fluence of the quadruple event is $(2.93\pm0.05)\cdot10^{39}\,\mathrm{erg}$, which is $35\%$ higher than the average fluence of single bursts from this source, and $27\%$ higher than the fluence of the first burst of the quadruplet. This means that at least $27\%$ of the available fuel did not burn in the initial burst.

Discussion
==========

We study the short recurrence time behavior in a large sample of bursts from multiple sources. We use a preliminary version of the MINBAR burst catalog, containing $3387$ Type I X-ray bursts from $65$ sources. $15$ sources exhibit bursts with recurrence times less than one hour: SWT bursts. The short recurrence times do not allow for the accretion of the hydrogen and helium that is burned during the burst, which means that it must have been accreted before the previous burst.
For example, the burst fluences from the quadruple event we observe from 4U 1636-53 indicate that at least $27\%$ of the accreted fuel did not burn in the first burst. This is in contradiction with current one-dimensional multi-zone models, which predict that during a flash over $90\%$ of the available fuel is burned (e.g., @Woosley2004). The aim of this study is to provide a comprehensive observational assessment of the SWT behavior. Temperature and energetics -------------------------- We perform time resolved spectroscopy of the bursts, and find that SWT bursts are on average less bright and reach a lower black-body temperature at the peak than the LWT bursts. The fluence of SWT bursts is on average lower, but the combined bursts in a multiple-burst event are as energetic as the most energetic single bursts. This is in agreement with the *XMM-Newton* observations of EXO 0748-676 analyzed by @Boirin2007. In contrast to that investigation, however, we find that the distributions of these quantities for single and for first bursts of multiple-burst events are not compatible according to Kolmogorov-Smirnov tests. It is possible that our improved statistics allows us to see a disparity that previously went unnoticed. Multiple surface regions vs. multiple layers -------------------------------------------- Several ideas have been put forward to explain SWT bursts in an attempt to answer the two main questions: how to preserve fuel during a burst for the next burst, and how to reignite this fuel on a time scale of approximately ten minutes. Subsequent bursts with short recurrence time may take place in different regions of the neutron star surface. The accreted matter may be confined by a magnetic dipole field to the poles of the neutron star, or perhaps the burning front of a burst is stalled at the equator if the inflow of matter from the accretion disk is particularly strong there. 
This would provide an explanation for double bursts, but the triple and quadruple bursts that we observe would require a more complicated configuration of the magnetic field. Furthermore, @Boirin2007 find for EXO 0748-676 that there is no evidence for a difference in the X-ray emitting region during the different bursts of multiple-burst events. Also, they find indications that SWT bursts burn a fuel mixture with a lower hydrogen content, which would not be the case if different regions of pristine accreted material are burned. We, therefore, favor the scenario where SWT bursts take place in different layers on top of each other (@1636:fujimoto87apj). For this to work, the thermonuclear burning during the flash must be halted before the hydrogen and helium is depleted.

No SWT bursts from UCXBs
------------------------

We observe no SWT bursts from any of the $16$ (candidate) ultra-compact sources from which in total $229$ bursts have been observed with the PCA and WFCs. For this reason we excluded these sources from our analyses. For the hydrogen-accreting sources we found that $3.1\%$ of all bursts are SWT bursts. If we assume the same SWT fraction for the most frequently bursting confirmed UCXB, we expect $1.7$ SWT bursts out of the $54$ bursts observed by the PCA and WFCs. The Poisson probability of detecting no SWT bursts when expecting $1.7$ is $0.18$. Taking into account the bursts from all confirmed and candidate UCXBs, we expect $7$ SWT bursts, and the probability of a non-detection is less than $10^{-3}$. Based on its bursting behavior, 4U 1728-34 is a suspected UCXB (@Galloway2008catalog). If we include the $543$ bursts observed by the PCA and WFCs from this source, the expected number of SWT bursts is $23$, and the probability of detecting none is $10^{-10}$.

Thermonuclear burning processes
-------------------------------

No SWT bursts are observed from UCXBs, but only from sources that are thought to accrete hydrogen-rich matter.
This suggests that the nuclear burning processes involving hydrogen, i.e., the hot CNO cycle, the $\alpha$*p*-process, and the *rp*-process, are important for creating SWT bursts. The *rp*-process is a series of proton captures and $\beta$-decays that creates heavy isotopes with mass numbers up to approximately $100$ (@Schatz2001). In this reaction chain there might be a nuclear waiting point with the correct time scale to interrupt and reignite the thermonuclear burning, for example the time scale for spontaneous $\beta$-decay of an isotope. We find, however, a broad distribution of short recurrence times $t_{\mathrm{recur}}$ (Fig. \[fig:twait\_raw\]), which argues against a single waiting point in the reaction chain (see also @Boirin2007). Note that we do not detect SWT bursts from all hydrogen-rich accretors, the two frequent bursters KS 1731-260 and GS 1826-24 being the best examples. This means that merely accreting hydrogen is not enough to produce SWT bursts. The decay profile of an X-ray burst is shaped by two processes. First there is radiative cooling on a thermal time scale. For normal bursts, which ignite at a typical column depth of $y\simeq10^{8}\,\mathrm{g\, cm^{-2}}$, this time scale is $\tau_{\mathrm{therm}}\simeq10\,\mathrm{s}$. A second process, which slows down the decay, is prolonged thermonuclear burning through the *rp*-process, which lasts up to approximately $100\,\mathrm{s}$ (@Schatz2001). For EXO 0748-676 @Boirin2007 found SWT bursts to lack the second slower decay component, while the first bursts clearly exhibit a two-component exponential decay. We confirm that this holds true for the other sources with SWT bursts as well. This supports the conclusion by @Boirin2007 that follow-up bursts must occur in a layer with a significantly reduced hydrogen content. @Zand2009 found observations where the burst decay can be followed for several thousands of seconds.
They explain the long tail as due to the cooling of a deeper layer below the bursting layer, which is heated by the burst. One of these long tails is detected for the first burst in a triple event of EXO 0748-676. The tail continues to decay uninterrupted while the second and third bursts occur. This is consistent with the idea that SWT bursts occur in a layer above the ignition depth where the first burst occurs. Taking into account that SWT bursts are less energetic, the deeper layer would not be heated substantially by the SWT bursts and continues to cool.

Mass accretion rate dependence
------------------------------

SWT bursts are observed over the entire range of mass accretion rates $\dot{M}$ where LWT bursts are observed. For some individual sources, however, the SWT bursts occur in a smaller $\dot{M}$ interval than LWT bursts. This is supported by the position in the color-color diagram where SWT bursts are observed (Sect. \[sub:Accretion-rate-dependence\]; @Galloway2008catalog). Other frequent bursters exhibit no SWT bursts at all, even though the ranges of $\dot{M}$ we observe from sources with and without SWT bursts overlap. Because of the large uncertainty in converting flux to accretion rate, the precise overlap is uncertain. At low accretion rates SWT recurrence times $t_{\mathrm{recur}}$ are mostly restricted to $3.8\,\mathrm{min}\lesssim t_{\mathrm{recur}}\lesssim40\,\mathrm{min}.$ Above approximately $0.05\,\mathrm{\dot{M}_{\mathrm{Edd}}}$, where the Eddington-limited mass accretion rate $\mathrm{\dot{M}}_{\mathrm{Edd}}$ corresponds to a persistent luminosity of $\mathrm{L_{\mathrm{Edd}}}=2\cdot10^{38}\,\mathrm{erg\, s^{-1}}$ for hydrogen-accreting sources, recurrence times occur also in the range $40\,\mathrm{min}\lesssim t_{\mathrm{recur}}\lesssim60\,\mathrm{min}.$ At $0.05\,\mathrm{\dot{M}_{\mathrm{Edd}}}$ a transition between two burst regimes is predicted by @Fujimoto1981 (see also @Bildsten1998).
For lower accretion rates all accreted hydrogen burns in a stable manner, and the burst ignites in a hydrogen-poor layer. At higher rates there is no time to burn all hydrogen, and the burst ignites in a layer containing a substantial fraction of both hydrogen and helium. It may be that the latter regime allows for short recurrence times as long as an hour, while the former regime does not.

Rotation and mixing
-------------------

The spin frequency $\nu_{\mathrm{spin}}$ is not known for most accreting neutron stars, as it requires the observation of X-ray pulsations or burst oscillations (e.g., @Galloway2008). For five sources with short recurrence times $\nu_{\mathrm{spin}}$ is known: all five are fast-spinning neutron stars with $\nu_{\mathrm{spin}}\gtrsim500\,\mathrm{Hz}$ (Table \[tab:Overview-of-16\]). The fast rotation could be required for the occurrence of multiple-burst events. It induces rotational instabilities, for example shear instabilities, and instabilities due to a rotationally induced magnetic field (@Spruit2002), which mix the neutron star envelope on a time scale of approximately ten minutes (@Piro2007 [@Keek2009]). If the thermonuclear burning during a flash is halted before it reaches higher layers, the hydrogen and helium in those layers will be mixed down. On a ten minute time scale it reaches the depth where temperature and density are sufficiently high to create the thermonuclear runaway for the next burst. At accretion rates higher than $0.05\,\dot{M}_{\mathrm{Edd}}$, we find bursts with short recurrence times as long as an hour. We mentioned that this may be related to the transition to a different burst regime. The time scale for rotational mixing depends strongly on the thermal and compositional profile of the neutron star envelope (e.g., @Heger2000 for the case of massive stars), which may vary for different burning regimes or even different bursts.
This could provide an explanation for the spread in the observed recurrence times. Further theoretical study is necessary to better understand this. The occurrence of multiple-burst events in sources with high rotation rates has consequences for the hypothesis that strong magnetic fields at the neutron star surface contain accreted matter at the poles. This would explain multiple bursts as caused by the burning of different magnetically-confined patches at the poles. The presence of a strong magnetic field, however, would allow angular momentum to be transported away from the neutron star, causing it to spin slower. The observation of short recurrence time bursts preferentially at high $\nu_{\mathrm{spin}}$ seems to contradict this, which disfavors the magnetic-confinement scenario. KS 1731-260 spins at a high frequency of $524\,\mathrm{Hz}$, accretes hydrogen-rich material, and has exhibited many bursts ($369$ in MINBAR), yet no short recurrence times were observed. Therefore, while a high rotation rate may support the occurrence of SWT bursts, it is not possible to discriminate between sources with and without SWT bursts based on this property alone. Possibly a combination of fast rotation and a mass accretion rate within a certain range (see previous section) is required for short recurrence times.

Frequency of SWT bursts
-----------------------

We investigate the SWT fraction: the number of bursts that have a short recurrence time with respect to the total number of observed bursts. We determine this fraction, at the persistent luminosities where SWT bursts are observed, for three frequent bursters with SWT bursts: EXO 0748-676, 4U 1636-53, and 2S 1742-294. *XMM-Newton* and *Chandra* observations of EXO 0748-676 find an SWT fraction of $(30\pm5)\%$. Our burst sample is obtained from *RXTE* PCA and *BeppoSAX* WFC observations, which contain data gaps due to Earth occultations and due to the South-Atlantic Anomaly.
Monte Carlo simulations show that the data gaps reduce an SWT fraction of $30\%$ to $20\%$. An additional problem is the fact that the WFCs are less sensitive to fainter bursts than the PCA or the instruments on *XMM-Newton* and *Chandra*. Especially for EXO 0748-676 we find a much lower SWT fraction than from the *XMM* and *Chandra* observations. Taking only the PCA bursts into account, we find SWT fractions of $(11\pm6)\%$ for EXO 0748-676, $(13\pm4)\%$ for 4U 1636-53, and $(23\pm6)\%$ for 2S 1742-294, all of which are within $1.5\sigma$ of the SWT fractions we obtain from Monte Carlo simulations with an initial SWT fraction of $30\%$. Therefore, in the range of mass accretion rates where SWT bursts occur, approximately $30\%$ of the bursts have a short recurrence time.

Conclusions
===========

We studied thermonuclear bursts with short recurrence times (SWT) using a large catalog of bursts from multiple sources observed with the *RXTE* PCA and the *BeppoSAX* WFCs. The short recurrence times do not allow enough time to accrete the fuel that burns in an SWT burst. Bursts are seen to occur in events of up to four bursts: double, triple and quadruple bursts. We report the shortest recurrence time ever found: $3.8$ minutes. We confirm the result of @Boirin2007, that SWT bursts are on average less bright, cooler, and less energetic than LWT bursts. The decay profiles of SWT bursts lack the longer decay component from the *rp*-process, suggesting that SWT bursts take place in a hydrogen-depleted layer. Some sources exhibit short recurrence times at all values of the mass accretion rate where normal bursts occur, while for others SWT bursts are limited to a smaller range of accretion rate. In this range the fraction of bursts with a short recurrence time is consistent with $30\%$. Two frequent bursters that likely accrete hydrogen-rich matter do not show any SWT bursts. Only the hydrogen-accreting neutron stars in our catalog exhibit SWT bursts.
This suggests that the hydrogen-burning processes are responsible for the incomplete burning of the available hydrogen and helium during bursts. The mechanism for halting the burning is still unknown. It will require further theoretical modeling of hydrogen-accreting neutron star envelopes to resolve this issue. For the sources with SWT bursts whose spin is known, all are fast rotators. This indicates that rotational mixing can be responsible for ignition of follow-up bursts on a time scale of approximately $10$ minutes (@Piro2007 [@Keek2009]). The number of SWT sources with known spin is small. Measurements of the spin frequency for more sources and further model studies of rotational mixing will better constrain this reignition scenario.

Chandra observations of EXO 0748-676\[sec:Chandra-observations-of\]
===================================================================

EXO 0748-676 was observed in 2001 and 2003 with *Chandra* ACIS-S and the High-Energy Transmission Grating (HETG). We use the level 2 event files prepared by the Chandra Data Archive and select events from a circle centered on the source and two bands overlapping the signal from the gratings. Using the CIAO software package (version 4.1), we extract light curves in the $0.2-5\,\mathrm{keV}$ and $5-10\,\mathrm{keV}$ energy bands at $60\,\mathrm{s}$ time resolution, and calculate the hardness ratio by taking the ratio of the count rates in the hard and soft bands (Fig. \[fig:Chandra-light-curves\]). The light curves are similar to the *XMM-Newton* observations performed in 2001 and 2003. Eclipses are present at the binary period of $3.8\,\mathrm{h}$. Especially in the data from 2003, dipping is present in the soft band. We, therefore, use the hard band to locate bursts. We find a total of $41$ bursts; $11$ have a short recurrence time. The burst events are divided into $20$ singles, $9$ doubles, and $1$ triple. The SWT fraction is $(27\pm9)\%$.
Note that burst $22$ of observation 4573 appears anomalously long. This is caused by a raised detector background level during a few hundred seconds after the burst, combined with the bin size of $60\,\mathrm{s}$ we used for the figure.

![image](f15a) ![image](f15b) ![image](f15c)

<span style="font-variant:small-caps;">Fig. \[fig:Chandra-light-curves\] continued.</span>

[^1]: @1658:wijnands02apj found a pair of candidate bursts from MXB 1659-298 only $\sim50\,\mathrm{s}$ apart. The first flare, however, is relatively weak and lacks the cooling characteristic of Type I bursts. The second flare has been identified as $\gamma$-ray burst GRB 990419C (HEASARC IPNGRB catalog).
---
abstract: 'We present first results from a targeted search for brown dwarfs with unusual red colors indicative of peculiar atmospheric characteristics. These include objects with low surface gravities or with unusual dust content or cloud properties. From a positional cross-match of SDSS, 2MASS and WISE, we have identified 40 candidate peculiar early L to early T dwarfs that are either new objects or have not been identified as peculiar through prior spectroscopy. Using low resolution spectra, we confirm that 10 of the candidates are either peculiar or potential L/T binaries. With a $J-K_s$ color of 2.62 $\pm$ 0.15 mag, one of the new objects — the L7 dwarf 2MASS J11193254-1137466 — is among the reddest field dwarfs currently known. Its proper motion and photometric parallax indicate that it is a possible member of the TW Hydrae moving group. If confirmed, it would be its lowest-mass (5–6 $M_{\rm Jup}$) free-floating member. We also report a new T dwarf, 2MASS J22153705+2110554, that was previously overlooked in the SDSS footprint. These new discoveries demonstrate that despite the considerable scrutiny already devoted to the SDSS and 2MASS surveys, our exploration of these data sets is not yet complete.'
author:
- 'Kendra Kellogg, Stanimir Metchev, Kerstin Gei$\ss$ler, Shannon Hicks, J. Davy Kirkpatrick, and Radostin Kurtev'
bibliography:
- 'bibliography.bib'
title: 'A Targeted Search for Peculiarly Red L and T Dwarfs in SDSS, 2MASS, and WISE: Discovery of a Possible L7 Member of the TW Hydrae Association'
---

Keywords: binaries: close - brown dwarfs - infrared: stars - stars: peculiar - stars: late-type - stars: individual (2MASS J11193254-1137466)

Introduction
============

Compared to main sequence stars, ultra-cool dwarfs display a wide range of near-infrared colors, even among objects at the same effective temperature or spectral type. The diversity is diagnostic of the unique processes taking place in their molecule- and condensate-rich atmospheres.
Effective temperature is the main factor that governs the photospheric appearance of field-aged brown dwarfs, with current understanding pointing to a monotonic correspondence between effective temperature and optical spectral type [@vrba04; @golim04; @loop08]. @cruz09 proposed a dimensional extension to the classification scheme for brown dwarfs, by incorporating surface gravity as a second parameter. They adopt a qualitative description of surface gravities—intermediate, low, and very low—based on optical spectral line strengths. @allers13 expanded the classification scheme to the near-IR by adding continuum index measures to classify the absorption strengths of volatile molecules. Low surface gravities generally contribute to higher dust content in the upper atmospheres of brown dwarfs, making them redder. Analyses of the L and T dwarf population have shown that the optical and near-infrared colors of low-surface gravity objects are readily distinguishable from those of “normal” objects [e.g., @knapp04; @cruz09; @faher12; @allers13]. However, there is also evidence of red brown dwarfs with high dust content without any signatures of youth [@loop08b; @kirk10]. Their near-IR colors are very similar to those of the young, low-surface gravity objects but their spectra do not have any of the characteristics of youth. That is, peculiarly red brown dwarfs may not necessarily be low-gravity and hence young, but could instead be unusually dusty. As there have not been many unusually red old L dwarfs found, the cause of such dustiness is not well established. Finding the cause for the enhanced dust content is undoubtedly of interest for understanding the evolution of substellar objects, and the processes that affect the sedimentation and/or condensation of atmospheric dust. 
It is also crucial for revealing the ages and properties of directly imaged extrasolar planets, most of which exhibit spectral energy distribution (SED) characteristics of both youth and high dust content [e.g., @mar12; @bon13]. Because isolated brown dwarfs can be scrutinized much more readily than directly imaged extrasolar planets, we stand to potentially learn more about ultra-cool atmospheres from brown dwarfs than we can from exoplanets. Our understanding of the nature of brown dwarfs with unusual SEDs is presently hindered by the relatively small numbers of such peculiar objects. Until recently, there have been no color-selected searches for peculiar brown dwarfs. Discoveries have been serendipitous, usually a by-product of searches for T dwarfs [@loop08; @loop08b; @mcl07; @burg04 etc.]. Only over the past few years have targeted searches been performed on large-area surveys to specifically seek unusually red objects (e.g., [@aller13; @gagne15]). In view of this, we are conducting an independent program to purposefully seek L and T dwarfs with unusual optical/near-infrared (near-IR) colors. The goal is to substantially expand the sample of peculiar L and T dwarfs in order to map the full range of their photospheric properties, and to better understand the evolution and content of their atmospheres. We cross-correlated the SDSS, 2MASS, and WISE survey databases to seek candidate peculiar brown dwarfs based solely on photometric criteria. Our first pass through the databases focused mainly on identifying unusually red objects. Most notable among these is one of the reddest L dwarfs ever found (2MASS J11193254-1137466; 2MASS $J-K_s = 2.62 \pm 0.15$ mag). While peculiar L and T dwarfs have until now been found mostly serendipitously in large-scale photometric surveys, we have implemented a systematic approach to find these objects by design. We discuss the selection and prioritization of candidates in Section 2, and their follow-up observations in Section 3. 
The spectroscopic characterization of the new L and T dwarfs is presented in Section 4. In Section 5 we assess the significance of the findings from our systematic search of peculiar objects in the context of the presently known sample of L and T dwarfs. Candidate Selection =================== We employ a photometric search for peculiar L and T dwarfs using combined optical (SDSS), near-IR (2MASS) and mid-IR (WISE) fluxes. Our candidate selection expands on the procedure presented in [@metchev08] and [@geissler11], which applied joint positional and color constraints to search for T dwarfs in the overlap area of 2MASS and SDSS DR1 (2099 deg$^{2}$). We use the ninth Data Release (DR9) from SDSS [@ahn12], which has a 14555 deg$^{2}$ footprint, encompassing an area 6.9 times larger than the DR1 footprint. The $>$10-year observational epoch difference between 2MASS and SDSS DR9 prompts us to choose a much larger cross-match radius than was used in the first two studies. We use the Virtual Astronomical Observatory catalog cross-comparison tool[^1] and chose a cross-match radius of $16\farcs5$ to maintain sensitivity to objects with proper motions as high as $1\farcs5$ yr$^{-1}$. Selection Criteria \[sec:selection\] ------------------------------------ Our magnitude and color selection criteria are summarized below. In the following, all $\it{griz}$ magnitudes are on the SDSS photometric scale [@lup02], and the 2MASS and WISE magnitudes are on the Vega scale: 1. $z-J$ $>$ 2.5 mag; 2. $i-z$ $>$ 1.5 mag; 3. $J$ $>$ 14 mag; 4. $z$ $\leq$ 21 mag and $z_{err}$ $\leq$ 0.2 mag; 5. no $g, r <$ 23 mag detection within $1\farcs3$ of the 2MASS coordinate; 6. SDSS object flag setting type = 6 or 3 (point or extended source); 7. 2MASS object flag setting mp\_flg = 0 (i.e., not marked as a known minor planet), gal\_contam = 0 (i.e., not contaminated by a nearby 2MASS extended source), and ext\_key = NULL (i.e., not extended in 2MASS); 8. $H-W2$ $>$ 1.2 mag; 9. 
$z-J > -0.75(J-K_{s})+ 3.8$ mag (criterion used only to prioritize follow-up of red outliers). Our $z-J$ and $i-z$ color cuts (criteria 1 and 2) were chosen to ensure sensitivity to L and T dwarfs, all of which have a steep red optical slope. The $J$ $>$ 14 magnitude cutoff was imposed to minimize the large number of candidates representing the cross-identification of a bright star artifact in SDSS (e.g., a filter glint or a diffraction spike, especially near saturated stars) with the (unsaturated) image of the same star in 2MASS. Criterion 4 was chosen to ensure detection in SDSS with at least a moderate SNR. Our $16\farcs5$ matching radius commonly resulted in multiple matches of nearby faint SDSS objects to the same, brighter 2MASS object. Each of these individual matches would nominally satisfy the color and magnitude selection criteria, since the faint SDSS photometry would be paired with the brighter 2MASS photometry. However, visual inspection clearly demonstrated that the SDSS and 2MASS objects were distinct, and that the actual object in SDSS that positionally matched the 2MASS object was not nearly as red, and so did not satisfy the $z-J>2.5$ mag criterion. Therefore, we discarded any object that had a $g$-band detection (i.e. $g \leq$ 23 mag and likely a star) in the SDSS catalog within $1\farcs3$ (the angular resolution of SDSS) of the original 2MASS coordinates (criterion 5). This removed $\sim$86% of the candidate sample. The SDSS object flag restrictions (criterion 6) ensure that the identified candidates are not known artifacts or flux measurements of the blank sky in SDSS. The SDSS morphological star-galaxy separation is $<$ 97% accurate for $r$ $\geq$ 21 mag [@yas01] so we include both star and galaxy object types in this criterion in case our faint brown dwarfs were mis-classified. We also wanted to ensure that they are not known minor planets, extended or contaminated by nearby extended sources in 2MASS (criterion 7). 
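The cuts above can be expressed compactly in code. The sketch below applies criteria 1-4, 8, and 9 to a single cross-matched source; the dict keys are illustrative, not actual SDSS/2MASS/WISE column names, and the $i$, $W2$, and $z_{err}$ values of the example candidate are invented for illustration (its $z$, $J$, $H$, $K_s$ values are the synthetic magnitudes of 2M 1119$-$1137 from the results table).

```python
# Sketch of the photometric selection of Section 2.1 applied to one
# cross-matched source. Field names are illustrative assumptions, not the
# actual SDSS, 2MASS, or WISE catalog column names.
def passes_selection(src):
    """Criteria 1-4 and 8: the magnitude/color cuts."""
    return (src["z"] - src["J"] > 2.5                      # criterion 1
            and src["i"] - src["z"] > 1.5                  # criterion 2
            and src["J"] > 14.0                            # criterion 3
            and src["z"] <= 21.0 and src["z_err"] <= 0.2   # criterion 4
            and src["H"] - src["W2"] > 1.2)                # criterion 8

def red_priority(src):
    """Criterion 9, used only to prioritize follow-up of red outliers."""
    return src["z"] - src["J"] > -0.75 * (src["J"] - src["Ks"]) + 3.8

# A 2M 1119-1137-like candidate; i, W2, and z_err are made up for illustration.
cand = {"i": 23.2, "z": 20.69, "z_err": 0.15, "J": 17.23,
        "H": 15.70, "Ks": 14.60, "W2": 13.9}
print(passes_selection(cand), red_priority(cand))  # True True
```

A candidate passing `red_priority` would be flagged for early spectroscopic follow-up.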
To make sure that all of the objects in our candidate list were real objects, we cross-matched our list with the WISE All-Sky Data Release using the SDSS coordinates. Our objects are expected to be detected in the WISE $W1$ band because outside of the galactic plane the $W1$ SNR $=5$ level corresponds to $\lesssim$ 16.8 mag.[^2] This matches the 2MASS $K_{s}$ flux limit at high galactic latitude, especially since L and T dwarfs have positive $K_{s}-W1$ colors. A radius of $16\farcs5$ was again chosen for this cross-match. An additional color cut was applied on $H-W2$ (criterion 8) in order to select only L and T dwarfs (based on the color-spectral type relations from [@kirk11]). This removed $\sim$74% of the remaining sample. Finally, we visually inspected the images of the remaining candidates using the Infrared Science Archive Finder Chart service[^3] and removed objects that were contaminated by nearby extended sources in SDSS. This eliminated approximately 22% of the remaining candidate sample, leaving us with 314 candidates (Figure \[fig:ccdiag\_paper\]a).

Prioritization of Peculiar Objects
----------------------------------

Since our goal was to select unusually red brown dwarfs in the absence of spectral type information, an additional color criterion (9) was set in order to prioritize red objects. To decide the form of the color criterion, we first analyzed the spectra of L and T dwarfs in the SpeX Prism Archive[^4] by forming synthetic photometry over various red-optical and near-IR bandpasses. These L and T dwarfs with archival SpeX data formed our control sample, based on which we designed our $z-J$ vs. $J-K_s$ criterion 9 (Figure \[fig:ccdiag\_paper\]b). Given available spectral type information, the unusually red objects in the control sample were set to be those for which the $J-K_s$ color was $>$2-$\sigma$ redder than the median for the spectral subtype. The medians and standard deviations of the $J-K_s$ colors of M8-T8 dwarfs were adopted from Faherty et al.
(2009; M8-M9 and T0-T9) and from Faherty et al. (2013; L0-L9). The unusually red objects in the control sample are shown with red symbols in Figure \[fig:ccdiag\_paper\]b. The number of objects from our sample that passed this criterion was 178. The color prioritization did not streamline our observational follow-up strategy significantly, as the scatter in colors among spectral types is larger than the scatter at any given spectral type. Nonetheless, we did observe the reddest candidates whenever possible, and included observations of lower-priority targets only as necessary. ![ (a) Photometric color-color diagram of all L and T dwarf candidates redder than $z-J$ = 2.5 mag (green dots) identified in our SDSS-2MASS-WISE cross-match. All other symbols (squares - M dwarfs; upwards triangles - L dwarfs; downwards triangle - T dwarf) represent the synthetic colors of the candidates followed up with spectroscopic observations so far. The black symbols are “normal" objects and the red and blue symbols are objects that we have identified as peculiar or binary. Objects redder than the $z-J=-0.75(J-K_s)+3.8$ mag line are candidate peculiarly red L and T dwarfs and were prioritized for spectroscopic follow-up. (b) SDSS/2MASS synthetic color-color diagram of L and T dwarfs from the SpeX Prism Archive (upwards and downwards triangles, respectively). The $z-J$ and $J-K_s$ colors were formed synthetically from the SpeX spectra. Two-sigma red and blue photometric color outliers are indicated by red and blue symbols, respectively.
The $z-J=-0.75(J-K_s)+3.8$ mag line was designed to select the photometric red outliers.[]{data-label="fig:ccdiag_paper"}](fig1.eps) Spectroscopic Observations and Data Reduction ============================================= Once our candidates were selected, we performed follow-up spectroscopic observations of 40 of the objects ($\sim$13% of the total candidate sample; 22 high priority and 18 lower priority) using the SpeX instrument [@rayner03] on the NASA Infrared Telescope Facility (IRTF) and the Folded-port InfraRed Echellette (FIRE) instrument [@simcoe08] on the Magellan Baade telescope. Conditions were photometric on most nights, except on August 3, 2011, April 18, 2012, April 19, 2012, and July 14, 2012, when there was scattered cirrus. All reduction of the low-resolution spectra (SpeX and FIRE LD) was done in Interactive Data Language (IDL). IRTF/SpeX --------- The majority of our follow-up observations were taken using the SpeX spectrograph on the IRTF. The broad, simultaneous wavelength coverage (0.8–2.5 $\mu$m) of SpeX and its location in the northern hemisphere are ideal for follow-up of SDSS-identified candidates. These spectra were obtained between 2011 August and 2013 June. The observations were taken in prism mode either with the $0\farcs8 \times 15\farcs0$ or with the $1\farcs6 \times 15\farcs0$ slit, resulting in resolutions of $R \sim$150 and $\sim$75, respectively. The slit orientation was maintained to within 20$^{\circ}$ of the parallactic angle for all targets. We used a standard A-B-B-A nodding sequence along the slit to record object and sky spectra. Individual exposure times were either 60 s or 180 s per pointing. The shorter exposure times allowed us to better subtract the sky-glow under changing atmospheric conditions. Standard stars were used for flux calibration and telluric correction. Flat-field and argon lamps were observed immediately after each set of target and standard star observations for use in instrumental calibrations. 
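The A-B-B-A nod pattern described above removes the sky by differencing frames taken at the two slit positions. A minimal numpy illustration with an idealized, noiseless object trace and a sky that is identical in both frames (real sky varies in time, which is why the nods are interleaved):

```python
import numpy as np

# Minimal sketch of A-B nod-pair subtraction: sky emission common to both nod
# positions cancels in the difference, leaving a positive trace of the object
# at position A and a negative trace at position B.
rng = np.random.default_rng(0)
ny = 64
sky = 100.0 + rng.normal(0.0, 1.0, ny)   # sky level along the slit
obj_a = np.zeros(ny)
obj_a[20] = 50.0                         # object trace at nod position A
obj_b = np.zeros(ny)
obj_b[44] = 50.0                         # object trace at nod position B

frame_a = sky + obj_a
frame_b = sky + obj_b
diff = frame_a - frame_b                 # identical sky cancels here

print(round(diff[20], 6), round(diff[44], 6))  # 50.0 -50.0
```

In practice each A-B pair is extracted separately and the pairs are later combined, as described for `x_combspec` below.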
Observation epochs and instrument settings for each science target are given in Table \[tab:spex\]. All reductions of the data taken with SpeX were carried out with the [spextool]{} package version 3.4 [@cushing04; @vacca03], using a weighted profile extraction approach [@horne86; @rob86]. The aperture widths were set to be the radius at which the spatial profile dropped to $\sim$5% of the peak flux value to ensure no contamination from background noise; the background regions were chosen to begin at the edge of the PSF radius (i.e., beyond 2.5 pixels = $0\farcs 375$). A constant value was fit to the background and subtracted from the spectrum. The individual extracted and wavelength calibrated spectra from a given sequence of observations, each with their own A0 standard, were then scaled to a common median flux and median-combined using [x\_combspec]{}. The combined spectra were corrected for telluric absorption and flux-calibrated using the respective telluric standards with [x\_tellcor]{}. All calibrated sets of observations of a given object were median-combined to produce the final spectrum. The reduced spectra were smoothed to the instrumental resolution corresponding to the chosen slit width, using the Savitzky-Golay smoothing kernel [@press92].

& 2011 Dec 31 & 16.44 & 0.8 & 24 & HD 75135
& 2013 Jun 06 & 16.25 & 1.6 & 24 & HIP 53735
& 2013 Jun 07 & 17.19 & 1.6 & 32 & HIP 35735, HIP 54815
& 2013 Jun 06 & 17.29 & 1.6 & 16 & HIP 53735
...
& 2013 Jun 07 & 17.29 & 1.6 & 8 & HIP 54815
& 2013 Jun 07 & 17.20 & 1.6 & 24 & HIP 54815, HIP 56147
& 2013 Jun 06 & 17.32 & 1.6 & 70 & HIP 68209, HIP 68868
& 2011 Dec 31 & 16.16 & 0.8 & 16 & HD 125798
& 2013 Jun 07 & 16.84 & 1.6 & 160 & HIP 68868, HIP 116886
& 2011 Aug 02 & 16.97 & 0.8 & 54 & HD 153650
& 2011 Aug 03 & 16.84 & 0.8 & 60 & HD 152531
& 2011 Aug 02 & 16.96 & 0.8 & 60 & HD 153650
& 2012 Apr 19 & 17.05 & 0.8 & 48 & HD 151353
& 2012 Apr 18 & 16.97 & 0.8 & 60 & HD 165623
& 2011 Aug 03 & 16.26 & 0.8 & 36 & HD 152531
& 2012 Jul 14 & 16.00 & 0.8 & 12 & HD 157359
& 2012 Apr 18 & 16.63 & 0.8 & 48 & HD 158261
& 2012 Jul 14 & 16.88 & 0.8 & 12 & HD 157359
& 2011 Aug 03 & 16.50 & 0.8 & 36 & HD 157359
& 2012 Apr 19 & 17.22 & 0.8 & 60 & HD 155838
& 2011 Aug 03 & 16.90 & 0.8 & 48 & HD 157359
& 2012 Apr 19 & 17.03 & 0.8 & 48 & HD 155838
& 2012 Jul 15 & 16.33 & 1.6 & 90 & HD 164728
& 2011 Aug 02 & 16.42 & 0.8 & 48 & HD 164728
& 2012 Jul 14 & 16.84 & 0.8 & 12 & HD 165623
& 2011 Aug 03 & 16.75 & 0.8 & 36 & HD 165623, HD 165622
...
& 2012 Jul 15 & 16.75 & 1.6 & 36 & HD 165623
& 2012 Jul 15 & 16.81 & 1.6 & 24 & HD 165622
& 2012 Apr 19 & 16.88 & 0.8 & 60 & HD 166639
& 2011 Aug 03 & 16.42 & 0.8 & 36 & HD 209051
& 2011 Aug 03 & 16.09 & 0.8 & 36 & HD 209051
& 2011 Aug 02 & 16.90 & 0.8 & 60 & HD 209051
& 2011 Aug 02 & 16.82 & 0.8 & 60 & HD 210253
& 2013 Jun 06 & 17.03 & 1.6 & 56 & HIP 53735, HIP 68209, HIP 68868
& 2011 Aug 03 & 16.49 & 0.8 & 48 & HD 210265
...
& 2011 Dec 31 & 16.49 & 0.8 & 36 & HD 210265
...
& 2012 Jul 15 & 16.49 & 0.8 & 48 & HD 210265
& 2013 Jun 07 & 16.00 & 1.6 & 72 & HIP 116886
& 2011 Dec 31 & 16.82 & 0.8 & 28 & HD 220184
...
& 2012 Jul 14 & 16.82 & 1.6 & 36 & HD 220184
...
& 2012 Jul 15 & 16.82 & 0.8 & 24 & HD 210265
& 2012 Jul 15 & 16.80 & 0.8 & 60 & HD 222903
& 2013 Jun 06 & 16.89 & 1.6 & 42 & HIP 68868
& 2011 Aug 02 & 16.77 & 0.8 & 60 & HD 2717

Magellan/FIRE
-------------

Two of the 40 total candidates were observed using the FIRE spectrograph on the 6.5 m Magellan telescope. The observations of these objects were taken in the low-dispersion (LD) mode with the $0\farcs6 \times 50\farcs0$ longslit, resulting in a resolution of $\sim$400. We used a standard A-B-B-A nodding sequence along the slit to record object and sky spectra. Individual exposure times ranged from 31.7–126.8 s per pointing, depending on the brightness of the object. Standard stars were used for flux calibration and telluric correction. We used optimal gain settings of 1.2 e$^{-}$/DN and 3.8 e$^{-}$/DN for the science targets and 3.8 e$^{-}$/DN for the standards, as suggested in the FIRE observing manual[^5]. Illumination and appropriate pixel flats were observed either at the beginning or the end of the night, and a neon-argon lamp was observed immediately after each set of target and standard star observations for use in instrumental calibrations. All science and telluric observations were taken using the sample-up-the-ramp (SUTR) readout mode, whereas all calibration observations were taken in Fowler 1 mode due to the shortness of the exposure times. Observation epochs and instrument settings for each target are given in Table \[tab:fire\].

& 2012 Mar 21 & 16.27 & Long Slit & 0.60 & 3.8 & 4.2 & HD 57450
& 2012 Mar 21 & 17.02 & Long Slit & 0.60 & 1.2 & 16.9 & HD 153940

The FIRE low-dispersion spectra were reduced using the FIREHOSE Low Dispersion package, which evolved from the optical echelle reduction software package MASE [@boch09]. The spectra were extracted using the optimal extraction approach, with the aperture radius being the PSF radius (usually $\sim$3 pixels = 0$\farcs$45), which was then masked to prevent biasing of the sky model.
A local background was modeled using a basis spline (i.e., piecewise polynomial) fit to the masked profile and subtracted from the spectra, which were subsequently extracted using a weighted profile extraction approach [@horne86]. The extracted spectra were wavelength-calibrated and each set of observations was median-combined. The combined spectra were corrected for telluric absorption and flux-calibrated with their associated A0 calibration star. All calibrated sets of observing sequences of a given object were median-combined to produce a final spectrum. The reduced spectra were smoothed, using the IDL Savitzky-Golay smoothing algorithm, to the same resolution as the SpeX standards for comparison.

Synthetic Photometry \[sec:synphot\]
------------------------------------

While comparing the 2MASS colors of our L and T dwarf candidates to their spectra, we noticed that in a significant fraction of cases the 2MASS colors were too red compared to the spectra. All of our objects were flux-calibrated with A0 stars with known $B-V$ colors, observed at similar airmasses, so we had no reason to suspect a chromatic effect in our flux calibration. Instead, the reason for the discrepancy was traced to a flux over-estimation bias at low SNR in 2MASS. Our objects are faint and often near the SNR = 5 detection limit of 2MASS in the [*J*]{}-band filter. The greater noise near the detection limit means that objects that would normally be below the limit have a finite chance of appearing brighter because of statistical variations. The effect enhances the number of faint objects with low SNR in a flux-limited survey, becoming increasingly important at SNR $<10$ [@bias]. Because all of our objects are faint and red, their 2MASS $J$-band magnitudes preferentially suffer from this bias, resulting in redder than expected $z-J$ colors. This effect is particularly large in the case of the few faint M dwarfs that entered our sample because of their biased photometric colors (Section 4.1).
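The sign and size of this bias can be illustrated with a short Monte Carlo sketch: a source whose true flux sits exactly at the SNR = 5 limit is detected only when noise scatters it bright, so its measured flux is biased high. The numbers below are a toy calculation, not the [@bias] analysis.

```python
import numpy as np

# Monte Carlo sketch of flux over-estimation bias near a detection limit.
# A source with true flux exactly at the SNR=5 limit enters a flux-limited
# sample only when noise scatters it upward, so its mean measured flux
# exceeds its true flux.
rng = np.random.default_rng(1)
true_flux = 1.0
sigma = true_flux / 5.0                      # SNR = 5 at the detection limit
measured = true_flux + rng.normal(0.0, sigma, 100_000)
detected = measured[measured >= true_flux]   # flux-limited selection

# Mean of the detected subsample: close to 1.16, i.e., ~16% brighter than
# the true flux (analytically, 1 + sigma * sqrt(2/pi)).
print(round(float(detected.mean()), 3))
```

A ~16% flux excess corresponds to a magnitude roughly 0.16 mag too bright, which for a faint $J$-band detection directly reddens the measured $z-J$ color.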
Figure \[fig:colorcompare\] shows how the synthetic colors compare to the photometric colors as a function of the photometric $J$-band SNR for both $z-J$ and $J-K_s$. Indeed, at lower SNR, the $z-J$ photometric colors are on average redder than their synthetic colors while the $J-K_s$ photometric colors are on average slightly bluer. For the remainder of our analysis we use only synthetic SDSS $z$ and 2MASS $JHK_s$ magnitudes for our candidates and for previously known objects with SpeX Prism Archive spectra. The errors on the synthetic photometry in Table \[tab:results\] are standard errors derived from the scatter among the continuum slopes of the individual 60 s or 180 s exposure spectra of our targets and their corresponding standard stars. These errors incorporate systematic uncertainties from potential chromatic slit losses should the targets have been imperfectly positioned on the slit.

Spectral Classification Results
===============================

We estimate spectral types for our objects by comparing them to spectra of brown dwarfs available from the SpeX Prism Archive[^6]. When our spectra do not match any of the normal brown dwarf spectra, we compare to other unusual spectra. In this way, we are able to assess potential spectroscopic peculiarities that may not be evident from the colors alone. Finally, following the approach of [@burg07] and [@burg10], we form combination templates from the standards to assess whether any of our objects might be best fit as unresolved binaries. For spectral comparison to standard L and T dwarfs we used $\chi^2$ minimization over the 0.95-1.35 $\mu$m wavelength range. To assess candidate binarity we compare our spectra to combinations of L and/or T dwarf doubles over the entire 0.8-2.5 $\mu$m range, as detailed in Section 4.3.
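The single-template $\chi^2$ comparison can be sketched as follows. The "spectra" here are synthetic stand-ins rather than SpeX data, and the least-squares scaling of each standard to the target is an assumption about implementation details the text does not specify.

```python
import numpy as np

# Sketch of spectral typing by chi^2 minimization over 0.95-1.35 um.
# Each standard is scaled to the target by a least-squares factor before the
# chi^2 is computed (an assumed, not stated, normalization choice).
wav = np.linspace(0.8, 2.5, 600)
fit = (wav >= 0.95) & (wav <= 1.35)          # classification window

def chi2_vs_standard(target, standard, err=0.02):
    scale = np.sum(target[fit] * standard[fit]) / np.sum(standard[fit] ** 2)
    return np.sum(((target[fit] - scale * standard[fit]) / err) ** 2)

# Toy "standards": spectra with different continuum slopes.
standards = {"L5": 1.0 + 0.5 * (wav - 0.8), "L7": 1.0 + 0.9 * (wav - 0.8)}
target = 2.0 * (1.0 + 0.9 * (wav - 0.8))     # a scaled "L7"-like spectrum

best = min(standards, key=lambda s: chi2_vs_standard(target, standards[s]))
print(best)  # L7
```

The same machinery, applied over 0.8-2.5 $\mu$m with composite templates, underlies the binarity test of Section 4.3.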
Table \[tab:results\] lists the determined spectral types, the characteristics of each object, and the peculiarities of our objects determined from both colors and a detailed analysis of their spectra. All of our spectra are shown in Figure \[fig:spectra\]. We determined that our candidate list of 40 observed objects includes 13 M dwarfs, 26 L dwarfs, and 1 T dwarf. Of these, 10 were previously known and suspected to be L dwarfs but did not have any published near-IR spectra. The remaining 30 are new, including the T dwarf. Ten of the 27 L and T dwarfs are either peculiar (4) or possible unresolved binaries (6).

![Difference in synthetic vs. photometric $J-K_s$ and $z-J$ colors for M, L and T dwarfs from the SpeX Prism Archive (green symbols) and for objects from this work (all other colored symbols). The black symbols are “normal" objects and the blue and red symbols are objects that we have identified as peculiar or binary. Fewer objects appear in the $z-J$ comparison figure (lower panel) because not all SpeX Archive objects are in the SDSS database.[]{data-label="fig:colorcompare"}](fig2.eps)

& & & & High Priority & & & & &
07483864+1743329 & L5 & & ... & $\cdots$ & $\cdots$ & $\cdots$ & 1 & &
08095903+4434216 & L7 & & 19.25 $\pm$ 0.09 & 16.22 $\pm$ 0.05 & 14.94 $\pm$ 0.05 & 14.07 $\pm$ 0.05 & 2 & $+$ &
09572983+4624177 & L6 & & 19.78 $\pm$ 0.08 & 16.62 $\pm$ 0.06 & 15.39 $\pm$ 0.06 & 14.59 $\pm$ 0.06 & 7 & $+$ &
10020752+1358556 & L7 pec & L7+T8? & 20.49 $\pm$ 0.10 & 17.72 $\pm$ 0.08 & 16.56 $\pm$ 0.06 & 15.78 $\pm$ 0.07 & & &
11193254-1137466 & L7 red & young & 20.69 $\pm$ 0.30 & 17.23 $\pm$ 0.12 & 15.70 $\pm$ 0.08 & 14.60 $\pm$ 0.10 & & & $+$
11260310+4819256 & L5 & & 20.07 $\pm$ 0.11 & 17.20 $\pm$ 0.07 & 16.14 $\pm$ 0.06 & 15.44 $\pm$ 0.06 & & &
13043568+1542521 & T0 pec & L6+T6? & 20.50 $\pm$ 0.09 & 17.24 $\pm$ 0.08 & 16.37 $\pm$ 0.06 & 15.76 $\pm$ 0.08 & & &
13431670+3945087 & L5 & & 19.02 $\pm$ 0.10 & 16.02 $\pm$ 0.08 & 14.84 $\pm$ 0.07 & 14.08 $\pm$ 0.07 & 3 & &
14025564+0800553 & T2 pec & L8+T5? & 20.30 $\pm$ 0.09 & 17.31 $\pm$ 0.07 & 16.51 $\pm$ 0.04 & 16.01 $\pm$ 0.06 & 5 & $+$ &
16005759+3021571 & L5 & & 20.25 $\pm$ 0.10 & 17.33 $\pm$ 0.07 & 16.30 $\pm$ 0.06 & 15.66 $\pm$ 0.07 & & &
16094569+1426422 & L4 & & 19.70 $\pm$ 0.08 & 16.92 $\pm$ 0.06 & 15.93 $\pm$ 0.06 & 15.28 $\pm$ 0.06 & & &
16091143+2116584 & L2 & & 20.02 $\pm$ 0.10 & 17.19 $\pm$ 0.09 & 16.25 $\pm$ 0.08 & 15.62 $\pm$ 0.09 & 1 & &
16135698+4019158 & L5 red & old/dusty & 20.29 $\pm$ 0.13 & 17.46 $\pm$ 0.10 & 16.30 $\pm$ 0.06 & 15.54 $\pm$ 0.10 & & $+$ &
16470847+5120088 & M9 & & 19.58 $\pm$ 0.10 & 17.10 $\pm$ 0.05 & 16.36 $\pm$ 0.06 & 15.91 $\pm$ 0.06 & & &
16592987+2055298 & M9 & & 20.14 $\pm$ 0.13 & 17.75 $\pm$ 0.08 & 17.06 $\pm$ 0.06 & 16.58 $\pm$ 0.07 & & &
17081563+2557474 & L5 red & young & 20.00 $\pm$ 0.08 & 16.92 $\pm$ 0.04 & 15.68 $\pm$ 0.05 & 14.84 $\pm$ 0.04 & & $+$ &
17161258+4125143 & L4 & & 19.77 $\pm$ 0.16 & 16.82 $\pm$ 0.09 & 15.83 $\pm$ 0.09 & 15.16 $\pm$ 0.08 & & &
17373467+5953434 & L5 pec & L4+T5? & 20.80 $\pm$ 0.14 & 17.75 $\pm$ 0.08 & 16.97 $\pm$ 0.07 & 16.52 $\pm$ 0.07 & 6 & & $-$
21203483-0747378 & L2 & & 19.97 $\pm$ 0.08 & 17.17 $\pm$ 0.06 & 16.24 $\pm$ 0.05 & 15.68 $\pm$ 0.05 & 6 & &
21243864+1849263 & L9 & & 20.15 $\pm$ 0.09 & 17.27 $\pm$ 0.05 & 16.18 $\pm$ 0.05 & 15.62 $\pm$ 0.06 & & &
22153705+2110554 & T1 pec & T0+T2? & 19.48 $\pm$ 0.07 & 16.22 $\pm$ 0.07 & 15.45 $\pm$ 0.06 & 15.09 $\pm$ 0.05 & & &
23322678+1234530 & T0 pec & L5+T5? & 19.66 $\pm$ 0.10 & 16.92 $\pm$ 0.08 & 16.20 $\pm$ 0.07 & 15.76 $\pm$ 0.06 & & & $-$
& & & & Lower Priority & & & & &
16110632+0025469 & M9 & & ... & $\cdots$ & $\cdots$ & $\cdots$ & 4 & &
16231308+3950419 & L3 & & 20.16 $\pm$ 0.08 & 17.35 $\pm$ 0.06 & 16.42 $\pm$ 0.06 & 15.81 $\pm$ 0.05 & & &
16242936+1251451 & M9 & & 19.20 $\pm$ 0.06 & 16.68 $\pm$ 0.06 & 15.88 $\pm$ 0.04 & 15.39 $\pm$ 0.06 & 5 & &
16304999+0051010 & L2 & & 18.40 $\pm$ 0.11 & 15.64 $\pm$ 0.07 & 14.74 $\pm$ 0.06 & 14.11 $\pm$ 0.07 & & &
16322360+2839567 & L1 & & 19.90 $\pm$ 0.12 & 17.30 $\pm$ 0.12 & 16.50 $\pm$ 0.10 & 16.00 $\pm$ 0.10 & & &
16360752+2336011 & L1 & & 19.61 $\pm$ 0.08 & 17.00 $\pm$ 0.07 & 16.16 $\pm$ 0.06 & 15.65 $\pm$ 0.06 & 5 & &
16370238+2520386 & L4 & & 19.95 $\pm$ 0.10 & 17.10 $\pm$ 0.08 & 16.18 $\pm$ 0.07 & 15.61 $\pm$ 0.06 & & $-$ &
16403870+5215505 & M9 & & 20.37 $\pm$ 0.09 & 17.97 $\pm$ 0.11 & 17.19 $\pm$ 0.08 & 16.72 $\pm$ 0.11 & & &
16410015+1335591 & L2 & & 20.27 $\pm$ 0.08 & 17.48 $\pm$ 0.14 & 16.56 $\pm$ 0.10 & 15.97 $\pm$ 0.14 & & &
17145224+2439024 & M9 & & 19.17 $\pm$ 0.12 & 16.67 $\pm$ 0.08 & 15.86 $\pm$ 0.07 & 15.35 $\pm$ 0.07 & & &
17251557+6405005 & L2 pec & blue & 20.10 $\pm$ 0.13 & 17.40 $\pm$ 0.08 & 16.66 $\pm$ 0.07 & 16.20 $\pm$ 0.07 & 1 & & $-$
21050130-0533505 & M7 & & 18.36 $\pm$ 0.06 & 16.32 $\pm$ 0.06 & 15.65 $\pm$ 0.06 & 15.28 $\pm$ 0.06 & & &
21111559-0543437 & M9 & & 18.94 $\pm$ 0.09 & 16.46 $\pm$ 0.08 & 15.67 $\pm$ 0.08 & 15.17 $\pm$ 0.07 & & &
21115335-0644172 & M9 & & 19.83 $\pm$ 0.08 & 17.38 $\pm$ 0.06 & 16.60 $\pm$ 0.06 & 16.06 $\pm$ 0.07 & & &
21392224+1124323 & M8 & & 19.74 $\pm$ 0.13 & 17.54 $\pm$ 0.10 & 16.81 $\pm$ 0.08 & 16.39 $\pm$ 0.10 & & &
22483513+1301453 & M9 & & 19.61 $\pm$ 0.12 & 17.14 $\pm$ 0.09 & 16.38 $\pm$ 0.07 & 15.92 $\pm$ 0.08 & & &
23023319-0935188 & M7 & & 19.63 $\pm$ 0.10 & 17.41 $\pm$ 0.10 & 16.75 $\pm$ 0.07 & 16.36 $\pm$ 0.10 & & &
23443744-0855075 & M9 & & 19.43 $\pm$ 0.10 & 16.96 $\pm$ 0.06 & 16.24 $\pm$ 0.04 & 15.79 $\pm$ 0.05 & & &

![FIRE (0748+1743, 1611+0025; $R \sim$400) and SpeX (all the rest; $R \sim$75-150) spectra of all of our reported
ultra-cool dwarfs in order of right ascension. Spectral types are given in parentheses.[]{data-label="fig:spectra"}](fig3.eps)

![](fig4.eps)

![](fig5.eps)

![](fig6.eps)

The newly classified M, L, and T dwarfs are plotted on the $z-J$ vs. $J-K_s$ color-color diagram in Figure \[fig:ccdiag\_paper\]a, where we have used the synthetic colors integrated from the spectra. We find that a handful of objects have synthetic $z-J$ colors bluer than the $z-J$ = 2.5 mag selection criterion, even though the SDSS and 2MASS photometry had suggested that they were redder. Their photometric SNRs from 2MASS and/or SDSS were low, and, as discussed in Section 3.3, the flux over-estimation bias (mostly in the 2MASS $J$-band) made these faint targets appear redder than they actually are. We discuss the normal, peculiar, and candidate binary ultra-cool dwarfs in our sample below.

Normal Ultra-cool Dwarfs
------------------------

We classify 17 of our candidates as normal L dwarfs, i.e., they do not have any readily apparent peculiarities based on their comparison to SpeX spectral standards. These objects are presented as black upward triangles in Figure \[fig:ccdiag\_paper\]a. We find that a further 13 candidates are M7–M9 dwarfs. These were included in our program likely because the $i-z$ and $z-J$ colors of late-M dwarfs are close to the limits of our color selection criteria (Section \[sec:selection\]), and because they may have been subject to flux over-estimation bias at $J$ band (Section \[sec:synphot\]).

Peculiar L Dwarfs
-----------------

Various absorption features in the near-infrared are gravity-sensitive; hence, the low gravity of young brown dwarfs will result in line strengths that differ from those in older objects (e.g., [@lucas01; @gorlova03; @mcgovern04; @allers07; @lodieu08; @rice10; @allers13]).
Some of these features include the Na I (1.138 and 1.141 $\mu$m) and K I (1.169 and 1.178 $\mu$m, 1.244 and 1.253 $\mu$m) doublets, FeH (bandheads at 0.990 $\mu$m and 1.194 $\mu$m), and VO (1.05–1.08 $\mu$m and 1.17–1.22 $\mu$m). Alkali lines are weaker at low gravity because of decreased pressure broadening. In low-resolution spectra these lines are often blended with other molecular features, so we cannot obtain accurate measurements of their strengths. Metal hydride molecular features are also weaker at low gravity because of decreased opacity from these refractory species, while VO bands are stronger (see, e.g., [@kirk06]). The 1.17 $\mu$m VO band is not used as a gravity indicator at low resolution because it is blended with K I, FeH and H$_2$O [@allers13]. Collision-induced absorption from molecular hydrogen (H$_{2}$ CIA) also changes as a function of gravity, with lower collision rates in low-gravity objects imparting a triangular shape to the $H$ band. Several prior analyses have introduced broad-band measures to discern low-gravity from field-gravity objects. [@allers13] designed several near-IR indices to measure the changing strengths of FeH, VO, and K I absorption and the slope of the $H$-band continuum as a function of gravity by comparing $\sim$1-100 Myr M5-L7 members of young moving associations with field dwarfs. [@cant13] analyzed M9-L0 dwarfs to design an $H_2$($K$) index that measures the contribution of $H_2$ CIA to the slope of the $K$-band continuum; [@schn14] expanded this index to the L dwarfs. Indices have the potential to offer a quantitative gravity classification, analogous to spectral classification. However, index measures depend on the spectral resolution of the data used to calibrate them, and our spectra differ sufficiently in resolution from those used in prior studies. In addition, most of the indices do not extend into the late-L dwarfs, and so are inadequate to classify some of our most interesting objects.
Therefore, we do not adopt spectral indices as a default gravity classification scheme. However, we do check for consistency with applicable spectral indices whenever we note peculiarities in the spectra of our L and T candidates. We note that some of the spectral features, in particular the strength of the FeH bands, the peakiness of the $H$-band continuum, and the redness of the near-IR SED, may also be attributable to high atmospheric dust content or thicker clouds, as discussed in @loop08b and @allers13. High dust content itself may be linked to low gravity, so a clear distinction may not always be possible, especially at low spectral resolution. Our assessment of peculiarity is based on two factors: (1) the deviation from the median $J-K_s$ colors for objects of the same spectral type, with $>$2$\sigma$ outliers considered peculiar, or (2) high spectral similarity to objects that have previously been classified as peculiar. In two cases below (Sections 4.2.2–4.2.3), we find similarities to the spectra of objects previously classified as peculiar because of being young and/or dusty. In the remaining two cases (Sections 4.2.1 and 4.2.4) the assessment of peculiarity is based on the comparison to spectra of previously classified peculiar objects as well as on the $J-K_s$ colors.

### 2MASS J11193254$-$1137466 (L7)

The most interesting object uncovered by our cross-correlation is 2M 1119$-$1137. This object is one of the reddest objects published to date, with a synthetic $J-K_{s} =$ 2.62 $\pm$ 0.15 mag. Only the L7 dwarfs PSO J318.5338$-$22.8603 [@liu13] and ULAS J222711$-$004547 [@maroc14] among free-floating brown dwarfs are known to be redder. From its low-resolution spectrum (Figure \[fig:pec\]), we classify this object as an L7. The low signal-to-noise ratio prevents us from unambiguously determining if this object has low gravity. The peak of the $H$-band continuum — thought to be sharpened at low surface gravity (e.g., [@lucas01; @allers13]) — is not very sharp.
We measured the $H$-cont index of [@allers13] and found a value of 0.907, which is 1.5$\sigma$ above the median for L7 dwarfs, and similar to the $H$-cont indices of low-gravity objects. The authors note that very red L dwarfs with no youth signatures can still exhibit a triangular $H$-band shape and similarly high $H$-cont indices. In summary, the $H$-cont index of 2M 1119$-$1137 is consistent with it being a low-gravity object, but we cannot conclude from the index alone that it is definitely young. In Figure \[fig:1119\] we compare 2M 1119$-$1137 to the known very low-gravity dwarfs 2MASSW J224431.67+204343.3 (L7.5; [@loop08]), WISE J174102.78$-$464225.5 (L7; [@schn14]), and WISEP J004701.06+680352.1 (L7.5; [@gizis12]). We see that 2M 1119$-$1137 most closely matches W0047+6803 and also matches the redness of W1741$-$4642, but has a less peaked $H$-band and a shallower slope in the $K$-band. Although it is slightly redder, the shape of the $H$- and $K$-band of 2M 1119$-$1137 also matches that of 2M 2244+2043. The agreement with the spectra of other young L7–L7.5 dwarfs also indicates that 2M 1119$-$1137 may be young. A decisive classification will require higher-SNR and/or higher-resolution spectra than we presently have. Further evidence that 2M 1119$-$1137 may be young comes from its proper motion and photometric distance. By comparing the 2MASS and AllWISE positions, we estimate an annual proper motion of $-$155 $\pm$ 20 mas in right ascension and $-$101 $\pm$ 17 mas in declination. Given a $K_s$ absolute magnitude of 12.6 $\pm$ 0.4 mag for young L7 dwarfs or 12.5 $\pm$ 0.4 mag for field-age L7 dwarfs (calculated from the empirically determined $L_{\rm bol}$-SpT relationship and $K_s$ bolometric corrections from [@filip15]), the photometric parallax of 2M 1119$-$1137 is 40 $\pm$ 12 mas or 38 $\pm$ 12 mas, respectively.
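The quoted photometric parallax follows directly from the distance modulus; the arithmetic can be checked with the numbers given in the text (synthetic $K_s$ = 14.60 mag, young-L7 $M_{K_s}$ = 12.6 mag):

```python
import math

# Check of the photometric-parallax arithmetic quoted in the text:
# d = 10**((m - M + 5)/5) pc, parallax = 1000/d mas.
m_ks = 14.60          # synthetic Ks of 2M 1119-1137 (this work)
M_ks_young = 12.6     # absolute Ks of young L7 dwarfs (from the text)

d_pc = 10 ** ((m_ks - M_ks_young + 5.0) / 5.0)
plx_mas = 1000.0 / d_pc
print(round(plx_mas))  # 40, matching the quoted 40 +/- 12 mas

# Total proper motion from the quoted components
mu = math.hypot(-155.0, -101.0)
print(round(mu))       # 185 mas/yr
```

The uncertainty quoted in the text (±12 mas) is dominated by the ±0.4 mag scatter in the absolute-magnitude calibration, which is not propagated in this sketch.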
The BANYAN II space motion estimation algorithm [@malo13; @gagne14] gives 2M 1119$-$1137 between 39% and 69% probability of being a TW Hydrae moving group member, depending on whether an arbitrary age or a $<$1 Gyr age is chosen as an input prior with the respective photometric parallax estimates. Confirmation of the association with the TW Hydrae group will require radial velocity and trigonometric parallax measurements. Should 2M 1119$-$1137 be confirmed as a member of the 7–13 Myr [@bell15] TW Hydrae association [@webb99], it will be its coolest and lowest-mass (5–6 $M_{\rm Jup}$, based on evolutionary models by [@allard12]) free-floating member. Only the planetary-mass companion 2M 1207b [@chauv04; @chauv05] is likely cooler.

### 2MASS J17081563+2557474 (L5)

This object is determined to be a young L5 brown dwarf based on the decreased absorption of K I and FeH and the increased absorption of H$_{2}$O in the $J$-band. Calculations of the spectral indices from [@allers13] and [@schn14] also suggest that this object is a low-gravity brown dwarf. As seen in Figure \[fig:pec\], the strengths of the gravity-sensitive features in the $J$-band and the shape of the $H$-band are more similar to the young L5 2MASS J23174712-4838501 [@kirk10], although the observed spectrum is still slightly redder than the comparison spectrum.

### 2MASS J16135698+4019158 (L5)

While this object is peculiarly red, it does not exhibit the features of a low-gravity object. As seen in Figure \[fig:pec\], the object has normal absorption strengths, aside from H$_{2}$O, and is more similar to the red L5 dwarf 2M 2351+3010 published in [@kirk10]. There is also strong FeH absorption. The authors speculate that 2M 2351+3010 is actually an older object that simply has a higher dust content. Since our object seems very similar in nature, we adopt this explanation as well.

![Spectra of the four peculiar objects identified in this work. In each case, the spectrum of the candidate is compared to the spectrum of a normal object of the same spectral type, and to the spectrum of a peculiar object of the nearest spectral type. The comparison spectra from left to right and top to bottom are L7 (2MASS J0028208+224905; [@burg10]) and L7.5 young (2MASS J22443167+2043433; [@loop08]); L5 (2MASS J01550354+0950003; [@burg10]) and L5 pec (2MASS J23174712$-$4838501; [@kirk10]); L5 (2MASS J01550354+0950003; [@burg10]) and L5 pec (2MASS J23512200+3010540; [@kirk10]); L2 (2MASS J13054019$-$2541059; [@burg07b]) and L2 pec (2MASS J14313097+1436539; [@shepp09]).[]{data-label="fig:pec"}](fig7.eps)

![A comparison of the SpeX prism spectrum of 2M 1119$-$1137 (black) with low-resolution spectra of other young L7–L7.5 dwarfs: WISEP J004701.06+680352.1 (L7.5 (pec), [@gizis12]), WISE J174102.78$-$464225.5 (L7 (pec), [@schn14]) and 2MASSW J224431.67+204343.3 (L7.5 (pec), [@kirk10]).[]{data-label="fig:1119"}](fig8.eps)

### 2MASS J17251557+6405005 (L2)

2M 1725+6405 is a peculiarly blue L2 dwarf (Fig. \[fig:pec\]). This object was found in our cross-correlation but it was not part of our high-priority sample. Peculiarly blue L dwarfs have often been classified as metal-poor (e.g., [@burg03; @burg04b]), with their blue near-IR colors dictated by increasingly strong collision-induced hydrogen absorption over 1.5–2.5 $\micron$. Metal-poor L dwarfs, or L subdwarfs, also show strong metal-hydride absorption. However, the FeH Wing-Ford band at 0.99 $\mu$m in 2M 1725+6405 is weak compared to the standard, which suggests that 2M 1725+6405 is blue likely because it is unusually dust-poor. It is also possible that 2M 1725+6405 may be an unresolved L + T dwarf binary, with the $J$ band flux enhanced by the T dwarf component. We consider unresolved binarity in the next Section (\[sec:binaries\]).
Unlike all of the candidate binaries discussed in Section \[sec:binaries\], we actually do not find a better binary template fit for 2M 1725+6405. We therefore conclude that this L2 dwarf is intrinsically blue. Brown Dwarfs with Composite Spectral Types \[sec:binaries\] ----------------------------------------------------------- Several of the objects show peculiarities that do not readily match those found in other individual objects. Instead, they more closely resemble combination spectra of L and T dwarfs. [@burg07] and [@burg10] developed a technique that enables one to infer the spectral types of the individual components of a candidate unresolved binary by a goodness-of-fit comparison to a library of spectral template combinations. We adopt this technique in a simple form, by creating combination templates from the set of single L and T dwarf standards from the SpeX Prism Library. Unlike @burg10, we do not create a large list of templates built on the entire population of L and T dwarfs with available SpeX spectra. Nonetheless, we find that our simple approach gives a sufficient indication of whether a brown dwarf displays a composite spectral signature, and produces approximate spectral types for the components. Our composite template spectra are constructed by normalizing all of the standard single brown dwarfs over the same wavelength range (1.2-1.3 $\mu$m; chosen because it is relatively free of absorption features), scaling them to their absolute spectral-type dependent magnitudes given by the polynomials in Table 14 of [@dup12], and summing the pairs of resulting spectra. We compute the $\chi^2$ over most of the 0.8-2.5 $\mu$m region, excluding ranges of strong water absorption (1.35-1.45 and 1.8-2.0 $\mu$m). In all cases, the $\chi^{2}$ is greater than one, but this is to be expected, as we are only testing the fit to templates created from one object of each spectral type.
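The template-combination procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' pipeline: the spectra below are synthetic arrays, and the `scale1`/`scale2` factors stand in for the absolute-magnitude scaling from the [@dup12] polynomials, which are not reproduced here.

```python
import numpy as np

def chi2(obs, model, wave, err=1.0):
    """Chi-squared over 0.8-2.5 um, excluding the strong water bands
    (1.35-1.45 and 1.8-2.0 um), as described in the text."""
    keep = ((wave >= 0.8) & (wave <= 2.5)
            & ~((wave >= 1.35) & (wave <= 1.45))
            & ~((wave >= 1.8) & (wave <= 2.0)))
    return np.sum(((obs[keep] - model[keep]) / err) ** 2)

def normalize(flux, wave):
    """Normalize over 1.2-1.3 um, a region relatively free of features."""
    band = (wave >= 1.2) & (wave <= 1.3)
    return flux / np.mean(flux[band])

def composite(flux1, flux2, wave, scale1, scale2):
    """Two-component template: normalize each single template, scale each
    to its (spectral-type dependent) absolute flux level, and sum.
    scale1/scale2 are placeholders for the magnitude-based factors."""
    return scale1 * normalize(flux1, wave) + scale2 * normalize(flux2, wave)
```

The best-fit composite is then the template pair (with its scalings) that minimizes `chi2`, compared against the minimum single-template `chi2`.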
We have classified an object as a likely spectral type composite – a potential binary – if the $\chi^2$ of the dual-template spectral fit is significantly lower than the $\chi^2$ of the single-template fit. Each of the $\chi^2$ values has been calculated over the entire 0.8-2.5 $\mu$m region, minus the water absorption bands. In addition to template fitting, we have analyzed the spectral indices defined specifically for SpeX prism spectra in [@burg10] for all our binary candidates and we report the strength of their candidate binarity. We have also analyzed the SpeX prism spectral indices from [@bard14] but because the binary index selection criteria in that work were not designed for late-L to early-T dwarfs, we only report the results where applicable. We note that while brown dwarfs displaying combination spectral signatures have until recently all been considered to be unresolved binaries, they can also be highly variable brown dwarfs with photospheres that display two distinct temperature components. Recent examples include the T1.5 dwarf 2MASS J21392676+0220226, suggested as a strong L8.5 + T3.5 spectral binary candidate by @burg10, but identified as a $J$-band variable that is unresolved in [*HST*]{} images [@rad12], or the T dwarfs 2MASS J13243559+6358284 (T2.5) and SDSS J151114.66+060742.9 (T2), identified as binary candidates [@burg10; @geissler11], but that are also unresolved in [*HST*]{} images and are variable [@metchev15]. Therefore, while the objects discussed in this section are considered candidate unresolved binaries, they are also strong candidates for photometric variables. ### 2MASS J13043568+1542521 (L6+T6?) This object is one of several that are best fit by a binary combination template. As seen in Figure \[fig:bin\], the best fit single brown dwarf (T0) does not match the features of 2M 1304+1542. The $Y$-/$J$-band ratio is lower than in any of the closest standard objects and the $H$-band has a dip at $\sim$1.65 $\mu$m.
The $K$-band does not have differences that are as pronounced as in the other bands, though it is slightly redder than the standard object. In fitting this object with a binary template, we find that the best fit is a combination of an L6 and a T6 brown dwarf. The $Y$-/$J$-band ratio and the $K$-band flux more closely resemble the object spectrum. The contribution of the methane break in the cooler brown dwarf at 1.6 $\mu$m also reproduces the dip in the $H$-band well. Further evidence that this object is a binary comes from the analysis of spectral indices identified in [@burg10] and [@bard14]. 2M 1304+1542 satisfies four of the six binary index selection criteria given in Table 5 of [@burg10] and ten of the twelve selection criteria in Table 4 of [@bard14], making this a strong binary candidate. ### 2MASS J14025564+0800553 (L8+T5?) The spectrum of 2M 1402+0800 also shows distinctive composite characteristics. While the $Y$-/$J$-band ratio is not significantly dissimilar from the closest single brown dwarf spectrum, the $H$- and $K$-bands are more similar to an L8+T5 binary.

![Spectra of all objects identified as candidate unresolved binaries (or photometric variables). The left panels show comparisons to the spectra (in green) that fit the 0.95-1.35 $\mu$m continuum best: i.e., as done for spectral typing of the individual objects in Sections 4.1–4.2. The right panels show the two-component templates (also in green) that fit best over 0.8-2.5 $\mu$m; the individual component contributions are shown in red and blue. The quoted $\chi^2$ values are the smallest ones for, respectively, single- and binary-template fits over the entire 0.8-2.5 $\mu$m range, as done in Section 4.3. The comparison spectra from left to right and top to bottom are: L7 (2MASS J0028208+224905; [@burg10]) and T8 (2MASS J04151954-0935066; [@burg04]); T0 (2MASS J12074717+0244249; [@loop07]), L6 (2MASS J10101480-0406499; [@reid06]) and T6 (2MASS J16241436+0029158; [@burg06b]); T2 (2MASS J12545393-0122474; [@burg04]), L8 (2MASS J16322911+1904407; [@burg07]) and T5 (2MASS J15031961+2525196; [@burg04]).[]{data-label="fig:bin"}](fig9.eps)

![The comparison spectra from left to right and top to bottom are: L5 (2MASS J08350622+1953050; [@chiu06]), L4 (2MASS J21580457-1550098; [@kirk10]) and T5 (2MASS J15031961+2525196; [@burg04]); T1 (2MASS J01514155+1244300; [@burg04]), T0 (2MASS J12074717+0244249; [@loop07]) and T2 (2MASS J12545393-0122474; [@burg04]); L5 (2MASS J08350622+1953050; [@chiu06]) and T5 (2MASS J15031961+2525196; [@burg04]).](fig10.eps)

Figure \[fig:bin\] shows that the shape and relative flux of all three 2MASS bandpasses are very well reproduced by the L/T binary template. Most importantly, the dip in the $H$-band is well reproduced by the contribution of the methane in the T dwarf. This object passes all six of the binary index selection criteria of [@burg10], which makes it a strong binary candidate. ### 2MASS J17373467+5953434 (L4+T5?) This object is classified as having an L4+T5 composite spectrum. As seen in Figure \[fig:bin\], an L5 spectrum matches 2M 1737+5953 well in the $Y$- and $J$-bands but is a very poor match to the $H$- and $K$-bands. The observed spectrum shows signs of methane absorption at 1.6 and 2.2 $\mu$m, which is indicative of a T dwarf secondary component. The binary index selection criteria from [@burg10] were not designed for mid-L dwarfs so we analyzed the spectral indices from [@bard14] instead. Because this object only passes four of the twelve selection criteria from [@bard14], it is only a weak binary candidate. ### 2MASS J23322678+1234530 (L5+T5?)
While 2M 2332+1234 is best fit in the $J$-band by a scaled T0 spectrum, the $H$- and $K$-bands clearly do not appear to belong to a T0 dwarf. The $H$-band shows evidence of methane absorption at 1.6-1.8 $\mu$m, but the CH$_4$ signature is weaker in the $K$-band. This points to a composite L/T spectrum similar to SDSS J151114.66+060742.9 presented in [@burg10]. The methane absorption features are best fit by an L5+T5 template; however, the continuum of our observed spectrum is still slightly bluer at the longer wavelengths. This object passes four of the binary index selection criteria of [@burg10], which makes it a strong binary candidate. ### 2MASS J10020752+1358556 (L7+T8?) This object is tentatively classified as having a composite spectrum. As seen in Figure \[fig:bin\], 2M 1002+1358 has a large dip in flux in the $H$ band at the location of the CH$_{4}$ absorption feature that is usually present in a T dwarf, and has much more water and methane absorption in the $J$-band than a typical L dwarf. The $K$-band, however, seems to be similar to an L4–L6 dwarf. These suggest a composite spectral type. There is a much greater difference between L and T dwarfs in the $J$- and $H$-band features than there is in the $K$-band features; therefore, the $K$-band of a combined binary spectrum can look like it belongs to an L dwarf whereas the $J$- and $H$-bands will appear to have a contribution from both binary components. The large dip in $H$-band flux may also be the result of an extraneous signal in the raw spectrum of the object, as it has an atypical shape compared to that of a feature usually associated with CH$_{4}$. However, the spectrum of the telluric calibration star does not exhibit the same behavior, while the feature is apparent in most of the individual spectra of this object, even if at low SNR. This suggests that the feature may be real, even if we cannot fully exclude a random variation due to noise.
Analyzing the spectral indices does not shed any light on the true nature of this object as it only passes four of the twelve binary index selection criteria from [@bard14], making it a weak binary candidate. ### 2MASS J22153705+2110554 (T0+T2?) The T dwarf 2M 2215+2110 is a new discovery in the SDSS footprint. Some of the features in the spectrum of 2M 2215+2110 are ambiguous as to their origin. While the $J$- and $K$-bands more closely resemble an early T dwarf, the $H$-band has a clear dearth of flux. The overall shape of this band might be explained by the presence of a slightly later-type T dwarf secondary component than the primary, but the lack of flux still persists in the binary template spectrum. Several features, such as the FeH feature at 0.99 $\mu$m, do match a T0+T2 composite spectrum. However, the H$_{2}$O + CH$_4$ absorption between 1.1–1.2 $\mu$m is much stronger in the binary composite template than in the observed spectrum. The spectral indices also do not help us with this object – only two of the index selection criteria from [@burg10] are passed, which makes this object a weak binary candidate. Discussion ========== Our search was aimed at discovering peculiar L or T dwarfs, with priority in this first iteration placed on unusually red objects. Overall, we have observed and identified 10 peculiar or binary L dwarfs, 16 normal L dwarfs, one T dwarf, and 13 M dwarfs. The latter had been mis-identified as candidate L or T dwarfs because of low-SNR photometry. The total fraction of objects in an unbiased sample of brown dwarfs with $J-K_s$ colors $>$2$\sigma$ from the mean color at a given spectral type — the criterion used for detecting photometrically peculiar L and T dwarfs in [@faher09] — is expected to be 4.6%. [@faher09] report a somewhat larger fraction, 5.8%, of peculiar objects among the 1268 M7–T8 dwarfs in their sample.
The small discrepancy arises from an apparent non-Gaussianity of the $J-K_s$ color distribution: they have nearly twice as many red outliers as blue outliers. Only three of our L dwarfs are peculiarly red or dusty, and an equal number of our discoveries are in fact peculiarly blue. While at face value this does not indicate a higher success rate in finding peculiarly red objects than in a random sample of field brown dwarfs, we have at present followed up only a small number (40) of our total candidate sample (314). The 40 objects presented here comprise roughly equal numbers of high- (22) and low-priority (18) objects: a circumstance of weather and observational constraints. It is possible that the larger high-priority sample (178 candidates) will reveal a higher incidence rate of unusually red objects. We do find, however, that our present prioritization strategy reveals a larger fraction of unusual objects — including not only peculiar L dwarfs but also candidate unresolved binaries that are not color outliers in $J-K_s$ but are unusually red in $z-J$ — among the high-priority candidates. Eight of the 22 objects in the high-priority sample are peculiar or candidate binaries vs. two of the 18 in the low-priority sample. The difference between the two is statistically significant at the 96% level. It indicates that combinations of optical and infrared colors, such as employed here, can successfully discern even moderate peculiarities in ultra-cool dwarfs. Table \[tab:results\] summarizes the peculiarities of each object — from spectral comparison and synthetic colors. Because L and T dwarfs are brighter in the 3–5 $\mu$m wavelength range, we investigated whether the $J-K_s$ color outliers also have unusual colors at these wavelengths. We find that L dwarfs with the very reddest $J-K_s$ colors are clearly distinguishable from the locus of L dwarfs on $J-K_{s}$ vs. $H-W2$ and $J-K_{s}$ vs. $W1-W2$ diagrams (Fig.
\[fig:jkout\]) mainly because of their red near-IR colors. They stand out in their $J-K_s$ and $H-W2$ colors but not significantly in their $W1-W2$ colors. T dwarfs with peculiarly red $J-K_s$ colors are only marginally redder in $H-W2$ and $W1-W2$, and the peculiarly blue L or T dwarfs are not distinguishable from the normal population with the exception of the blue L dwarf discovered in this work (2MASS J17251557+6405005). Conclusions =========== We performed a color-selected search for peculiar L and T dwarfs, focusing primarily on the peculiarly red objects, and demonstrated that with the proper selection criteria, we can identify unusual L and T dwarf candidates in large photometric surveys in the absence of spectral type information. With follow-up spectroscopy, we can verify the unusual properties and begin to discern their underlying cause. This is particularly advantageous for finding isolated objects that are analogous to the typically very red directly imaged extrasolar planets in order to study their atmospheric characteristics at higher fidelity. We had a high success rate in discovering either peculiar L dwarfs or candidate unresolved binaries in our prioritized sample, and discovered one of the reddest L dwarfs known to date. This new red L7 dwarf is a potential TW Hydrae member, and, if confirmed, it would be the coolest and least massive free-floating object in the association. We note that even after many searches for T dwarfs in the SDSS and 2MASS catalogs, we still uncovered a new T dwarf among the $\sim$13% fraction of candidates that we have spectroscopically characterized so far. These discoveries attest to the power of simultaneous positional and color cross-correlations across photometric databases — as performed here, in [@metchev08], in [@geissler11], and now enabled with the Virtual Astronomical Observatory — over color-only searches on individual databases that are then positionally compared to other databases.
At the same time, the discovery of only a single new T dwarf in our characterized sample indicates that the census of T dwarfs (132) in SDSS is nearly complete. ![Photometric color-color diagrams of objects from [@kirk11]. Upwards and downwards triangles denote L and T dwarfs, respectively. Red symbols denote objects with $J-K_s$ colors $>$2$\sigma$ redder than the mean for their spectral type [@faher09; @fah13]. Blue symbols denote objects that are $>$2$\sigma$ bluer. Large symbols represent peculiar objects identified in this work. Red circles indicate the previously known red brown dwarfs with spectral types of L4 and later.[]{data-label="fig:jkout"}](fig11.eps) [*Facilities:*]{} , We would like to thank our referee, Jackie Faherty, for her insightful critique. This work was supported by the NASA Astrophysical Data Analysis Program through award No. NNX11AB18G to S.M. at Stony Brook University and by an NSERC Discovery grant to S.M. at The University of Western Ontario. Part of the data were obtained with the Magellan-Baade 6.5m telescope under program CN2012A-54. Support for R.K. is provided by Fondecyt Reg. No. 1130140 and by the Ministry of Economy, Development, and Tourism's Millennium Science Initiative through grant IC12009, awarded to The Millennium Institute of Astrophysics (MAS). The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. 
This research has made use of data obtained from or software provided by the US Virtual Astronomical Observatory, which is sponsored by the National Science Foundation and the National Aeronautics and Space Administration. [^1]: <http://vao-web.ipac.caltech.edu/applications/VAOSCC/> [^2]: <http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec6_3a.html> [^3]: <http://irsa.ipac.caltech.edu/applications/FinderChart/> [^4]: <http://pono.ucsd.edu/~adam/browndwarfs/spexprism/> [^5]: <http://web.mit.edu/~rsimcoe/www/FIRE/ob_manual.htm> [^6]: [@kirk10], [@burg10], [@burg08], [@burg07b], [@loop07], [@burg06], [@chiu06], [@reid06], and [@burg04]
--- abstract: 'This paper presents a new efficient algorithm which guarantees a solution for a class of multi-agent trajectory planning problems in obstacle-dense environments. Our algorithm combines the advantages of both grid-based and optimization-based approaches, and generates safe, dynamically feasible trajectories without suffering from an erroneous optimization setup such as imposing infeasible collision constraints. We adopt a sequential optimization method with *dummy agents* to improve the scalability of the algorithm, and utilize the convex hull property of Bernstein and relative Bernstein polynomials to replace non-convex collision avoidance constraints with convex ones. The proposed method can compute the trajectories for 64 agents in 6.36 seconds on average with an Intel Core i7-7700 @ 3.60GHz CPU and 16 GB RAM, and it reduces the objective cost by more than $50\%$ compared to our previous work. We validate the proposed algorithm through simulation and flight tests.' author: - 'Jungwon Park$^{1}$, Junha Kim$^{1}$, Inkyu Jang$^{1}$ and H. Jin Kim$^{1}$[^1]' bibliography: - 'my\_bib.bib' title: '**Efficient Multi-Agent Trajectory Planning with Feasibility Guarantee using Relative Bernstein Polynomial** ' --- INTRODUCTION ============ Multi-agent systems with many unmanned aerial vehicles (UAVs) broaden the range of achievable missions to complex environments unsafe or hard to reach for humans or a single agent. For successful operation of these multi-agent systems, a path planning algorithm is required to generate collision-free trajectories in any obstacle environment. However, many existing methods risk failure in dense, cluttered environments due to deadlock [@bareiss2013reciprocal; @zhou2017fast] or due to the enforcement of infeasible collision constraints in the formulation [@chen2015decoupled; @luis2019trajectory].
In this paper, we present an efficient multi-agent trajectory planning algorithm which generates safe, dynamically feasible trajectories in obstacle-dense environments by extending our previous work [@1909.02896]. The proposed algorithm is designed to have the advantages of both grid-based and optimization-based approaches. First, it guarantees the feasibility of the optimization problem by utilizing an initial trajectory computed from a grid-based multi-agent path finding algorithm. Second, it generates a continuous, dynamically feasible trajectory by optimizing the initial trajectory with consideration of quadrotor dynamics, as shown in Fig. \[fig: flight test\]. When we formulate the optimization problem, we utilize the convex hull property of the relative Bernstein polynomial to translate non-convex collision avoidance constraints into convex ones. Compared to the previous work [@1909.02896], we modify the construction of the constraints so that infeasible combinations of collision avoidance constraints cannot occur, and we introduce a sequential optimization method. This sequential method can handle a large number of agents with improved computational efficiency and, by employing *dummy agents*, does not cause deadlock. Our main contributions can be summarized as follows. - A multi-agent trajectory planning algorithm is presented for obstacle-dense environments, which generates collision-free and dynamically feasible trajectories without a potential optimization failure by using the relative Bernstein polynomial. - A sequential trajectory optimization method is proposed with dummy agents, which reduces the computational load. - The source code will be released at <https://github.com/qwerty35/swarm_simulator>. There have been discussions in the literature closely related to our work on multi-agent trajectory planning.
In [@mellinger2012mixed; @augugliaro2012generation; @chen2015decoupled], the trajectory generation problems are reformulated as mixed-integer quadratic programming (MIQP) or sequential convex programming (SCP) problems, which apply collision constraints at each discrete time step. These methods work well for systems with a small number of agents, but they are intractable for large teams and complex environments because an additional adaptation process is required to find a proper discretization time step depending on the size of agents and obstacles. On the other hand, our method does not require this process because we do not use time discretization. Sequential planning proposed in [@robinson2018efficient] for better scalability is similar to our work. However, it may not be able to find a feasible solution in crowded situations. To solve this, we adopt dummy agents which move along the initial trajectory computed by a grid-based planner to prevent deadlock. The most relevant work can be found in [@honig2018trajectory; @debord2018trajectory]. They plan an initial trajectory with a grid-based planner and then construct a safe flight corridor (SFC), which indicates a safe region for each agent. However, they need to resize the SFC iteratively until the overall cost converges, while our proposed method does not need an additional resizing process because it uses the relative Bernstein polynomial. Recently, distributed planning has been receiving much attention due to its scalability [@bareiss2013reciprocal; @zhou2017fast; @luis2019trajectory]. However, such distributed methods are not able to guarantee a safe solution in obstacle-dense environments due to deadlock. PROBLEM FORMULATION {#sec: problem formulation} =================== In this section, we formulate an optimization problem to generate safe, continuous trajectories for a multi-agent robot system consisting of $N$ quadrotors. We assign the mission for the $i^{th}$ quadrotor to move from the start point $s^i$ to the goal point $g^i$.
The quadrotors may have different sizes, with radii $r^{1},...,r^{N}$ m. The maximum velocity and acceleration of the $i^{th}$ quadrotor are $v^{i}_{max}$, $a^{i}_{max}$ respectively. Assumption {#subsec: assumption} ---------- We assume that prior knowledge of the free space $\mathcal{F}$ of the environment is given as a 3D occupancy map. We also assume that the grid-based initial trajectory planner in section \[subsec: initial trajectory planning\] can find a solution when the grid size is $d$. Trajectory Representation ------------------------- Due to the differential flatness of quadrotor dynamics, it is known that the trajectory of a quadrotor can be represented as a polynomial function of the flat outputs in time $t$ [@mellinger2011minimum]. However, it is difficult to handle collision avoidance constraints with the standard polynomial basis. For this reason, we formulate the trajectory of the quadrotors using a piecewise Bernstein polynomial. A Bernstein polynomial is a linear combination of Bernstein basis polynomials, and the Bernstein basis polynomial of degree $n$ is defined as follows: $$B_{k,n}(t) = {n \choose k}t^{k}(1-t)^{n-k} \label{eq: bernstein basis}$$ for $t\in[0, 1]$ and $k=0,1,...,n$. The trajectory of the $i^{th}$ quadrotor, $p^{i}(t) \in \mathbb{R}^{3}$, can be represented as $M$-segment piecewise Bernstein polynomials: $$\begin{alignedat}{2} p^{i}(t) = \begin{cases} \ \sum_{k=0}^{n}c^{i}_{1,k}B_{k,n}(\tau_1) & t \in [T_{0}, T_{1}] \\ \ \sum_{k=0}^{n}c^{i}_{2,k}B_{k,n}(\tau_2) & t \in [T_{1}, T_{2}] \\ \ \vdotswithin{=} & \vdotswithin{\in} \\ \ \sum_{k=0}^{n}c^{i}_{M,k}B_{k,n}(\tau_M) & t \in [T_{M-1}, T_{M}] \\ \end{cases} \end{alignedat} \label{eq: trajectory representation}$$ where $\tau_m = \frac{t-T_{m-1}}{T_{m}-T_{m-1}}$, $c^{i}_{m,k}$ is the $k^{th}$ control point of the $m^{th}$ segment of the $i^{th}$ quadrotor’s trajectory, and $T_{m-1}, T_{m}$ are the start and end times of the $m^{th}$ segment, respectively.
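The Bernstein basis above is easy to evaluate directly. The following minimal NumPy sketch (not the authors' implementation) evaluates one segment from its control points; it also illustrates two facts used later: the basis sums to one, so each segment is an affine combination of its control points, and the segment interpolates the first and last control point.

```python
import math
import numpy as np

def bernstein(k, n, t):
    """Bernstein basis polynomial B_{k,n}(t) = C(n,k) t^k (1-t)^(n-k)."""
    return math.comb(n, k) * t**k * (1 - t)**(n - k)

def bezier_segment(ctrl, t):
    """Evaluate one Bernstein-polynomial segment at normalized time
    t in [0, 1]; ctrl has shape (n+1, 3) for a 3-D trajectory."""
    n = len(ctrl) - 1
    return sum(ctrl[k] * bernstein(k, n, t) for k in range(n + 1))
```

Because of the affine-combination property, the segment never leaves the convex hull of `ctrl` — the convex hull property exploited in the next sections.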
Thus, the decision vector of our optimization problem, $c$, consists of all the control points of $p^{i}(t)$ for $i=1,...,N$. Objective Function ------------------ We define the objective function to minimize the integral of the square of the $\phi^{th}$ derivative: $$J = \sum\limits_{i=1}^{N}\int_{T_0}^{T_M}\left\|\frac{d^{\phi}}{dt^{\phi}}p^{i}(t)\right\|^{2}_{2}dt = c^{T}Qc \label{eq: objective function}$$ where $Q$ is the Hessian matrix of the objective function. In this paper, we set $\phi= 3$ to minimize the jerk of the trajectory, so that the input to the quadrotor becomes less aggressive [@mueller2015computationally]. Convex Constraints ------------------ The trajectory must pass through the start and goal points and should be continuous up to the $\phi-1^{th}$ derivative. Also, it must not exceed the maximum velocity and acceleration. These constraints can be written as affine equality and inequality constraints, respectively: $$A_{eq}c = b_{eq} \label{eq: equality constraints}$$ $$A_{dyn}c \preceq b_{dyn} \label{eq: dynamic feasible constraints}$$ Non-Convex Collision Avoidance Constraints ------------------------------------------ ### Obstacle Avoidance Constraints We define an obstacle collision model of the $i^{th}$ quadrotor, which models a collision region between a quadrotor and obstacles (See Fig. \[fig: collision model\]): $$\mathcal{C}^i_{obs} = \{ p \in \mathbb{R}^{3} \mid \|p\|^{2}_{2} \leq (r^i)^{2} \} \label{eq: obstacle collision model}$$ The $i^{th}$ quadrotor must satisfy the condition below not to collide with obstacles: $$p^{i}(t) \oplus \mathcal{C}^i_{obs} \subset \mathcal{F}, \:\:\:\:\: t \in [T_0, T_M] \label{eq: obstacle collision avoidance constraint}$$ where $\oplus$ is the Minkowski sum.
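For a single unit-time segment, the Hessian $Q$ of the jerk objective defined above can be assembled explicitly: expand each Bernstein basis function in the power basis, differentiate $\phi$ times, and integrate the pairwise products over $[0,1]$. The paper does not spell out this construction, so the NumPy sketch below is one standard way to do it; block-diagonal stacking over segments, agents, and axes, and the segment-time scaling via $\tau_m$, are omitted.

```python
import numpy as np
from math import comb
from numpy.polynomial import polynomial as P

def bernstein_coeffs(n):
    """Power-basis coefficients (ascending degree) of B_{0,n}, ..., B_{n,n}."""
    return [comb(n, k) * P.polymul([0.0] * k + [1.0],
                                   P.polypow([1.0, -1.0], n - k))
            for k in range(n + 1)]

def jerk_hessian(n, phi=3):
    """Gram matrix Q with Q[a,b] = int_0^1 B_a^(phi)(t) B_b^(phi)(t) dt
    for one unit-time segment, so that the segment cost is c^T Q c."""
    D = [P.polyder(c, phi) for c in bernstein_coeffs(n)]
    Q = np.zeros((n + 1, n + 1))
    for a in range(n + 1):
        for b in range(n + 1):
            antideriv = P.polyint(P.polymul(D[a], D[b]))
            Q[a, b] = P.polyval(1.0, antideriv)
    return Q
```

Because the basis sums to one, its $\phi^{th}$ derivative vanishes, so $Q\mathbf{1}=0$ and $Q$ is positive semidefinite, as expected for a smoothness cost.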
### Inter-Collision Avoidance Constraints A collision region between the $i^{th}$ and $j^{th}$ agents can be expressed with an inter-collision model $\mathcal{C}^{i,j}_{inter}$: $$\mathcal{C}^{i,j}_{inter} = \{ p \in \mathbb{R}^{3} \mid p^{T}Ep \leq (r^{i}+r^{j})^{2} \} \label{eq: inter-collision model}$$ where $E$ is $diag([1, 1, 1/(c_{dw})^2])$, and $c_{dw}$ is a coefficient that accounts for the downwash effect. The $i^{th}$ agent does not collide with the $j^{th}$ agent if the relative trajectory of the $j^{th}$ agent with respect to the $i^{th}$ agent, $p^{i,j}(t)=p^{j}(t)-p^{i}(t)$, satisfies the following condition: $$p^{i,j}(t) \cap \mathcal{C}^{i,j}_{inter} = \emptyset, \:\:\:\:\: t \in [T_0, T_M] \label{eq: inter-collision avoidance constraint}$$ The non-convexity of (\[eq: obstacle collision avoidance constraint\]) and (\[eq: inter-collision avoidance constraint\]) makes it difficult to employ them directly. In the next section, we will show the method that relaxes those non-convex constraints into convex ones using the relative Bernstein polynomial. Relative Bernstein Polynomial {#sec: relative bernstein polynomial} ============================= One useful property of the Bernstein polynomial is the convex hull property: a Bernstein polynomial is confined within the convex hull of its control points [@zettler1998robustness]. This property has been used to confine the trajectory to a convex set called a safe flight corridor (SFC) for obstacle avoidance [@tang2016safe; @gao2018online]. Here, we introduce the method to confine the relative polynomial trajectory to an inter-collision-free region by utilizing the convex hull property. Let the $m^{th}$ segments of $p^{i}(t), p^{j}(t)$ be $p^{i}_m(t), p^{j}_m(t)$ respectively, and $p^{i,j}_{m}(t) = p^{j}_{m}(t)-p^{i}_{m}(t)$ their relative trajectory.
Then $p^{i,j}_{m}(t)$ can be written as $$\begin{split} p^{i,j}_{m}(t) & = \sum\limits_{k=0}^{n}(c^j_{m,k}-c^i_{m,k})B_{k,n}(\tau_{m})\\ & = \sum\limits_{k=0}^{n}c^{i,j}_{m,k}B_{k,n}(\tau_{m}) \end{split} \label{eq: relative bernstein polynomial}$$ where $c^{i,j}_{m,k} = c^j_{m,k}-c^i_{m,k}$ is the control point of $p^{i,j}_{m}(t)$, which implies that the relative Bernstein polynomial is also a Bernstein polynomial. Thus, by the convex hull property, we can enforce the $i^{th}$ and $j^{th}$ quadrotors not to collide with each other by limiting all control points $c^{i,j}_{m,k}$ to a convex, inter-collision-free region. We call this region a relative safe flight corridor (RSFC). In this way, we can generate a safe trajectory by adjusting the SFC and RSFC. Definition {#sec: definition} ========== In the previous work [@1909.02896], we determined the RSFC by choosing a proper one among pre-defined RSFC candidates. The RSFC candidates were designed to utilize the differential flatness of the quadrotor and to achieve fast planning. However, this approach may fail to find a trajectory because a feasible region that satisfies both RSFC and SFC constraints may not exist. To guarantee the existence of such a feasible region, as in Fig. \[fig: collision avoidance constraints\], we precisely define three key terms in this paper: initial trajectory, SFC, and RSFC.
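The control-point identity for the relative trajectory, and the way the convex hull property is used, can be checked numerically. Below is a minimal sketch with hypothetical control-point values; the RSFC is modeled as a single half-space disjoint from $\mathcal{C}^{i,j}_{inter}$ (taken here as a unit sphere), which is one convenient convex choice for illustration, not necessarily the shape used in the paper.

```python
import math
import numpy as np

def eval_bernstein(ctrl, t):
    """Evaluate a Bernstein-polynomial segment from its control points."""
    n = len(ctrl) - 1
    return sum(ctrl[k] * math.comb(n, k) * t**k * (1 - t)**(n - k)
               for k in range(n + 1))

# hypothetical control points of one segment for agents i and j
ci = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.], [3., 0., 0.]])
cj = np.array([[0., 2., 0.], [1., 2., 0.], [2., 3., 0.], [3., 3., 0.]])
cij = cj - ci  # control points of the relative Bernstein polynomial

def relative_segment_safe(cij, a, b):
    """RSFC modeled as the half-space {p : a.p >= b}: if it is disjoint
    from C_inter and contains every relative control point, the convex
    hull property guarantees the whole relative segment avoids C_inter."""
    return bool(np.all(cij @ a >= b))
```

Any convex set disjoint from the inter-collision ellipsoid works the same way; the half-space is just the simplest to state.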
Initial Trajectory ------------------ An initial trajectory of the $i^{th}$ quadrotor, $\pi^{i} = \{\pi^{i}_{0},...,\pi^{i}_{M}\}$, is defined as a path that satisfies the following conditions for all $m = 0,...,M$ and $i \neq j$: $$\pi^i_0 = s^i, \pi^i_M = g^i \label{eq: initial trajectory condition0}$$ $$\langle \pi^i_{m-1}, \pi^i_{m} \rangle \oplus \mathcal{C}^i_{obs} \subset \mathcal{F} \label{eq: initial trajectory condition1}$$ $$\langle \pi^{i,j}_{m-1}, \pi^{i,j}_{m} \rangle \cap \mathcal{C}^{i,j}_{inter} = \emptyset \label{eq: initial trajectory condition2}$$ where $\langle \pi^i_{m-1}, \pi^i_{m} \rangle = \{\alpha\pi^i_{m-1}+(1-\alpha)\pi^i_{m} \mid 0 \leq \alpha \leq 1\}$ is a line segment between the waypoints $\pi^i_{m-1}$ and $\pi^i_{m}$, and $\pi^{i,j}_m = \pi^j_{m}-\pi^i_{m}$. Condition (\[eq: initial trajectory condition1\]) shows that the initial trajectory does not collide with obstacles, and condition (\[eq: initial trajectory condition2\]) means that the agents do not collide with other agents when all the agents move along their initial trajectories at constant velocity. Safe Flight Corridor -------------------- The $m^{th}$ safe flight corridor (SFC) of the $i^{th}$ quadrotor, $\mathcal{S}^{i}_{m}$, is defined as a convex set that satisfies the following conditions: $$\mathcal{S}^{i}_{m} \oplus \mathcal{C}^{i}_{obs} \subset \mathcal{F} \label{eq: sfc condition1}$$ $$\langle \pi^i_{m-1}, \pi^i_{m} \rangle \subset \mathcal{S}^{i}_{m} \label{eq: sfc condition2}$$ The condition (\[eq: sfc condition1\]) shows that an agent in the SFC does not collide with obstacles, so the SFC can be used for obstacle collision avoidance.
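Condition (\[eq: initial trajectory condition2\]) can be checked with elementary geometry: scaling the $z$-axis by $1/c_{dw}$ maps the downwash ellipsoid $\mathcal{C}^{i,j}_{inter}$ to a sphere of radius $r^i + r^j$, after which a point-to-segment distance test suffices. A small sketch; the waypoint values and the default $c_{dw}=2$ are illustrative assumptions, not values from the paper.

```python
import numpy as np

def seg_dist_to_origin(p0, p1):
    """Minimum distance from the segment p0-p1 to the origin."""
    d = p1 - p0
    t = np.clip(-np.dot(p0, d) / max(np.dot(d, d), 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(p0 + t * d))

def inter_segment_clear(pi0, pi1, pj0, pj1, r_sum, c_dw=2.0):
    """Check condition (17): the relative segment between consecutive
    waypoints must avoid C_inter = {p : p^T E p <= r_sum^2} with
    E = diag(1, 1, 1/c_dw^2).  Scaling z by 1/c_dw turns the
    ellipsoid into a sphere of radius r_sum."""
    s = np.array([1.0, 1.0, 1.0 / c_dw])
    return seg_dist_to_origin((pj0 - pi0) * s, (pj1 - pi1) * s) > r_sum
```

With $c_{dw} > 1$ the forbidden region is elongated vertically, so an agent hovering directly above another needs a larger separation than a laterally offset one — exactly the downwash effect the model encodes.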
Relative Safe Flight Corridor
-----------------------------

The $m^{th}$ relative safe flight corridor (RSFC) between the $i^{th}$ and the $j^{th}$ quadrotors, $\mathcal{R}^{i,j}_{m}$, is defined as a convex set that satisfies the following conditions: $$\mathcal{R}^{i,j}_{m} \cap \mathcal{C}^{i,j}_{inter} = \emptyset \label{eq: rsfc condition1}$$ $$\langle \pi^{i,j}_{m-1}, \pi^{i,j}_{m} \rangle \subset \mathcal{R}^{i,j}_{m} \label{eq: rsfc condition2}$$ If $\mathcal{R}^{i,j}_{m}$ includes $p^{i,j}(t)$ for $t \in [T_{m-1}, T_m]$, then there is no collision between the $i^{th}$ and $j^{th}$ agents for $t \in [T_{m-1}, T_m]$ due to (\[eq: inter-collision avoidance constraint\]) and (\[eq: rsfc condition1\]). For this reason, we can use the RSFC to avoid collisions between agents.

Method
======

In this section, we introduce an efficient trajectory planning algorithm based on convex safe corridors. Alg. \[alg: trajectory planning algorithm\] shows the overall planning process. We first plan initial trajectories (line 1), and we use them to determine the safe flight corridors (SFC) (line 3) and relative safe flight corridors (RSFC) (line 5). After that, we formulate a quadratic programming (QP) problem using the initial trajectories and safe corridors (line 8). Finally, we scale the total flight time to satisfy the dynamic feasibility constraints (line 9). The details of each step are described in the following subsections. \[sec: method\] $\pi = (\pi^{1},...,\pi^{N}) \gets$ planInitialTraj($s^{\forall i}$, $g^{\forall i}, \mathcal{E}$) $p^{0}(t),...,p^{N}(t) \gets$ trajOpt($\pi, \mathcal{S}^{\forall i}, \mathcal{R}^{\forall i, j > i})$ $T, p^{0}(t),...,p^{N}(t) \gets$ timeScale($p^{0}(t),...,p^{N}(t)$)

Initial Trajectory Planning {#subsec: initial trajectory planning}
---------------------------

To plan an initial trajectory, we use a graph-based multi-agent pathfinding (MAPF) algorithm.
Among various MAPF algorithms such as [@wagner2011m; @sharon2015conflict], we choose enhanced conflict-based search (ECBS) for the following two reasons: (i) ECBS can find a suboptimal solution in a short time. Because solving MAPF optimally is NP-hard [@yu2013structure], a suboptimal MAPF solver is preferable with respect to computation time. (ii) The ECBS algorithm is complete. To guarantee completeness of Alg. \[alg: trajectory planning algorithm\], the individual submodules of the algorithm must be complete. To utilize the graph-based ECBS in our problem, we formulate the planInitialTraj function in line 1 of Alg. \[alg: trajectory planning algorithm\] as follows. First, we translate the given 3D occupancy map into a 3D grid map with grid size $d$. Next, we set the constraints that define conflicts in the ECBS algorithm so that condition (\[eq: initial trajectory condition2\]) is satisfied. After that, we provide the start and goal points as input and compute the initial trajectory. If the start and goal points do not lie on the 3D grid, we use the nearest grid points instead and append the original start/goal points to the two ends of the resulting path.

Safe Flight Corridor Construction {#subsec: safe flight corridor construction}
---------------------------------

Alg. \[alg: sfc construction\] shows the construction process of the safe flight corridor (SFC). We initialize the SFC to $\langle \pi^{i}_{m-1}, \pi^{i}_{m} \rangle$ to fulfill condition (\[eq: sfc condition2\]) (line 3). For each direction, we check whether the SFC is expandable (lines 5-9) and, if so, expand the SFC by a pre-specified length (line 10). This algorithm is guaranteed to return convex sets that satisfy the definition of SFC. $D \gets \{\pm x, \pm y, \pm z\}$

Relative Safe Flight Corridor Construction
------------------------------------------

[0.21]{} ![Construction of relative safe flight corridor.
The red ellipsoid is an inter-collision model between the quadrotors $i,j$, and the green-shaded region is the relative safe flight corridor (RSFC). []{data-label="fig: relative safe flight corridor construction"}](figure/rsfc_construction9.png "fig:"){width="\textwidth"}   [0.21]{} ![Construction of relative safe flight corridor. The red ellipsoid is an inter-collision model between the quadrotors $i,j$, and the green-shaded region is the relative safe flight corridor (RSFC). []{data-label="fig: relative safe flight corridor construction"}](figure/rsfc_construction10.png "fig:"){width="\textwidth"}

To build the RSFC, we first apply the affine coordinate transformation $\widetilde{x}=E^{\frac{1}{2}}x$, where $E^{\frac{1}{2}} = \mathrm{diag}(1, 1, 1/c_{dw})$. Under this transformation, the inter-collision model $\mathcal{C}^{i,j}_{inter}$ and the initial trajectory $\pi^{i,j}$ are mapped to $\widetilde{\mathcal{C}}^{i,j}_{inter}$ and $\widetilde{\pi}^{i,j}$, as shown in Fig. \[fig: relative safe flight corridor construction 2\]. Let $\widetilde{\pi}^{i,j}_{min}$ be the point of $\langle \widetilde{\pi}^{i,j}_{m-1}, \widetilde{\pi}^{i,j}_{m} \rangle$ nearest to the origin. We construct the RSFC as follows: $$\mathcal{R}^{i,j}_{m} = \{x = E^{-\frac{1}{2}}\widetilde{x} \mid \widetilde{x} \cdot \widetilde{n}_{min} - (r^i+r^j) > 0 \} \label{eq: rsfc construction}$$ where $\widetilde{n}_{min} = \widetilde{\pi}^{i,j}_{min} / \|\widetilde{\pi}^{i,j}_{min}\|$. As depicted in Fig. \[fig: relative safe flight corridor construction 1\], our RSFC is the half-space bounded by the plane tangent to the inter-collision model at $\pi^{i,j}_{min}= E^{-\frac{1}{2}}\widetilde{\pi}^{i,j}_{min}$. We note that the convex set in (\[eq: rsfc construction\]) satisfies the definition of RSFC, but we omit the proof due to the page limit.
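The construction in (\[eq: rsfc construction\]) can be sketched as follows; the code (names are ours) applies the downwash transform, finds the nearest point of the relative segment to the origin, and returns the unit normal $\widetilde{n}_{min}$ of the separating plane:

```python
import numpy as np

C_DW = 2.0  # downwash coefficient (value taken from the experiments section)
E_HALF = np.diag([1.0, 1.0, 1.0 / C_DW])  # maps the downwash ellipsoid to a sphere

def nearest_point_on_segment(a, b):
    # Closest point of the segment <a, b> to the origin.
    ab = b - a
    denom = float(ab @ ab)
    t = np.clip(-(a @ ab) / denom, 0.0, 1.0) if denom > 0.0 else 0.0
    return a + t * ab

def rsfc_normal(pi_rel_prev, pi_rel_next):
    # Unit normal n~_min of the RSFC half-space: x is in the RSFC iff
    # (E^{1/2} x) . n~_min - (r^i + r^j) > 0.
    p_min = nearest_point_on_segment(E_HALF @ pi_rel_prev, E_HALF @ pi_rel_next)
    return p_min / np.linalg.norm(p_min)
```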
Sequential Trajectory Optimization {#subsec: trajectory optimization}
----------------------------------

Optimizing all control points of the polynomials at once causes a scalability problem because the time complexity of the QP solver is cubic in the number of decision variables. Here, we propose an efficient sequential optimization method using *dummy agents*. $p_{dmy}(t) = (p^{0}_{dmy}(t),...,p^{N}_{dmy}(t)) \gets$ planDummy($\pi$) Alg. \[alg: trajopt\] shows the process of sequential optimization. First, we generate trajectories for the dummy agents $p_{dmy}(t)$ using the following control points (line 1): $$c^{i}_{m,k} = \begin{cases} \ \pi^{i}_{m-1} & k = 0,...,\phi-1 \\ \ \pi^{i}_{m} & k = n-(\phi-1),...,n \\ \ x \in \langle \pi^i_{m-1}, \pi^i_{m} \rangle & \text{otherwise} \end{cases} \label{eq: feasible constraints}$$ Next, we divide the agents into $N_{b}$ batches, and we solve the following QP problem for each batch $b$ (lines 3-4): $$\begin{aligned} & \text{minimize} & & c^{T}Qc \\ & \text{subject to} & & A_{eq}c = b_{eq} \\ & & & c^i_{m,k} = \text{control points in (\ref{eq: feasible constraints})}, & & \forall i \notin b, m, k \\ & & & c^i_{m,k} \in \mathcal{S}^{i}_{m}, & & \forall i \in b, m, k\\ & & & c^j_{m,k}-c^i_{m,k} \in \mathcal{R}^{i,j}_{m} & & \forall i, j>i, m, k\\ \end{aligned} \label{eq: trajectory optimization}$$ where $c \in \mathbb{R}^{\frac{N}{N_{b}}M(n+1)}$, $A_{eq} \in \mathbb{R}^{\frac{N}{N_{b}}(M+1)\phi \times \frac{N}{N_{b}}M(n+1)}$, and the number of inequality constraints is $(N-\frac{1}{2}(\frac{N}{N_{b}}+1))\frac{N}{N_{b}}M(n+1)$. Finally, we replace the dummy agents' trajectories with the previously planned ones (line 5), and we sequentially plan the trajectory for the next batch. Fig. \[fig: sequential planning\] visualizes the overall process. At each iteration, we deploy dummy agents for all agents except those in the current batch (Fig. \[fig: sequential planning 1\]). Then, we plan the trajectory for the current batch to avoid the dummy agents (Fig. \[fig: sequential planning 2\]).
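The dummy-agent control points of (\[eq: feasible constraints\]) can be generated as below (a sketch; picking the segment midpoint for the unconstrained middle points is one valid choice, and the function name is ours):

```python
import numpy as np

def dummy_control_points(waypoints, n=5, phi=3):
    # Per segment m: the first phi control points sit at pi_{m-1}, the
    # last phi at pi_m; n >= 2*phi - 1 keeps the two groups disjoint.
    assert n >= 2 * phi - 1
    segments = []
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        pts = [a] * phi + [0.5 * (a + b)] * (n + 1 - 2 * phi) + [b] * phi
        segments.append(np.asarray(pts))
    return segments
```

Because the first and last $\phi$ control points coincide with the waypoints, the resulting dummy trajectory is continuous up to $\phi-1$ derivatives at the segment junctions.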
After that, the agents in the current batch serve as dummy agents in the next iteration (Fig. \[fig: sequential planning 3\]). At the end of the last iteration, collision-free trajectories have been found without deadlock because each batch is planned to avoid all previously planned batches (Fig. \[fig: sequential planning 4\]). This sequential method achieves better scalability: by increasing the number of batches as the number of agents increases, while keeping the number of decision variables per QP the same, we avoid the high time complexity of the QP solver. Furthermore, we can prove that (\[eq: trajectory optimization\]) consists of feasible constraints, which means that our method does not cause optimization failure due to infeasible constraints.

If $n \geq 2\phi-1$, then there exists a decision vector $c$ that satisfies the constraints of (\[eq: trajectory optimization\]). \[theorem: feasible constraints\]

Let us assign the decision vector $c$ as in (\[eq: feasible constraints\]) for $i = 1,...,N$ and $m = 1,...,M$. Then $c$ satisfies the waypoint constraints due to (\[eq: feasible constraints\]). $p^i(t)$ is continuous up to $\phi-1$ derivatives at $t=T_{m}$ for $m = 1,...,M-1$ because $c^{i}_{m,n-(\phi-1):n} = c^{i}_{m+1,0:\phi-1} = \pi^{i}_{m}$. $c$ also fulfills the safe corridor constraints due to (\[eq: sfc condition2\]) and (\[eq: rsfc condition2\]). Thus, $c$ is a decision vector that satisfies the constraints of (\[eq: trajectory optimization\]).

[0.19]{} ![Sequential planning with dummy agents when $N_{b}=2$. Dummy agent is depicted as a black circle, and agent in the current batch is depicted as a colored circle. For each iteration, we plan a trajectory for the current batch (color line) that avoids the trajectory of dummy agents (black line). []{data-label="fig: sequential planning"}](figure/b1_crop.png "fig:"){width="\textwidth"}   [0.19]{} ![Sequential planning with dummy agents when $N_{b}=2$. Dummy agent is depicted as a black circle, and agent in the current batch is depicted as a colored circle. For each iteration, we plan a trajectory for the current batch (color line) that avoids the trajectory of dummy agents (black line).
[]{data-label="fig: sequential planning"}](figure/b2_crop.png "fig:"){width="\textwidth"}   [0.19]{} ![Sequential planning with dummy agents when $N_{b}=2$. Dummy agent is depicted as a black circle, and agent in the current batch is depicted as a colored circle. For each iteration, we plan a trajectory for the current batch (color line) that avoids the trajectory of dummy agents (black line). []{data-label="fig: sequential planning"}](figure/b3_crop.png "fig:"){width="\textwidth"}   [0.19]{} ![Sequential planning with dummy agents when $N_{b}=2$. Dummy agent is depicted as a black circle, and agent in the current batch is depicted as a colored circle. For each iteration, we plan a trajectory for the current batch (color line) that avoids the trajectory of dummy agents (black line). []{data-label="fig: sequential planning"}](figure/b4_crop.png "fig:"){width="\textwidth"}

In (\[eq: trajectory optimization\]), we do not include dynamic limits in the QP problem because they may render the QP infeasible. Instead, similar to [@honig2018trajectory], we scale the total flight time for all agents uniformly after the optimization (line 9 of Alg. \[alg: trajectory planning algorithm\]).

EXPERIMENTS {#sec: experiments}
===========

Implementation Details
----------------------

The proposed algorithm is implemented in C++ and executed on a PC running Ubuntu 18.04 with an Intel Core i7-7700 @ 3.60 GHz CPU and 16 GB RAM. We model the quadrotor with radius $r^{\forall i} = 0.15$ m, maximum velocity $v^{\forall i}_{max} = 1.7$ m/s, maximum acceleration $a^{\forall i}_{max} = 6.2$ m/s$^2$, and downwash coefficient $c_{dw} = 2$, based on the specification of the Crazyflie 2.0 in [@debord2018trajectory]. We use the Octomap library [@hornung2013octomap] to represent the 3D occupancy map and the CPLEX QP solver [@cplex201612] for trajectory optimization. The degree of the polynomials is set to $n = 5$ to satisfy the assumption of Theorem \[theorem: feasible constraints\].
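The uniform time scaling mentioned above admits a closed form: substituting $t \to t/k$ leaves the path unchanged while dividing velocity by $k$ and acceleration by $k^2$, so the smallest feasible scale factor can be sketched as follows (function name ours):

```python
import numpy as np

def time_scale_factor(v_peak, a_peak, v_max=1.7, a_max=6.2):
    # Smallest k >= 1 such that v_peak / k <= v_max and
    # a_peak / k^2 <= a_max (limits from the Crazyflie 2.0 model above).
    return max(1.0, v_peak / v_max, float(np.sqrt(a_peak / a_max)))
```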
We plan the initial trajectory in a 3D grid map with grid size $d=0.5$ m, and we set the suboptimal bound of ECBS to $1.3$.

Comparison with Previous Work
-----------------------------

To validate the performance of the proposed algorithm, we compared its results with our previous work [@1909.02896]. We conducted the simulations in 50 random forests. Each forest has a size of 10 m $\times$ 10 m $\times$ 2.5 m and contains 20 randomly deployed trees of size 0.3 m $\times$ 0.3 m $\times$ 1–2.5 m. We assigned the quadrotors' start points on the boundary of the xy-plane at 1 m height, and their goal points at positions opposite to their start points, as shown in Fig. \[fig:simulation\].

### Success Rate

We executed the simulation with 16 agents and measured the success rate as a function of agent size. As shown in the left graph of Fig. \[fig: prev vs rbp\], both methods show a $100\%$ success rate in the 50 random forests when the agent radius is small, but the success rate of [@1909.02896] decreases as the agent size increases. This is because larger agents leave less free space, which increases the probability that the SFC and RSFC constraints become mutually infeasible. In contrast, the proposed method shows a perfect success rate in all cases because we design the SFC and RSFC to be mutually feasible.

### Solution Quality

As depicted in the right graph of Fig. \[fig: prev vs rbp\], the proposed algorithm shows better performance with respect to both objective cost and computation time compared to the previous work when the number of batches $N_b$ is greater than one. It can generate trajectories for 64 agents in 6.36 s ($N_b=16$), with $78\%$ ($N_b=1$) and $53\%$ ($N_b=16$) lower objective cost. Note that $N_b$ can be adjusted depending on the desired trade-off between objective cost and computation time.
\[table: scalablilty\] [c|\*[5]{}[X]{}]{} & Agents & 4 & 8 & 16 & 32 & 64 [@1909.02896] & 0.093 & 0.19 ($\times 2.0$) & 0.81 ($\times 4.3$) & 5.30 ($\times 6.5$)& 51.1 ($\times 9.6$) Proposed ($N_b=1$) & 0.11 & 0.29 ($\times 2.7$)& 1.15 ($\times 3.9$) & 11.1 ($\times 9.6$) & 197.0 ($\times 17.8$) Proposed ($N/N_b=4$) & 0.11 & 0.23 ($\times 2.2$) & 0.59 ($\times 2.5$) & 1.55 ($\times 2.6$) & 6.36 ($\times 4.1$) [0.24]{} ![ Comparison with previous work [@1909.02896]. (Left) The success rate for 16 agents. (Right) Objective cost and computation time for 64 agents by the number of batches $N_b$. []{data-label="fig: prev vs rbp"}](figure/successrate6.png "fig:"){width="\textwidth"} [0.24]{} ![ Comparison with previous work [@1909.02896]. (Left) The success rate for 16 agents. (Right) Objective cost and computation time for 64 agents by the number of batches $N_b$. []{data-label="fig: prev vs rbp"}](figure/costvscomp7.png "fig:"){width="\textwidth"}

### Scalability Analysis

The growth of computation time with the number of agents is shown in Table \[table: scalablilty\]. When the number of agents is small, the computation time increases roughly linearly regardless of the trajectory optimization method, but without the sequential optimization method it follows the cubic time complexity of the QP solver as the number of agents increases. On the other hand, if we maintain the batch size ($N/N_b$), the algorithm still scales well to a large number of agents.

Comparison with SCP-based Method
--------------------------------

We compared the proposed algorithm with the SCP-based method of [@augugliaro2012generation]. Experiments were conducted in a 10 m $\times$ 10 m $\times$ 2.5 m empty space with 8 agents. Start and goal points are the same as in the previous experiment, as shown in Fig. \[fig: scp vs rbp\], and we assigned the same total flight time to both algorithms.
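The safety-margin metric used in the comparison below, $\min_{i,j} d^{i,j}_{min}/(r^i+r^j)$, can be approximated by sampling each trajectory in time (our sketch; exact evaluation would minimize over the continuous trajectories):

```python
import numpy as np

def safety_margin_ratio(trajectories, radii, ts):
    # trajectories: list of callables t -> position (3-vector).
    # Returns min over agent pairs and sample times of d^{ij}(t)/(r_i+r_j);
    # a value above 1.0 (i.e. 100%) indicates collision-free spacing at
    # the sampled instants.
    ratio = np.inf
    for i in range(len(trajectories)):
        for j in range(i + 1, len(trajectories)):
            d = min(np.linalg.norm(trajectories[i](t) - trajectories[j](t)) for t in ts)
            ratio = min(ratio, d / (radii[i] + radii[j]))
    return ratio
```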
Table \[table: scp vs rbp\] shows that the proposed algorithm requires less computation time in all cases, and this result holds even when we stop SCP at the first iteration that includes collision avoidance constraints. The third column shows the safety margin ratio of each method. The safety margin ratio is calculated as $\min_{i,j} d^{i,j}_{min}/(r^i+r^j)$, where $d^{i,j}_{min}$ is the minimum distance between two agents $i,j$. The safety margin ratio must exceed $100\%$ to guarantee collision avoidance; however, the SCP-based method does not satisfy this because SCP checks collision avoidance only at discrete points on each trajectory. In contrast, the proposed method fully satisfies the safety condition. Although the proposed method performs better in computation time and safety margin, it yields a longer total flight distance than the SCP method. This is because our initial trajectory is not optimal with respect to total flight distance in non-grid space. Planning initial trajectories that account for total flight distance is left as future work.

[c|\*[4]{}[X]{}]{} & Comp. Time per Iter. (s) & Total Comp. Time (s) & Safety Margin Ratio & Total Flight Dist. (m) SCP ($h=1.0$ s) & 0.78 & 2.80 & $12\%$ & 77.29 SCP ($h=0.5$ s) & 5.5 & 20.5 & $81\%$ & 77.36 SCP ($h=0.34$ s) & 16.2 & 60.4 & $92\%$& 77.38 SCP ($h=0.25$ s) & 42.1 & 156.6 & $96\%$& 77.40 Proposed ($N_b=1$) & - & 0.65 & $101\%$& 90.74 [0.17]{} ![ Trajectory planning result of the proposed algorithm and the SCP-based method in empty space. The dots in (b) are the initial trajectories of the corresponding agents. []{data-label="fig: scp vs rbp"}](figure/empty_scp_crop.png "fig:"){width="\textwidth"}   [0.17]{} ![ Trajectory planning result of the proposed algorithm and the SCP-based method in empty space. The dots in (b) are the initial trajectories of the corresponding agents.
[]{data-label="fig: scp vs rbp"}](figure/empty_rbp3_crop.png "fig:"){width="\textwidth"}

Flight Test
-----------

We conducted a real flight test with 6 Crazyflie 2.0 quadrotors in a 5 m $\times$ 7 m $\times$ 2.5 m space. We used Crazyswarm [@preiss2017crazyswarm] to follow the pre-computed trajectories, and we used a Vicon motion capture system to obtain position information at 100 Hz. A snapshot of the flight test is shown in Fig. \[fig: flight test\], and the full flight is presented in the supplemental video.

CONCLUSIONS {#sec: conclusions}
===========

We presented an efficient trajectory planning algorithm for multiple quadrotors in obstacle environments that combines the advantages of grid-based and optimization-based planning algorithms. Using the relative Bernstein polynomial, we reformulated the trajectory generation problem as a convex optimization problem that is guaranteed to generate continuous, collision-free, and dynamically feasible trajectories. We improved the scalability of the algorithm with a sequential optimization method, and we proved that the overall process does not fail at the optimization stage whenever an initial trajectory exists. The proposed algorithm shows a considerable reduction in computation time and objective cost compared to our previous work, and better performance in computation time and safety compared to an SCP-based method. In future work, we plan to develop an initial trajectory planner that optimizes total flight distance in non-grid space, and to extend our work to environments with dynamic obstacles.

[^1]: $^{1}$The authors are with the Department of Mechanical and Aerospace Engineering, Seoul National University, Seoul, South Korea. [{qwerty35, wnsgk02, leplusbon, hjinkim}@snu.ac.kr]{}
--- abstract: | We show that any stack $\mathfrak{X}$ of finite type over a Noetherian scheme has a presentation $X \rightarrow \mathfrak{X}$ by a scheme of finite type such that $X(F) \rightarrow \mathfrak{X}(F)$ is onto, for every finite or real closed field $F$. Under some additional conditions on $\mathfrak{X}$, we show the same for all perfect fields. We prove similar results for (some) Henselian rings. We give two applications of the main result. One is to counting isomorphism classes of stacks over the rings $\mathbb{Z}/p^n$; the other is about the relation between real algebraic and Nash stacks. author: - Avraham Aizenbud and Nir Avni title: Pointwise surjective presentations of stacks ---

Introduction {#sec:introduction}
============

Let $\mathfrak{X}$ be an algebraic stack. By definition, there exist a scheme $X_0$ and a submersive smooth map $X_0 \rightarrow \mathfrak{X}$; such a map is called a presentation of $\mathfrak{X}$. Let $X_1=X_0 \times_{\mathfrak{X}} X_0$ be the fiber product. Note that, in general, $X_1$ is an algebraic space. The two projections $s,t:X_1 \rightarrow X_0$, together with the diagonal $\Delta:X_0 \rightarrow X_1$ and the composition map $c:X_1 \times_{s,X_0,t} X_1 \rightarrow X_1$, give the structure of a groupoid object $(X_0,X_1,s,t,\Delta,c)$ in algebraic spaces: here $X_0$ is the space of objects, $X_1$ is the space of morphisms, $s$ and $t$ are the source and target maps, $\Delta$ is the identity map, and $c$ is the composition map. The groupoid object $(X_0,X_1,s,t,\Delta,c)$ is closely related to $\mathfrak{X}$. In particular, for any field $F$, there is a natural and fully faithful functor from $(X_0(F),X_1(F),s,t,\Delta,c)$ to $\mathfrak{X}(F)$. For algebraically closed fields, this functor is an equivalence of groupoids.
However, this is false in general: taking $\mathfrak{X}$ to be the classifying space of the group $C_2$ and $X_0$ to be a point, we have that $X_1$ is a pair of points and, for every field $F$, the groupoid $(X_0(F),X_1(F),s,t,\Delta,c)$ has only one object, whereas the isomorphism classes in $\mathfrak{X}(F)$ are in bijection with the square class group of $F$. In this paper, we show that every algebraic stack has a presentation such that the above functor is an equivalence of groupoids, for any finite or real closed field $F$. We also show that, under some conditions on the stack $\mathfrak{X}$, there is a presentation such that the above functor is an equivalence of groupoids for any perfect field $F$. The results also extend to Henselian rings with residue fields of the above form. We give two applications of the main result. The first is to the study of the sequence $|\pi_0(\mathfrak{X}(\mathbb{Z} / n))|$, where $\mathfrak{X}$ is a stack defined over $\mathbb{Z}$ and $\pi_0(\mathfrak{X}(\mathbb{Z} /n))$ is the set of isomorphism classes of $\mathfrak{X}(\mathbb{Z} / n)$. The second is to show that, for any algebraic stack $\mathfrak{X}$ defined over $\mathbb{R}$, the groupoid $\mathfrak{X}(\mathbb{R})$ has the structure of a Nash groupoid.

Formulation of the main results
-------------------------------

We fix a Noetherian scheme $S$. All the schemes/group schemes/algebraic spaces/algebraic stacks we consider will be of finite type over $S$ unless stated otherwise. Let $\pi :X\rightarrow \frak{X}$ be a presentation of an algebraic stack and let $T \in Sch_{/S}$ be a scheme. 1. A $T$-point $T \rightarrow \frak{X}$ is $\pi$-liftable if it factors through some map $T \rightarrow X$ (up to isomorphism). 2. We say that $\pi$ is $T$-onto if every $T$-point of $\frak{X}$ is $\pi$-liftable. 3. Let $\mathcal{S} \subset Sch_{/S}$ be a full subcategory of the overcategory of $S$. We say that $\pi$ is $\mathcal{S}$-onto if it is $T$-onto for every object $T$ of $\mathcal{S}$.
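For the reader's convenience, the square-class description in the $BC_2$ example comes from a standard Galois-cohomology computation (this derivation is ours and is not part of the paper; assume $\operatorname{char} F \neq 2$):

```latex
\pi_0\big(\mathfrak{X}(F)\big)
  \;\cong\; H^1\big(\operatorname{Gal}(F^{sep}/F),\, C_2\big)
  \;\cong\; F^{\times}/(F^{\times})^{2},
```

where the second isomorphism follows from the Kummer sequence $1 \to C_2 \to \mathbb{G}_m \xrightarrow{x \mapsto x^2} \mathbb{G}_m \to 1$ together with Hilbert's Theorem 90 ($H^1(F,\mathbb{G}_m)=0$). For $F=\mathbb{R}$ this gives two isomorphism classes, while the presentation by a point sees only one.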
$ $ We denote: - by $\mathcal{F} \subset Sch_{/S}$ the category of spectra of fields, - by $\mathcal{F}_{\mathrm{perf}} \subset \mathcal{F}$ the category of spectra of perfect fields, - by $\mathcal{F}_f \subset \mathcal{F}$ the category of spectra of finite fields, - by $\mathcal{F}_r \subset \mathcal{F}$ the category of spectra of real closed fields, - by $\mathcal{H} \subset Sch_{/S}$ the category of Henselian schemes (i.e. spectra of Henselian local rings), - and by $\mathcal{H}_{\mathrm{perf}}, \mathcal{H}_{f}, \mathcal{H}_{r}$ the categories of Henselian schemes whose closed points are in $\mathcal{F}_{\mathrm{perf}}, \mathcal{F}_{f}, \mathcal{F}_{r}$, respectively. $ $ - Let $\frak{X}$ be a stack, let $T$ be a scheme, and let $x\in \frak{X}(T)$ be a $T$-point. For any $T$-scheme $R\to T$, define $Aut(x)(R):=Aut(y)$, where $y$ is the $R$-point of ${\mathfrak{X}}$ defined by the composition $R\to T\to {\mathfrak{X}}$. - We say that ${\mathfrak{X}}$ is QCA if, for any separably closed field $F$ and any $F$-point $x$ of ${\mathfrak{X}}$, the functor $Aut(x)$ is represented by a linear algebraic group. The main result of this paper is the following: \[thm:lift\] Suppose $\frak{X}$ is a finite type stack over $S$. Then 1. There is a presentation $X \rightarrow \frak{X}$ which is $\mathcal{H}_f$-onto and $\mathcal H_r$-onto. 2. If ${\mathfrak{X}}$ is QCA, then there is a presentation $X \rightarrow \frak{X}$ which is $\mathcal{H}_{\mathrm{perf}}$-onto. We will deduce this theorem from the statement that any surjective presentation of a stack satisfies a weaker condition, which we now define: Let $\mathcal{S} \subset \mathcal{F}$ and let $\phi:\frak{X} \rightarrow \frak{Y}$ be a morphism of stacks.
We say that $\phi$ is $(n,\mathcal{S})$-almost onto if, for every $\operatorname{Spec}F \in \mathcal{S}$ and any $F$-point $u:\operatorname{Spec}F \rightarrow \frak{Y}$, there is a separable field extension $E/F$ of degree at most $n$ such that the composition $\operatorname{Spec}E \rightarrow \operatorname{Spec}F \rightarrow \frak{Y}$ factors through $\phi$ (up to isomorphism). \[lem:key\] Let $\pi : X \rightarrow \frak{X}$ be a surjective map of finite type between a scheme $X$ and a stack $\frak{X}$. Then there exists $n$ such that 1. $\pi$ is $(n,\mathcal{F}_f)$-almost onto. 2. If $\frak{X}$ is QCA, then $\pi$ is $(n,\mathcal{F})$-almost onto.

Applications
------------

Theorem \[thm:lift\] implies: Let $\mathfrak{X}$ be a stack of finite type over $\operatorname{Spec}\mathbb{Z}$. Then 1. For every $p$, the power series $\sum | \pi_0(\mathfrak{X}(\mathbb{Z} / p^n))| t^n$ is a rational function of $t$. 2. The Dirichlet series $\sum_n | \pi_0(\mathfrak{X}(\mathbb{Z} / n)) | n^{-s}$ has a rational abscissa of convergence. By Theorem \[thm:lift\], there is a presentation $X \rightarrow \mathfrak{X}$ such that, for every finite ring $R$, the map $X(R) \rightarrow \pi_0(\mathfrak{X}(R))$ is onto. Again by Theorem \[thm:lift\], there is a presentation $Y \rightarrow X \times_\mathfrak{X} X$ such that, for every finite ring $R$, the map $Y(R) \rightarrow (X \times_\mathfrak{X} X)(R)$ is onto. Denote the composition $Y \rightarrow X \times_\mathfrak{X} X \rightarrow X \times X$ by $f$. Then $f(Y(R))$ is an equivalence relation on the set $X(R)$ and $| \pi_0(\mathfrak{X}(R))|=| X(R) / f(Y(R)) |$, for every finite ring $R$. Now [@HMRC Theorems 1.3 and 1.4] imply the corollary. Let $\mathfrak{X}$ be a smooth algebraic stack of finite type defined over $\operatorname{Spec}\mathbb{R}$.
In [@Sak Appendix A], Sakellaridis defines a stack $\mathfrak{X} ^{Nash}$ on the site of Nash manifolds and asks whether it is always a Nash stack (i.e., whether there is a smooth presentation of $\mathfrak{X} ^{Nash}$ by a Nash manifold). A criterion for $\mathfrak{X}^{Nash}$ being a Nash stack is given in [@Sak Proposition A.1.4]. Using this criterion and Theorem \[thm:lift\], we get: Let $\mathfrak{X}$ be a smooth algebraic stack of finite type defined over $\operatorname{Spec}\mathbb{R}$ in the sense of [@Sak §2.3][^1] and assume that the diagonal map $\mathfrak{X} \rightarrow \mathfrak{X} \times \mathfrak{X}$ is schematic and separated. Then $\mathfrak{X} ^{Nash}$ is a Nash stack.

Sketch of the proof of the main results
---------------------------------------

We prove Theorem \[thm:lift\] using Theorem \[lem:key\]. Unlike Theorem \[thm:lift\], Theorem \[lem:key\] can be proved by stratifying $\mathfrak{X}$ and proving the theorem for each stratum. We prove Theorem \[lem:key\] by analyzing a sequence of special cases: 1. $\frak{X}$ is a scheme and $\pi : X \rightarrow \frak{X}$ is quasi-finite. In this case the number $n$ is obtained from the degree of the fibers of $\pi$. 2. \[ca:2\] $\frak{X}$ is a scheme. This case follows from the previous one. This case allows us, for a fixed stack ${\mathfrak{X}}$, to deduce the theorem for an arbitrary surjection onto $\frak{X}$ from knowing it for a single surjection onto $\frak{X}$. 3. \[ca:3\] $\frak{X}$ is an algebraic space. We may assume that $\pi$ is an etale presentation of $\frak{X}$. Then $\frak{X}$ can be viewed as a quotient of $X$ by an etale equivalence relation. The number $n$ comes from the size of the equivalence classes. 4. ${\mathfrak{X}}=BG$ for an algebraic group $G$. The statement can be reformulated as a statement about Galois cohomology. In the case of a finite field we use Lang’s theorem, and the number $n$ comes from the number of components of $G$.
In the case of a QCA stack, the group $G$ is linear and thus can be embedded into $GL_n$. Here the proof is based on Hilbert’s Theorem 90 and on Case \[ca:2\] applied to the map $GL_n \to GL_n/G$. 5. ${\mathfrak{X}}=BG$, for a group scheme $G$ over a base scheme $Y$. Over the generic point it looks like the previous case. From this we deduce the theorem for an open dense subset in $X$ and proceed by Noetherian induction. 6. ${\mathfrak{X}}$ is a gerbe over an algebraic space. This follows from the previous case and Case \[ca:3\]. 7. The general case. This follows from the previous case using the fact that any stack can be stratified into gerbes. In order to deduce Theorem \[thm:lift\], we introduce a construction that starts with an almost onto presentation $\pi:X \to {\mathfrak{X}}$ and produces an onto one. This is done in the following steps: 1. Given two $S$-schemes $X$ and $Y$, one can define the internal hom $X^Y$ over $S$ as a pre-sheaf on the category of $S$-schemes. This pre-sheaf is often not representable, but under some restrictive conditions on $X,Y$ it is representable by an algebraic space and, under more restrictive conditions, by a scheme. 2. More generally, given two diagrams of $S$-schemes $D_1$ and $D_2$ of the same shape, we define the internal hom ${D_1}^{D_2}$ over $S$ as a pre-sheaf on the category of $S$-schemes. Again, it is representable under suitable conditions. 3. Given a presentation $\pi:X \to {\mathfrak{X}}$, one can construct a simplicial scheme $[\pi]_{\bullet}$, called the Cech nerve of $\pi$, by taking the fiber powers of $X$ over ${\mathfrak{X}}$. The diagram $[\pi]_{\bullet}$ is an infinite diagram, but, since $[\pi]_{\bullet}$ is coskeletal, its behavior is determined by a finite sub-diagram (the first three levels). 4. Given a presentation $\pi:X \to {\mathfrak{X}}$ and an etale covering $\tau: S'\to S$, we construct a new presentation $$f_{\pi,\tau}:{[\pi]_{\bullet}}^{[\tau]_{\bullet}}\to {\mathfrak{X}}.$$ This presentation tends to be more onto than $\pi$.
For example, if $\tau: \operatorname{Spec}(E)\to \operatorname{Spec}(F)$ is a finite field extension and the composition $\operatorname{Spec}(E)\to \operatorname{Spec}(F) \to {\mathfrak{X}}$ is $\pi$-liftable, then the map $\operatorname{Spec}(F) \to {\mathfrak{X}}$ is $f_{\pi,\tau}$-liftable. 5. For an integer $n$, we construct an etale map $\tau_n:\mathbb U_n'\to \mathbb U_n$ that packages all separable field extensions of degree $\leq n$. Namely, for any separable field extension $E/F$ of degree $\leq n$, there is an $F$-point of $\mathbb U_n$ whose fiber is $\operatorname{Spec}E$. 6. We combine the last two steps. Namely, given an $(n,\mathcal{S})$-almost onto presentation $\pi:X \to {\mathfrak{X}}$, we consider the presentation $\pi_n:X\times \mathbb U_n \to {\mathfrak{X}}$ and then the presentation $$f_{\pi_n,\tau_n}:{[\pi_n]_\bullet}^{[\tau_n]_\bullet}\to {\mathfrak{X}}.$$ 7. Denote $X_n:={[\pi_n]_\bullet}^{[\tau_n]_\bullet}$. The obtained presentation $X_n \to {\mathfrak{X}}$ is onto; however, $X_n$ might not be a scheme but only an algebraic space. In order to complete the construction, we present $X_n$ by an affine scheme $Y$ and repeat the construction for the presentation $Y\to X_n$. The composition of the obtained presentation with $f_{\pi_n,\tau_n}$ is an onto presentation by a scheme.

Acknowledgments
---------------

We thank Shahar Carmeli, Raf Cluckers, and Ofer Gabber for fruitful discussions. We thank Angelo Vistoli for answering a question of ours on MathOverflow, proving Lemma \[lem:left.lift\]. A.A. was partially supported by ISF grants 687/13 and 249/17. N.A. was partially supported by NSF grant DMS-1902041. Both authors were partially supported by BSF grants 2012247 and 2018201.

Almost onto presentations of stacks (Proof of Theorem \[lem:key\]) {#sec:key.lemma}
==================================================================

In some stages of the proof we will stratify the stack ${\mathfrak{X}}$ and prove the claims for the strata.
The following allows us to do that.

\[lem:strat\] Let ${\mathfrak{X}}:=\bigcup {\mathfrak{X}}_i$ be a finite stratification of an algebraic stack (see [@SP 97.28]). Then, for any field $F$, any $F$-point $x:\operatorname{Spec}(F)\to {\mathfrak{X}}$ factors through one of the ${\mathfrak{X}}_i$ (up to an isomorphism).

Consider the base-change of $x$ to $\bigsqcup \frak{X}_i$. This is a nonempty closed immersion $\operatorname{Spec}F \times_\frak{X} \bigsqcup \frak{X}_i \rightarrow \operatorname{Spec}F$, so it is invertible. This implies that $x$ factors through a map $\operatorname{Spec}F \rightarrow \frak{X}_i$, for some $i$.

The following will allow us to replace an arbitrary surjective morphism by a quasi-finite one:

\[lem:q.f.section\] Let $\pi :X \rightarrow Y$ be a surjective morphism of finite type between Noetherian schemes. Then there is a morphism $\varphi: Z \rightarrow X$ such that $\pi \circ \varphi : Z\rightarrow Y$ is surjective and quasi-finite.

By Noetherian induction on $Y$, it is enough to find a map $\varphi :Z \rightarrow X$ such that the map $\pi \circ \varphi$ is quasi-finite and its image contains a non-empty open set. We can assume that $Y$ is affine and irreducible. Since the claim depends only on the underlying topological space, we can assume that $Y$ is reduced. Let $\eta$ be the generic point of $Y$ and let $X_\eta = \pi^{-1}(\eta)$. By the assumption that $\pi$ is surjective, $X_\eta$ is non-empty. Since $\pi$ is locally of finite type, there are affine open sets $\operatorname{Spec}A \subset Y$ and $\operatorname{Spec}B \subset X \times_Y \operatorname{Spec}A$ such that $B$ is a finitely generated $A$-algebra, $\eta \in \operatorname{Spec}A$, and $\operatorname{Spec}(B \otimes_A k(\eta))\neq \emptyset$. It follows that there is a finite extension $L$ of $k(\eta)$ and a non-trivial map $\nu:B \otimes_A k(\eta) \rightarrow L$. Fix generators $b_1,\dots,b_n$ of $B$ over $A$.
Let $a\in A$ be the product of the denominators of the coefficients of the minimal polynomials of the $\nu(b_i\otimes 1)$ over $k(\eta)=\operatorname{Frac}(A)$. We obtain that $\nu(B{\otimes 1})[a ^{-1}]$ is an integral extension of $A[a ^{-1}]$. Taking $Z=\operatorname{Spec}\nu(B{\otimes 1})[a ^{-1}]$, we get a map $\varphi:Z \rightarrow X$ such that $\pi \circ \varphi :Z \rightarrow \operatorname{Spec}A[a ^{-1}]$ is finite (and hence quasi-finite) and surjective.

The proof of Theorem \[lem:key\] is based on a subsequent analysis of its special cases:

\[lem:key.scheme\] Theorem \[lem:key\] holds if $\frak{X}$ is a scheme.

By Lemma \[lem:q.f.section\], there is a morphism $\varphi :Z \rightarrow X$ such that $\zeta:=\pi \circ \varphi$ is surjective and quasi-finite. It is well known that, for quasi-finite maps, there exists $m$ such that $[k(z):k(\zeta(z))]<m$, for every schematic point $z\in Z$. It is easy to see that $\pi$ is $(m,\mathcal{F})$-almost onto.

\[cor:one.for.all\] Let $\mathfrak{X}$ be a (Noetherian) stack and let $\mathcal{S} \subset \mathcal{F}$. Suppose that $s:Y \rightarrow \mathfrak{X}$ is a surjective $(n,\mathcal{S})$-almost onto map, and let $\pi :X \rightarrow \mathfrak{X}$ be a surjective map. Then there is $N$ such that $\pi$ is $(N,\mathcal{S})$-almost onto.

Consider the diagram $$\xymatrix{X \times_\frak{X} Y \ar@{->}[r]^{\pi^*} \ar@{->}[d]^{s^*} & Y \ar@{->}[d]^{s} \\ X \ar@{->}[r]^{\pi} & \frak{X}}$$ Since $\pi$ is surjective, by definition, the map $\pi ^*$ is surjective. Let $p:Z\to X \times_\frak{X} Y$ be an etale cover of the algebraic space $X \times_\frak{X} Y$ by a scheme $Z$. We get that $\pi^* \circ p$ is surjective, so, by Lemma \[lem:key.scheme\], $\pi^* \circ p$ is $(m,\mathcal{F})$-almost onto, for some $m$, and, therefore, so is $\pi^*$. It follows that the composition $s\circ \pi ^* = \pi \circ s^*$ is $(nm,\mathcal{S})$-almost onto, which implies that $\pi$ is $(nm,\mathcal{S})$-almost onto.
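The denominator-clearing step in the proof of Lemma \[lem:q.f.section\] can be checked in a toy case. The following Python sketch is our own illustration, not part of the text: we take $A=\mathbb{Z}$, $k(\eta)=\mathbb{Q}$, and the algebraic element $\nu(b)=\tfrac12+\sqrt{3}$ (an arbitrary choice), and verify that after inverting the product $a$ of the denominators of the coefficients of its minimal polynomial, the element becomes integral over $A[a^{-1}]$.

```python
from fractions import Fraction

# Toy instance of the denominator-clearing step: A = Z, k(eta) = Q,
# and nu(b) = u + v*sqrt(3) with u = 1/2, v = 1 (our choice, not from the text).
u, v = Fraction(1, 2), Fraction(1)

# Minimal polynomial of u + v*sqrt(3) over Q: x^2 - (trace)x + (norm),
# i.e. x^2 - 2u*x + (u^2 - 3*v^2).  Coefficients listed from degree 2 down.
coeffs = [Fraction(1), -2 * u, u * u - 3 * v * v]   # [1, -1, -11/4]

# a = product of the denominators of the coefficients.
a = 1
for c in coeffs:
    a *= c.denominator                               # here a = 4

# Over A[1/a] = Z[1/4], the minimal polynomial is monic with integral
# coefficients, so nu(b) is integral over A[1/a], as in the lemma's proof.
assert all((c * a).denominator == 1 for c in coeffs)
```

Since the polynomial is monic with coefficients in $\mathbb{Z}[1/4]$, the ring $\mathbb{Z}[1/4][\nu(b)]$ is a finite $\mathbb{Z}[1/4]$-module; this is the finiteness that makes $\pi\circ\varphi$ finite over $\operatorname{Spec}A[a^{-1}]$ in the proof.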
\[lem:etale.almost.onto\] Theorem \[lem:key\] holds if $\mathfrak{X}$ is an algebraic space and $\pi :X \rightarrow \mathfrak{X}$ is etale.

[@LMB Proposition 1.2] states that the following are true:

1. \[it:eq1\] The two projections $p_{1,2}:X \times_\mathfrak{X} X \rightarrow X$ are etale and the subscheme $X \times_{\mathfrak{X}} X \subset X \times X$ is an equivalence relation.

2. \[it:eq2\] $\mathfrak{X}$ is the coequalizer of $X \times_\mathfrak{X} X {\underset{p_2}{\overset{p_1}{\rightrightarrows}}} X$ (as sheaves on the big etale site of $S$). In particular, for any field $F$, $\mathfrak{X}(F^{sep})=X(F^{sep})/X \times_\mathfrak{X} X(F^{sep})$.

Note also that, for any Galois extension $F \subset L$, there is an injective map $i_{F,L}:\mathfrak{X}(F) \rightarrow \mathfrak{X}(L)$ and the image is the collection of $\operatorname{Gal}(L/F)$-invariants. Statement \[it:eq1\] and the assumption that $X$ is Noetherian imply that there is $N$ such that, for any field $F$, the sizes of the equivalence classes of $X \times_\mathfrak{X} X(F^{sep})$ are at most $N$. We will show that $\pi$ is $(N,\mathcal{F})$-almost onto. Suppose that $F$ is a field and $x\in \mathfrak{X}(F)$. By \[it:eq2\], there is $y\in X(F^{sep})$ such that $i_{F,F^{sep}}(x)$ is the equivalence class $[y]$ of $y$. Since $\operatorname{Gal}_F$ preserves $[y]$, it follows that there is a closed subgroup $H \subset \operatorname{Gal}_F$ of index at most $N$ such that $H$ fixes $y$. Let $L=(F^{sep})^H$. Then $[L:F] \leq N$ and $y\in X(L)$. Finally, since $i_{L,F^{sep}}(\pi(y))=[y]=i_{F,F^{sep}}(x)=i_{L,F^{sep}}(i_{F,L}(x))$, it follows that $\pi(y)=i_{F,L}(x)$.

Lemma \[lem:etale.almost.onto\] and Corollary \[cor:one.for.all\] give the following:

\[cor:alg.space\] Theorem \[lem:key\] holds if $\frak{X}$ is an algebraic space.

\[lem:Galois\] Let $F$ be a field, let $i:C \rightarrow X$ and $f:X \rightarrow Y$ be morphisms of $F$-schemes, and denote the structure map of $C$ by $\kappa: C \rightarrow \operatorname{Spec}F$.
Suppose that $f$ is surjective and that, for some finite extension $F \subset L$, there is $\eta :\operatorname{Spec}L \rightarrow Y$ making the diagram $$\xymatrix{C_L \ar@{->}[r]^{i_L} \ar@{->}[d]^{\kappa_L} & X \ar@{->}[d]^{f} \\ \operatorname{Spec}L \ar@{->}[r]^{\eta} & Y}$$ a pullback diagram (here, $i_L$ and $\kappa_L$ are the base-changes of $i$ and $\kappa$, respectively). Then there is $\zeta :\operatorname{Spec}F \rightarrow Y$ such that $\zeta_L=\eta$ and such that the diagram $$\xymatrix{C \ar@{->}[r]^{i} \ar@{->}[d]^{\kappa} & X \ar@{->}[d]^{f} \\ \operatorname{Spec}F \ar@{->}[r]^{\zeta} & Y}$$ is a pullback diagram. In other words, if the fiber under a surjective morphism $f:X \to Y$ of an $L$-point $\eta$ is defined over $F$, then it is in fact a fiber of an $F$-point.

Let $\alpha: \operatorname{Spec}L \rightarrow \operatorname{Spec}F$ be the map corresponding to the inclusion $F \subset L$. Taking the base change of $\alpha$ and $\kappa$, we get $\alpha_C:C_L \rightarrow C$. Define $\alpha_X:X_L \rightarrow X$ and $\alpha_Y:Y_L \rightarrow Y$ similarly. Let $R=L \otimes_F L$. We have two maps $a,b:\operatorname{Spec}R \rightarrow \operatorname{Spec}L$ such that $\alpha \circ a=\alpha \circ b$. Let $\kappa_R:C_R \rightarrow \operatorname{Spec}R$ be the base change of $\kappa$. Consider the diagram $$\xymatrix{ \operatorname{Spec}(R) \ar@/_2.0pc/@{->}[rr]_{\eta \circ a}\ar@/_3.5pc/@{->}[rr]_{\eta \circ b} \ar@/^/@{->}[r]^{a}\ar@/_/@{->}[r]_{b} & \operatorname{Spec}(L)\ar@{->}[r]^{\eta} &Y}$$ We will show that $\eta \circ a=\eta \circ b$. By faithfully flat descent, this would imply that $\eta$ factors through an $F$-point $\zeta:\operatorname{Spec}(F)\to Y$. This will give a morphism $C\to f^{-1}(\zeta)$ that becomes an isomorphism after extending scalars to $L$. Since $\operatorname{Spec}L \to \operatorname{Spec}F$ is faithfully flat, this implies $C= f^{-1}(\zeta)$, as required.
Consider the following Cartesian squares: $$\xymatrix{C^a_R \ar@{->}[r]^{a_C} \ar@{->}[d]^{\kappa_R}& C_L\ar@{->}[r]^{i_L} \ar@{->}[d]^{\kappa_L}&X \ar@{->}[d]^{f}\\ \operatorname{Spec}(R)\ar@{->}[r]^{a} & \operatorname{Spec}(L)\ar@{->}[r]^{\eta} &Y}.$$ We obviously have $C^a_R \cong C_R:= C\times_{\operatorname{Spec}(F)} \operatorname{Spec}(R)$. Applying the same argument to $b$, we get: $$\xymatrix{C_R\ar@/^2.0pc/@{->}[rr]^{i_L \circ b_C}\ar@/^3.5pc/@{->}[rr]^{i_L \circ a_C} \ar@/^/@{->}[r]^{a_C}\ar@/_/@{->}[r]_{b_C} \ar@{->}[d]^{\kappa_R}& C_L\ar@{->}[r]^{i_L} \ar@{->}[d]^{\kappa_L}&X \ar@{->}[d]^{f}\\ \operatorname{Spec}(R) \ar@/_2.0pc/@{->}[rr]_{\eta \circ a}\ar@/_3.5pc/@{->}[rr]_{\eta \circ b} \ar@/^/@{->}[r]^{a}\ar@/_/@{->}[r]_{b} & \operatorname{Spec}(L)\ar@{->}[r]^{\eta} &Y}.$$ Since $i_L$ factors through $i:C\to X$, we obtain that the upper two arrows coincide (namely, $i_L \circ a_C=i_L \circ b_C$). The surjectivity of $f$ implies that $\kappa_R$ is surjective. Thus we obtain that the lower two arrows coincide (namely, $\eta \circ a=\eta \circ b$), as required.

\[lem:key.BG.special\] Let $G$ be a flat group algebraic space over a scheme $X$, let $\frak{X}=[X/G]$ be the classifying stack of $G$ (see [@SP 89.13]), and let $\pi :X \rightarrow \frak{X}$ be the neutralizing map. Then Theorem \[lem:key\] holds for $\pi :X \rightarrow \frak{X}$.

By Proposition \[prop:group.algebraic.space\] and Lemma \[lem:strat\], we can assume that $G$ is a group scheme. Let $F$ be a field. By [@SP [Lemma 89.15.4](https://stacks.math.columbia.edu/tag/0CQJ)], a point $u:\operatorname{Spec}F \rightarrow [X/G]$ is a pair $(x,P)$, where $x\in X(F)$ and $P$ is a $G_x$-torsor in the fppf topology, i.e., a $G_x$-space that becomes trivial after base-change to the algebraic closure of $F$. The point $u$ factors through $X$ iff $P$ is a trivial $G_x$-torsor.
Therefore, it is enough to show that there is a constant $N$ such that, for any finite field $F$ (in case $\mathfrak{X}$ is QCA, $F$ can be taken to be arbitrary), any $x\in X(F)$, and any $G_x$-torsor $P$, there is a field extension $E \supset F$ of degree at most $N$ such that $P \times_{\operatorname{Spec}F} \operatorname{Spec}E$ is trivial. We show this holds in the two cases of the theorem.

1. Finite field case:\
Let $n:=\max \#\pi_0(G_s)$, where $s$ ranges over all geometric points of $X$ (this maximum exists because $s \mapsto \# \pi_0(G_s)$ is constructible, see [@EGA4 Proposition 9.7.8]). We will show that, for any finite field $F=\mathbb{F}_q$ and any algebraic group $H$ over $F$ with at most $n$ connected components, any $H$-torsor has a trivialization over $\mathbb{F}_{q^{(n!)^2}}$. Since finite fields are perfect, any fppf torsor is an etale one. Hence, we need to show that the map $H^1(\mathbb{F}_q,H) \rightarrow H^1(\mathbb{F}_{q^{(n!)^2}},H)$ is trivial. By Lang’s theorem, it is enough to show that $H^1(\mathbb{F}_q,\pi_0H) \rightarrow H^1(\mathbb{F}_{q^{(n!)^2}},\pi_0H)$ is trivial. This map is the composition of $H^1(\mathbb{F}_q,\pi_0H) \rightarrow H^1(\mathbb{F}_{q^{n!}},\pi_0H)$ and $H^1(\mathbb{F}_{q^{n!}},\pi_0H) \rightarrow H^1(\mathbb{F}_{q^{(n!)^2}},\pi_0H)$, so it is enough to show that the second map is trivial. Note that the action of $\operatorname{Gal}(\overline{\mathbb{F}_q} / \mathbb{F}_{q^{n!}})$ on $\pi_0H$ is trivial, so any 1-cocycle is a homomorphism $\operatorname{Gal}(\overline{\mathbb{F}_q} / \mathbb{F}_{q^{n!}}) \rightarrow \pi_0H$, but any such homomorphism becomes trivial when restricted to $\operatorname{Gal}(\overline{\mathbb{F}_q} / \mathbb{F}_{q^{(n!)^2}})$.

2. QCA stack case:\
Assume now that $\mathfrak{X}=[X/G]$ is QCA.
By Proposition \[prop:strat.qca.1\], there is a stratification $X=\bigcup X_i$ such that $G|_{X_i^{red}}$ can be embedded as a closed subgroup in $\operatorname{GL}_n \times X_i^{red}$, for some $n$. Hence, by Lemma \[lem:strat\], we can assume that $X$ is reduced and $G$ is a closed subgroup of $\operatorname{GL}_n \times X$. Similarly, using Proposition \[prop:strat.qca\], we can assume that the quotient $Z:=(\operatorname{GL}_n \times X) / G$ exists. Since the quotient map $\pi:\operatorname{GL}_n \times X \rightarrow Z$ is onto, Lemma \[lem:key.scheme\] implies that there is a natural number $N$ such that, for every field $F$ and every $p\in Z(F)$, there is an extension $E \supset F$ of degree at most $N$ such that the composition $\operatorname{Spec}E \rightarrow \operatorname{Spec}F \overset{p}{\rightarrow} Z$ factors through $\operatorname{GL}_n \times X$. We will show that, for any field $F$, any $x\in X(F)$, and every $G_x$-torsor $P$ defined over $F$, there is a field extension $E \supset F$ of degree at most $N$ such that $P \times \operatorname{Spec}E$ is trivial. Let $F$ be a field, let $x\in X(F)$, and let $P$ be a $G_x$-torsor defined over $F$. The quotient $(P \times \operatorname{GL}_n) / G_x$ (where $G_x$ acts diagonally on $P \times \operatorname{GL}_n$) is a $\operatorname{GL}_n$-torsor over $F$. By Hilbert 90, this torsor is trivial. This means that there is a $\operatorname{GL}_n$-equivariant isomorphism $(P \times \operatorname{GL}_n) / G_x \rightarrow \operatorname{GL}_n$. Composing this isomorphism with the map $P \rightarrow (P \times \operatorname{GL}_n) / G_x$ that sends $p$ to $(p,1)G_x$, we get a morphism $i:P \rightarrow \operatorname{GL}_n$ which is $G_x$-equivariant. Since $P$ is a torsor, for some finite extension $L \supset F$, the base change $P \times \operatorname{Spec}L$ is trivial, so, by Lemma \[lem:base.change.quotient\], we have $i(P \times \operatorname{Spec}L)=\pi ^{-1} (w)$, for some $w\in Z(L)$.
Applying Lemma \[lem:Galois\], we get that $i(P)=\pi ^{-1} (z)$, for some $z\in Z(F)$. By the definition of $N$, there is a field extension $E \supset F$ of degree at most $N$ and a point $g\in i(P)(E)$. It follows that $P$ is trivial over $E$, which is what we wanted to prove.

Lemma \[lem:key.BG.special\] and Corollary \[cor:one.for.all\] give the following:

\[lem:key.BG\] Theorem \[lem:key\] holds if $\frak{X}=[X/G]$, where $X$ is a scheme and $G$ is a flat group scheme over $X$.

In the following, by a gerbe, we mean a gerbe in the fppf topology; see [@SP Definition 95.27.1].

\[lem:key.gerbe.general\] Theorem \[lem:key\] holds if $\frak{X}$ is a gerbe over an algebraic space $[\frak{X}]$.

Let ${\mathcal{S}}$ be ${\mathcal{F}}$ if ${\mathfrak{X}}$ is QCA and ${\mathcal{F}}_f$ otherwise. Let $\tau:\frak{X} \rightarrow [\frak{X}]$ be the structure map. Consider the following diagram: $$\xymatrix{X \times_{[\frak{X}]} X \ar@{->}[r]^{\rho} \ar@{->}[d] & X \ar@{->}[d]^{\pi} \\ \frak{X} \times_{[\frak{X}]} X \ar@{->}[r] \ar@{->}[d] & \frak{X} \ar@{->}[d]^{\tau} \\ X \ar@{->}[r]^{\tau \circ \pi} & [\frak{X}]}$$ The following hold:

1. The map $\tau \circ \pi : X \rightarrow [\frak{X}]$ is surjective. By Corollary \[cor:alg.space\], it is $(N_1,\mathcal{F})$-almost onto, for some $N_1$.

2. $\tau:\frak{X} \rightarrow [\frak{X}]$ is a gerbe, and so $\frak{X} \times_{[\frak{X}]} X \rightarrow X$ is a gerbe by [@SP [Lemma 95.27.3](https://stacks.math.columbia.edu/tag/06QB)]. The map $(\pi,id):X \rightarrow \frak{X} \times_{[\frak{X}]} X$ is a section, so $\frak{X} \times_{[\frak{X}]} X$ is isomorphic to the classifying stack $[X/G]$, for some flat group algebraic space $G$ over $X$, by [@SP [Lemma 95.27.6](https://stacks.math.columbia.edu/tag/06QB)]. If $\mathfrak{X}$ is QCA, then so is $\frak{X} \times_{[\frak{X}]} X$.

3. \[cond:a.o.gerbe\] The map $X \times_{[\frak{X}]} X \rightarrow \frak{X} \times_{[\frak{X}]} X$ is surjective.
By the previous claim and Lemma \[lem:key.BG\], it is $(N_2,{\mathcal{S}})$-almost onto, for some $N_2$.

We will prove that the map $X \times_{[\frak{X}]} X\to {\mathfrak{X}}$ is $(N_1N_2,{\mathcal{S}})$-almost onto. This is enough by Corollary \[cor:one.for.all\]. Let $F$ be a field and let $pt:\operatorname{Spec}F \rightarrow \frak{X}$ be an $F$-point. There is a field extension $K \supset F$ of degree at most $N_1$ such that the composition $\operatorname{Spec}{K} \rightarrow \operatorname{Spec}{F} \rightarrow \frak{X}\rightarrow [\frak{X}]$ factors through a map $q:\operatorname{Spec}{K} \rightarrow X$. The map $(pt,q)$ defines a map $pt':\operatorname{Spec}{K}\rightarrow \frak{X} \times_{[\frak{X}]} X$. By \[cond:a.o.gerbe\], there is a field extension ${E} \supset {K}$ of degree at most $N_2$ such that the composition $\operatorname{Spec}{E} \rightarrow \operatorname{Spec}{K} \rightarrow \frak{X} \times_{[\frak{X}]} X$ factors through a map $r:\operatorname{Spec}{E} \rightarrow X \times_{[\frak{X}]} X$. It follows that the composition $\operatorname{Spec}{E} \rightarrow \operatorname{Spec}{F}\rightarrow \frak{X}$ factors through $\rho \circ r: \operatorname{Spec}{E} \rightarrow X$.

By [@SP [95.28](https://stacks.math.columbia.edu/tag/06RB)], there is a stratification of $\frak{X}$ by locally closed substacks $\frak{X}_i$ such that the $\frak{X}_i$ are fppf gerbes over some algebraic spaces. Since $\mathfrak{X}$ is Noetherian, there are only finitely many $\mathfrak{X}_i$. The assertion now follows from Lemmas \[lem:key.gerbe.general\] and \[lem:strat\].

Onto presentations of stacks (Proof of Theorem \[thm:lift\]) {#sec:lift}
============================================================

The proof of Theorem \[thm:lift\] is based on Theorem \[lem:key\] and the following proposition:

\[lem:alm.onto.imp.onto\] Let $\mathcal{S} \subset \mathcal{F}_{\mathrm{perf}}$ and let ${\mathfrak{X}}$ be a stack.

1.
\[lem:alm.onto.imp.onto:1\] If there is an $(n,\mathcal{S})$-almost onto presentation of ${\mathfrak{X}}$ by a scheme, then there is an $\mathcal{S}$-onto presentation of ${\mathfrak{X}}$ by an algebraic space.

2. \[lem:alm.onto.imp.onto:2\] If ${\mathfrak{X}}$ is an algebraic space and there is an $(n,\mathcal{S})$-almost onto presentation of ${\mathfrak{X}}$ by a scheme, then there is an $\mathcal{S}$-onto presentation by a scheme.

The proof of Proposition \[lem:alm.onto.imp.onto\] will be given in §\[sec:prf.alm.onto\]; the proof uses several auxiliary results, which we prove in §\[sec:prf.alm.onto\] and §\[sec:improving.pres\]. We now show how to deduce Theorem \[thm:lift\] from Proposition \[lem:alm.onto.imp.onto\]. We will need the following:

\[lem:left.lift\] Let $A$ be a local ring and $I$ an ideal in $A$ such that $(A,I)$ is a Henselian pair (see [@SP 15.11]). Then the embedding $\operatorname{Spec}(A/I) \rightarrow \operatorname{Spec}(A)$ has the left lifting property with respect to smooth maps of schemes, i.e., for any commutative diagram $$\xymatrix{\operatorname{Spec}(A/I) \ar@{->}[r] \ar@{->}[d] & X \ar@{->}[d] \\ \operatorname{Spec}(A) \ar@{->}[r] & Y}$$ such that $X,Y$ are schemes and the map $X \rightarrow Y$ is smooth, there is a map $\operatorname{Spec}(A) \rightarrow X$ such that the diagram $$\xymatrix{\operatorname{Spec}(A/I) \ar@{->}[r] \ar@{->}[d] & X \ar@{->}[d] \\ \operatorname{Spec}(A) \ar@{->}[r] \ar@{->}[ru] & Y}$$ is commutative.

Denote the map $X \rightarrow Y$ by $\phi$. There is a Zariski open cover $X=\bigcup U_i$ such that $\phi |_{U_i}$ factors as the composition of an etale map $\psi_i:U_i \rightarrow Y \times \mathbb{A} ^n$ and the projection $Y \times \mathbb{A} ^n \rightarrow Y$. Since $A/I$ is local, we can replace $X$ by some $U_i$, so it is enough to prove the claim in the following cases:

1. $\phi$ is the projection $Y \times \mathbb{A} ^n \rightarrow Y$. The claim follows since the map $A \rightarrow A/I$ is onto.

2. $\phi$ is etale.
The claim follows from the definition of a Henselian pair.

\[cor:Henselian.pair\] Let $A$ be a local ring and let $I$ be an ideal in $A$ such that $(A,I)$ is a Henselian pair. Let $\phi:X \rightarrow \mathfrak{X}$ be a $\left\{ \operatorname{Spec}(A/I) \right\}$-onto presentation. Assume that, for any algebraic space $\mathfrak{B}$ and any $A/I$-point $r:\operatorname{Spec}(A/I) \rightarrow \mathfrak{B}$, there is a presentation $\psi: B \rightarrow \mathfrak{B}$ such that $r$ is $\psi$-liftable. Then $\phi$ is $\left\{ \operatorname{Spec}(A) \right\}$-onto.

Let $q:\operatorname{Spec}(A) \rightarrow \mathfrak{X}$ be an $A$-point. Since $\phi$ is $\left\{ \operatorname{Spec}(A/I) \right\}$-onto, we can lift the composition $\operatorname{Spec}(A/I) \rightarrow \operatorname{Spec}(A) \overset{q}{\rightarrow} \mathfrak{X}$ to a map $\operatorname{Spec}(A/I) \rightarrow X$. This gives a map $r:\operatorname{Spec}(A/I) \rightarrow X \times_\mathfrak{X} \operatorname{Spec}(A)$. By assumption, there is a scheme $B$ and a presentation $\psi :B \rightarrow X \times_\mathfrak{X} \operatorname{Spec}(A)$ such that $r$ is $\psi$-liftable. Let $r':\operatorname{Spec}(A/I) \rightarrow B$ be a lift of $r$. Applying Lemma \[lem:left.lift\] to the diagram $$\xymatrix{\operatorname{Spec}(A/I) \ar@{->}[r]^{r'} \ar@{->}[d] & B \ar@{->}[d] \\ \operatorname{Spec}(A) \ar@{->}[r] & \operatorname{Spec}(A)}$$ we get a map $s:\operatorname{Spec}(A) \rightarrow B$. The composition of $s$ and the projection to $X$ is a lift of $q$.

We can now prove Theorem \[thm:lift\]:

We first show the claim with $\mathcal{H}_{f},\mathcal{H}_{r},\mathcal{H}_{perf}$ replaced by $\mathcal{F}_{f},\mathcal{F}_{r},\mathcal{F}_{perf}$. Let $\phi: X \to {\mathfrak{X}}$ be a presentation of ${\mathfrak{X}}$ by a scheme $X$. By Theorem \[lem:key\], there exists an integer $n$ such that $\phi$ is $(n,{\mathcal{F}}_f)$-almost onto (or $(n,{\mathcal{F}})$-almost onto if ${\mathfrak{X}}$ is QCA).
By definition, $\phi$ is also $(2,{\mathcal{F}}_r)$-almost onto. By Proposition \[lem:alm.onto.imp.onto\], there exists an algebraic space $X'$ and a presentation $\psi : X' \rightarrow \mathfrak{X}$ which is $({\mathcal{F}}_r\cup {\mathcal{F}}_f)$-onto (${\mathcal{F}}_{\mathrm{perf}}$-onto if $\mathfrak{X}$ is QCA). Let $X'' \rightarrow X'$ be a presentation of $X'$ by a scheme $X''$. Since $X'$ is QCA, applying Theorem \[lem:key\] and Proposition \[lem:alm.onto.imp.onto\], we obtain a scheme $X'''$ and an ${\mathcal{F}}_{\mathrm{perf}}$-onto presentation $\psi' :X''' \to X'$. The composition $\psi \circ \psi':X''' \rightarrow \mathfrak{X}$ is a presentation which is $({\mathcal{F}}_r\cup {\mathcal{F}}_f)$-onto (${\mathcal{F}}_{\mathrm{perf}}$-onto if $\mathfrak{X}$ is QCA). The claim of the theorem for $\mathcal{H}_{f},\mathcal{H}_{r},\mathcal{H}_{perf}$ now follows by Corollary \[cor:Henselian.pair\].

Internal Hom {#sec:internal.hom}
------------

Let $X, Y$ be $S$-schemes. Let $X^{\wedge}_S Y$ be the contravariant functor from the category of $S$-schemes to the category of sets defined by $$(X^{\wedge}_S Y)(T):=Mor(Y\times_S T,X).$$ If the base scheme $S$ is clear from the context, we will omit it from the notation or simply denote this functor by $X^{Y}$.

\[lem:power.representable\] $ $

1. If $Y \rightarrow S$ is finite and etale, then $X^{\wedge}_S Y$ is representable by an algebraic space (of finite type).

2. If $Y \rightarrow S$ is finite and etale, and $X$ is quasi-projective, then $X^{\wedge}_S Y$ is representable by a scheme (of finite type).

Although the statement is standard, we did not find a complete proof in the literature, so we deduce it from a similar statement appearing in [@DeGa I §1 6.6]. For another version, see [@Ol]. For the proof, we will need the following simple lemmas:

\[lem:etale.trivial\] Let $S$ be a connected scheme and let $S'\to S$ be a finite etale map.
Then there exists an etale cover $\eta: T\to S$ such that $$T\times _S S' \cong T \sqcup \cdots \sqcup T,$$ as $T$-schemes.

The proof is by induction on the degree $d$ of the map $S'\to S$. Without loss of generality, we may assume that $S'$ is connected. The base case $d=0$ is obvious. Without loss of generality, we can assume that $S' \to S$ is a cover. Consider the diagram $$\xymatrix{\Delta S' \ar@{->}[r]& S'\times_S S' \ar@{->}[r]^{} \ar@{->}[d]^{} & S' \ar@{->}[d]^{} \\& S' \ar@{->}[r]^{}& S}$$ Let $U:=S'\times_S S' \smallsetminus \Delta S'$. The map $U \to S'$ is a finite etale map of degree $d-1$. Thus, by the induction hypothesis, there is an etale cover $T \to S'$ such that $$T\times _{S'} U \cong T \sqcup \cdots \sqcup T.$$ Composing $T \to S' \to S$, we obtain the required cover.

\[lem:finite.type.local\] The property of being of finite type is local in the fppf topology on the target. Namely, let $X\to S$ be a (not necessarily finite type) $S$-scheme and let $\phi:T\to S$ be a faithfully flat morphism of finite type. Assume that $X\times_S T$ is of finite type over $T$. Then $X\to S$ is of finite type.

$ $

1. By faithfully flat descent, $X^{\wedge} Y$ is a sheaf in the fpqc topology and, in particular, in the etale topology. Thus, we need to find a scheme $A$ together with an etale cover $A \to X^{\wedge} Y$. Without loss of generality, we can assume that $S$ is connected. By Lemma \[lem:etale.trivial\], there exists an etale cover $T \to S$ such that $T\times _S Y \cong \underbrace{T \sqcup \cdots \sqcup T}_{\text{$n$ copies}}$ as $T$-schemes. Since $T \rightarrow S$ is an etale cover, the map $X^{\wedge}_S Y \times_S T \rightarrow X^{\wedge}_S Y$ is an etale cover. Since $$X^{\wedge}_S Y \times_S T=(X\times_S T)^{\wedge}_T (Y \times_S T) = (X \times_S T)^{\wedge}_T(T \sqcup \cdots \sqcup T),$$ we have that $X^{\wedge}_S Y \times_S T$ is representable by $(X \times_S T)^n$.

2.
By [@DeGa I §1 6.6], $X^{\wedge}_S Y$ is representable by a scheme which is not a priori of finite type. Let $T \rightarrow S$ be the etale cover from the previous part and consider the Cartesian square $$\xymatrix{ (X \times_S T)^n \ar@{->}[r]^{} \ar@{->}[d]^{} & X^{\wedge}_S Y \ar@{->}[d]^{} \\ T \ar@{->}[r]^{}& S}$$ The horizontal arrows are etale covers, and the morphism $ (X \times_S T)^n \to T$ is of finite type. Therefore, by Lemma \[lem:finite.type.local\], so is the morphism $X^{\wedge}_S Y\to S$.

\[cor:hom.diagram.representable\] Let $\mathcal{C}$ be a finite category and let $D_1,D_2:\mathcal{C} \to Sch_{/S}$ be two functors. Let ${D_1}^{\wedge}_S D_2$ be the contravariant functor from the category of $S$-schemes to the category of sets defined by $$({D_1}^{\wedge}_S D_2)(T):=Mor(D_2\times T,D_1),$$ where $D_2\times T$ is the composition of $D_2$ with the product with $T$. Assume that the image of $D_2$ consists of schemes which are finite and etale over $S$. Then:

1. \[cor:hom.diagram.representable:1\] ${D_1}^{\wedge}_S D_2$ is representable by an algebraic space.

2. \[cor:hom.diagram.representable:2\] If the image of $D_1$ consists of quasi-projective schemes, then ${D_1}^{\wedge}_S D_2$ is representable by a scheme.

We first prove \[cor:hom.diagram.representable:1\]. ${D_1}^{\wedge}_S D_2$ is a limit of a finite diagram of functors, each represented by an algebraic space. By the Yoneda Lemma, any morphism between such functors comes from a morphism of algebraic spaces. The assertion now follows from the fact that the category of algebraic spaces is closed under finite limits. Part \[cor:hom.diagram.representable:2\] is proved in a similar way.

In the rest of the section, we will not distinguish between representable functors and their representing objects.

Improving a presentation {#sec:improving.pres}
------------------------

Let $\phi:X \to {\mathfrak{X}}$ be a presentation of an algebraic stack defined over a scheme $S$.
Denote by $[\phi]_\bullet$ the simplicial scheme given by $[\phi]_1:=X$, $[\phi]_n:=X \times_{\mathfrak{X}}[\phi]_{n-1}$, with the standard boundary and degeneracy maps. Denote by $[\phi]_{\bullet\leq 3}$ the full subdiagram of $[\phi]_\bullet$ with vertices $[\phi]_1,[\phi]_2,[\phi]_3$. Note that, for two maps $\phi$ and $\phi'$ as above, we have a canonical isomorphism $[\phi]_\bullet ^{[\phi']_\bullet}\cong [\phi]_{\bullet\leq 3} ^{[\phi']_{\bullet\leq 3}}$. The goal of this subsection is to prove the following:

\[lem:improving.pres\] Let $\mathfrak{X}$ be an algebraic stack defined over a scheme $S$, let $X$ be a scheme over $S$, let $\phi :X \rightarrow \mathfrak{X}$ be a presentation, and let $\psi :S' \rightarrow S$ be a finite etale and onto map. Then the functor $[\phi]_\bullet ^{[\psi]_\bullet}$ is representable by an algebraic space and there is a presentation $f_{\phi,\psi}:[\phi]_\bullet ^{[\psi]_\bullet} \rightarrow \mathfrak{X}$ such that, if $T$ is an $S$-scheme and $x:T \to \mathfrak{X}$ is such that the natural map $T\times_S S' \to \mathfrak{X}$ is $\phi$-liftable, then $x$ is $f_{\phi,\psi}$-liftable. Moreover, if $\mathfrak{X}$ is an algebraic space and $X$ is quasi-affine, then $[\phi]_\bullet ^{[\psi]_\bullet}$ is a scheme.

In order to build $f_{\phi,\psi}$, we need to discuss the notion of descent data. Let ${\mathfrak{X}}$ be an algebraic stack defined over a scheme $S$. Let $\psi:Y'\to Y$ be an etale map of $S$-schemes. A descent datum for ${\mathfrak{X}}$ with respect to $\psi$ (or a $\psi$-descent datum for $\mathfrak{X}$) is a map $s:Y' \to {\mathfrak{X}}$ together with an isomorphism $F$ between $s\circ d_1$ and $s\circ d_2$ (where $d_i:[\psi]_2\to [\psi]_1$ are the boundary maps) satisfying the cocycle condition $(F\circ d_{12})(F \circ d_{23})=F\circ d_{13}$ (see [@SP 8.3, 8.4]). The collection of all $\psi$-descent data forms a groupoid. We have a natural functor from ${\mathfrak{X}}(Y)$ to this groupoid.
The fact that $\mathfrak{X}$ is a stack implies that this functor is an equivalence.

Let ${\mathfrak{X}}$ be an algebraic stack defined over a scheme $S$ and let $\psi:S'\to S$ be a finite etale map. Define a functor ${\mathfrak{X}}_{S'/S}$ from $S$-schemes to groupoids by $${\mathfrak{X}}_{S'/S}(Y)=\{\text{ descent data for ${\mathfrak{X}}$ with respect to $Y\times_S S' \to Y$ } \}.$$ By the discussion above, there is a natural equivalence of functors ${\mathfrak{X}}\to {\mathfrak{X}}_{S'/S}$. In particular, $\mathfrak{X}_{S'/S}$ is a stack naturally identified with ${\mathfrak{X}}$, for any such $S'$.

Let $\phi:X \to \mathfrak{X}$ be a presentation of an algebraic stack defined over a scheme $S$ and let $\eta:Y'\to Y$ be an etale map of $S$-schemes. An explicit descent datum for a map from $Y$ to $\mathfrak{X}$ with respect to $\phi$ and $\eta$ is a morphism of diagrams $[\eta]_{\bullet \leq 3}\to [\phi]_{\bullet \leq 3}$. Any explicit descent datum for a map from $Y$ to $\mathfrak{X}$ with respect to $\phi$ and $\eta$ gives a descent datum for a map from $Y$ to $\mathfrak{X}$ with respect to $\phi$.

We are now ready to define $f_{\phi,\psi}$. Let $\phi:X \to \mathfrak{X}$ be a presentation of an algebraic stack defined over a scheme $S$ and let $\psi :S' \rightarrow S$ be a finite etale and onto map. In view of the discussion above, we obtain a natural map $[\phi]_{\bullet \leq 3}^{[\psi]_{\bullet \leq 3}} \to \mathfrak X_{S'/S}$. This gives us the map $f_{\phi,\psi} :[\phi]_{\bullet}^{[\psi]_{\bullet}} \to \mathfrak X$.

In order to prove that $f_{\phi,\psi}$ is a presentation, we will use the following:

\[lem:base.change.presentation\] Let $\phi:X \to Y$ be a morphism of $S$-schemes and let $T$ be an $S$-scheme.

1. If $\phi \times_S T :X\times_S T \to Y\times_S T$ and $T \to S$ are surjective morphisms, then so is $\phi$.

2. If $\phi \times_S T :X\times_S T \to Y\times_S T$ is smooth and $T \to S$ is surjective and smooth, then $\phi$ is smooth.

3.
Suppose that $\mathfrak{X}$ is a stack over $S$, $\psi : X \rightarrow \mathfrak{X}$ is an $S$-morphism, and $T \rightarrow S$ is a surjective and smooth morphism. If $\psi \times_S T$ is a presentation, then so is $\psi$.

$ $

1. We denote the underlying topological space of a scheme $A$ by $|A|$. By definition, a map $A\to B$ is surjective iff $|A|\to |B|$ is surjective. Consider the commutative diagram $$\xymatrix{X\times_S T \ar@{->}[r]^{\phi \times_S T} \ar@{->}[d]^{pr_X} & Y\times_S T \ar@{->}[d]^{pr_Y} \\ X \ar@{->}[r]^{\phi}& Y}.$$ It gives rise to a commutative diagram $$\xymatrix{ & |Y|\times_{|S|} |T| \ar@{->}[d]^{} \ar@/^4pc/[dd]^{pr_{|Y|}}\\ |X\times_S T| \ar@{->}[r]^{|\phi \times_S T|} \ar@{->}[d]^{|pr_X|} & |Y\times_S T| \ar@{->}[d]^{|pr_Y|} \\ |X| \ar@{->}[r]^{|\phi|}& |Y|}$$ Since the map $T\to S $ is surjective, so is $|T|\to|S|$, and thus so is $pr_{|Y|}$. This implies that $|pr_{Y}|$ is surjective. Together with the fact that $|\phi \times_S T|$ is surjective, this implies that $|\phi|\circ |pr_X| $ is surjective, which implies the assertion.

2. As before, $pr_X$ is surjective. Also, since $T\to S$ is smooth, so is $pr_X$. Thus, by [@SP Lemma 34.11.4], it remains to show that $\phi \circ pr_X$ is smooth. This follows from the fact that $\phi \times_S T$ and $pr_Y$ are smooth.

3. This follows from the previous claims.

Let $\phi:X \to {\mathfrak{X}}$ be a morphism of an $S$-scheme to an $S$-stack and let $T\to S$ be a surjective smooth morphism of schemes. Assume that $\phi \times_S T :X\times_S T \to {\mathfrak{X}}\times_S T$ is a presentation. Then so is $\phi$.

We will also need the following:

\[lem:1.to.1.implies.quasi.affine\] Let $\phi:X \to Y$ be a morphism of schemes such that, for any scheme $T$, the map $\phi(T):X(T) \to Y(T)$ is injective. Then $\phi$ is quasi-affine.

We may assume that $Y$ is separated.
The assumption implies that the diagram $$\xymatrix{\Delta X \ar@{->}[r] \ar@{->}[d] & X \times X \ar@{->}[d] \\ \Delta Y \ar@{->}[r] & Y \times Y}$$ is cartesian, and, therefore, $$\Delta(X)=(\phi\times\phi)^{-1}(\Delta Y).$$ This implies that $X$ is separated. The assumption also implies that $\phi$ is quasi-finite. Thus, by [@SP Lemma 36.38.2], $X$ is quasi-affine. \[cor:quasi.affine.presentation\] Let $\phi:X \to Y$ be a presentation of an algebraic space by a quasi-affine scheme. Then the $[\phi]_i$ are quasi-affine. The two restriction maps give a morphism $[\phi]_2\to [\phi]_1\times [\phi]_1=X\times X$. This morphism satisfies the conditions of Lemma \[lem:1.to.1.implies.quasi.affine\] and, thus, it is quasi-affine. This implies that $[\phi]_2$ is quasi-affine. Since, for $i>2$, we have $[\phi]_i=[\phi]_{i-1}\times_{X} [\phi]_{2}$, we obtain by induction that $[\phi]_i$ is also quasi-affine. Since $[\phi]_\bullet ^{[\psi]_\bullet}\cong [\phi]_{\bullet\leq 3} ^{[\psi]_{\bullet\leq 3}}$, Corollary \[cor:hom.diagram.representable\] implies that $[\phi]_\bullet ^{[\psi]_\bullet}$ is representable by an algebraic space. It follows from the definitions that, if $T$ is an $S$-scheme and $x:T \to \mathfrak{X}$ is such that the natural map $T\times_S S' \to \mathfrak{X}$ is $\phi$-liftable, then $x$ is $f_{\phi,\psi}$-liftable. If $\mathfrak{X}$ is an algebraic space and $X$ is quasi-affine, then the $[\phi]_i$ are quasi-affine, and, by Corollary \[cor:hom.diagram.representable\], $[\phi]_\bullet ^{[\psi]_\bullet}$ is a scheme of finite type. It remains to prove that $f_{\phi,\psi}$ is a presentation. 1. $S'=S \sqcup \cdots \sqcup S$ and $\psi$ is the projection.\ In this case it is easy to see that $[\phi]_\bullet ^{[\psi]_\bullet} \cong X \times_X [\phi]_2 \times_X \cdots \times_X [\phi]_2$, where the maps $[\phi]_2 \to X$ are the first boundary maps, and the number of appearances of $\times_X$ and $\sqcup$ is the same.
It is also easy to see that under this identification the map $f_{\phi,\psi}$ is the composition of the projection to $X$ with $\phi$. This proves the assertion. 2. The general case\ By Lemma \[lem:etale.trivial\], there exists an etale cover $\eta: T\to S$ such that $T\times _S S' \cong T \sqcup \cdots \sqcup T$ as an $S'$-scheme. By Lemma \[lem:base.change.presentation\], it is enough to show that $f_{\phi,\psi} \times_S T:[\phi]_\bullet ^{[\psi]_\bullet}\times_S T \to {\mathfrak{X}}\times_S T$ is a presentation. Equivalently, we have to show that $$f_{\phi\times T,\psi\times T} :({[\phi \times T]_\bullet})\,^\wedge_{T}\, ([\psi\times T]_\bullet) \to {\mathfrak{X}}\times T$$ is a presentation. This follows from the previous case.

Proof of Proposition \[lem:alm.onto.imp.onto\] {#sec:prf.alm.onto}
----------------------------------------------

Let $n$ be a positive integer. Let $\mathbb{U}_n \subset \mathbb{A} ^1 \sqcup \cdots \sqcup \mathbb{A} ^{n}$ be the $\mathbb{Z}$-scheme of separable monic polynomials of degree at most $n$ and let $\mathbb{U}_n' = \left\{ (f,a)\in \mathbb{U}_n \times \mathbb{A}^1 \mid f(a)=0 \right\}$. Note that there is an obvious finite etale and onto map $\mathbb{U}_n' \rightarrow \mathbb{U}_n$. Let $X \to {\mathfrak{X}}$ be an $(n,{\mathcal{S}})$-almost onto presentation of a stack by a scheme. Without loss of generality, we may assume that $X$ is affine. Let $S_n=S \times_{\operatorname{Spec}\mathbb{Z}} \mathbb{U}_n$, $S_n'=S\times_{\operatorname{Spec}\mathbb{Z}} \mathbb{U}_n'$, $\mathfrak{X}_n:=\mathfrak{X} \times_{\operatorname{Spec}\mathbb{Z}} \mathbb{U}_n$, and $X_n:=X \times_{\operatorname{Spec}\mathbb{Z}} \mathbb{U}_n$. Applying Lemma \[lem:improving.pres\] to $(S_n,S_n',\mathfrak{X}_n,X_n)$ instead of $(S,S',\mathfrak{X},X)$, we get a presentation $$({[\phi_n]_\bullet})\,^\wedge_{S_n}\, ([\psi_n]_\bullet) \rightarrow \mathfrak{X}_n$$ of $S_n$-stacks.
The composition $$({[\phi_n]_\bullet})\,^\wedge_{S_n}\, ([\psi_n]_\bullet)\rightarrow \mathfrak{X}_n \rightarrow \mathfrak{X}$$ is an $\mathcal{S}$-onto presentation by an algebraic space. This proves part \[lem:alm.onto.imp.onto:1\]. Since $\mathbb{U}_n$ is quasi-affine, so is $X_n$. Thus, if $\mathfrak{X}$ is an algebraic space, then $({[\phi_n]_\bullet})\,^\wedge_{S_n}\, ([\psi_n]_\bullet)$ is a scheme. This proves part \[lem:alm.onto.imp.onto:2\].

Group schemes and their classifying spaces {#sec:group.schemes}
==========================================

In this appendix we will deduce some statements about group algebraic spaces from the corresponding statements for algebraic groups. The statements concern the existence of certain stratifications of the base, so the transition between algebraic groups and group algebraic spaces is standard. We include them for completeness, since we could not find them in the literature in the generality of group schemes.

The generic point
-----------------

For any scheme $X$, denote by $Alg(X)$ the category of algebraic spaces of finite type over $X$. We consider the assignment $X \mapsto Alg(X)$ as a contravariant 2-functor to the 2-category of categories. \[prop:gen.pt\] Assume that $S$ is irreducible and reduced, and let $\eta$ be its generic point. Then the natural functor $$\lim_{\underset{U \subset S}{\longrightarrow}}Alg(U) \to Alg(\eta)$$ is an equivalence of categories. The affine case follows from a standard argument: \[lem:affine.case\] For a scheme $X$, let $Aff(X)$ be the category of schemes affine over $X$. If $S$ is a reduced and irreducible scheme with generic point $\eta$, then the natural functor $$\lim_{\underset{U \subset S}{\longrightarrow}}Aff(U) \rightarrow Aff(\eta)$$ is an equivalence of categories. \[lem:two.diagrams\] Let $X,Y$ be algebraic spaces and let $\varphi_1,\varphi_2:X \rightarrow Y$ be two morphisms.
Then there are affine schemes $\widetilde{X},\widetilde{Y}$, etale covers $\pi_X:\widetilde{X} \rightarrow X,\pi_Y:\widetilde{Y} \rightarrow Y$, and morphisms $\widetilde{\varphi_1},\widetilde{\varphi_2}:\widetilde{X} \rightarrow \widetilde{Y}$ such that the diagrams $$\xymatrix{\widetilde{X} \ar@{->}[r]^{\widetilde{\varphi_1}} \ar@{->}[d]^{\pi_X} & \widetilde{Y} \ar@{->}[d]^{\pi_Y} \\ X \ar@{->}[r]^{\varphi_1} & Y} \quad\quad \xymatrix{\widetilde{X} \ar@{->}[r]^{\widetilde{\varphi_2}} \ar@{->}[d]^{\pi_X} & \widetilde{Y} \ar@{->}[d]^{\pi_Y} \\ X \ar@{->}[r]^{\varphi_2} & Y}$$ commute. Let $\pi_Y:\widetilde{Y} \rightarrow Y$ be an etale cover of $Y$ by an affine scheme. For $i=1,2$, let ${X}_i=\widetilde{Y} \times_Y X$, where the map $X \rightarrow Y$ is $\varphi_i$. Let ${X}_3={X_1} \times_X {X_2}$, let $\widetilde{X} \rightarrow X_3$ be an etale cover of $X_3$ by an affine scheme, and let $\pi_X:\widetilde{X} \rightarrow X$ be the composition $\widetilde{X} \rightarrow X_3 \rightarrow X$. It is easy to see that $\pi_X$ is etale and that there are $\widetilde{\varphi_i}$ as requested by the lemma. The following is standard. \[lem:generic.cover\] If a morphism $X \rightarrow Y$ between schemes is an etale (respectively, Zariski) cover over the generic point, then it is an etale (respectively, Zariski) cover over an open set. Faithful: : Let $U \subset S$ be an open set, $X,Y\in Alg(U)$, and $\varphi_1,\varphi_2:X \rightarrow Y$ be morphisms such that $\varphi_1|_\eta:X_\eta \rightarrow Y_\eta$ is equal to $\varphi_2|_\eta:X_\eta \rightarrow Y_\eta$. We need to show that there is an open $U' \subset U$ such that $\varphi_1|_{U'}=\varphi_2|_{U'}$.
Apply Lemma \[lem:two.diagrams\] to get the diagrams $$\xymatrix{\widetilde{X} \ar@{->}[r]^{\widetilde{\varphi_1}} \ar@{->}[d]^{\pi_X} & \widetilde{Y} \ar@{->}[d]^{\pi_Y} \\ X \ar@{->}[r]^{\varphi_1} & Y} \quad\quad \xymatrix{\widetilde{X} \ar@{->}[r]^{\widetilde{\varphi_2}} \ar@{->}[d]^{\pi_X} & \widetilde{Y} \ar@{->}[d]^{\pi_Y} \\ X \ar@{->}[r]^{\varphi_2} & Y}.$$ By Lemma \[lem:affine.case\], there is an open set $U' \subset U$ such that $\widetilde{\varphi_1}|_{U'}=\widetilde{\varphi_2}|_{U'}$. Since $\pi_X|_{U'}$ is an etale cover, it is an epimorphism, and therefore $\varphi_1|_{U'}=\varphi_2|_{U'}$. Full: : Let $U \subset S$ be open, $X,Y\in Alg(U)$, and let $\varphi:X_{\eta} \rightarrow Y_\eta$ be a morphism. We need to show that there is an open $U' \subset U$ and a morphism $\psi:X_{U'} \rightarrow Y_{U'}$ such that $\psi|_\eta=\varphi$. Let $\pi_X:\widetilde{X} \rightarrow X$ and $\pi_Y:\widetilde{Y} \rightarrow Y$ be etale covers by affine schemes. We have maps $\widetilde{X}_\eta \rightarrow X_\eta \rightarrow Y_\eta$. Let $p_\mathcal{Z}:\mathcal{Z} \rightarrow \widetilde{X}_\eta \times_{Y_\eta} \widetilde{Y}_\eta$ be an etale cover by an affine scheme. By Lemma \[lem:affine.case\], there is an open subset $U' \subset U$, a scheme $Z\in Aff(U')$, morphisms $\alpha:Z \rightarrow \widetilde{X}$ and $\beta:Z \rightarrow \widetilde{Y}$, and an isomorphism $Z_\eta \rightarrow \mathcal{Z}$ such that $\alpha{|}_\eta$ is equal to the composition $Z_\eta \rightarrow \mathcal{Z} \rightarrow \widetilde{X}_\eta \times_{Y_\eta} \widetilde{Y}_\eta\rightarrow \widetilde{X}_\eta$ and $\beta{|}_\eta$ is equal to the composition $Z_\eta \rightarrow \mathcal{Z} \rightarrow \widetilde{X}_\eta \times_{Y_\eta} \widetilde{Y}_\eta\rightarrow \widetilde{Y}_\eta$. By Lemma \[lem:generic.cover\], there is an open subset $U'' \subset U'$ such that the restriction $\alpha{|}_{U''}:Z_{U''} \rightarrow \widetilde{X}_{U''}$ is etale.
Let $\gamma$ be the composition $Z \rightarrow \widetilde{Y} \rightarrow Y$, and let $p_1,p_2:Z \times_X Z \rightarrow Z$ be the two projections. Since $(\gamma \circ p_1){|}_\eta=(\gamma \circ p_2){|}_\eta$, the faithfulness implies that there is an open subset $U''' \subset U''$ such that $(\gamma \circ p_1){|}_{U'''}=(\gamma \circ p_2){|}_{U'''}$. By faithfully flat descent, there is a morphism $\psi:X_{U'''} \rightarrow Y_{U'''}$ such that the composition $Z\to X\overset{\psi}{\to} Y$ is equal to $\gamma|_{U'''}$. Since $Z_\eta\to X_\eta$ is an epimorphism, this implies that $\psi{|}_\eta={\varphi}$. Essentially surjective: : We divide the proof into the following steps: 1. We prove that if $\mathcal{X}$ is a separated scheme over $\eta$, then there is an open $U \subset S$ and a scheme $X$ over $U$ such that $X_\eta=\mathcal{X}$:\ Let $\widetilde{\mathcal{X}} \rightarrow \mathcal{X}$ be a Zariski cover by an affine scheme, and let $\mathcal{R}:=\widetilde{\mathcal{X}}\times_{\mathcal{X}}\widetilde{\mathcal{X}}$. Note that $\mathcal{R}$ is an affine scheme, the two projections $\mathcal{R} \rightarrow \widetilde{\mathcal{X}}$ are Zariski covers, and the embedding $\mathcal{R} \rightarrow \widetilde{\mathcal{X}} \times \widetilde{\mathcal{X}}$ makes $\mathcal{R}$ into an equivalence relation. By Lemmas \[lem:affine.case\] and \[lem:generic.cover\], there is an open set $U \subset S$, $U$-schemes $R,\widetilde{X}$, and a monomorphism $R \rightarrow \widetilde{X}\times \widetilde{X}$ such that the two projections $R \rightarrow \widetilde{X}$ are Zariski covers, $R$ is an equivalence relation on $\widetilde{X}$, and the map $R_\eta \rightarrow \widetilde{X}_\eta\times\widetilde{X}_\eta$ is isomorphic to $\mathcal{R} \rightarrow \widetilde{\mathcal{X}}\times \widetilde{\mathcal{X}}$. Let $X$ be the gluing of $\widetilde{X}$ along $R$. Then $X$ is a scheme and $X_\eta$ is isomorphic to $\mathcal{X}$. 2.
We prove that if $\mathcal{X}$ is an arbitrary scheme over $\eta$, then there is an open $U \subset S$ and a scheme $X$ over $U$ such that $X_\eta=\mathcal{X}$:\ The proof is similar. The only difference is that, in this case, $\mathcal{R}$ is separated but not necessarily affine. Instead of using Lemma \[lem:affine.case\], we use the previous step and the full faithfulness. 3. We prove that if $\mathcal{X}$ is an algebraic space over $\eta$, then there is an open $U \subset S$ and an algebraic space $X$ over $U$ such that $X_\eta=\mathcal{X}$:\ The proof is similar to the previous step, replacing Zariski covers by etale covers.

Group algebraic spaces
----------------------

\[prop:group.algebraic.space\] Let $X$ be a scheme and $G$ be a group algebraic space over it. Then there exists a stratification $X=\bigcup X_i$ such that $G|_{X^{{red}}_i}$ is a group scheme. Here $X^{{red}}_i$ is the reduction of $X_i$.

### Stratification of QCA stacks

\[prop:strat.qca.1\] Let $X$ be a scheme and $G$ be a group scheme over it. Assume that ${\mathfrak{X}}:=(BG)_{fppf}$ is a QCA stack. Then there exists a stratification $X=\bigcup X_i$ such that $G|_{X^{{red}}_i}$ can be embedded as a closed subgroup in $GL_n\times_{\operatorname{Spec}{\mathbb{Z}}} X^{{red}}_i$ for some $n$. Without loss of generality, we may assume that $X$ is reduced, irreducible and affine. By Noetherian induction, it is enough to prove that there exist an open $U \subset X$ and an integer $n$ such that $G|_{U}$ can be embedded as a closed subgroup in $GL_n\times_{\operatorname{Spec}{\mathbb{Z}}} U$. Let $\eta$ be the generic point of $X$. Denote the composition $\eta \to X \to {\mathfrak{X}}$ by $x$. By definition, the group $Aut(x)$ is $G_\eta$. Therefore $G_\eta$ is linear. Thus we have a closed embedding morphism $G_{\eta}\to GL_n\times_{\operatorname{Spec}{\mathbb{Z}}} \eta$. By Proposition \[prop:gen.pt\], we can extend this embedding to an embedding $\phi:G_V \to GL_n\times_{\operatorname{Spec}{\mathbb{Z}}} V$ for some affine open $V \subset X$, as required.
### Quotients of group schemes

Let $H \subset G$ be an embedding of group schemes and let $Y$ be a scheme. A morphism of schemes $G \to Y$ is a quotient iff the map $G \times H \to G\times_Y G$ given by $(g,h)\mapsto (g,gh)$ is an isomorphism. \[lem:base.change.quotient\] Let $X$ be a scheme, let $H \subset G$ be group schemes over $X$ such that the quotient $p:G \rightarrow G/H$ exists, and let $f:H \rightarrow G$ be an $H$-equivariant map. Denote the structure map of $H$ by $s_H:H \rightarrow X$ and the unit section of $H$ by $1_H:X\rightarrow H$. Let $\nu := p \circ f \circ 1_H:X \rightarrow G/H$. Then the diagram $$\xymatrix{H \ar@{->}[r]^f \ar@{->}[d]^{s_H} & G \ar@{->}[d]^p\\ X \ar@{->}[r]^{\nu}& G/H}$$ is cartesian. Without loss of generality we can assume that $f$ is the group embedding. We get the desired square by composing the following two: $$\xymatrix{G\times_X H \ar@{->}[rr]^{(g,h)\mapsto gh} \ar@{->}[d]^{(g,h)\mapsto g} && G \ar@{->}[d]^p\\ G \ar@{->}[rr]^{p}&& G/H}$$ and $$\xymatrix{H \ar@{->}[rr]^{h\mapsto (1,h)} \ar@{->}[d]^{s_H} && G\times_X H \ar@{->}[d]^{(g,h)\mapsto g}\\ X \ar@{->}[rr]^{1_G}&& G}$$ \[prop:strat.qca\] Let $X$ be a scheme, let $G$ be a smooth and affine group algebraic space over $X$, and let $H \subset G$ be a subgroup algebraic space over $X$. Then there exists a stratification $X=\bigcup X_i$ such that $G|_{\bar X_i}$ and $H|_{\bar X_i}$ are group schemes and the quotients $G|_{\bar X_i}/H|_{\bar X_i}$ exist. Without loss of generality, we may assume that $X$ is reduced, irreducible and affine. Using Proposition \[prop:group.algebraic.space\], we can assume that $G$ is a group scheme. By Noetherian induction, it is enough to prove that there exists an open $U \subset X$ such that the quotient $G|_{U}/H|_{U}$ exists. Let $\eta$ be the generic point of $X$. By [@Con Corollary 1.2], the quotient $Y:=G|_{\eta}/H|_{\eta}$ exists. The assertion now follows from Proposition \[prop:gen.pt\]. Behrend, Kai A.
[*Derived $\ell$-adic categories for algebraic stacks.*]{} Mem. Amer. Math. Soc. 163 (2003), no. 774, viii+93 pp. Conrad, B. [*Notes from a course on algebraic groups*]{} available at <http://math.stanford.edu/~conrad/252Page/handouts/qtformalism.pdf> Demazure, Michel; Gabriel, Pierre [*Groupes algébriques. Tome I: Géométrie algébrique, généralités, groupes commutatifs.*]{} North-Holland Publishing Co., Amsterdam, 1970. xxvi+700 pp. Drinfeld, Vladimir; Gaitsgory, Dennis [*On some finiteness questions for algebraic stacks.*]{} Geom. Funct. Anal. 23 (2013), no. 1, 149–294. Grothendieck, A. [*Éléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas IV.*]{} Inst. Hautes Études Sci. Publ. Math. No. 32 (1967), 361 pp. Hrushovski, Ehud; Martin, Ben; Rideau, Silvain [*Definable equivalence relations and zeta functions of groups.*]{} With an appendix by Raf Cluckers. J. Eur. Math. Soc. (JEMS) 20 (2018), no. 10, 2467–2537. Laumon, Gérard; Moret-Bailly, Laurent [*Champs algébriques.*]{} Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. 39. Springer-Verlag, Berlin, 2000. xii+208 pp. Olsson, Martin C. [*Hom-stacks and restriction of scalars.*]{} Duke Math. J. 134 (2006), no. 1, 139–164. Sakellaridis, Yiannis [*The Schwartz space of a smooth semi-algebraic stack.*]{} Selecta Math. (N.S.) 22 (2016), no. 4, 2401–2490. <https://stacks.math.columbia.edu/> [^1]: The definition in [@Sak §2.3] is slightly more restrictive, though we believe the result is true without the restriction.
---
abstract: 'Even though the recently discovered high-magnification event MOA-2010-BLG-311 had complete coverage over the peak, a confident planet detection was not possible due to extremely weak central perturbations (fractional deviations of $\lesssim 2\%$). For confident detection of planets in extremely weak central perturbation (EWCP) events, it is necessary to have both higher cadence monitoring and higher photometric accuracy than those of current follow-up observation systems. The next-generation ground-based observation project, KMTNet (Korea Microlensing Telescope Network), satisfies these conditions. We estimate the probability of occurrence of EWCP events with fractional deviations of $\leq 2\%$ in high-magnification events and the efficiency of detecting planets in the EWCP events using the KMTNet. From this study, we find that EWCP events occur with a frequency of $> 50\%$ in the case of $\lesssim 100\ M_{\rm E}$ planets with separations of $0.2\ {\rm AU} \lesssim d \lesssim 20\ {\rm AU}$. We find that for main-sequence and subgiant source stars, $\gtrsim 1\ M_{\rm E}$ planets in EWCP events with deviations $\leq 2\%$ can be detected with an efficiency of $> 50\%$ in a certain separation range that changes with the planet mass. However, it is difficult to detect planets in EWCP events of bright stars like giant stars, because KMTNet, with its constant exposure time, is easily saturated around the peak of such events. EWCP events are caused by close, intermediate, and wide planetary systems with low-mass planets and by close and wide planetary systems with massive planets. Therefore, we expect that a much greater variety of planetary systems than those already detected, which are mostly intermediate planetary systems regardless of the planet mass, will be detected in the near future.'
author:
- 'Sun-Ju Chung, Chung-Uk Lee, and Jae-Rim Koo'
title: 'Detection of planets in extremely weak central perturbation microlensing events via next-generation ground-based surveys'
---

INTRODUCTION
============

High-magnification events for which the background source star passes close to the host star are very sensitive to the detection of planets [@griest98]. This is because the central caustic induced by a planet is formed near the host star and thus produces central perturbations near the peak of the lensing light curve. @rattenbury02 studied planet detectability in high-magnification events and showed that Earth-mass planets may be detected with 1 m class telescopes. Hence, current microlensing follow-up observations have focused on high-magnification events and have resulted so far in the detection of 14 planets out of 25 extrasolar planets detected by microlensing (Udalski et al. 2005; Gould et al. 2006; Gaudi et al. 2008; Bennett et al. 2008; Dong et al. 2009; Janczak et al. 2010; Miyake et al. 2011; Bachelet et al. 2012; Yee et al. 2012; Han et al. 2013; Choi et al. 2013; Suzuki et al. 2013). High-magnification events are sensitive to the diameter of the source star because the source star passes close to the central caustic. If the source diameter is bigger than the central caustic and thus the finite-source effect is strong, central perturbations induced by the central caustic are greatly washed out, thus making it difficult to discern the existence of planets. Events MOA-2007-BLG-400 (Dong et al. 2009), MOA-2008-BLG-310 (Janczak et al. 2010), and MOA-2010-BLG-311 (Yee et al. 2013) were high-magnification events with strong finite-source effects. All three events had complete coverage over the peak, but a secure planet detection occurred in only two events, MOA-2007-BLG-400 and MOA-2008-BLG-310.
Even though the event MOA-2010-BLG-311 had complete coverage over the peak, central perturbations around the peak were extremely weak, with a fractional deviation of $\lesssim 2\%$, so that the best-fit planetary lens model was preferred over the single lens model by only $\Delta \chi^2 \sim 80$. @yee13 reported that the planetary signal of $\Delta \chi^2 \sim 80$ is below the detection threshold range of $\Delta \chi^2 = 350-700$ suggested by @gould10, and thus it is difficult to claim a secure detection of the planet. This suggests that extremely weak central perturbations (hereafter EWCPs) with deviations $\lesssim 2\%$ produce planetary signals below the detection threshold and hinder a confident detection of planets. Current follow-up observations intensively monitor high-magnification events, and their photometric error reaches $\sim 1\%$ at the peak, but this is not enough for a confident detection of planets in EWCP events with deviations $\lesssim 2\%$, as shown by the event MOA-2010-BLG-311. For confident planet detection in EWCP events, it is necessary to have both higher cadence monitoring and higher photometric accuracy than those of current follow-up observation systems. The next-generation ground-based observation project, KMTNet (Korea Microlensing Telescope Network), satisfies these conditions. The KMTNet will use a 1.6 m wide-field telescope at each of three southern sites, Chile, South Africa, and Australia, to perform 24 hr continuous observations toward the Galactic bulge [@kim10]. Each telescope has a $18 {\rm K} \times 18 {\rm K}$ CCD camera that covers a field of view (FOV) of $2\arcdeg \times 2\arcdeg$, and it will observe four fields with a total FOV of $4\arcdeg \times 4\arcdeg$, in which each field will be monitored with an exposure time of about 2 minutes and a detector readout time of about 30 seconds, giving a cadence of 10 minutes (Kim et al. 2010; Atwood et al. 2012).
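The quoted 10-minute cadence follows directly from the stated exposure, readout, and number of fields; a minimal back-of-envelope sketch (the numbers are the ones quoted in the text, rounded to exact values):

```python
# Sanity check of the KMTNet cadence: four fields observed in rotation,
# each with a ~2-minute exposure and a ~30-second readout.
def cadence_minutes(exposure_s=120.0, readout_s=30.0, n_fields=4):
    """Time between successive visits to the same field, in minutes."""
    return n_fields * (exposure_s + readout_s) / 60.0

print(cadence_minutes())  # -> 10.0
```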
Hence, KMTNet has high potential for the detection of planetary signals in EWCP events. Here, we study how well planets in EWCP high-magnification events can be detected by the KMTNet. This paper is organized as follows. In § 2, we briefly describe the properties of the central caustic induced by a planet. In § 3, we estimate the probability of occurrence of EWCP events with deviations $\leq 2\%$ in high-magnification events and the efficiency of detecting planets in the EWCP events using KMTNet. In § 4, we discuss the observational limitations and other potential studies of the KMTNet. We summarize the results and conclude in § 5.

CENTRAL CAUSTIC
===============

In planetary lensing composed of a host star and a planet, the signal of the planet is a short-duration perturbation on the standard single lensing light curve of the host star. The perturbation is caused by the central and planetary caustics, which are typically separated from each other. The central caustic is always formed close to the host star, and thus the perturbation by the central caustic occurs near the peak of the lensing light curve, while the planetary caustic is formed away from the host star, and thus the perturbation by the planetary caustic can occur at any part of the light curve. Central perturbations caused by the central caustic generally have the property of the $s \leftrightarrow 1/s$ degeneracy (Griest & Safizadeh 1998; Dominik 1999). The degeneracy arises due to the similarity in the size and shape of the central caustics for $s$ and $1/s$. The duration of the central perturbations is proportional to the size of the central caustic. The size of the central caustic, defined as the separation of the cusps on the star-planet axis [@chung05], is expressed by $$\Delta \xi \sim {4q \over {(s - 1/s)^2}},$$ where $q$ is the planet/star mass ratio. According to Equation (1), the size of the central caustic is $\propto s^2$ for $s \ll 1$ and is $\propto s^{-2}$ for $s \gg 1$.
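A short numerical sketch of Equation (1) makes the $s \leftrightarrow 1/s$ degeneracy and the asymptotic behavior explicit (the sample values of $s$ and $q$ below are illustrative only, not drawn from the paper's simulations):

```python
# Equation (1): Delta_xi ~ 4q / (s - 1/s)^2, the central caustic size in
# units of the Einstein radius; the approximation breaks down near s = 1.
def central_caustic_size(s, q):
    return 4.0 * q / (s - 1.0 / s) ** 2

q = 1e-4
# the s <-> 1/s degeneracy: identical size for s and 1/s
print(central_caustic_size(0.5, q) == central_caustic_size(2.0, q))  # -> True

# asymptotics: proportional to s^2 for s << 1, so halving s quarters the size
print(round(central_caustic_size(0.01, q) / central_caustic_size(0.02, q), 3))  # -> 0.25
```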
The finite source effect for high-magnification events becomes important because the source star passes close to the central caustic, as mentioned before. The magnification of a finite source corresponds to the magnification averaged over the source surface, i.e., $$A = {{\int^{\rho_\star}_{0} I(r)A_{p}(|\bold{r} - \bold{r}_{L}|)rdr}\over{\int^{\rho_\star}_{0}I(r)rdr}},$$ where $A_p$ is the point source magnification, $I(r)$ represents the source brightness profile, $\bold{r}$ is the vector to a position on the source star surface with respect to the center of the source star, $\bold{r}_L$ is the displacement vector of the source center with respect to the lens, and $\rho_\star$ is the source radius normalized to the Einstein radius of the lens system, ${\theta_{\rm E}}$, which is given by $${\theta_{\rm E}}= \sqrt{{ {4GM \over {c^2}} \left({1\over D_L} - {1\over D_S}\right)}},$$ where $D_{\rm L}$ and $D_{\rm S}$ are the distances to the lens and the source from the observer, respectively.

DETECTION EFFICIENCY
====================

*Probability*
-------------

Based on the result of the high-magnification event MOA-2010-BLG-311 (Yee et al. 2013) mentioned in Section 1, we choose the threshold of EWCPs to be a fractional deviation of $\delta = 2\%$, which is defined as $$\delta = {A - A_{0} \over {A_0}}\ ,$$ where $A$ and $A_0$ are the lensing magnifications with and without a planet, respectively. To investigate the frequency of EWCP events in high-magnification events of $A_{\rm max} \geq 100$, we estimate the probability of occurrence of EWCP events with $\delta \leq 2\%$. In consideration of typical Galactic bulge events, we assume that the mass and distance of the host star lens are $M_{\rm L} = 0.3\ M_{\odot}$ and $D_{\rm L} = 6\ \rm{kpc}$, and the distance of the source star is $D_{\rm S} = 8\ \rm{kpc}$. Then, the angular and physical Einstein radii of the lens system are ${\theta_{\rm E}}= 0.32\ \rm{mas}$ and $r_{\rm E} = 1.9\ \rm AU$.
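These values can be reproduced directly from the formula for ${\theta_{\rm E}}$ above; the sketch below uses standard SI values of the physical constants, which are not part of the paper:

```python
import math

# Check of the quoted Einstein radii (theta_E ~ 0.32 mas, r_E ~ 1.9 AU)
# for M_L = 0.3 M_sun, D_L = 6 kpc, D_S = 8 kpc.
G = 6.674e-11                     # m^3 kg^-1 s^-2
C = 2.998e8                       # m s^-1
M_SUN = 1.989e30                  # kg
KPC = 3.0857e19                   # m
AU = 1.496e11                     # m
MAS = math.radians(1.0 / 3600e3)  # one milliarcsecond in radians

def einstein_radius(M_kg, D_L_m, D_S_m):
    """Angular Einstein radius in radians: sqrt(4GM/c^2 * (1/D_L - 1/D_S))."""
    return math.sqrt(4.0 * G * M_kg / C**2 * (1.0 / D_L_m - 1.0 / D_S_m))

theta_E = einstein_radius(0.3 * M_SUN, 6.0 * KPC, 8.0 * KPC)
r_E = theta_E * 6.0 * KPC         # physical Einstein radius at the lens distance
print(round(theta_E / MAS, 2), round(r_E / AU, 1))  # -> 0.32 1.9
```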
We adopt three different source stars including main-sequence, subgiant, and giant stars, which have radii of $1.0\ R_\odot$, $2.0\ R_\odot$, and $10.0\ R_\odot$, respectively. The radii of the three source stars normalized to the Einstein radius are $\rho_{\star} = 0.0018, 0.0036,$ and $0.018$. For high-magnification events, the effect of limb darkening of the finite source surface is not negligible, so we adopt a brightness profile for the source star of the form $${I(\theta)\over{I_0}} = {1 - \Gamma \left(1-{3\over{2}}{\rm cos}\theta \right) - \Lambda \left(1-{5\over{4}}{\rm cos}^{1/2}\theta \right) },$$ where $\Gamma$ and $\Lambda$ are the linear and square-root coefficients and $\theta$ is the angle between the normal to the surface of the source star and the line of sight [@an02]. We assume that the coefficients ($\Gamma$, $\Lambda$) of main-sequence, subgiant, and giant stars are (0.08, 0.52), (0.11, 0.51), and (0.21, 0.46), respectively. Figure 1 shows the probability of occurrence of EWCP events with $\delta \leq 2\%$ as a function of the projected star-planet separation in units of ${\theta_{\rm E}}$, $s$, and planet/star mass ratio, $q$. The physical separation, $d$, and planet mass in units of Earth mass, $m_{\rm p}$, are also presented in the figure, where they are determined by $d = r_{\rm E}s$ and $m_{\rm p} = qM_{\rm L}$, respectively. In each panel, different shades of grey represent the areas with the probabilities of $\geq 10\%$, $\geq 40\%$, $\geq 80\%$, and $100\%$, respectively. As one may expect, the probability increases as the mass ratio decreases and the separation decreases for $s < 1$ and increases for $s > 1$. From the figure, we find that for $\lesssim 100\ M_{\rm E}$ planets with separations of $0.2\ {\rm AU} \lesssim d \lesssim 20\ {\rm AU}$, EWCP events with $\delta \leq 2\%$ in high-magnification events of $A_{\rm max} \geq 100$ occur with a frequency of $> 50\%$. 
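The normalized source radii quoted above follow from $\rho_\star = (R_\star/D_{\rm S})/{\theta_{\rm E}}$, and the $3/2$ and $5/4$ factors in the limb-darkening profile are exactly what makes it average to unity over the stellar disk. Both facts can be checked numerically; the sketch below uses standard SI constants and the $0.32$ mas Einstein radius derived in the text:

```python
import math

# Normalized source radii rho_* = (R_*/D_S) / theta_E for the three sources.
R_SUN = 6.957e8          # m
KPC = 3.0857e19          # m
D_S = 8.0 * KPC          # source distance
THETA_E = 1.547e-9       # rad (~0.32 mas, as derived in the text)

rhos = [(r * R_SUN / D_S) / THETA_E for r in (1.0, 2.0, 10.0)]
print([round(r, 4) for r in rhos])  # close to 0.0018, 0.0036, 0.018

def disk_average(gamma, lam, n=100000):
    """Disk-averaged I(theta)/I_0, integrating over fractional radius r = sin(theta)."""
    total = 0.0
    for i in range(n):
        r = (i + 0.5) / n                  # midpoint rule in r
        cos_t = math.sqrt(1.0 - r * r)
        profile = (1.0 - gamma * (1.0 - 1.5 * cos_t)
                   - lam * (1.0 - 1.25 * math.sqrt(cos_t)))
        total += profile * 2.0 * r / n     # area element 2 r dr
    return total

# giant-star coefficients (Gamma, Lambda) = (0.21, 0.46): average is unity
print(round(disk_average(0.21, 0.46), 3))  # -> 1.0
```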
This implies that high-magnification events of $A_{\rm max} \geq 100$ are mostly EWCP events of $\delta \leq 2\%$, and thus it is important to resolve the EWCP events for the detection of many different planetary systems. The bump at $s = 1.0$ in Figure 1 occurs because regions with small fractional deviations around the center of the resonant caustic are rather widely formed [@chung09]. The vertical line of the $100\%$ probability in the figure represents the boundary of the lensing zone of $0.6 \lesssim s \lesssim 1.6$, where the planetary caustic is located within the Einstein ring. Because of the planetary caustic, the probability of occurrence of $\delta \leq 2\%$ events cannot reach $100\%$ within the lensing zone, as shown in Figure 1. However, if the finite source effect is strong, the probability can reach $100\%$ at $s \sim 1$, since there exist regions with small fractional deviations within the resonant caustic, as mentioned before (see the bottom panel of Figure 1). Table 1 shows in detail the probability for the three source stars presented in Figure 1.

*Detectability*
----------------

To estimate the efficiency of detecting planets in EWCP events of $\delta \leq 2\%$, we compute the detectability, defined as the ratio of the fractional deviation ($\delta$) to the photometric accuracy ($\sigma_{\rm ph}$), i.e., $$D = {|\delta|\over{\sigma_{\rm ph}}}, \ \ \ \sigma_{\rm ph} = {{\sqrt{AF_{\rm S} + F_{B}}}\over{(A - 1)F_{\rm S}}},$$ where $F_{\rm S}$ and $F_{\rm B}$ represent the baseline flux of the lensed source star and the blended background flux, respectively. We assume that the I-band absolute magnitudes of the main-sequence, subgiant, and giant stars are $M_I = 3.8, 3.0,$ and $0.0$. We assume that the extinction toward the Galactic bulge is $A_I = 1.0$ and the blended flux $F_{\rm B}$ is equivalent to the flux of a background star with apparent magnitude $I = 20.0$.
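Under these assumptions, the apparent magnitudes of the three source stars follow from the distance modulus for $D_{\rm S} = 8$ kpc plus the extinction; a sketch of the arithmetic only, not of the paper's simulations:

```python
import math

# m_I = M_I + 5 log10(d / 10 pc) + A_I, with d = 8 kpc and A_I = 1.0.
def apparent_mag(M_I, d_pc=8000.0, A_I=1.0):
    return M_I + 5.0 * math.log10(d_pc / 10.0) + A_I

for M_I in (3.8, 3.0, 0.0):             # main sequence, subgiant, giant
    print(round(apparent_mag(M_I), 1))  # -> 19.3, 18.5, 15.5
```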
The apparent magnitudes of the three source stars affected by the assumed extinction are $I = 19.3, 18.5,$ and $15.5$, respectively. Considering typical Galactic bulge events, we also assume that the Einstein timescale is ${t_{\rm E}}= 20$ days. Based on the specification of KMTNet systems, we assume that the instrument can detect 31 photons $s^{-1}$ for an $I = 20.0$ star, that the monitoring frequency is once per 10 minutes with an exposure time of 2 minutes, and that the lower limit of the photometric accuracy is $0.001$, which corresponds to the value at $I = 13.8$ mag (Atwood et al. 2012). We assume that the planetary signal is detectable if $D \geq 3$. We also assume that the planet is detected only if the planetary signal with $D \geq 3$ is detected at least five times during the event. The five points with $D \geq 3$ do not need to be consecutive, because events with this requirement have $\Delta \chi^{2} \gtrsim 350$, which satisfies the detection threshold for high-magnification events [@gould10]. We note that since the single lensing magnification $A_0$ is unknown in actual observations, it is determined from the best-fit single lens model to the observed planetary lensing event. The lensing parameters of the best-fit single lens event are obtained by a $\chi^2$ minimization method [@chung06]. Figure 2 shows the planet detection efficiency of EWCP events with $\delta \leq 2\%$ for three different source stars, as a function of $s$ and $q$. From the figure, we find the following results. 1. EWCP events of $q \gtrsim 10^{-5}$ generally have two separations with the maximum detection efficiency due to the $s \leftrightarrow 1/s$ degeneracy, and the efficiency decreases as $s$ becomes smaller and/or larger than the maximum-efficiency separation, which changes with the planet mass.
This is because the photometric accuracy of the observation systems is limited, so the efficiency for each $q$ cannot increase indefinitely as $s$ decreases below 1 or increases above 1. This result implies that planet detection in EWCP events occurs only within a limited separation range, which depends on the planet mass. 2. For main-sequence and subgiant stars, $\gtrsim 1\ M_{\rm E}$ planets in EWCP events of $\delta \leq 2\%$ can be detected with $> 50\%$ efficiency within a certain range that changes with the planet mass. The range for the two stars is presented in Tables 2 and 3. EWCP events are caused by close, intermediate, and wide planetary systems with low-mass planets and by close and wide planetary systems with massive planets. These planetary systems are quite different from those already detected, which are mostly intermediate planetary systems with $s \sim 1$ regardless of the planet mass. Therefore, the above results imply that a much greater variety of planetary systems can be detected in the near future. We also compare the estimated detection efficiency with the probability over the same range: the efficiency relative to the probability for $\gtrsim 1\ M_{\rm E}$ planets reaches $\sim 70\%$, $\sim 80\%$, and $\sim 80\%$ for main-sequence, subgiant, and giant stars, respectively. This means that $\gtrsim 1\ M_{\rm E}$ planets located within this range can almost always be detected by KMTNet. The efficiency compared with the probability, i.e., the ratio of the efficiency to the probability, is presented in Tables 2, 3, and 4. In Figure 2, the white solid and dashed lines represent the sets of points where the probability of occurrence of events with $\delta \leq 1\%$ and $\delta \leq 0.5\%$, respectively, is $80\%$. From the figure, we find that KMTNet can readily resolve EWCP events down to $\delta = 0.5\%$. The planet detection efficiency is sensitive to the Einstein timescale of a lensing event.
This is because, for a fixed monitoring frequency, a shorter timescale reduces the chance of detecting the planetary signal during the event. We thus test how the detection efficiency changes with the Einstein timescale. Figure 3 shows the detection efficiency as a function of the Einstein timescale for a planetary system with $s = 0.5$ and $q = 10^{-4}$. As shown in the figure, the efficiency increases steeply with ${t_{\rm E}}$ for ${t_{\rm E}} < 10$ days, but becomes nearly constant for ${t_{\rm E}} \gtrsim 10$ days. This means that the estimated efficiency for the three source stars is valid for EWCP events with ${t_{\rm E}} \gtrsim 10$ days, whereas for events with ${t_{\rm E}} < 10$ days the efficiency decreases considerably with decreasing ${t_{\rm E}}$.

DISCUSSION
==========

Most high-magnification events of $A_{\rm max} \geq 100$ are EWCP events of $\delta \leq 2\%$, and KMTNet is capable of resolving the EWCP events, as shown in Figures 1 and 2. However, KMTNet is more favorable for detecting events caused by the planetary caustic than those caused by the central caustic (high-magnification events). This is because KMTNet plans 24-hour continuous observation with a constant exposure time of 2 minutes, applied to stars of $13 < I < 20$ mag [@kim10]. Hence, if a source star is magnified to $I < 13$ mag, KMTNet saturates easily around the peak of the high-magnification event with a 2-minute exposure time. This means that it is difficult to detect planets in EWCP events of bright stars such as giants using KMTNet. The estimated detection efficiency for main-sequence and subgiant stars also includes cases where the source stars are magnified to $I < 13$ mag, and thus the efficiency would decrease in real observations.
However, since only high-magnification events with $A_{\rm max} \geq 100$ are considered, KMTNet has a chance of detecting EWCP events of fainter stars of $I \gtrsim 20$ mag. A test for the $I \gtrsim 20$ stars shows that KMTNet can resolve EWCP events of stars as faint as $I \sim 22.0$. Figure 4 shows an example light curve of an $I = 21.9$ star highly magnified by a planetary lens system with $s = 2.3$ and $q = 2.4\times 10^{-4}$. The event has a planetary signal of $\delta \lesssim 2\%$, and this signal can be detected by KMTNet according to the assumed detection conditions. Moreover, if follow-up spectroscopic observations were carried out while observing high-magnification events with KMTNet, chemical information for many faint bulge stars could be obtained, which would be very helpful in studying the origin of the Galactic bulge. @yee13 noted that, based on the detected planetary lensing events, the detection threshold for central caustic events seems higher than that for planetary caustic events. To confirm whether this is the case, many more events caused by both the central and the planetary caustics need to be detected. In particular, more planetary caustic events are needed, because only six of 25 microlensing planets have been detected in planetary caustic events (Beaulieu et al. 2006; Sumi et al. 2010; Muraki et al. 2011; Bennett et al. 2012; Poleski et al. 2013; Furusawa et al. 2013; Tsapras et al. 2013). Fortunately, a vast number of planetary caustic events will be detected under the observation strategy of KMTNet. In addition, Figure 2 shows the total range of central caustic events of $A_{\rm max} \geq 100$ that can be detected by KMTNet. The region includes both EWCP events (grey region) and non-EWCP events (the white region marked “A”), the latter representing events of $\delta > 2\%$.
KMTNet can readily detect events within the white region, because it is much easier to detect events of $\delta > 2\%$ than those of $\delta \leq 2\%$. The very wide extent of the grey and white regions implies that a large number of central caustic events (i.e., high-magnification events) will also be detected by KMTNet. Therefore, by using KMTNet, one can find out whether the detection threshold for high-magnification events is higher than that for planetary caustic events, and determine more accurate detection thresholds for both kinds of events.

CONCLUSION
==========

We have estimated the probability of occurrence of EWCP events of $\delta \leq 2\%$ among high-magnification events of $A_{\rm max} \geq 100$, and the efficiency of detecting planets in the EWCP events using the next-generation ground-based observation project KMTNet. From this study, we found that EWCP events occur with a frequency of $> 50\%$ for $\lesssim 100\ M_{\rm E}$ planets with separations of $0.2\ {\rm AU} \lesssim d \lesssim 20\ {\rm AU}$. This implies that most high-magnification events of $A_{\rm max} \geq 100$ are EWCP events of $\delta \leq 2\%$, and thus it is important to resolve the EWCP events for the detection of many different planetary systems. We found that for main-sequence and subgiant stars, $\gtrsim 1\ M_{\rm E}$ planets in EWCP events of $\delta \leq 2\%$ can be detected with $> 50\%$ efficiency within a certain range that varies with the planet mass. However, it is difficult to detect planets in EWCP events of bright stars such as giants, because KMTNet saturates easily around the peak of the EWCP events with its constant exposure time. EWCP events are caused by close, intermediate, and wide planetary systems with low-mass planets and by close and wide planetary systems with massive planets.
Therefore, we expect that a much greater variety of planetary systems than those already detected, which are mostly intermediate planetary systems with $s \sim 1$ regardless of the planet mass, will be detected in the near future.

Table 1. Probability (in %) of the occurrence of EWCP events of $\delta \leq 2\%$ for main-sequence, subgiant, and giant source stars.

| Planet mass | Main sequence | Subgiant | Giant |
|---|---|---|---|
| 300.0 $M_{\rm E}$ | 35.9 | 40.1 | 61.3 |
| 100.0 $M_{\rm E}$ | 51.6 | 56.2 | 78.9 |
| 10.0 $M_{\rm E}$ | 79.7 | 85.7 | 96.2 |
| 5.0 $M_{\rm E}$ | 85.7 | 91.5 | 96.5 |
| 1.0 $M_{\rm E}$ | 95.3 | 96.9 | 97.0 |
| 0.5 $M_{\rm E}$ | 97.1 | 97.5 | 97.2 |

Table 2. Separation ranges (in AU) with $> 50\%$ detection efficiency for main-sequence source stars, together with the detection efficiency, the probability of occurrence, and the ratio of the efficiency to the probability (all in %).

| Planet mass | Separation range (AU) | Efficiency | Probability | Ratio |
|---|---|---|---|---|
| 300.0 $M_{\rm E}$ | $0.2 \lesssim d \lesssim 0.6$ | 53.9 | 75.3 | 71.6 |
| | $6.6 \lesssim d \lesssim 17.8$ | 54.4 | 75.4 | 72.1 |
| 100.0 $M_{\rm E}$ | $0.3 \lesssim d \lesssim 0.8$ | 54.7 | 76.7 | 71.3 |
| | $4.6 \lesssim d \lesssim 12.4$ | 54.7 | 75.8 | 72.2 |
| 10.0 $M_{\rm E}$ | $0.6 \lesssim d \lesssim 1.6$ | 50.9 | 68.7 | 74.1 |
| | $2.3 \lesssim d \lesssim 5.7$ | 50.5 | 68.5 | 73.7 |
| 5.0 $M_{\rm E}$ | $0.8 \lesssim d \lesssim 1.7$ | 51.7 | 70.0 | 73.9 |
| | $2.1 \lesssim d \lesssim 4.6$ | 51.3 | 69.5 | 73.8 |
| 1.0 $M_{\rm E}$ | $1.3 \lesssim d \lesssim 2.8$ | 58.0 | 75.7 | 76.6 |
| 0.5 $M_{\rm E}$ | $1.5 \lesssim d \lesssim 2.4$ | 57.4 | 85.0 | 67.5 |

Table 3. Same as Table 2, but for subgiant source stars.

| Planet mass | Separation range (AU) | Efficiency | Probability | Ratio |
|---|---|---|---|---|
| 300.0 $M_{\rm E}$ | $0.2 \lesssim d \lesssim 0.6$ | 64.3 | 79.7 | 80.7 |
| | $6.4 \lesssim d \lesssim 15.5$ | 64.6 | 80.1 | 80.6 |
| 100.0 $M_{\rm E}$ | $0.3 \lesssim d \lesssim 0.8$ | 65.8 | 80.9 | 81.3 |
| | $4.5 \lesssim d \lesssim 10.7$ | 66.0 | 80.5 | 82.0 |
| 10.0 $M_{\rm E}$ | $0.7 \lesssim d \lesssim 1.6$ | 64.8 | 81.2 | 79.8 |
| | $2.3 \lesssim d \lesssim 4.9$ | 64.9 | 81.7 | 79.4 |
| 5.0 $M_{\rm E}$ | $0.9 \lesssim d \lesssim 1.8$ | 62.6 | 79.1 | 79.1 |
| | $2.0 \lesssim d \lesssim 3.9$ | 62.4 | 79.0 | 79.0 |
| 1.0 $M_{\rm E}$ | $1.5 \lesssim d \lesssim 2.5$ | 55.4 | 85.6 | 64.7 |
| 0.5 $M_{\rm E}$ | $1.7 \lesssim d \lesssim 2.1$ | 39.0 | 90.1 | 43.3 |

Table 4. Same as Table 2, but for giant source stars.

| Planet mass | Separation range (AU) | Efficiency | Probability | Ratio |
|---|---|---|---|---|
| 300.0 $M_{\rm E}$ | $0.4 \lesssim d \lesssim 0.8$ | 68.2 | 87.8 | 77.7 |
| | $4.4 \lesssim d \lesssim 9.8$ | 67.0 | 89.6 | 74.8 |
| 100.0 $M_{\rm E}$ | $0.6 \lesssim d \lesssim 1.3$ | 69.9 | 86.7 | 80.6 |
| | $2.9 \lesssim d \lesssim 6.5$ | 71.0 | 89.1 | 79.7 |
| 10.0 $M_{\rm E}$ | $1.3 \lesssim d \lesssim 2.8$ | 57.6 | 81.7 | 70.5 |
| 5.0 $M_{\rm E}$ | $1.6 \lesssim d \lesssim 2.2$ | 66.9 | 83.5 | 80.1 |

Numerical simulations were performed by using a high performance computing cluster at the KASI (Korea Astronomy and Space Science Institute). This work was supported by the KASI grant 2014-1-400-06.

An, J. H., et al. 2002, , 572, 521
Atwood, B., et al. 2012, Proc. SPIE, 7021, 9
Bachelet, E., Shin, I.-G., Han, C., et al. 2012, , 754, 73
Beaulieu, J.-P., Bennett, D. P., Fouque, P., et al. 2006, Nature, 439, 437
Bennett, D. P., Bond, I. A., Udalski, A., et al. 2008, , 684, 663
Bennett, D. P., Sumi, T., Bond, I. A., et al. 2012, , 757, 119
Choi, J.-Y., Han, C., Udalski, A., et al. 2013, , 768, 129
Chung, S.-J., Han, C., Park, B.-G., et al. 2005, , 630, 535
Chung, S.-J., Kim, D., Darnley, M. J., et al. 2006, , 650, 432
Chung, S.-J. 2009, , 705, 386
Dominik, M. 1999, A&A, 349, 108
Dong, S., Bond, I. A., Gould, A., et al. 2009, , 698, 1826
Furusawa, K., Udalski, A., Sumi, T., et al. 2013, arXiv:1309.7714
Gaudi, B. S., Bennett, D. P., Udalski, A., et al. 2008, Science, 319, 927
Gould, A., Udalski, A., An, D., et al. 2006, , 644, L37
Gould, A., Dong, S., Gaudi, B. S., et al. 2010, , 720, 1073
Griest, K., & Safizadeh, N. 1998, , 500, 37
Han, C., Udalski, A., Choi, J.-Y., et al. 2013, 762, L28
Janczak, J., Fukui, A., Dong, S., et al. 2010, , 711, 731
Kim, S.-L., Park, B.-G., Lee, C.-W., et al. 2010, Proc. SPIE, 7733, 77333F
Miyake, N., Sumi, T., Dong, S., et al. 2011, , 728, 120
Muraki, Y., Han, C., Bennett, D. P., et al. 2011, , 741, 22
Poleski, R., Udalski, A., Dong, S., et al. 2013, arXiv:1307.4084
Rattenbury, N. J., Bond, I. A., Skuljan, J., et al.
2002, , 335, 159
Sumi, T., Bennett, D. P., Bond, I. A., et al. 2010, , 710, 1641
Suzuki, D., Udalski, A., Sumi, T., et al. 2013, arXiv:1311.3424
Tsapras, Y., Choi, J.-Y., Street, A., et al. 2013, arXiv:1310.2428
Udalski, A., Jaroszyński, M., Paczyński, B., et al. 2005, , 628, L109
Yee, J. C., Shvartzvald, Y., Gal-Yam, A., et al. 2012, , 755, 102
Yee, J. C., Hung, L.-W., Bond, I. A., et al. 2013, , 769, 77
--- author: - 'J.O. Sundqvist' - 'J. Puls' - 'A. Feldmeier' date: 'Received 7 July 2009 / Accepted 17 November 2009' subtitle: 'I. Resonance line formation in 2D models' title: Mass loss from inhomogeneous hot star winds --- [ The mass-loss rate is a key parameter of hot, massive stars. Small-scale inhomogeneities (clumping) in the winds of these stars are conventionally included in spectral analyses by assuming optically thin clumps, a void inter-clump medium, and a smooth velocity field. To reconcile investigations of different diagnostics (in particular, unsaturated UV resonance lines vs. $\rm H_{\alpha}$/radio emission) within such models, a highly clumped wind with very low mass-loss rates needs to be invoked, where the resonance lines seem to indicate rates an order of magnitude (or even more) lower than previously accepted values. If found to be realistic, this would challenge the radiative line-driven wind theory and have dramatic consequences for the evolution of massive stars. ]{} [ We investigate basic properties of the formation of resonance lines in small-scale inhomogeneous hot star winds with non-monotonic velocity fields.]{} [ We study inhomogeneous wind structures by means of 2D stochastic and pseudo-2D radiation-hydrodynamic wind models, constructed by assembling 1D snapshots in radially independent slices. A Monte-Carlo radiative transfer code, which treats the resonance line formation in an axially symmetric spherical wind (without resorting to the Sobolev approximation), is presented and used to produce synthetic line spectra.]{} [ The optically thin clumping limit is only valid for very weak lines. The detailed density structure, the inter-clump medium, and the non-monotonic velocity field are all important for the line formation. 
We confirm previous findings that radiation-hydrodynamic wind models reproduce observed characteristics of strong lines (e.g., the black troughs) without applying the highly supersonic ‘microturbulence’ needed in smooth models. For lines of intermediate strength, the velocity spans of the clumps are of central importance. Current radiation-hydrodynamic models predict spans that are too large to reproduce observed profiles unless a very low mass-loss rate is invoked. When lower spans are simulated in 2D stochastic models, the profile strengths are drastically reduced and become consistent with higher mass-loss rates. To simultaneously meet the constraints from strong lines, the inter-clump medium must be non-void. A first comparison to the observed Phosphorus V doublet in the O6 supergiant $\lambda$ Cep confirms that line profiles calculated from a stochastic 2D model reproduce observations with a mass-loss rate approximately ten times higher than that derived from the same lines but assuming optically thin clumping. Tentatively, this may resolve discrepancies between theoretical predictions, evolutionary constraints, and recently derived mass-loss rates, and it suggests a re-investigation of the clump structure predicted by current radiation-hydrodynamic models. ]{}

Introduction {#Introduction}
============

Mass loss through supersonic stellar winds is pivotal for the physical understanding of hot, massive stars and their surroundings. A change of only a factor of two in the mass-loss rate has a dramatic effect on massive star evolution [@Meynet94]. Winds from these stars are described by the line-driven wind theory [@Castor75; @Pauldrach86], which traditionally assumes the wind to be stationary, spherically symmetric, and homogeneous.
Despite this theory’s apparent success [e.g., @Vink00], evidence for an inhomogeneous and time-dependent wind has accumulated over the past years, as recently summarized in the proceedings of the workshop ‘Clumping in hot star winds’ [@Hamann08] and in a general review of mass loss from hot, massive stars [@Puls08]. That line-driven winds should be intrinsically unstable was already pointed out by @Lucy70, and was later confirmed first by linear stability analyses and then by direct, radiation-hydrodynamic modeling of the time-dependent wind [e.g., @Owocki84; @Owocki88; @Feldmeier95; @Dessart05], in which the line-driven (or line-deshadowing) instability causes a small-scale, inhomogeneous wind in both density and velocity. *Direct observational* evidence of a small-scale, clumped stellar wind has so far been given for only two O-stars, $\zeta$ Pup and HD93129A [@Eversberg98; @Lepine08]. Much *indirect* evidence, however, has arisen from quantitative spectroscopy, where the standard way of deriving mass-loss rates from observations nowadays is via line-blanketed, non-LTE (LTE: local thermodynamic equilibrium) model atmospheres that include a treatment of both the photosphere and the wind. Wind clumping has been included in such codes (e.g., CMFGEN [@Hillier98], PoWR [@Grafener02], FASTWIND [@Puls05]) by assuming statistically distributed *optically thin* density clumps and a void inter-clump medium, while keeping the smooth velocity law. The major result of this methodology is that any mass-loss rate derived from smooth models and density-squared diagnostics ($\rm H_{\alpha}$, infra-red and radio emission) needs to be scaled down by the square root of the clumping factor (which describes the overdensity of the clumps as compared to the mean density, see Sect. \[wind\_stoch\]). For example, @Crowther02, @Bouret03, and @Bouret05 have concluded that a reduction of ‘smooth’ mass-loss rates by factors of $3 \dots 7$ might be necessary.
Furthermore, from a combined optical/IR/radio analysis of a sample of Galactic O-giants/supergiants, @Puls06 derived upper limits on observed rates that were factors of $2 \dots 3$ lower than previous $\rm H_{\alpha}$ estimates based on a smooth wind. On the other hand, the strength of UV resonance lines (‘P Cygni lines’) in hot star winds depends linearly on the density and is therefore not believed to be directly affected by optically thin clumping. By using the Sobolev with exact integration technique (SEI; cf. @Lamers87) on the unsaturated Phosphorus V (PV) lines, @Fullerton06 derived, for a large number of Galactic O-stars, rates that were factors of $10 \dots 100$ lower than corresponding smooth $\rm H_{\alpha}$/radio values (provided PV is the dominant ion in spectral classes O4 to O7). Such large revisions would conflict with the radiative line-driven wind theory and have dramatic consequences for the evolution of, and the feedback from, massive stars [cf. @Smith06; @Hirschi08]. Indeed, a puzzling picture has emerged, and it appears necessary to ask whether the present treatment of wind clumping is sufficient. In particular, the assumptions of optically thin clumps, a void inter-clump medium, and a smooth velocity field may not be adequate to infer proper rates under certain conditions.

#### Optically thin vs. optically thick clumps.

@Oskinova07 used a porosity formalism [@Feldmeier03; @Owocki04] to scale the opacity from smooth models and investigate the impact of *optically thick* clumps on the line profiles of $\zeta$ Pup. Due to a reduction in the effective opacity, the authors were able to reproduce the PV lines without relying on a (very) low mass-loss rate, while simultaneously fitting the optically thin $\rm H_{\alpha}$ line.
This formalism, however, was criticized by @Owocki08, who argued that the original porosity concept had been developed for continuum processes, and that line transitions should rather depend on the non-monotonic velocity field seen in hydrodynamic simulations. Proposing a simplified analytic description to account for this velocity-porosity, or ‘vorosity’, he showed that this effect, too, may reduce the effective opacity. In this first paper we attempt to clarify the most important concepts by conducting a detailed investigation of the synthesis of UV resonance lines from inhomogeneous two-dimensional (2D) winds. We create both pseudo-2D, radiation-hydrodynamic wind models and 2D, stochastic wind models, and produce synthetic line profiles via Monte-Carlo radiative transfer calculations. We account for and analyze the effects of a wind clumped in *both* density and velocity as well as the effects of a non-void inter-clump medium. In particular, we focus on lines of intermediate strength, comparing the behavior of these lines with that of both optically thin lines and saturated lines. Follow-up studies will include a treatment of emission lines (e.g., $\rm H_{\alpha}$), an extension to 3D, and the development of simplified approaches to incorporate the effects into non-LTE models. In Sect. \[wind\] we describe the wind models and in Sect. \[rt\] the Monte-Carlo radiative transfer code. First results from 2D inhomogeneous winds are presented in Sect. \[2d\], and an extensive parameter study is carried out in Sect. \[ps\]. We discuss some aspects of the interpretation of these results and perform a first comparison to observations in Sect. \[Discussion\], and summarize our findings and outline future work in Sect. \[Conclusions\].

Wind models {#wind}
===========

For wind models, we use customary spherical coordinates $(r,\Theta,\Phi)$ with $r$ the radial coordinate, $\Theta$ the polar angle, and $\Phi$ the azimuthal angle.
We assume spherical symmetry in 1D models and symmetry in $\Phi$ in 2D models. In all 2D models $\Theta$ is sliced into $N_{\Theta}$ equally sized slices, giving a lateral scale of coherence (or an opening angle) of $180 / N_{\Theta}$ degrees. This 2D approximation is discussed in Sect. \[3d\]. Below we describe the model types primarily used in the present analysis; two are of stochastic nature and two are of radiation-hydrodynamic nature.

Radiation-hydrodynamic wind models {#wind_rh}
----------------------------------

We use the time-dependent, radiation-hydrodynamic (hereafter RH) wind models from @Puls93 [hereafter ‘POF’], calculated by S. Owocki, and from @Feldmeier97 [hereafter ‘FPP’]; the reader is referred to these papers for details. Here we summarize a few important aspects. POF assume a 1D, spherically symmetric outflow, and circumvent a detailed treatment of the wind energy equation by assuming an isothermal flow. Perturbations are triggered by photospheric sound waves. The wind consists of 800 radial points, extending to roughly 5 stellar radii. FPP also assume a 1D, spherically symmetric outflow, but include a treatment of the energy equation. Perturbations are triggered either by photospheric sound waves or by Langevin perturbations that mimic photospheric turbulence. The wind consists of 4000 radial points, extending to roughly 30 stellar radii. Tests have shown that the FPP winds yield similar results for both flavors of perturbations, and, for simplicity, we therefore use only the results of the turbulence model. Due to the computational cost of obtaining the line force, only initial attempts at 2D RH simulations have been carried out [@Dessart03; @Dessart05].
These authors first used a strictly radial line force, yielding a completely incoherent lateral structure due to Rayleigh-Taylor or thin-shell instabilities, and in the follow-up study used a restricted 3-ray approach to approximate the lateral line drag, yielding a larger lateral coherence but lacking quantitative results. Therefore, and because of the general dominance of the radial component in the radiative driving, we create fragmented 2D wind models from our 1D RH ones by assembling snapshots in the $\Theta$ direction, assuming independence between the slices, each consisting of a pure radial flow. After the polar angle has been sliced into $N_{\Theta}$ equally sized slices, one random snapshot is selected to represent each slice. This method for creating higher-dimensional models from 1D ones is essentially the same as the ‘patch method’ used by @Dessart02 when synthesizing emission lines for Wolf-Rayet stars, and the method used by, e.g., @Oskinova04 when synthesizing X-ray line emission from stochastic wind models. Fig. \[Fig:contours\] displays typical velocity and density structures from this type of 2D model.

Stochastic wind models {#wind_stoch}
----------------------

We also study clumpy wind structures created by distorting a smooth, stationary, and spherically symmetric wind via stochastic procedures. This allows us to investigate the impact of, and to set constraints on, different key parameters without being limited by the values predicted by the RH simulations. For the underlying smooth winds we adopt a standard $\beta$ velocity law $v_\beta(r)=(1-b/r)^{\beta}$. Here and throughout the paper, we measure [*all*]{} velocities in units of the terminal velocity, $\vinf$, and [*all*]{} distances and length scales in units of the stellar radius, $R_{\star}$. $b$ is given by the condition $v(r=1)=v_{\rm min}$, the velocity at the base of the wind, i.e., $b=1-v_{\rm min}^{1/\beta}$. $v_{\rm min}=0.01$ is assumed, roughly corresponding to the sound speed.
For a given $\dot{M}$, the homogeneous density structure then follows directly from the equation of continuity. We choose $\beta=1$, which is appropriate for a standard O-star wind and allows us to derive simple analytic expressions for wind masses and flight times.

#### A model clumped in density.

First we consider a two-component density structure consisting of clumps and a rarefied inter-clump medium (hereafter ICM), but keep the $\beta=1$ velocity law. Clumps are released randomly in the radial direction at the inner boundary, independently within each slice. The release in the radial direction means that a given clump stays within the same slice during its propagation through the wind. The average time interval between the release of two clumps is $\delta t$, which here and in the following is expressed in units of the wind’s dynamic time scale $t_{\rm dyn}=R_{\star}/\vinf$. The average distance between clumps is thus $v_{\beta}\, \delta t$, i.e., clumps are spatially closer in the inner wind than in the outer wind; for example, $\delta t = 0.5$ (in $t_{\rm dyn}$) gives an average clump separation of 0.5 (in $R_{\star}$) at the point where $v = 1$ (in $\vinf$). We further assume that the clumps preserve mass and lateral angle when propagating outwards, and that the underlying model’s total wind mass is conserved within every slice. This radial clump *distribution* is the same as the one used by @Oskinova06 when simulating X-ray emission from O-stars, but differs from the one used by @Oskinova07 when investigating porosity effects on resonance lines (see discussion in Sect. \[oskow\]). The radial clump *widths* are here calculated from the actual wind geometry and clump distribution by assuming a *volume filling factor* $f_{\rm v}$, defined as the fractional volume of the dense gas[^1].
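The smooth velocity law and the clump-release scheme above can be sketched as follows (illustrative Python, not the actual model code; the exponentially distributed release intervals are our assumption — the text fixes only the average interval $\delta t$):

```python
import random

V_MIN, BETA = 0.01, 1.0             # base velocity (units of v_inf), beta
B = 1.0 - V_MIN ** (1.0 / BETA)     # from the condition v_beta(r=1) = v_min

def v_beta(r):
    # smooth beta-law velocity in units of v_inf; r in stellar radii
    return (1.0 - B / r) ** BETA

def release_times(t_end, delta_t, seed=None):
    # random release times at the inner boundary with *average* interval
    # delta_t (in t_dyn = R_star / v_inf); a Poisson process is assumed
    # here as one simple realisation
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / delta_t)
        if t > t_end:
            return times
        times.append(t)
```

The average clump separation at radius $r$ is then $v_\beta(r)\,\delta t$, approaching $0.5\,R_{\star}$ for $\delta t = 0.5$ in the outer wind where $v \to 1$, as in the example of the text.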
A related quantity is the *clumping factor* $$f_{\rm cl} \equiv \frac{\langle \rho^2 \rangle}{\langle \rho \rangle^2}, \label{Eq:fcl}$$ as defined by @Owocki88, where angle brackets denote temporal averages. Identifying temporal with spatial averages, one may write for a two-component medium [cf. @Abbott81] $$f_{\rm cl} = \frac{f_{\rm v}+(1-f_{\rm v})\fic^2}{[f_{\rm v}+(1-f_{\rm v})\fic]^2}, \label{Eq:fclfv}$$ with $$\fic \equiv \frac{\rho_{\rm ic}}{\rho_{\rm cl}}, \label{Eq:fic}$$ the ratio of low- to high-density gas (subscript ic denotes inter-clump and cl denotes clump). For a void ($\fic=0$) ICM, $\rho_{\rm cl}/\langle \rho \rangle = f_{\rm v}^{-1}=f_{\rm cl}$, i.e., $f_{\rm cl}$ then describes the overdensity of the clumps as compared to the mean density.

#### A model clumped in density and velocity.

Next we also consider a non-monotonic velocity law, using the spatial distribution and widths of the clumps described in the previous paragraph. The RH simulations indicate that, generally, strong shocks separate denser and slower material from rarefied regions with higher velocities. Building on this basic result, we now modify the velocity fields in our stochastic models by adding a random perturbation to the local $v_{\beta}$ value prior to the starting point of each clump, so that the new velocity becomes $v_{\rm pre}$. A ‘jump velocity’ is thereafter determined by a random subtraction from $v_{\beta}$, now using the added perturbation as the maximum subtraction. That is, $$v_{\rm pre} = v_{\beta}+v_{\rm j} \times 2R_{\rm 1} \qquad v_{\rm post} = v_{\beta}-v_{\rm j} \times 2R_{\rm 1}R_{\rm 2}, \label{Eq:vj}$$ where $R_{\rm 1}$ and $R_{\rm 2}$ are two random numbers in the interval 0 to 1. $v_{\rm pre}-v_{\rm post}$ is the jump velocity as determined by the parameter $v_{\rm j}$. By multiplying $R_{\rm 1}$ by two, we make sure that the mean perturbation at the ‘pre’ point is $v_{\rm j}$, and $R_{\rm 2}$ allows for an asymmetry about $v_{\beta}$ (see Fig.
\[Fig:fclvcl\]). The clump is assumed to start at $v_{\rm post}$, and its velocity span is set by assuming a value for $\delta v/\delta v_{\beta}$, where $\delta v$ is the velocity span of the clump and $\delta v_{\beta}$ the corresponding quantity for the same clump with a smooth velocity law (see Fig. \[Fig:fclvcl\]). Inspection of our RH models suggests that velocity gradients within density enhancements primarily are negative (see also Sect. \[vgrad\]), and negative gradients are also adopted in most of our stochastic models. Finally we assume a constant velocity gradient through the ICM. Overall, the above treatment provides a phenomenological description of the non-monotonic velocity field seen in RH simulations. The description differs from the one suggested by @Owocki08, who uses only one parameter to characterize the velocity field (whereas we have two). Our new formulation is motivated by both observational and modeling constraints from strong and intermediate lines, as discussed in Sect. \[oskow\]. The basic parameters defining a stochastic model are listed in Table \[Tab:par\]. Fig. \[Fig:contours\] (right panel) shows the density and velocity structures of one slice in a stochastic model, with density parameters $f_{\rm v}=0.1$, $\delta t = 1.0$, $\fic=0.005$, and velocity parameters $v_{\rm j}=0.15 v_\beta$ and $\delta v = -\delta v_{\beta}$. Clump positions have been highlighted with filled dots and a comparison to a RH model (FPP) is given. In the RH model, we have identified clump positions by highlighting all density points with values higher than the corresponding smooth model. The left panel shows the density contours of the same models, where, for clarity, only the wind to $r=5$ is displayed. 
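The clumping-factor relation (Eq. \[Eq:fclfv\]) and the jump-velocity sampling (Eq. \[Eq:vj\]) can be written out in a few lines (an illustrative Python sketch under the definitions above; the function names are ours):

```python
import random

def clumping_factor(f_v, f_ic):
    # Eq. (fclfv): f_cl = (f_v + (1-f_v)*f_ic^2) / (f_v + (1-f_v)*f_ic)^2
    return (f_v + (1.0 - f_v) * f_ic ** 2) / (f_v + (1.0 - f_v) * f_ic) ** 2

# void-ICM limit (f_ic = 0): f_cl = 1/f_v, the clump overdensity
assert abs(clumping_factor(0.1, 0.0) - 10.0) < 1e-12

def jump_velocities(v_b, v_j, rng=random):
    # Eq. (vj): v_pre  = v_beta + v_j * 2*R1,
    #           v_post = v_beta - v_j * 2*R1*R2, R1 and R2 uniform in [0,1];
    # the clump starts at v_post, so v_post <= v_beta <= v_pre
    R1, R2 = rng.random(), rng.random()
    return v_b + v_j * 2.0 * R1, v_b - v_j * 2.0 * R1 * R2
```

Since the mean of $2R_{\rm 1}$ is 1, the average perturbation at the ‘pre’ point is indeed $v_{\rm j}$; for the density parameters quoted above ($f_{\rm v}=0.1$, $\fic=0.005$) the sketch gives $f_{\rm cl} \approx 9.2$.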
| Name | Parameter | Value range |
|------|-----------|-------------|
| Volume filling factor | $f_{\rm v}$ | $f_{\rm v} = 0.01 \dots 1.0$ |
| Average time interval between release of clumps | $\delta t$ | $\delta t\,[t_{\rm dyn}] = 0.05 \dots 1.5$ |
| ICM density parameter, Eq. \[Eq:fic\] | $\fic$ | $\fic = 0 \dots 0.1$ |
| Velocity span of clump | $\delta v$ | $\delta v / \delta v_{\beta} = -10.0 \dots 1.0$ |
| Parameter determining the jump velocity | $v_{\rm j}$ | $v_{\rm j} / v_{\beta} = 0.01 \dots 0.15$ |

: Basic parameters defining a stochastic wind model clumped in density and with a non-monotonic velocity field. \[Tab:par\]

![Non-monotonic velocity field and corresponding parameters in a stochastic model.[]{data-label="Fig:fclvcl"}](fig2.ps){width="6cm"}

Radiative transfer {#rt}
==================

To compute synthetic line profiles from the wind models, we have developed a Monte-Carlo radiative transfer code (MC-2D) that treats resonance line formation in a spherical and axially symmetric wind using an ‘exact’ formulation (i.e., without resorting to the Sobolev approximation). The restriction to 2D is of course a shortcoming, but it has certain geometrical and computational advantages and should be sufficient for the study of general properties, as discussed in Sect. \[3d\]. A thorough description and verification of the code can be found in Appendix \[rt\_code\]. Photons are released from the lower boundary (the photosphere) and each path is followed until the photon has either left the wind or been backscattered into the photosphere.
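As a caricature of this photon-tracking loop, consider a toy 1D two-stream random walk (purely illustrative Python, not the MC-2D code, which works with line opacity in a 2D spherical wind; see Appendix \[rt\_code\]):

```python
import random

def track_photon(tau_max, rng):
    # release a photon at the lower boundary (tau = 0) moving outward and
    # follow it until it escapes (tau > tau_max) or is backscattered into
    # the 'photosphere' (tau < 0); free paths are exponential in optical
    # depth and re-emission is isotropic in the two-stream sense
    tau, mu = 0.0, 1.0
    while True:
        tau += mu * rng.expovariate(1.0)
        if tau < 0.0:
            return "backscattered"
        if tau > tau_max:
            return "escaped"
        mu = rng.choice((-1.0, 1.0))

rng = random.Random(1)
n = 10000
escaped = sum(track_photon(2.0, rng) == "escaped" for _ in range(n))
```

Every photon ends in one of the two fates, and the escaped fraction decreases as the total optical depth of the slab grows.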
Basic assumptions are a line-free continuum with no limb darkening emitted at the lower boundary, no continuum absorption in the wind, pure scattering lines, instantaneous re-emission, and no overlapping lines (i.e., singlets). These simplifying assumptions, except for the neglect of doublet formation, are all believed to be of minor importance to the basic problem. By restricting ourselves to singlet line formation we avoid confusion between effects on the line profiles caused by line overlaps and those caused by other important parameters, but on the other hand this restriction also prevents a direct comparison to observations in many cases (but see Sect. \[cmp\_obs\]). A consistent treatment of doublet formation will be included in the follow-up study.

First results from 2D inhomogeneous winds {#2d}
=========================================

Throughout this section we assume a thermal velocity, $v_{\rm t} = 0.005$ (in units of $\vinf$ and $\sim 10 \ \rm km\,s^{-1}$, appropriate for a standard O-star wind), and apply no microturbulence. After a brief discussion of the impact of the observer’s position and opening angles, we concentrate on investigating the formation of strong, intermediate, and weak lines. In our definition, an intermediate line is characterized by a line strength[^2] $\kappa_0 = 5.0$, chosen so as to almost precisely reach the saturation limit in a [*smooth*]{} model (cf. Fig. \[Fig:profs\]). By investigating these different line types, we account for the tight constraints that exist for each flavor: i) *weak lines* should be independent of density-clumping properties as long as the clumps remain optically thin, ii) for *intermediate lines* either smooth models overestimate the profile strengths or mass-loss rates are lower than previously thought (e.g. the PV problem, see Sect.
\[Introduction\]), and iii) *strong saturated lines* are clearly present in hot star UV spectra, and observed features need to be reproduced, such as high velocity ($> \vinf$) absorption, the black absorption trough, and the reduction of re-emitted flux blueward of the line center.

Observer’s position and opening angles {#ang_dep}
--------------------------------------

The observed spectrum as calculated from a 2D wind structure depends on the observer’s placement relative to the star (see Appendix \[rt\_code\]). As it turns out, however, this dependence is relatively weak in both the stochastic and the RH models (the latter is demonstrated in the upper panel of Fig. \[Fig:profs\]). Tests have shown that the variability of the line profile’s emission part is insignificant. The variability of the absorption part may be detectable, at least near the blue edge, but is still insignificant for the integrated profile strength; the equivalent width of the absorption part is almost independent of the observer’s position. The opening angle, $180^{\circ}/N_{\Theta}$, also primarily has a smoothing effect on the profiles. In Fig. \[Fig:profs\], prominent discrete absorption features appear near the blue edge in the model with $N_{\Theta}=1$ (spherical symmetry), but are smoothed out in the ‘broken-shell’ models with $N_{\Theta}=30$ and $60$. The equivalent widths of the absorption parts are approximately equal for all three models. Because our main interest here is the general behavior of the line profiles, from here on we work only with $N_{\Theta}=30$ and with profiles averaged over all observer angles. Working with averaged line profiles has great computational advantages, because roughly a factor of $N_{\Theta}$ fewer photons are needed.

Radiation-hydrodynamic models {#rh}
-----------------------------

Fig. \[Fig:profs\] (lower panel) shows line profiles from the FPP and POF hydrodynamical models.
For the strong lines, the constraints stated in the beginning of this section are reproduced without adopting an artificial, highly supersonic microturbulence. These features arise because of the multiple resonance zones in a non-monotonic velocity field, and are present in spherically symmetric RH profiles as well (see POF for a comprehensive discussion); the main difference between 1D and 2D is a smoothing effect, partly stemming from averaging over all observer angles (see above). The absorption at velocities higher than the terminal one is stronger in FPP than in POF, due to both a higher velocity dispersion and a larger extent of the wind ($r_{\rm max} \sim 30$ as compared to $r_{\rm max} \sim 5$, see Sect. \[wind\_rh\]); more overdense regions are encountered in the outermost wind, which (because of the flatness of the velocity field) leads to an increased probability to absorb at almost the same velocities. For the intermediate lines, we again see the qualitative features of the strong lines, though less prominent. As compared to smooth models, a minor *absorption* reduction is present at velocities lower than the terminal one, but is compensated by the blue edge smoothing. Therefore the equivalent width of the line profile’s absorption part in the FPP model is approximately equal to that of the smooth model, whereas in the POF model it is reduced by $\sim 10 \%$. This minor reduction agrees with that found by @Owocki08, and is not strong enough to explain the observations without having to invoke a very low mass-loss rate. For the weak lines, the absorption part is marginally stronger than in a smooth, 1D model.
Stochastic models {#st}
-----------------

  Model name   $f_{\rm v}$   $\delta t\ [t_{\rm dyn}]$   $\fic$                    $\delta v/\delta v_{\beta}$   $v_{\rm j}/v_{\beta}$   $r_{\rm st}$$^{\rm a}$   $r_{\rm ext}$$^{\rm b}$
  ------------ ------------- --------------------------- ------------------------- ----------------------------- ----------------------- ------------------------ -------------------------
  Default      0.25          0.5                         0.0025                    $-1.0$                        0.15                    1.3                      $\sim$25
  RHcopy       0.1           0.5                         0.005                     $-10.0$                       0.15                    1.3                      $\sim$5
  Obs1         0.11          0.5, 4.0$^{\rm c}$          0.005, 0.0025$^{\rm c}$   $-1.0$                        0.15                    1.02                     $\sim$25

  \[Tab:mod\]

In this subsection we use a ‘default’ 2D, stochastic model with parameters as specified in Table \[Tab:mod\]. By comparing this model to models in which one or more parameters are changed, we demonstrate key effects in the behavior of the line profiles.

#### Strong lines.

For strong lines, the line profiles from the default model reproduce the observational constraints described in the first paragraph of this section. As in the RH models, we apply no microturbulence. Fig. \[Fig:stochs\] (left panels) demonstrates the importance of the ICM in the default model; the absorption part of a very strong line is not saturated when $\fic=0$. That is, with a void ICM we will, regardless of the opacity, always have line photons escaping their resonance zones without ever interacting with any matter, thereby de-saturating the line. This ICM finding agrees with that of @Zsargo08, who point out that a non-void ICM is crucial for the formation of highly ionized species such as OVI. We also notice that $\delta v = - \delta v_{\beta}$ (used in the default model) does not permit clumps to have velocities higher than the local $v_\beta$ value, preventing absorption at velocities higher than the terminal one when the ICM is void.

#### Intermediate lines.

For intermediate lines, the line profiles from the default model display the main observational requirement for avoiding a drastic reduction in ‘smooth’ mass-loss rates[^3], namely a strong absorption reduction as compared to a smooth model. The left panels of Fig.
\[Fig:stochs\] show how the integrated profile strength of the default model with $\kappa_0=5.0$ roughly corresponds to that of a smooth model having $\kappa_0=0.5$, i.e., the smooth model would result in a mass-loss rate (as estimated from the integrated profile strength) ten times *lower* than the clumped model. The figure also illustrates how the main effect is on the absorption part of the line profile. In addition to the reduction in profile *strength*, the profile *shapes* of the absorption parts are noticeably different for the default and smooth models (the shapes of the re-emission parts, not shown here, are similar for the two models). We further discuss the shapes of the profiles in Sect. \[shapes\]. The dramatic reduction in integrated profile strength occurs because of large velocity gaps between the clumps, in which the wind is unable to absorb (at this opacity the ICM cannot ‘fill in’ these gaps with absorbing material). We have identified $|\delta v|$ as a critical parameter for the formation of intermediate lines. The importance of the velocity spans of the clumps is well illustrated by the absorption part profiles in Fig. \[Fig:stochs\] (lower-left panel, middle plot). The absorption is much stronger in the comparison model with $\delta v=-5 \delta v_{\beta}$ than in the default model with $\delta v=-\delta v_{\beta}$, because the former model covers more of the total velocity space *within* the clumps, thereby closing the gaps *between* the clumps. Consequently the wind may, on average, absorb at many more wavelengths. In principle, however, this effect is counteracted by a decrease in the clumps’ optical depths, because of the now higher velocity gradients ($|\delta v/\delta v_{\beta}| >1$). Consider the *radial* Sobolev optical depth (proportional to $\rho/|\partial v/\partial r|$, see Appendix \[rt\_code\]) in a stochastic wind model.
As compared to a smooth model, the density inside a clump is enhanced by a factor of $f_{\rm v}^{-1}$ (assuming a negligible ICM), but the velocity gradient is also enhanced by a factor of $|\delta v/\delta v_\beta|$. Thus we may write for the radial Sobolev optical depth inside a clump, $$\tau_{\rm Sob} \approx \frac{\tau_{\rm Sob,sm}}{f_{\rm v}|\delta v/\delta v_\beta|} \approx \frac{\kappa_{\rm 0}}{v_{\beta}f_{\rm v}|\delta v/\delta v_\beta|}, \label{Eq:tau_s}$$ where ‘sm’ indicates a quantity from a smooth wind, and the expression to the right is valid for an underlying $\beta~=~1$ velocity law. From Eq. \[Eq:tau\_s\], we see how the effects on the optical depth from the increased density ($f_{\rm v }=0.25$) and the increased velocity gradients ($|\delta v/\delta v_{\beta}|=5$) almost cancel each other in this example. Thus, the clumps are still optically thick for the intermediate line ($\kappa_0=5$), which means that the larger coverage of the total velocity space ‘wins’, and the net effect becomes an increase in absorption (as seen in Fig. \[Fig:stochs\], lower-left panel, middle plot). This will be true as long as $f_{\rm v}|\delta v/\delta v_\beta|$ is not $\gg 1$, which never occurs in the parameter range considered here. Finally, the prominent absorption dip toward the blue edge in the default model turns out to be a quite general feature of our stochastic models, and is discussed in Sects. \[eta\] and \[outer\].

#### Weak lines.

The statistical treatment of density clumping included in atmospheric codes such as CMFGEN, PoWR, and FASTWIND is valid for optically thin clumps and a negligible ICM, and predicts no direct effect on resonance lines, whose opacity scales linearly with density. Here we test this prediction using detailed radiative transfer[^4]. Our default model recovers the smooth results when $\kappa_{\rm 0}=0.05$ (Fig. \[Fig:stochs\], left panels), confirming the expected behavior.
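The near-cancellation argument for Eq. \[Eq:tau\_s\] can be sketched numerically. The following minimal Python snippet is illustrative only; the function name is ours and is not part of the MC-2D code.

```python
# Sketch of Eq. [Eq:tau_s]: radial Sobolev optical depth inside a clump,
# relative to a smooth beta = 1 wind. Illustrative only; the helper name
# is ours, not from the paper's MC-2D code.

def tau_sob_clump(kappa0, v_beta, f_v, dv_ratio):
    """Radial Sobolev optical depth inside a clump.

    kappa0   -- line-strength parameter
    v_beta   -- local smooth (beta = 1) wind velocity, in units of v_inf
    f_v      -- volume filling factor (clump density enhanced by 1/f_v)
    dv_ratio -- |delta v / delta v_beta|, the velocity-gradient enhancement
    """
    tau_smooth = kappa0 / v_beta          # smooth-wind value for beta = 1
    return tau_smooth / (f_v * dv_ratio)  # Eq. [Eq:tau_s]

# Example from the text: the density enhancement (f_v = 0.25) and the
# steeper gradient (|dv/dv_beta| = 5) almost cancel, so the clump stays
# optically thick for an intermediate line (kappa0 = 5).
tau_clump = tau_sob_clump(kappa0=5.0, v_beta=0.5, f_v=0.25, dv_ratio=5.0)
print(tau_clump)  # 8.0, vs. 10.0 for the smooth wind at v_beta = 0.5
```

With $f_{\rm v}|\delta v/\delta v_\beta| = 1.25$, the clump optical depth stays within $\sim 20\%$ of the smooth value, which is the cancellation described above.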
However, from calculating spectra using different values of $\kappa_0$, we have found that significant deviations from smooth models occur for the default model already before $\kappa_{\rm 0}$ reaches unity. This occurs because the clumps start to become optically thick, which may again be understood by considering the radial Sobolev optical depth (Eq. \[Eq:tau\_s\]). With $f_{\rm v} \leq 0.25$ and $\kappa_0 \geq 0.25$, one finds $\tau_{\rm Sob} \geq 1.0$.

Comparison between stochastic and radiation-hydrodynamic models {#cmp}
---------------------------------------------------------------

Our stochastic wind models have been constructed to contain all essential ingredients of the RH models. Therefore they should also reproduce the RH results, at least qualitatively, if a suitable parameter set is chosen. To test this we used the POF model. In this model, the clumping factor increases drastically at $r \sim 1.3$, from $f_{\rm cl} \sim 1.0$ to $f_{\rm cl} \sim 10$, after which it stays basically constant. The average clump separation in the outer wind is roughly half a stellar radius. An important property of the velocity field is that the velocity spans of the clumps are generally *larger* than the corresponding ‘$\beta$ spans’, i.e., $|\delta v/\delta v_{\beta}| > 1$ (this is the case in FPP as well), a characteristic behavior that primarily affects the intermediate lines (details will be discussed in Sect. \[vgrad\]). Finally, a suitable $v_{\rm j}$ can be assigned from the position of the blue edge in a strong line calculated from POF. Table \[Tab:mod\] (entry RHcopy) summarizes all parameters used to create this stochastic, ‘pseudo-RH’ model. Fig. \[Fig:hs\] displays one slice of the velocity and density structures in the POF and RHcopy models, and Fig. \[Fig:stochs\] (right panels) displays the line profiles. The line profiles of POF are matched reasonably well by RHcopy.
The intermediate lines again demonstrate the importance of the velocity spans of the clumps; for an alternative model with $\delta v = -\delta v_{\beta}$, there is much less absorption in the stochastic model than in POF, i.e., we encounter the same effect as discussed in the previous subsection. We conclude that in RH models it is the large velocity spans inside the density enhancements that prevent a reduction in profile strength (as compared to smooth models) for intermediate lines.

Parameter study {#ps}
===============

Having established basic properties, we now use our stochastic models to analyze the influence of different key parameters in more detail. First, however, we introduce a quantity that turns out to be particularly useful for our later discussion.

The effective escape ratio {#eta}
--------------------------

For the important intermediate lines, it is reasonable to assume that the clumps are optically thick and the ICM negligible (see Sect. \[st\] and the next paragraph). Under these assumptions, a decisive quantity for photon absorption will be the velocity gap [*not*]{} covered by the clumps, as compared to the thermal velocity (the latter determining the width of the resonance zone in which the photon may interact with the wind material). This is illustrated in the left panel of Fig. \[Fig:fer\], and we shall call this quantity the ‘effective escape ratio’ $$\eta \equiv \frac{\Delta \it v}{v_{\rm t}}, \label{Eq:fer}$$ where $\Delta v$ is the velocity gap between two subsequent clumps, made up of all velocities not covered by *any* of the clumps (see Fig. \[Fig:fer\]). In principle, $\eta$ determines to what extent the vorosity effect [i.e., the velocity gaps between the clumps, cf. @Owocki08] is important for the line formation. As defined, $\eta$ does not contain any assumptions on the *spatial* structure of the wind.
$\eta \ll 1$ means that the velocity gaps between the clumps are much smaller than the thermal velocity, which in turn means that the probability for a photon to encounter a clump within its resonance zone is high. If we assume each clump to be optically thick, every encounter will lead to an absorption. Thus the probability for photon absorption is high when the value of $\eta$ is low. Vice versa, $\eta \gg 1$ results in a high probability for the photon to escape its resonance zone without interacting with the wind material, i.e., a low absorption probability. If the entire velocity space were covered by clumps, $\eta=0$. For the wind geometry used in our stochastic models, we may write (see Appendix \[app\_eta\] for a derivation) $$\eta \approx \frac{v_{\beta}\delta t(1-f_{\rm v}|\delta v/\delta v_{\beta}|)}{L_{\rm r}} \approx \frac{\delta t (1-f_{\rm v}|\delta v/\delta v_{\beta}|)}{v_{\rm t}}\frac{v_{\beta}}{r^2}, \label{Eq:heff}$$ where $L_{\rm r}$ is the radial Sobolev length of a smooth model, which for $\beta=1$ is $L_{\rm r} \approx v_{\rm t}r^2$ (as usual, $r$ and $L_{\rm r}$ in $R_\star$ and $\delta t$ in $t_{\rm dyn}$). Note that in Eq. \[Eq:heff\] the density-clumping parameters have also entered the expression for $\eta$, illustrating that there is an intimate coupling with the *spatial* clumping parameters, even though the vorosity effect initially depends on velocity parameters alone. For example, consider a wind with clumps that follow a smooth $\beta$ velocity law. By bringing the clumps spatially closer together (for example by decreasing $\delta t$), the velocity gaps between them decrease as well. Thus one may choose to describe the changed situation either in terms of a less efficient porosity, because of fewer ‘density holes’ in the resonance zone through which the photons can escape [as done by @Oskinova07], *or* in terms of a less efficient vorosity, because of smaller velocity gaps between the clumps.
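As a numeric illustration of Eq. \[Eq:heff\], the sketch below evaluates $\eta(r)$ for the default parameters ($\delta t=0.5$, $f_{\rm v}=0.25$, $|\delta v/\delta v_\beta|=1$, $v_{\rm t}=0.005$). It assumes, for simplicity, the idealized $\beta=1$ law $v_\beta(r)=1-1/r$; the function names are ours, for illustration only.

```python
# Sketch of the effective escape ratio, Eq. [Eq:heff], for the default
# stochastic-model parameters. Assumes the idealized beta = 1 law
# v_beta(r) = 1 - 1/r; names are ours, for illustration only.

def v_beta(r):
    """Smooth beta = 1 velocity law (v in units of v_inf, r in R_star)."""
    return 1.0 - 1.0 / r

def eta(r, dt=0.5, f_v=0.25, dv_ratio=1.0, v_t=0.005):
    """Eq. [Eq:heff], with radial Sobolev length L_r ~ v_t * r**2."""
    return dt * (1.0 - f_v * dv_ratio) * v_beta(r) / (v_t * r**2)

# For this law, (1 - 1/r)/r**2 peaks at r = 1.5, i.e. v_beta = 1/3,
# matching the maximum of eta at v ~ 0.33 quoted in the text; in the
# outer wind eta drops well below its peak value.
for r in (1.3, 1.5, 3.0, 10.0):
    print(f"r = {r:4.1f}  v = {v_beta(r):.2f}  eta = {eta(r):.2f}")
```

Note that $\eta$ scales linearly with $\delta t$, which is the reason why lowering $\delta t$ to $0.05$ (discussed in Sect. \[dens\_par\]) decreases the $\eta$-values of the default model by a factor of ten.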
Of course, one may also obtain a lower velocity gap between the clumps by increasing the actual velocity spans inside the clumps, as simulated in our stochastic models when $|\delta v/\delta v_{\beta}| >1$. This effect, leading to a rather low vorosity, has already been demonstrated to be at work in the RH models (Sect. \[cmp\]). For the parameters of our default model, Fig. \[Fig:fer\] (right panel) displays $\eta$ as a function of velocity and shows that $\eta$ increases rapidly in the inner wind, reaches a maximum at $v \approx 0.33$, and then drops in the outer wind. To compare this behavior with that of the line profiles, we can associate absorption at some frequency $x_{\rm obs}$ with the corresponding value of the velocity, because absorption occurs at $x_{\rm obs} \approx \mu v \approx v$ (radial photons dominate). In the default model’s absorption-part line profile (see Fig. \[Fig:stochs\], the middle plot in the lower-left panel), a strong de-saturation occurs directly after the clumping is set to start (at $r=1.3$, $v \approx 0.23$), followed by a maximum at $x_{\rm obs} \approx 0.35$, and finally an absorption dip toward the blue edge. The behavior of the line profile is thus well mapped by $\eta$, and we may explain the absorption dip as a consequence of the low value of $\eta$ in the outer wind, which in turn stems from the slow variation of the velocity field (i.e., from radially extended resonance zones).

Density parameters {#dens_par}
------------------

To isolate density-clumping effects, we use a smooth $\beta =1$ velocity law in this subsection. Despite the smooth velocity field, there are still holes in velocity space (because of the density clumping, at the locations where the ICM is present), and the expression for $\eta$ (Eq. \[Eq:heff\]) remains valid. Since a smooth velocity field corresponds to $\delta v=\delta v_{\beta}$, the run of $\eta$ is also equal to the one displayed in Fig. \[Fig:fer\].
In this subsection we work only with integrated profile strengths (characterized by the equivalent width $W_{\lambda}$ of the line’s absorption part). The shapes of the line profiles are discussed in Sect. \[shapes\]. Fig. \[Fig:ew\_1d\] shows $W_{\lambda}$ as a function of $\kappa_0$, for smooth models as well as for stochastic models with and without a contributing ICM. The figure directly shows: i) The default model ($\fic = 0.0025$) for the intermediate line ($\kappa_0=5.0$) displays a $W_{\lambda}$ corresponding to a smooth model with a $\kappa_0$ roughly ten times lower. ii) Lines never saturate if the ICM is (almost) void. iii) The runs of $W_{\lambda}$ for the smooth and clumped models decouple well before $\kappa_0$ reaches unity. iv) For intermediate lines, the response of $W_{\lambda}$ to variations of $\kappa_0$ is weak for clumped models. Points i) to iii) confirm our findings from Sect. \[st\]. A variation of $\delta t$ in the stochastic models affects primarily the high $\kappa_0$ part ($\kappa_0 \ga 1.0$) of the curves in Fig. \[Fig:ew\_1d\]. For example, lowering $\delta t$ in the model with a void ICM results in an upward shift of the dashed curve and vice versa. To obtain saturation with a void ICM, $\delta t \approx 0.05$ is required, which may be understood in terms of Eq. \[Eq:heff\]. For $\delta t = 0.05$, the $\eta$-values corresponding to the default model are decreased by a factor of ten, and $\eta$ reaches a maximum of only about unity, with even lower values for the majority of the velocity space (cf. Fig. \[Fig:fer\], right panel). The velocity gaps between the clumps then become closed, and the line saturates. In this situation, however, the intermediate line becomes saturated as well, again demonstrating the necessity of a [*non-void*]{} ICM to simultaneously saturate a strong line and not saturate an intermediate line.
Only a properly chosen $\fic$ parameter ensures that the velocity gaps between the clumps become filled by low-density material able to absorb at strong line opacities, but *not* (or only marginally) at opacities corresponding to intermediate lines. When varying $\fic$, the primary change occurs at the high $\kappa_0$ end of Fig. \[Fig:ew\_1d\]. For higher (lower) values of $\fic$, this part becomes shifted to the left (right), and the curve decouples earlier (later) from the corresponding curve for the void ICM. A higher ICM density obviously means that the ICM starts absorbing photons at lower line strengths and vice versa. Thus, observed saturated lines could potentially be used to derive the ICM density (or at least to infer a lower limit), *if* the mass-loss rate (and abundance) is known from other diagnostics. The behavior of the absorption with respect to the volume filling factor is as expected from the expression for $\eta$; the higher $f_{\rm v}$, the lower the value of $\eta$, and the stronger the absorption. This is because a higher $f_{\rm v}$ for a fixed $\delta t$ implies that the clumps become more extended, whereas the distances between clump centers remain unaffected. Consequently, a larger fraction of the total wind velocity is covered by the clumps, leading to stronger absorption. For weak lines ($\kappa_0 \approx 0.05$), the ratio $W_{\lambda}/W_{\rm \lambda,sm}$ deviates significantly from unity only when $f_{\rm v} \la 0.1$. Only for such low values can high enough clump densities be produced so that the clumps start to become optically thick. From Fig. \[Fig:ew\_1d\] it is obvious that, generally, clumped models have a different (slower) response in $W_{\lambda}$ to an increase in $\kappa_0$ than do smooth models. This behavior may be observationally tested using UV resonance doublets [@Massa08], because the only parameter that differs between the two line components is the oscillator strength. 
Thus, if a smooth wind model is used and the fitted ratio of line strengths (i.e., $\kappa_{\rm 0,blue}/\kappa_{\rm 0,red}$) does not correspond to the expected ratio of oscillator strengths, one may interpret this as a signature of a clumped wind. Such behavior was found by @Massa08, where the observed ratios of the blue to red component of SiIV $\lambda\lambda$1394,1403 in B supergiants showed a wide spread between unity and the expected factor of two. This result indicates precisely the slow response to an increase in $\kappa_0$ that is consistent with inhomogeneous wind models such as those presented here, but not with smooth ones. In inhomogeneous models, the expected profile strength (or $W_{\lambda}$) ratio between two doublet components will depend on the adopted clumping parameters (as demonstrated by Fig. \[Fig:ew\_1d\] and the discussion above) and may in principle take any value in the range found by @Massa08. That is, while a profile-strength ratio deviating from the value expected by smooth models might be a clear indication of a clumped wind, the opposite is not necessarily an indication of a smooth wind. Furthermore, the degeneracy between a variation of clumping parameters and $\kappa_0$ suggests that un-saturated resonance lines should be used primarily as consistency tests for mass-loss rates derived from other diagnostics rather than as direct mass-loss estimators. We will return to this problem in Sect. \[cmp\_obs\], where a first comparison to observations is performed for the PV doublet. Velocity parameters {#vel_par} ------------------- The jump velocity parameter, $v_{\rm j}$, affects only the strong lines (or, more specifically, the lines for which the ICM is significant), and determines the maximum velocity at which absorption can occur. For example, by setting $v_{\rm j}=0$, no absorption at frequencies higher than $x=1$ is possible (unless $\delta v$ is positive and very high). 
A higher $v_{\rm j}$ also implies more velocity overlaps, and thereby an increased amount of backscattering due to multiple resonance zones. Both effects are illustrated in Fig. \[Fig:vj\]. Judging from the line profiles of the lower panel, the blue edge and the reduction of the re-emitted flux blueward of the line center may both be used to constrain $v_{\rm j}$. The upper panel shows one slice of the corresponding velocity fields, illustrating that the underlying $\beta$ law is recovered almost perfectly when using $v_{\rm j}=0.01v_\beta$ and $\delta v=\delta v_\beta$. With this velocity law and a non-void ICM, the corresponding strong line profile is equivalent to a profile from a smooth model. In Sects. \[st\] and \[cmp\], we showed that a higher value of the clumps’ velocity spans led to stronger absorption for intermediate lines. In principle this is as expected from Eq. \[Eq:heff\], where $\eta$ always decreases with increasing $|\delta v/\delta v_\beta|$. However, with the very high value of $|\delta v/\delta v_\beta|$ used in, e.g., the RHcopy model, one realizes that $\eta$ in Eq. \[Eq:heff\] becomes identically zero, because $f_{\rm v}|\delta v/\delta v_\beta|=1$. An $\eta=0$ corresponds to the whole velocity space being covered by clumps, and the saturation limit should be reached. As is clear from Fig. \[Fig:stochs\], however, this is not the case. This points out two important details not included when deriving the expression for $\eta$ and interpreting the absorption in terms of this quantity, namely that clumps are distributed randomly (with $\delta t$ determining only the average distances between them) and that the parameter $v_{\rm j}$ allows for an asymmetry in the velocities of the clumps’ starting points (see Sect. \[wind\_stoch\]). These two issues lead to overlapping velocity spans for some of the clumps, whereas for others there is still a velocity gap left between them, through which the radiation can escape. 
Therefore the profiles do not reach complete saturation, even though $\eta=0$ on average. This illustrates some inherent limitations when trying to interpret line formation in terms of a simplified quantity such as $\eta$. The impact of the velocity spans of the clumps on the line profiles also depends on the density-clumping parameters. To achieve approximately the same level of absorption, a higher value of $\delta v/\delta v_{\beta}$ was required in the RHcopy model ($f_{\rm v}=0.1$) than in the default model ($f_{\rm v}=0.25$), see Fig. \[Fig:stochs\]. Since $\delta v_{\beta} \propto f_{\rm v} \delta t$ (see Appendix \[app\_eta\]), the actual velocity spans of the clumps are different for different density-clumping parameters, even if $\delta v / \delta v_{\beta}$ remains unchanged. By changing the sign of $\delta v$ in the default model (that is, assuming a positive velocity gradient inside the clumps), we have found that our results depend qualitatively only on $|\delta v|$. Some details differ though. For example, a $\delta v > 0$ in our stochastic models permits absorption at velocities higher than the terminal one also within the clumps, whereas $\delta v < 0$ restricts the clump velocities to below the local $v_{\beta}$ (see Fig. \[Fig:fclvcl\]). In this matter $v_{\rm j}$ plays a role as well, since $v_{\rm j}$ controls where, with respect to the local $v_{\beta}$, the clumps begin. For reasonable values of $v_{\rm j}$, however, its influence on lines where the ICM is insignificant is minor. Finally, tests have confirmed that optically thin lines are only marginally affected when varying $\delta v/\delta v_{\beta}$.

Discussion {#Discussion}
==========

The shapes of the intermediate lines {#shapes}
------------------------------------

For intermediate lines, the shape of the absorption part of the default model differs significantly from the shape of a smooth model (see Fig. \[Fig:stochs\], the middle plot in the lower-left panel). We showed in Sect.
\[eta\] that the shapes could be qualitatively understood from the behavior of $\eta$. This is further demonstrated here by scaling the line strength parameter of a 1D, smooth model, using a parameterization $\kappa_0 \propto \eta^{-1}$ outside the radius $r=1.3$ where clumping is assumed to start. Fig. \[Fig:1d\_fer\] displays the line profiles of 1D, smooth models with $\kappa_0=5.0$ and $\kappa_0=5.0/(2\eta)$. These profiles are compared to those calculated from a ‘real’ 2D stochastic model with density-clumping parameters as in the default model, but with a $\beta=1$ velocity field. $\eta$ was calculated from Eq. \[Eq:heff\], using the parameters of the default model and a $\beta=1$ velocity law, and the factor of 2 in the denominator of the scaled $\kappa_0$ was chosen so that the *integrated* profile strength of the 2D model was roughly reproduced. From Fig. \[Fig:1d\_fer\] it is clear that the 1D model with scaled $\kappa_0$ reproduces the 2D results well, indicating that $\eta$ indeed governs the shape of the line profile. We notice also that these profiles display a completely black absorption dip in the outermost wind, as opposed to the default model with a non-monotonic velocity field (see Fig. \[Fig:stochs\], the middle plot in the lower-left panel). This is because the $\beta$ velocity field does not allow for any clumps to overlap in velocity space (see the discussion in Sect. \[vel\_par\]), making the mapping of $\eta$ almost perfect. Let us also point out that the line shapes can be somewhat altered by using a different velocity law, e.g., $\beta \ne 1$. Such a change would affect the distances between clumps as well as the Sobolev length, and thereby the line shapes of both absorption and re-emission profiles. In all cases, however, the shape of the re-emission part is similar in the clumped and smooth models.
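The $\kappa_0 \propto \eta^{-1}$ scaling above can be sketched as follows. This is a minimal illustration assuming the idealized $\beta=1$ law $v_\beta = 1 - 1/r$; the function names are ours.

```python
# Sketch of the kappa0 ~ 1/eta scaling used for the 1D comparison model.
# Assumes the idealized beta = 1 law v_beta(r) = 1 - 1/r; illustrative only.

def eta_beta1(r, dt=0.5, f_v=0.25, v_t=0.005):
    """Eq. [Eq:heff] with |dv/dv_beta| = 1 (smooth velocity field)."""
    v = 1.0 - 1.0 / r
    return dt * (1.0 - f_v) * v / (v_t * r**2)

def kappa0_scaled(r, kappa0=5.0, r_onset=1.3):
    """kappa0 -> kappa0/(2*eta) outside the onset radius of clumping."""
    if r < r_onset:
        return kappa0
    return kappa0 / (2.0 * eta_beta1(r))

# The low eta in the outer wind boosts the scaled kappa0 there, which
# mimics the absorption dip toward the blue edge seen in the 2D model.
print(kappa0_scaled(1.2), kappa0_scaled(1.5), kappa0_scaled(10.0))
```

The scaled $\kappa_0$ is small where $\eta$ peaks (weak absorption) and grows again in the outer wind where $\eta$ drops, reproducing the dip.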
The onset of clumping and the blue edge absorption dip {#outer}
------------------------------------------------------

We have used $r=1.3$ as the onset of wind clumping in our stochastic models, which roughly corresponds to the radius where significant structure has developed from the line-driven instability in our RH models. However, @Bouret03 [@Bouret05] analyzed O-stars in the Galaxy and the SMC, assuming optically thin clumps, and found that clumping starts deep in the wind, just above the sonic point. @Puls06 also used the optically thin clumping approach, applied to $\rho^2$-diagnostics, and found similar results, at least for O-stars with dense winds. With respect to our stochastic models, the qualitative results from Sects. \[2d\] and \[ps\] remain valid when choosing an earlier onset of clumping. Quantitatively, the integrated absorption in intermediate lines becomes somewhat weaker, because the clumping now starts at lower velocities, and of course the line shapes in this region are affected as well. The onset of wind clumping will be important when comparing to observations, as discussed in Sect. \[cmp\_obs\]. The stochastic models that de-saturate an intermediate line generally display an absorption dip toward the blue edge (see Figs. \[Fig:stochs\] and \[Fig:1d\_fer\]), which has been interpreted in terms of low values of $\eta$ in the outer wind (see Sect. \[eta\]). However, this characteristic feature (not to be confused with the so-called DACs, discrete absorption components) is generally not observed, and one may ask whether it might be an artifact of our modeling technique. In the following we discuss two possibilities that may cause our models to overestimate the absorption in the outer wind: the ionization fraction and clump separations that are too small. Starting with the former, we have so far assumed a constant ionization factor, $q=1$ (cf. Eq. \[Eq:kappa0\]). This is obviously an over-simplification.
For example, an outwards decreasing $q$ would result in less absorption toward the blue edge. Here we merely demonstrate this general effect, parameterizing $q = v_0/v_{\beta}$ in the stochastic default model (see Table \[Tab:mod\]), with $v_{\rm 0}=0.1$ the starting point below which $q=1$. Fig. \[Fig:holes\] (lower panel, dashed-dotted lines) shows how the absorption in the outer wind becomes significantly reduced. The temperature structure of the wind is obviously important for the ionization balance. Whereas an isothermal wind is assumed in POF (see Sect. \[wind\_rh\]), the FPP model has shocked wind regions with temperatures of several million Kelvin. To roughly map corresponding effects on the line profiles, we re-calculated profiles based on FPP models assuming $q=0$ in all regions with temperatures higher than $T=10^5\rm\,K$, and $q=1$ elsewhere. Since the hot gas resides primarily in the low-density regions, however, the emergent profiles were barely affected, and particularly intermediate lines remained unchanged. On the other hand, the X-ray emission from hot stars (believed to originate in clump-clump collisions, see FPP) is known to be crucial for the ionization balance of highly ionized species such as CIV, NV, and OVI [see, e.g., the discussion in @Puls08]. X-rays have not been included here, but could in principle have an impact on our line profiles, by illuminating the over-dense regions and thereby changing the ionization balance. @Krticka09, however, find that incorporating X-rays does not influence the PV ionization significantly. Finally, non-LTE analyses including feedback from optically thin clumping have shown that this as well has a significant effect on the derived ionization fractions of, e.g., PV [@Bouret05; @Puls08b]. To summarize, it is clear that a full analysis of ionization fractions must await a future non-LTE application that includes relevant feedback effects from an inhomogeneous wind on the occupation numbers. 
In RH models, the average distance between clumps increases in the outer wind, due to clump-clump collisions and velocity stretching [@Feldmeier97; @Runacres02]. Neglecting the former effect, our stochastic models have clumps much more closely spaced in the outer wind[^5]. We have therefore modified the default model by setting $\delta t = 3$ outside a radius corresponding to $v_{\beta}=0.7$. This is illustrated in the upper panel of Fig. \[Fig:holes\]. The mass loss in the new stochastic model is preserved (because the clumps are more extended, see the figure), and this model now better resembles FPP. Recall that differences in the widths of the clumps are expected, since in the default model $f_{\rm cl} \approx f_{\rm v}^{-1}=4$, whereas in FPP $f_{\rm cl} \approx 10$. The corresponding line profile shows how the absorption outside $x \approx 0.7$ has been reduced, as expected from the higher $\delta t$. The velocity spans of the clumps {#vgrad} -------------------------------- In Sect. \[cmp\] it was found that $|\delta v| > \delta v_{\beta}$ in the RH models. Fig. \[Fig:dvdvb\], upper panel, shows the velocity spans of density enhancements (identified as having a density higher than the corresponding smooth value) in the FPP model, and demonstrates that, after structure has developed, $|\delta v|$ is much higher than $\delta v_{\beta}$ throughout the whole wind. These high values essentially stem from the location of the starting points of the density enhancements, which generally lie *before* the velocities have reached their post shock values (see Fig. \[Fig:dvdvb\], middle and lower panels). By using a $\beta$ velocity law (which in principle corresponds to a stochastic velocity law with $v_{\rm j}=0$ and $\delta v = \delta v_{\beta}$, see Fig. \[Fig:vj\]) together with the density structure from FPP, we simulated a RH wind with low velocity spans. 
Indeed, for the corresponding intermediate line the equivalent width of the absorption part was $\sim 35 \%$ lower than that of the original FPP model. The strong line, on the other hand, remained saturated, because the ICM in FPP is not void. So, again, the RH models would in parallel display de-saturated intermediate lines and saturated strong lines, were it not for the large velocity spans inside the clumps. We suggest that the large velocity span inside a shell (clump) is primarily of kinematic origin, and reflects the formation history of the shell. The shell propagates outwards through the wind, essentially with a $\beta=1$ velocity law [@Owocki88]. Fast gas is decelerated in a strong reverse shock at the inner rim of the shell. The shell collects ever faster material on its way out through the wind. This new material collected at higher speeds resides on the star-facing side, i.e. at smaller radii, of the slower material collected before. Thus, a negative velocity gradient develops inside the shell. The fact that $|\delta v| \gg \delta v_{\beta}$ in FPP seems to reflect that the shell is formed at small radii, and then advects outwards maintaining its steep interior velocity gradient[^6]. From this formation in the inner, steeply accelerating wind, velocity spans within the shells up to (a few) hundred $\rm km\,s^{-1}$, as seen in Fig. \[Fig:dvdvb\], appear reasonable. However, the dynamics of shell formation in hot star winds is very complex due to the creation and subsequent merging of subshells, as caused by nonlinear perturbation growth and the related excitation of harmonic overtones of the perturbation period at the wind base [see @Feldmeier95]. Future work is certainly needed to clarify to which extent the large velocity spans inside the shells in RH models are a stable feature (see also Sect. \[future\]). 3D effects {#3d} ---------- A shortcoming of our analysis is the assumed symmetry in $\Phi$. 
The 2D rather than 3D treatment has in part been motivated by computational reasons (see Appendix \[rt\_code\]). More importantly though, we do not expect our *qualitative* results to be strongly affected by an extension to 3D. Within the broken-shell wind model, all wind slices are treated independently, and distances between clumps increase only in the radial direction. Therefore the expected outcome from extending to 3D is a smoothing effect rather than a reduction or increase in integrated profile strength (similar to the smoothing introduced by $N_{\Theta}$, see Sect. \[ang\_dep\]). Also, we have shown that the main effect from the inhomogeneous winds is on the absorption part of the line profiles (see, e.g., Sect. \[shapes\]). The formation of this part is dominated by radial photons, especially in the outer wind, because of the dependence only on photons released directly from the photosphere. This implies that most photons stay within their wind slice, restricting the influence from any additional ‘holes’ introduced by a broken symmetry in $\Phi$ to the inner wind. Of course, these expectations hold only within the broken shell model, because in a real 3D wind the clumps will, for example, have velocity components also in the tangential directions. Comparison to other studies {#oskow} --------------------------- To scale the smooth opacity in the formal integral of the non-LTE atmospheric code PoWR, @Oskinova07 used a porosity formalism in which both $f_{\rm v}$ and the average distance between clumps enter. Other assumptions were a void ICM, a smooth $\beta$ velocity field, and a microturbulent velocity $v_{\rm t} \approx 50 \rm\,km\,s^{-1}$, the last identified as the velocity dispersion within a clump. However, a direct comparison between their study and ours is hampered by the different formalisms used for the spacing of the clumps. Here we have used the ‘broken-shell’ wind model as a base (see Sect. 
\[wind\_stoch\]), in which each wind slice is treated independently and the distance between clumps increases only in the radial direction (clumps preserve their lateral angles). This gives a radial number density of clumps, $n_{\rm cl} \propto v^{-1}$, the same as used by, e.g., @Oskinova06, when synthesizing X-ray emission from hot stars. In @Oskinova07, on the other hand, the distance between clumps increases in *all* spatial directions. In a spherical expansion, this gives a radial number density of clumps $n_{\rm cl} \propto v^{-1}r^{-2}$, i.e., clumps are distributed much more sparsely within this model, especially in the outer wind. Therefore their choice of $L_{\rm 0}=0.2$ is not directly comparable with $\delta t =0.2$ in our models. The shapes of the clumps differ between the two models as well; in @Oskinova07 clumps are assumed to be ‘cubes’, whereas here the exact shapes of the clumps are determined by the values of the clumping parameters. Despite these differences, our findings confirm the qualitative results of @Oskinova07 that the line profiles become weaker with an increasing distance between clumps as well as with a decreasing $v_{\rm t}$. These results may be interpreted on the basis of the effective escape ratio, $\eta$ (see Eq. \[Eq:heff\]). Both a decrease in $v_{\rm t}$ and an increase in the distance between clumps mean that the velocity span covered by a resonance zone becomes smaller when compared to the velocity gap between two clumps (see Fig. \[Fig:fer\], left panel), leading to higher probabilities for line photons to escape their resonance zones without interacting with the wind material. An important result of this paper is that models that de-saturate intermediate lines require a non-void ICM to saturate strong lines. This is confirmed by the @Oskinova07 model, in which the ICM is void and strong lines indeed do not saturate [@Hamann09]. 
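The difference between the two clump-spacing prescriptions can be illustrated numerically. A minimal sketch, assuming a normalized $\beta=1$ velocity law $v(r)=1-b/r$ with $v(1)=v_0=0.1$ (the normalization is illustrative):

```python
def v_beta(r, v0=0.1):
    """Normalized beta=1 velocity law with v(1) = v0."""
    b = 1.0 - v0
    return 1.0 - b / r

def n_cl_broken_shell(r):
    """Radial clump number density in the broken-shell model,
    where clump spacing grows only radially: n_cl ~ 1/v."""
    return 1.0 / v_beta(r)

def n_cl_all_directions(r):
    """Clump spacing growing in all spatial directions
    (spherical expansion): n_cl ~ 1/(v r^2)."""
    return 1.0 / (v_beta(r) * r**2)

# The two prescriptions differ by the factor r^2, i.e., clumps are
# distributed much more sparsely in the latter model, especially far out.
for r in (2.0, 10.0):
    print(r, n_cl_broken_shell(r) / n_cl_all_directions(r))  # ratio = r^2
```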
@Owocki08 proposed a simplified description of the non-monotonic velocity field to account for vorosity, i.e., the velocity gaps between the clumps. Here, the vorosity effect has been discussed using the quantity $\eta$ (see Sect. \[eta\]), and we have introduced two new parameters to characterize a non-monotonic velocity field, $\delta v$ and $v_{\rm j}$. The reason for introducing a new parameterization is that when using a single velocity parameter, we have not been able to simultaneously meet the constraints from strong, intermediate, and weak lines as listed in Sect. \[2d\]. Tests using a ‘velocity clumping factor’ $f_{\rm vel}=\delta v / \Delta v$ as proposed by @Owocki08, together with a smooth density structure, have shown that this treatment indeed can reduce the line strengths of intermediate lines, but that the observational constraints from strong lines may not be met. Still, the basic concept of vorosity holds within our analysis. For example, one may phrase the high values of $\delta v$ in the RH models in terms of insufficient vorosity. Comparison to observations {#cmp_obs} -------------------------- We conclude our discussion with a first comparison to observations. The two components of the Phosphorus V $\lambda\lambda$1118-1128 doublet are rather well separated, and the singlet treatment used here suffices to model the major part of the line complex. Nevertheless, the two components overlap within a certain region (indicated in Fig. \[Fig:cmp\_obs\]), so when interpreting the results of this subsection, one should bear in mind that the overlap is not properly accounted for, but treated as a simple multiplication of the two profiles. We used observed FUSE spectra (kindly provided by A. Fullerton) from HD210839 ($\lambda$ Cep), a supergiant of spectral type O6I(n)fp. When computing synthetic spectra, we first assumed optically thin clumping with a constant clumping factor $f_{\rm cl}=9$ and a smooth $\beta=1$ velocity field.
$f_{\rm cl}=9$ agrees fairly well with the analysis of @Puls06, who derived clumping factors $f_{\rm cl}=6.5$ for $r \approx 1.2 \dots 4.0$ and $f_{\rm cl}=10$ for $r \approx 4.0 \dots 15$, assuming an un-clumped outermost wind.[^7] We took the ionization fraction $q=q(r)$ of PV from @Puls08b, calculated with the unified non-LTE atmosphere code FASTWIND for an O6 supergiant, using the Phosphorus model atom from @Pauldrach01. The feedback from optically [*thin*]{} clumping was accounted for and X-rays were neglected. This ionization fraction was then used as input in our MC-1D code when computing the synthetic spectra. We assigned a thermal plus a highly supersonic ‘microturbulent’ velocity $v_{\rm t}=0.05$ (corresponding to 110 kms$^{-1}$), as is conventional in this approach. The mass-loss rate was derived using the well known relation between $\kappa_0$ and $\dot{M}$ [e.g., @Puls08]. For atomic and stellar parameters, we adopted the same values as in @Fullerton06. The dashed line in Fig. \[Fig:cmp\_obs\] represents our fit to the observed spectrum, assuming optically thin clumping, resulting in a mass-loss rate $\dot{M}=0.24$, in units of $10^{-6}\rm\,M_{\odot}\,yr^{-1}$. @Fullerton06 derived $\langle q \rangle \dot{M} = 0.23$ for this star. Because our clumped FASTWIND model predicts an averaged ionization fraction $\langle q \rangle \approx 0.9$ in the velocity regions utilized by @Fullerton06, the two rates are in excellent agreement. On the other hand, @Repolust04 for HD210839 derived $\dot{M}=6.9$ from $\rm H_{\alpha}$ assuming an unclumped wind, yielding $\dot{M}_{\rm H_{\alpha}}=2.3$ when accounting for the reduction implied by our assumed $f_{\rm cl}=9$ ($\dot{M}_{\rm H \alpha}=\dot{M}_{\rm H \alpha,sm}f_{\rm cl}^{-1/2}$). This rate is almost ten times higher than that inferred from PV, and thus results in PV line profiles that are much too strong (see Fig. \[Fig:cmp\_obs\], dashed-dotted line). 
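The clumping correction applied to the $\rm H_{\alpha}$ rate above, $\dot{M}_{\rm H \alpha}=\dot{M}_{\rm H \alpha,sm}f_{\rm cl}^{-1/2}$, can be checked with a minimal calculation:

```python
import math

def mdot_clumped(mdot_smooth, f_cl):
    """Mass-loss rate implied by a rho^2 diagnostic (such as H_alpha)
    when a clumping factor f_cl is introduced:
    Mdot = Mdot_smooth * f_cl**(-1/2)."""
    return mdot_smooth / math.sqrt(f_cl)

# Smooth-wind H_alpha rate for HD210839 from Repolust et al., with f_cl = 9:
print(mdot_clumped(6.9, 9.0))  # approximately 2.3, in units of 1e-6 Msun/yr
```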
That is, to reconcile the $\rm H_{\alpha}$ and PV rates for HD210839 with models that assume optically [*thin*]{} clumps also in PV, we would have to raise the clumping factor to $f_{\rm cl} > 100$. In addition to this very high clumping factor, the low rate inferred from the PV lines conflicts with the theoretical value $\dot{M}=3.2$ provided by the mass-loss recipe in @Vink00 [using the stellar parameters of @Repolust04], and is also strongly disfavored by current massive star evolutionary models [@Hirschi08]. Next we modeled the PV lines using our MC-2D code together with a stochastic 2D wind model. The same clumping factor ($f_{\rm cl}=9$) and ionization fraction (calculated from FASTWIND, see above) were used. This time, we assigned $v_{\rm t}=0.005$, i.e., applied no microturbulence. In previous sections, e.g. \[st\] and \[shapes\], we showed that stochastic models generally display a line shape different from smooth models, with a characteristic absorption dip at the blue edge as well as a dip close to the line center. Such shapes are not seen in the PV lines in $\lambda$ Cep. Thus, to better resemble the observed line shapes, we used different values for $\delta t$ and $\fic$ in the inner and outer wind (the former modification already discussed in Sect. \[outer\]) and let clumping start close to the wind base. Clumping parameters are given in Table \[Tab:mod\], model Obs1. As illustrated in Fig. \[Fig:cmp\_obs\], the synthetic line profiles using $\dot{M}=2.3$, as inferred from $\rm H_{\alpha}$, are now at the observed levels. Because of our insufficient treatment of line overlap, we gave higher weight to the $\lambda$1118 component when performing the fitting, but the profile-strength ratio between the blue and red component was nevertheless reasonably well reproduced (see also discussion in Sect. \[dens\_par\]). However, though the fit appears quite good, we did not aim for a perfect one, and must remember the deficits of our modeling technique. 
For example, while the early onset of clumping definitely improved the fit (using our default value, there was a dip close to line center) and might be considered as additional evidence that clumping starts close to the wind base, the same effect could in principle be produced by non-LTE effects close to the photosphere or by varying the underlying $\beta$ velocity law. Such effects will be thoroughly investigated in a follow-up paper, which will also include a comparison to observations from many more objects. Clearly, a consistent modeling of resonance lines (at least of intermediate strengths) requires the consideration of a much larger parameter set than if modeling via the standard diagnostics assuming optically thin clumping, and a reasonable fit to a single observed line complex can be obtained using a variety of different parameter combinations. The analysis of PV lines as done here can therefore, at present, only be considered as a consistency check for mass-loss rates derived from other, independent diagnostics, and not as a tool for directly estimating mass-loss rates. Additional insight might be gained by exploiting more resonance doublets, due to the different responses of profile strengths and shapes to $\kappa_0$. The different slopes of the equivalent width as a function of $\kappa_0$ in smooth and clumped models, especially at intermediate line strengths (Sect. \[dens\_par\]), may turn out to be decisive. However, because of, e.g., the additional impact from the ICM density, this diagnostic also requires additional information from saturated lines. Taken together, only a consistent analysis using different diagnostics and wavelength bands, and embedded in a suitable non-LTE environment, will (hopefully) provide a unique view.
Summary and future work {#Conclusions} ======================= Summary ------- Below we summarize our most important findings: - When synthesizing resonance lines in inhomogeneous hot star winds, the detailed density structure, the non-monotonic velocity field, and the inter-clump medium are all important for the line formation. Adequate models must be able to simultaneously meet observational and theoretical constraints from strong, intermediate, and weak lines. - Resonance lines are basically unaffected by the inhomogeneous wind structure in the limit of optically thin clumps, but the clumps remain optically thin only for very weak lines. - We confirm the basic effects of porosity (stemming from optically thick clumps) and vorosity (stemming from velocity gaps between the clumps) in the formation of primarily lines of intermediate strengths. - We point out the importance of a non-void ICM for the simultaneous formation of strong and intermediate lines that meet observational constraints. - Porosity and vorosity are found to be intrinsically coupled and of similar importance. To characterize their mutual effect on intermediate lines, we have identified a crucial parameter, the ‘effective escape ratio’, that describes to which extent photons may escape their resonance zones without ever interacting with the wind material. - We confirm previous results that time-dependent, radiation-hydrodynamic wind models reproduce observed characteristics for strong lines, without applying the highly supersonic microturbulence needed in smooth models. - A significant profile strength reduction of intermediate lines (as compared to smooth models) is for the radiation-hydrodynamic models prevented by the large velocity spans of the density enhancements, implying that the wind structures predicted by present day RH models are not able to reproduce the observed strengths of intermediate lines unless invoking a very low mass-loss rate.
- Provided a non-void ICM and not too large velocity spans inside the clumps, 2D *stochastic* wind models saturate strong lines, while simultaneously not saturating intermediate lines (that are saturated in smooth models). Using typical volume filling factors, $f_{\rm v} \approx 0.25$, the resulting integrated profile strength reductions imply that these inhomogeneous models would be compatible with mass-loss rates roughly a factor of ten higher than those derived from resonance lines using smooth models. - A first comparison to observations was made for the O6 supergiant $\lambda$ Cep. It was found that, indeed, the line profiles of PV based on a 2D stochastic wind model, accounting for a detailed density structure and a non-monotonic velocity field, reproduced the observations with a mass-loss rate almost ten times higher than the rate derived from the same lines, but with a model that used the optically thin clumping approach. This alleviated the discrepancies between theoretical predictions, evolutionary constraints, and previous mass-loss rates based on winds assumed either to be smooth or to have optically thin clumps. Future work {#future} ----------- We have investigated general properties of resonance line formation in inhomogeneous 2D wind models with non-monotonic velocity fields. To perform a detailed and quantitative comparison to observations, and derive mass-loss rates, simplified approaches need to be developed and incorporated into non-LTE models to obtain reliable occupation numbers. Extending our Monte-Carlo radiative transfer code to include line overlap effects in doublets is critical for more quantitative applications, and an extension to 3D is also necessary. Further applications involve synthesizing emission lines, for example to test the optically thin clumping limit both in the parameter range where this is thought to be appropriate (e.g., for O-/early B-stars), and in other more complicated situations.
Indeed, the present generation of line-blanketed model atmospheres does not seem to be able to reproduce $\rm H_{\alpha}$ line profiles from A-supergiants, which are observed as P-Cygni profiles with *non-saturated* troughs, whereas the simulations (assuming optically thin clumping) result in saturated troughs (R.-P. Kudritzki, private communication). Since $\rm H_{\alpha}$ is a quasi-resonance line and not a recombination line in these cooler winds [e.g., @Kudritzki00], this behavior might be explained by the presence of optically thick clumps. Finally, it needs to be clarified if the large velocity span inside clumps generated in RH models is independent of additional physics that is not, or only approximately, accounted for in present simulations (such as more-D effects and/or various exciting mechanisms). If the large velocity span is a stable feature, one might come to the (rather unfortunate) conclusion that either the observed clumping features are not, or only weakly, related to the line-driven instability, or the discrepancies between observed and synthetic flux distribution (from the X-ray to the radio regime) might involve processes different from the present paradigm of wind clumping. [We would like to acknowledge our anonymous referee and A. Fullerton for useful comments and suggestions on the first version of this manuscript. Many thanks to A. Fullerton also for providing us with reduced PV spectra for his O-star sample, and W.-R. Hamann for suggesting the term ‘velocity span’ for the parameter $\delta v$. K. Lind is also thanked for a careful reading of the manuscript. J.O.S gratefully acknowledges a grant from the International Max-Planck Research School of Astrophysics (IMPRS), Garching.]{} , D. C., [Bieging]{}, J. H., & [Churchwell]{}, E. 1981, , 250, 645 , J.-C., [Lanz]{}, T., & [Hillier]{}, D. J. 2005, , 438, 301 , J.-C., [Lanz]{}, T., [Hillier]{}, D. J., [et al.]{} 2003, , 595, 1182 , J. R. & [Hillier]{}, D. J.
2000, , 531, 1071 , J. I. 1970, , 149, 111 , J. I., [Abbott]{}, D. C., & [Klein]{}, R. I. 1975, , 195, 157 , P. A., [Hillier]{}, D. J., [Evans]{}, C. J., [et al.]{} 2002, , 579, 774 , L. & [Owocki]{}, S. P. 2002, , 383, 1113 , L. & [Owocki]{}, S. P. 2003, , 406, L1 , L. & [Owocki]{}, S. P. 2005, , 437, 657 , T., [Lepine]{}, S., & [Moffat]{}, A. F. J. 1998, , 494, 799 , A. 1995, , 299, 523 , A., [Oskinova]{}, L., & [Hamann]{}, W.-R. 2003, , 403, 217 , A., [Puls]{}, J., & [Pauldrach]{}, A. W. A. 1997, , 322, 878 , A. W., [Massa]{}, D. L., & [Prinja]{}, R. K. 2006, , 637, 1025 , G., [Koesterke]{}, L., & [Hamann]{}, W.-R. 2002, , 387, 244 , W. ., [Graefener]{}, G., [Oskinova]{}, L. M., & [Feldmeier]{}, A. 2009, ArXiv e-prints , W.-R. 1981, , 93, 353 , W.-R., [Feldmeier]{}, A., & [Oskinova]{}, L. M., eds. 2008, [Clumping in hot-star winds]{} , D. J. & [Miller]{}, D. L. 1998, , 496, 407 , R. 2008, in Clumping in Hot-Star Winds, ed. W.-R. [Hamann]{}, A. [Feldmeier]{}, & L. M. [Oskinova]{}, 9–+ , J. & [Kub[á]{}t]{}, J. 2009, , 394, 2065 , R.-P. & [Puls]{}, J. 2000, , 38, 613 , H. J. G. L. M., [Cerruti-Sola]{}, M., & [Perinotto]{}, M. 1987, , 314, 726 , S. & [Moffat]{}, A. F. J. 2008, , 136, 548 , L. B. 1983, , 274, 372 , L. B. & [Solomon]{}, P. M. 1970, , 159, 879 , D. L., [Prinja]{}, R. K., & [Fullerton]{}, A. W. 2008, in Clumping in Hot-Star Winds, ed. [W.-R. Hamann, A. Feldmeier, & L. M. Oskinova]{}, 147–+ , G., [Maeder]{}, A., [Schaller]{}, G., [Schaerer]{}, D., & [Charbonnel]{}, C. 1994, , 103, 97 , D., [Kunasz]{}, P. B., & [Hummer]{}, D. G. 1975, , 202, 465 , L. M., [Feldmeier]{}, A., & [Hamann]{}, W.-R. 2004, , 422, 675 , L. M., [Feldmeier]{}, A., & [Hamann]{}, W.-R. 2006, , 372, 313 , L. M., [Hamann]{}, W.-R., & [Feldmeier]{}, A. 2007, , 476, 1331 , S. P. 2008, in Clumping in Hot-Star Winds, ed. W.-R. [Hamann]{}, A. [Feldmeier]{}, & L. M. [Oskinova]{}, 121–+ , S. P., [Castor]{}, J. I., & [Rybicki]{}, G. B. 1988, , 335, 914 , S. P., [Gayley]{}, K. 
G., & [Shaviv]{}, N. J. 2004, , 616, 525 , S. P. & [Rybicki]{}, G. B. 1984, , 284, 337 , A., [Puls]{}, J., & [Kudritzki]{}, R. P. 1986, , 164, 86 , A. W. A., [Hoffmann]{}, T. L., & [Lennon]{}, M. 2001, , 375, 161 , J., [Markova]{}, N., & [Scuderi]{}, S. 2008, in Astronomical Society of the Pacific Conference Series, Vol. 388, Mass Loss from Stars and the Evolution of Stellar Clusters, ed. A. [de Koter]{}, L. J. [Smith]{}, & L. B. F. M. [Waters]{}, 101–+ , J., [Markova]{}, N., [Scuderi]{}, S., [et al.]{} 2006, , 454, 625 , J., [Owocki]{}, S. P., & [Fullerton]{}, A. W. 1993, , 279, 457 , J., [Urbaneja]{}, M. A., [Venero]{}, R., [et al.]{} 2005, , 435, 669 , J., [Vink]{}, J. S., & [Najarro]{}, F. 2008, , 16, 209 , T., [Puls]{}, J., & [Herrero]{}, A. 2004, , 415, 349 , M. C. & [Owocki]{}, S. P. 2002, , 381, 1015 , G. B. & [Hummer]{}, D. G. 1978, , 219, 654 , N. & [Owocki]{}, S. P. 2006, , 645, L45 , J. S., [de Koter]{}, A., & [Lamers]{}, H. J. G. L. M. 2000, , 362, 295 , J., [Hillier]{}, D. J., [Bouret]{}, J.-C., [et al.]{} 2008, , 685, L149 The Monte-Carlo transfer code {#rt_code} ============================= The code -------- Here we describe our Monte-Carlo radiative transfer code (MC-2D) in some detail. For an overview of basic assumptions, see Sect. \[rt\] in the main paper. For testing purposes, versions to treat spherically symmetric winds, either in the Sobolev approximation (MCS-1D) or exactly (MC-1D), have been developed as well. #### Geometry. For wind models in which the spherical symmetry is broken, we can no longer restrict photon trajectories to rays with constant impact parameters (see below). Moreover, the observed spectrum will depend on the observer’s placement relative to the star. Fig. \[Fig:coord\] illustrates the geometry in use, a standard right-handed spherical system ($r,\Theta,\Phi$) defined relative to a Cartesian set ($X,Y,Z$) (transformations between the two may be found in any standard mathematical handbook). 
At each coordinate point we also construct a local coordinate system using the local unit vectors $(r_{\rm u},\Theta_{\rm u},\Phi_{\rm u})$, which for a photon propagating in direction $n_{\rm u}$ is related to the *radiation coordinates* $(\theta,\phi)$ (see Fig. \[Fig:coord\]) via $$\cos \theta \equiv \mu = r_{\rm u} \cdot n_{\rm u}, \label{Eq:setb}$$ $$\sin \phi \sin \theta = \Phi_{\rm u} \cdot n_{\rm u} = \frac{Z_{\rm u} \times r_{\rm u}}{|Z_{\rm u} \times r_{\rm u}|} \cdot n_{\rm u},$$ $$\cos \phi \sin \theta = \Theta_{\rm u} \cdot n_{\rm u} = [\Phi_{\rm u} \times r_{\rm u}] \cdot n_{\rm u}.$$ The radiation coordinates are defined on the intervals $\theta = 0 \dots \pi$ and $\phi = 0 \dots 2 \pi$, but due to the symmetry in $\Phi$, only the range $\phi = 0 \dots \pi$ needs to be considered (see @Busche00). Also, due to this symmetry, the direction cosines of $n_{\rm u}$ simplify to $$n_{\rm x} = \mu \sin \Theta + \sqrt{1-\mu^2} \cos \phi \cos \Theta,$$ $$n_{\rm y} = \sqrt{1-\mu^2} \sin \phi,$$ $$n_{\rm z} = \mu \cos \Theta - \sqrt{1-\mu^2} \cos \phi \sin \Theta. \label{Eq:sete}$$ Eqs. \[Eq:setb\]-\[Eq:sete\] are used to update the physical position ($r,\Theta$) of the photon and the local values of the radiation coordinates ($\theta,\phi$). By tracking the photon on a radial mesh, both the physical and radiation coordinates can be updated exactly. Interpolations are necessary only when a photon is scattered or when it crosses a $\Theta$-boundary to another wind slice. Essentially the same coordinate system is used by, e.g., @Busche00. We collect escaped photons according to their $\Theta$-angles at ‘infinity’[^8], and bin them using the same $N_{\Theta}$ bins as in the underlying wind model (see Sect. \[wind\]). For spherically symmetric wind models, we adhere to the customary $(p,z)$ spatial coordinate system with $p$ being the impact parameter and $z$ the direction toward the observer.
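The direction cosines of Eqs. \[Eq:setb\]-\[Eq:sete\] can be verified directly. The sketch below checks that the resulting $n_{\rm u}$ has unit norm and that $\mu$ is recovered as $r_{\rm u} \cdot n_{\rm u}$, with the photon position taken at $\Phi=0$ so that $r_{\rm u} = (\sin\Theta, 0, \cos\Theta)$ (the specific numerical values are arbitrary test inputs):

```python
import math

def direction_cosines(mu, phi, Theta):
    """Direction cosines n_u = (n_x, n_y, n_z) of a photon with
    radiation coordinates (theta, phi), mu = cos(theta), at polar
    angle Theta, assuming symmetry in Phi (cf. Eqs. setb-sete)."""
    s = math.sqrt(1.0 - mu * mu)  # sin(theta)
    nx = mu * math.sin(Theta) + s * math.cos(phi) * math.cos(Theta)
    ny = s * math.sin(phi)
    nz = mu * math.cos(Theta) - s * math.cos(phi) * math.sin(Theta)
    return nx, ny, nz

mu, phi, Theta = 0.3, 1.1, 0.7
nx, ny, nz = direction_cosines(mu, phi, Theta)
# n_u is a unit vector, and mu = r_u . n_u with r_u = (sin T, 0, cos T).
```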
Each time a photon is scattered and its direction determined, a new impact parameter is computed from the relation $p=r\sqrt{1-\mu^2}$, appreciating that all points on a surface of constant radius can be treated equally in this geometry. #### Releasing photons. We release photons from the lower boundary uniformly in $\phi$ and with a distribution function $\propto \mu d\mu$ in $\mu$ [e.g., @Lucy83]. The angular coordinate $\Theta$ is selected so that photons are uniformly distributed over the surface area $dA=\sin \Theta d\Theta d\Phi$. #### Absorption. The probability of photon absorption is $\propto e^{-\tau}d\tau$, hence the optical depth $\tau$ the photon travels before absorption can be selected according to $\tau = - \ln {R_{\rm 1}}$, where $R_{\rm 1}$ is a random number between 0 and 1. The position for absorption in the wind may then be determined by inverting the line optical depth integral along the photon path $$\label{Eq:tau} \tau_{\nu} = \int \chi_{\nu} ds,$$ with the frequency-dependent opacity $$\chi_{\nu} = \kappa_{\rm L}\rho \phi_{\nu},$$ with $\phi_{\nu}$ the absorption profile, $\kappa_{\rm L}$ the frequency integrated mass absorption coefficient, and $\rho$ the mass density. All dependencies on spatial location are for simplicity suppressed here and in the following. For the opacity we use the parameterization from @Hamann81 and POF, $$\label{Eq:kappa0} \kappa_{\rm L}\lambda \rho = \frac{4\pi R_{\star}\vinf^2}{\dot{M}}\kappa_{\rm 0}\rho q,$$ where $\lambda$ is the wavelength of the considered transition, $\kappa_{\rm 0}$ is a ‘line-strength’ parameter taken to be constant, $\dot{M}$ the radially and laterally averaged mass-loss rate, and $q=q(r,\Theta)$ the fraction of the considered element that resides in the investigated ionic stage. Default here is $q=1$, but effects from other ionization structures are discussed in Sect. \[outer\].
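The two sampling steps described above, the release angle and the optical depth traveled before absorption, are standard inverse-transform draws; a minimal sketch:

```python
import math
import random

def sample_release_mu(rng):
    """Draw mu from p(mu) proportional to mu*dmu on [0, 1]
    (photon release at the lower boundary): mu = sqrt(R)."""
    return math.sqrt(rng.random())

def sample_tau(rng):
    """Draw the optical depth traveled before absorption from
    p(tau) proportional to exp(-tau)*dtau: tau = -ln(R)."""
    return -math.log(1.0 - rng.random())

rng = random.Random(1)
mus = [sample_release_mu(rng) for _ in range(100000)]
taus = [sample_tau(rng) for _ in range(100000)]
# Expectation values: E[mu] = 2/3 for p(mu) = 2*mu, and E[tau] = 1.
```

Using `1.0 - rng.random()` in the logarithm guards against the (rare) draw of exactly zero.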
$\kappa_{\rm 0}$ is proportional to the product of mass-loss rate and abundance of the considered ion, and, for a smooth wind, $\kappa_{\rm 0}=1$ and $\kappa_{\rm 0}=100$ give a typical medium and strong line, respectively. The parameterization as defined in Eq. \[Eq:kappa0\] has the advantage that for smooth winds the radial optical depth in the Sobolev approximation collapses to $$\tau_{\rm Sob} = \frac{\kappa_{\rm 0}}{r^2 v {\rm d}v/{\rm d}r}\, q,$$ when $v$ and $r$ are expressed in normalized units. The corresponding expression for clumpy winds is provided in Eq. \[Eq:tau\_s\]. The absorption profile is assumed to be a Gaussian with a Doppler width $v_{\rm t}$ that contains the contributions from thermal and (if present) ‘microturbulent’ velocities. To solve Eq. \[Eq:tau\], we adopt the dimensionless frequency $x$ with the terminal velocity of a smooth outflow as the reference speed, $$x=\frac{\nu-\nu_{\rm 0}}{\nu_{\rm 0}} \frac{c}{\vinf}, \label{Eq:x}$$ and transform to the co-moving frame (hereafter CMF). $\nu_{\rm 0}$ is the rest-frame frequency of the line center and $c$ the speed of light. We now assume that between two grid points the variation of the factor $\kappa_{\rm L}\rho/|Q|$ (see below) is small and may be replaced by an average value. The optical depth $\Delta \tau_{\nu}$ between two subsequent spatial points $(r,\Theta)$ then becomes $$\label{Eq:dtau} \Delta \tau_{\nu} = |\frac{\lambda R_\star}{\vinf} \, \frac{ \kappa_{\rm L}\rho}{Q} \times \frac{-\Delta \rm erf[\it x_{\rm cmf}/v_{\rm t}]}{2}|,$$ where $\Delta \rm erf$ is the difference of the error-function between the points, $x_{\rm cmf}$ the dimensionless CMF frequency, and $v_t$ is calculated in units of $\vinf$. $Q~\equiv~n_{\rm u}~\cdot~\nabla~(n_{\rm u}~\cdot~\vec{v}) $ is the local directional derivative of the velocity in direction $n_{\rm u}$, with velocities measured in units of $\vinf$ and radii in units of $R_\star$. 
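For a smooth wind with a $\beta=1$ law, the Sobolev optical depth above takes a simple closed form. A minimal sketch in normalized units, assuming for illustration $v(r)=1-b/r$ with $v(1)=v_0=0.1$:

```python
def tau_sobolev(r, kappa0=1.0, q=1.0, v0=0.1):
    """Radial Sobolev optical depth tau = kappa0*q / (r^2 v dv/dr)
    for a smooth beta=1 wind v(r) = 1 - b/r (normalized units),
    where dv/dr = b/r^2, so that tau = kappa0*q / (b*(1 - b/r))."""
    b = 1.0 - v0
    v = 1.0 - b / r
    dvdr = b / r**2
    return kappa0 * q / (r**2 * v * dvdr)
```

Note that for large $r$ this tends to the constant $\kappa_0 q/b$, so a line with $\kappa_0=100$ remains optically thick out to the terminal velocity in a smooth wind.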
By interpolating to the border whenever a photon crosses a $\Theta$ boundary, we *locally* recover the spherically symmetric expression $$Q = \frac{\partial v}{\partial r}\mu^2+\frac{v}{r}(1-\mu^2).$$ For spherically symmetric winds, we have written a second implementation that allows for line transfer using the Sobolev approximation. With this method each resonance zone is approximated by a point, and the line only collects optical depth at atmospheric locations where the observer’s frame frequency $x_{\rm obs}$ has been Doppler shifted to coincide with the CMF frequency of the line center. The condition for interaction thus is $x_{\rm obs}=\mu v$, and the last factor in Eq. \[Eq:dtau\] collapses to unity when calculating the Sobolev optical depth. The Sobolev approach can be expected to be a reasonable approximation when the variation of the factor $\kappa_{\rm L}\rho/|Q|$ is small within the whole resonance zone contributing to the optical depth in Eq. \[Eq:dtau\], i.e., when it varies little on length scales of at least a few times the Sobolev length $L \equiv v_{\rm t}/|Q|$. Note, however, that also in the Sobolev approximation more than one resonance point may be identified in a wind with a non-monotonic velocity field.

#### Re-emission.

We assume complete redistribution and isotropic re-emission in the CMF, allowing for a multitude of scattering events within one resonance zone. When the Sobolev approximation is applied, re-emission is assumed to be coherent in the CMF, and for the angular redistribution we then use the corresponding escape probabilities [@Castor70], corrected for a treatment of negative velocity gradients (@Rybicki78; POF). In this case, there is only one effective scattering event inside the localized resonance zone. After the photon has been re-emitted at some atmospheric location, the procedure runs again and searches for another absorption. 
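For a smooth wind the Sobolev optical depth quoted earlier can be evaluated in closed form once a velocity law is chosen. A sketch assuming a $\beta$-law $v(r)=(1-1/r)^{\beta}$ in normalized units (an assumption for illustration; the papers' smooth models use $\beta=1$):

```python
def tau_sobolev(r, kappa0, q=1.0, beta=1.0):
    """Radial Sobolev optical depth of a smooth wind:

        tau_Sob = kappa0 * q / (r^2 * v * dv/dr),

    with v and r in units of v_inf and R_star, and an assumed beta-law
    velocity field v(r) = (1 - 1/r)**beta (illustrative choice).
    """
    v = (1.0 - 1.0 / r) ** beta
    dvdr = beta * (1.0 - 1.0 / r) ** (beta - 1.0) / r**2
    return kappa0 * q / (r**2 * v * dvdr)
```

For $\beta=1$ this collapses to $\tau_{\rm Sob}=\kappa_0\,q\,r/(r-1)$, so the optical depth tends to $\kappa_0 q$ far out in the wind and grows toward the base.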
Radiative transfer code tests {#rt_tests}
-----------------------------

In this subsection we describe some of the verification tests we have performed on our MC radiative transfer code. The MC-1D version was first applied to spherically symmetric winds, comparing profiles from smooth, stationary winds to profiles calculated using the well-established CMF (cf. @Mihalas75; @Hamann81) and SEI methods, and profiles from time-dependent RH winds to profiles calculated using the Sobolev method developed in POF. Thereafter we applied the MC-2D version to models in which all lateral slices had the same radial structure, comparing the results to the MC-1D version. First we calculated line profiles for smooth, 1D winds. We have verified that for low[^9] values of $v_{\rm t}$, profiles from all the methods described above agree perfectly, whereas for higher values the MC-1D and CMF give identical results but the SEI deviates significantly, especially for a medium-strong line (see Fig. \[Fig:1d\_prof\], upper panel). This is due to the hybrid nature of the SEI technique, which approximates the source function with its local Sobolev value but carries out the exact formal integral. Because of this, the method does not account for the increasing number of photons close to line center that are backscattered into the photosphere when the resonance zone grows and overlaps with the lower boundary.[^10] Consequently the re-emitted flux in this region is higher when calculated via the SEI than when calculated via the CMF or MC methods. These discrepancies between the CMF and SEI are quite well documented and discussed [e.g., @Hamann81; @Lamers87]; nevertheless, we emphasize that one should exercise caution when applying the SEI method with high microturbulence to wind resonance lines. Especially today, when increased computer power enables fast solutions with both methods, the CMF is preferable. Next we calculated line profiles for structured, 1D winds. 
Profiles computed with all three methods agreed for weak and intermediate lines. For strong lines, the agreement between MC-1D and the method from POF, which uses a Sobolev source function accounting for multiple resonance points, was satisfactory. However, minor discrepancies between Sobolev and non-Sobolev treatments occurred for the strong line even when no microturbulent velocity was applied (see Fig. \[Fig:1d\_prof\]), as opposed to the smooth case. Finally we performed a simple test of our MC-2D code by applying it to models in which all lateral slices had the same radial structure, i.e., the wind was still spherically symmetric and all observers ought to see the same spectrum. We confirmed that this was indeed the case, both for smooth and structured models (in Fig. \[Fig:1d\_prof\] the latter case is demonstrated).

The effective escape ratio {#app_eta}
==========================

We define the ratio between the velocity gap $\Delta v$ between two clumps (see Fig. \[Fig:fer\] in the main paper) and the thermal velocity $v_{\rm t}$ as $$\eta \equiv \frac{\Delta v}{v_{\rm t}}.$$ In the following, we derive an expression for $\eta$ for the wind geometry used throughout this paper. If $\Delta v_{\rm tot} = \Delta v + |\delta v|$ is the velocity difference between two clump *centers*, we may write (omitting the absolute value signs here and in the following) $$\Delta v = \Delta v_{\rm tot} - \delta v = \frac{\Delta v_{\rm tot} } {\Delta v_{\rm tot,\beta}} \Delta v_{\rm tot,\beta} -\frac{\delta v }{\delta v_{\beta}} \delta v_{\beta},$$ where we have normalized the arbitrary velocity intervals to the corresponding $\beta$ intervals. $\beta$ suffixes are used to denote parameters of a smooth velocity law. 
For notational simplicity we write $$\xi_1 = \frac{\Delta v_{\rm tot} }{\Delta v_{\rm tot,\beta}}, \qquad \xi_2 = \frac{\delta v }{\delta v_{\beta}}.$$ Assuming radial photons, $\Delta v$ may be approximated by $$\Delta v \approx \frac{\partial v_{\beta}}{\partial r} \Delta r_{\rm tot,\beta} \left(\xi_{\rm 1} - \xi_{\rm 2} \frac{\delta r_{\beta}}{\Delta r_{\rm tot,\beta}} \right), \label{Eq:Dv}$$ with the notation for $r$ following that for $v$. The volume filling factor for the geometry in use is $$f_{\rm v} \equiv \frac{V_{\rm cl}}{V_{\rm tot}} \approx \frac{r_{\rm 1}^2 \delta r}{r_{\rm 2}^2 \Delta r_{\rm tot}} \label{Eq:fv}$$ with $V_{\rm cl}$ the volume of the clump, $V_{\rm tot}$ the total volume, and $r_{\rm 1}~\approx~r_{\rm 2}$ the radial points associated with the beginning of the clump and of the ICM. Using Eq. \[Eq:fv\] and $\Delta r_{\rm tot} = v_{\beta} \delta t$ (see Sect. \[wind\_stoch\]), we obtain $$\Delta v \approx \frac{\partial v_{\beta}}{\partial r} v_{\beta} \delta t ( \xi_1 - \xi_2 f_{\rm v} ),$$ and for $\eta$, using the radial Sobolev length of a smooth flow $L_{\rm r}~=~v_{\rm t}/(\partial v_{\beta}/\partial r)$, $$\eta \approx \frac{v_{\beta} \delta t ( \xi_1 - \xi_2 f_{\rm v} )}{L_{\rm r}}.$$ In our models $\xi_1$ is not given explicitly, but it is on the order of unity, because we distribute clumps according to the underlying smooth $\beta =1$ velocity law. Thus we approximate $$\eta \approx \frac{v_{\beta} \delta t ( 1 - \xi_2 f_{\rm v} )}{L_{\rm r}}. \label{Eq:fesc}$$ We note that the porosity length $h$ as defined by @Owocki04 is $h = l/f_{\rm v}$, where $l$ is the length associated with the clump. For the geometry used here this becomes $h \approx \delta r/f_{\rm v} \approx v_{\beta} \delta t$. Hence, using $\xi_2=1$ for a smooth velocity field, $\eta$ represents the porosity length corrected for the finite size of the clump and divided by the radial Sobolev length. 
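Eq. \[Eq:fesc\] is a one-line evaluation once the smooth-wind quantities are known. A sketch (our own function name; all inputs in the normalized units used above, with $\xi_2=1$ as the smooth-field default):

```python
def effective_escape_ratio(v_beta, dvdr_beta, delta_t, f_v, v_t, xi2=1.0):
    """Effective escape ratio eta = Delta v / v_t, Eq. (fesc):

        eta ~ v_beta * delta_t * (1 - xi2 * f_v) / L_r,

    with the radial Sobolev length L_r = v_t / (dv_beta/dr).
    xi2 = 1 corresponds to a smooth velocity field inside the clumps,
    in which case v_beta * delta_t is the porosity length h.
    """
    L_r = v_t / dvdr_beta
    return v_beta * delta_t * (1.0 - xi2 * f_v) / L_r
```

Note the limiting behavior: as $f_{\rm v}\to 1$ (clumps filling the whole volume) the velocity gap, and hence $\eta$, vanishes.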
[^1]: We here notice that $f_{\rm v}$ is normalized to the *total* volume, i.e., $f_{\rm v} = 0 \dots 1$. In some literature $f_{\rm v}$ is identified with the straight volume ratio $V_{\rm cl}/V_{\rm ic}$, which implicitly assumes that $V_{\rm cl} \ll V_{\rm ic}$.

[^2]: with $\kappa_0$ proportional to the product of mass-loss rate and abundance of the considered ion, see Appendix \[rt\_code\].

[^3]: Recall that $f_{\rm v}=0.25 \rightarrow f_{\rm cl} \approx 4$, which would imply $\dot{M}=\dot{M}_{\rm smooth}/2$ if $f_{\rm cl}$ were derived from $\rho^2$-diagnostics assuming optically thin clumps.

[^4]: The *indirect* effect through the feedback on the occupation numbers is not included, because in this section we assume constant ionization.

[^5]: The effect is minor in POF, since these RH models only extend to $r \sim 5$ (see Sect. \[wind\_rh\]).

[^6]: Actually, the velocity gradient may further steepen during advection, due to faster gas trying to overtake slower gas ahead of it; however, this effect is balanced by pressure forces in the subsonic postshock domain.

[^7]: This stratification has been found to be prototypical for O-supergiants and was, together with its well-developed PV P Cygni profiles, the major reason for choosing $\lambda$ Cep as comparison object instead of, e.g., $\zeta$ Pup, which displays a somewhat unusual run of $f_{\rm cl}$.

[^8]: The full 3D problem would require binning in $\Phi$ as well, which in turn would require a large increase in the number of simulated photons.

[^9]: For a typical terminal velocity $\vinf=2000 \ \rm km \ s^{-1}$, $v_{\rm t}=0.005$ corresponds to $10 \ \rm km \ s^{-1}$ and $v_{\rm t}=0.2$ to $400\ \rm km \ s^{-1}$.

[^10]: Remember that neither the SEI nor the CMF, as formulated here, includes a transition to the photosphere; both treat the lower boundary as sharp with a minimum velocity $v_{\rm min}$.
--- author: - | Nilesh Tripuraneni[^1] Mitchell Stern$^\ast$ Chi Jin Jeffrey Regier Michael I. Jordan\ `{nilesh_tripuraneni,mitchell,chijin,regier}@berkeley.edu`\ `jordan@cs.berkeley.edu`\ \ University of California, Berkeley bibliography: - 'SHV.bib' title: Stochastic Cubic Regularization for Fast Nonconvex Optimization --- [^1]: Equal contribution.
--- abstract: 'We show how two distrustful parties, “Bob” and “Charlie”, can share a secret key with the help of a mutually trusted “Alice”, counterfactually—that is, with no information-carrying particles travelling between any of the three parties.' author: - Hatim Salih title: Tripartite Counterfactual Quantum Cryptography --- In a recent paper [@Shenoy], a quantum cryptography protocol was proposed in which a trusted Alice allows Bob, a bank for example, and Charlie, a client unsure of Bob’s identity, to share a secret key that not even Alice has access to. The protocol’s aim of extending the original N09 counterfactual quantum key distribution (QKD) protocol [@Noh] to three parties is theoretically and practically interesting. But even though no photons travel all the way between Bob and Charlie, making the protocol counterfactual in one sense, an eavesdropper, Eve, still has full access to Alice’s information-carrying photons, making the protocol not counterfactual in another crucial sense. This does not in itself make the protocol insecure, but the powerful promise of security [@Noh] based on the total absence of information-carrying photons from the transmission channels is lost. Here, we show, using a two-cycle chained quantum Zeno effect (CQZE) [@Salih], how Alice can enable Bob and Charlie to share a secret key with no information-carrying photons traveling between any of the three parties—achieving complete counterfactuality. Security arguments [@Noh] and proofs [@Yin] based on complete counterfactuality should thus hold. The overall action of the two-cycle CQZE, whose inner working is explained in the caption of FIG. \[fig: TriOne\], on Alice’s horizontally ($H$) polarised photon is the following: $\left| \text{H} \right\rangle \to \left| \text{H} \right\rangle$ when the channel is not blocked, and $\left| \text{H} \right\rangle \to \left| \text{V} \right\rangle$ when the channel is blocked. 
Crucially, in both cases the photon does not travel through the channel. We know this because a photon going into the channel would either trigger detector $D_4$, for the case of Bob(Charlie) blocking, or else trigger detector $D_3$, for the case of Bob(Charlie) not blocking. Note that with the smallest possible number of cycles used (two inner and two outer cycles) the probability of the photon not being lost to detection by $D_3$ or $D_4$ is $\approx 1/5$ [@Probability]. (This can be made arbitrarily close to one by increasing the number of cycles, but at the cost of practicality.)

*Protocol for tripartite counterfactual quantum cryptography*—Alice starts by sending a $H$ photon from the left towards beamsplitter $BS$ of the Michelson interferometer of FIG. \[fig: Protocol\], which applies a $\pi/2$ rotation to the path qubit, putting the photon in an equal superposition of being on path B (leading to Bob) and path C (leading to Charlie). Bob(Charlie) encodes a “0”(“1”) by not blocking his channel and encodes a “1”(“0”) by blocking it. If they encode different bit values, the two parts of the photon superposition reflected back towards Alice’s $BS$ (from top and from right) will be, by the action of the two CQZEs, identically polarised. Constructive interference therefore takes place, resulting in Alice’s $D_2$ clicking with certainty (provided the photon was not lost to $D_3$ or $D_4$). If, however, Bob and Charlie encode the same bit value, the two parts of the photon superposition reflected back towards Alice’s $BS$ will be oppositely polarised. Interference does not take place because the differing polarisation acts as a which-path “tag”; $D_1$ and $D_2$ are therefore equally likely to click. Since $D_1$ clicking corresponds uniquely to Bob and Charlie randomly agreeing on their bit value, whenever $D_1$ clicks Alice publicly instructs Bob and Charlie to keep the corresponding bits as their sifted key; the rest are discarded. 
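The detector statistics described above can be checked with a toy classical simulation of the sifting logic (a sketch that only tracks the polarisation bookkeeping; it ignores losses to $D_3$/$D_4$ and is not a quantum simulation):

```python
import random

def run_protocol(n_photons, rng):
    """Toy simulation of the sifting statistics.

    Bob encodes 0 by not blocking and 1 by blocking; Charlie encodes 1 by
    not blocking and 0 by blocking.  A blocked channel returns V, an
    unblocked one returns H.  Identical polarisations interfere (D2 clicks
    with certainty); opposite polarisations carry a which-path tag, so D1
    and D2 click with equal probability.  Only D1 events are kept.
    """
    key_bob, key_charlie = [], []
    for _ in range(n_photons):
        bit_b, bit_c = rng.randint(0, 1), rng.randint(0, 1)
        pol_b = 'V' if bit_b == 1 else 'H'       # Bob blocks on "1"
        pol_c = 'V' if bit_c == 0 else 'H'       # Charlie blocks on "0"
        if pol_b == pol_c:
            detector = 'D2'                      # constructive interference
        else:
            detector = rng.choice(['D1', 'D2'])  # which-path tag: 50/50
        if detector == 'D1':
            key_bob.append(bit_b)
            key_charlie.append(bit_c)
    return key_bob, key_charlie
```

Running this shows that a $D_1$ click occurs only when the two bits agree (so the sifted keys are identical), and that roughly a quarter of the photons survive sifting: bits agree with probability $1/2$, and $D_1$ then clicks with probability $1/2$.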
Throughout, no information-carrying photons have traversed either channel. In summary, using a two-cycle chained quantum Zeno effect, we have shown how to achieve completely counterfactual QKD between two distrustful parties assisted by a trusted third party—with no information-carrying particles travelling between any of them. This work is partially supported by Qubet Research, a start-up in quantum information. Akshata Shenoy H., R. Srikanth, and T. Srinivas, arXiv:1402.2250 (2014). T.-G. Noh, Phys. Rev. Lett. [**103**]{}, 230501 (2009). H. Salih, Z.H. Li, M. Al-Amri, and M.S. Zubairy, Phys. Rev. Lett. [**110**]{}, 170502 (2013). Z.-Q. Yin, H.-W. Li, W. Chen, Z.-F. Han, and G.-C. Guo, Phys. Rev. A [**82**]{}, 042335 (2010). The probability that the photon avoids detection by $D_3$ for the case of Bob not blocking is given by ${\cos}^{2M}{\theta}_{M}$, where $M$ is the number of outer cycles. For $M=2$ we get a probability of $1/4$. On the other hand, the probability that the photon avoids detection by $D_4$ for the case of Bob blocking is given by $\prod_{m=1}^{M} (1-{\sin}^{2}{m\theta}_{M}{\sin}^{2}{\theta}_{N})^N$, where $M(N)$ is the number of outer(inner) cycles. For $M=2$ and $N=2$ we get a probability of $9/64$. The overall probability of the photon making it back to Alice is therefore $25/128$. ![\[fig: Protocol\]Protocol for tripartite counterfactual quantum cryptography. Bob(Charlie) randomly encodes a “0”(“1”) by not blocking his channel and a “1”(“0”) by blocking it. Bob(Charlie) can block his channel by switching Pockels cell $PC_{B(C)}$ on, which flips polarisation, directing the photon towards $D_4$. Initially, Alice sends a $H$ photon from her photon source $S$ towards beamsplitter $BS$, which puts the photon in an equal superposition of being on path B (leading to Bob) and path C (leading to Charlie). 
If Bob and Charlie encode different bit values, the two parts of the photon superposition reflected back towards Alice’s $BS$ (from top and from right) will be, by the action of the two CQZEs, identically polarised. Constructive interference therefore takes place resulting in Alice’s $D_2$ clicking with certainty (provided the photon was not lost to $D_3$ or $D_4$). If, however, Bob and Charlie encode the same bit value, the two parts of the photon superposition reflected back towards Alice’s $BS$ will be oppositely polarised. Interference does not take place. $D_1$ and $D_2$ are therefore equally likely to click. A click at $D_1$ uniquely corresponds to Bob and Charlie randomly agreeing in their bit choices. (Here, $OC$ stands for optical circulator, which directs a photon exiting left towards $D_1$.)](Protocol){width="50.00000%"} ![\[fig: TriOne\]The chained quantum Zeno effect (CQZE). Bob(Charlie) can block the channel by switching Pockels cell $PC$ on, directing the photon towards detector $D_4$. Initially, switchable mirror $SM_1$ is switched off allowing Alice’s $H$ photon in before being switched on again. Switchable polarisation rotator $SPR_1$ then applies the following rotation to the photon, $\left| \text{H} \right\rangle \to 1/\sqrt2(\left| \text{H} \right\rangle + \left| \text{V} \right\rangle)$, before being switched off for the rest of this outer cycle. Polarising beamsplitter $PBS_1$ reflects the $V$ part of the superposition towards Bob(Charlie). (Optical delays $OD$ ensure that the effective path lengths correctly match.) Switchable mirror $SM_2$ is then switched off to allow the $V$ part of the superposition into the inner interferometer before being switched on again. Switchable polarisation rotator $SPR_2$ then applies the following rotation, $\left| \text{V} \right\rangle \to 1/\sqrt2(\left| \text{V} \right\rangle - \left| \text{H} \right\rangle)$, before being switched off for the rest of this inner cycle. 
Polarising beamsplitter $PBS_2$ reflects the $V$ part of the superposition while passing the $H$ part towards Bob(Charlie). There are now two scenarios: (i) If Bob(Charlie) blocks the channel, effectively making a measurement, the part of the photon superposition inside the inner interferometer ends up in the state $\left| \text{V} \right\rangle$, unless the photon is lost to $D_4$. The same applies to the next inner cycle. Switchable mirror $SM_2$ is then switched off to allow this part of the superposition, whose state has remained $\left| \text{V} \right\rangle$, out. In the next outer cycle, $SPR_1$ rotates the photon’s polarisation from $1/\sqrt2(\left| \text{H} \right\rangle + \left| \text{V} \right\rangle)$ all the way to $\left| \text{V} \right\rangle$ before being switched off for the rest of this outer cycle. $PBS_1$ reflects the photon towards Bob(Charlie). As before, after two inner cycles, the photon remains in the state $\left| \text{V} \right\rangle$ unless it is lost to $D_4$. $SM_1$ is then switched off to allow the photon, whose final state is now $\left| \text{V} \right\rangle$, out. (ii) If instead Bob(Charlie) does not block the channel, the part of the photon superposition in the inner interferometer, namely $1/\sqrt2(\left| \text{V} \right\rangle - \left| \text{H} \right\rangle)$, will be rotated all the way to the state $-\left| \text{H} \right\rangle$ after two inner cycles. Switchable mirror $SM_2$ is then switched off to allow this part of the superposition out. Measurement by $D_3$ leaves the photon in the overall state $\left| \text{H} \right\rangle$ moving towards $SM_1$, unless it is lost to $D_3$. The same applies to the next outer cycle (and two inner cycles). $SM_1$ is then switched off to allow the photon, whose final state is $\left| \text{H} \right\rangle$, out. 
Counterfactuality is ensured as any photon going into the channel would either trigger $D_3$ for the case of Bob(Charlie) not blocking, or else trigger $D_4$ for the case of Bob(Charlie) blocking.](TriOne){width="50.00000%"}
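The survival probabilities quoted in the endnote can be reproduced directly from the two formulas given there, taking $\theta_M=\pi/2M$ and $\theta_N=\pi/2N$ as implied by the rotations in the captions (a sketch, not the authors' code):

```python
import math

def p_not_blocked(M):
    """Probability the photon avoids D3 when the channel is not blocked:
    cos(theta_M)**(2M), with theta_M = pi / (2M)."""
    theta_M = math.pi / (2 * M)
    return math.cos(theta_M) ** (2 * M)

def p_blocked(M, N):
    """Probability the photon avoids D4 when the channel is blocked:
    prod_{m=1}^{M} (1 - sin^2(m*theta_M) * sin^2(theta_N))**N."""
    theta_M = math.pi / (2 * M)
    theta_N = math.pi / (2 * N)
    prod = 1.0
    for m in range(1, M + 1):
        prod *= (1 - math.sin(m * theta_M) ** 2 * math.sin(theta_N) ** 2) ** N
    return prod
```

For $M=N=2$ this gives $1/4$ and $9/64$, and averaging the two channel settings reproduces the overall return probability $25/128\approx 1/5$; increasing $M$ alone drives the not-blocked survival probability toward one, as stated in the text.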
--- abstract: | We study topological structures of the sets $(0,1/2)^3 \cap \Omega$ and $(0,1/2)^3 \setminus \Omega$, where $\Omega$ is a special algebraic surface defined by a symmetric polynomial of degree $12$ in the variables $a_1,a_2,a_3$. These problems arise in the study of general properties of degenerate singular points of dynamical systems obtained from the normalized Ricci flow on generalized Wallach spaces. Our main goal is to prove the connectedness of $(0,1/2)^3 \cap \Omega$ and to determine the number of connected components of $(0,1/2)^3 \setminus \Omega$. Key words and phrases: Riemannian metric, generalized Wallach space, normalized Ricci flow, dynamical system, degenerate singular point of dynamical system, real algebraic surface, singular point of real algebraic surface. [*2010 Mathematics Subject Classification:*]{} 53C30, 53C44, 37C10, 34C05, 14P05, 14Q10. address: 'N.A. Abiev, Taraz State University named after M.Kh. Dulaty, Taraz, Tole bi str., 60, 080000, KAZAKHSTAN' author: - 'N.A. Abiev' title: On topological structure of some sets related to the normalized Ricci flow on generalized Wallach spaces ---

Introduction and the main result {#introduction-and-the-main-result .unnumbered}
================================

It is known that determining the connectedness (or the number of connected components) of real algebraic surfaces is a very hard classical problem in algebraic geometry (see e.g. [@Basu], [@Silhol]). In this paper we deal with similar problems related to the normalized Ricci flow on generalized Wallach spaces. The importance of these problems is due to the need to develop a special apparatus, initiated in [@AANS]–[@AANS3], for studying general properties of degenerate singular points of Ricci flows. 
More concretely, in the above papers the authors considered some problems concerning the topological structure of the sets $(0,1/2)^3 \cap \Omega$ and $(0,1/2)^3 \setminus \Omega$, where $$\label{surf_Omega} \Omega =\{(a_1,a_2,a_3)\in\mathbb{R}^3 \, | \, Q(a_1,a_2,a_3)=0\}$$ is an algebraic surface (see Fig. \[singsur\] and \[singsur\_new\]) in $\mathbb{R}^3$ defined by a symmetric polynomial $Q(a_1,a_2,a_3)$ in $a_1,a_2,a_3$ of degree $12$: $$\begin{aligned} \label{singval2}\notag Q(a_1,a_2,a_3)\,=\, (2s_1+4s_3-1)(64s_1^5-64s_1^4+8s_1^3+12s_1^2-6s_1+1\\\notag +240s_3s_1^2-240s_3s_1-1536s_3^2s_1-4096s_3^3+60s_3+768s_3^2)\\ -8s_1(2s_1+4s_3-1)(2s_1-32s_3-1)(10s_1+32s_3-5)s_2\\\notag -16s_1^2(13-52s_1+640s_3s_1+1024s_3^2-320s_3+52s_1^2)s_2^2\\\notag +64(2s_1-1)(2s_1-32s_3-1)s_2^3+2048s_1(2s_1-1)s_2^4,\end{aligned}$$ $$s_1 = a_1+a_2+a_3, \quad s_2 = a_1a_2+a_1a_3+a_2a_3, \quad s_3 = a_1a_2a_3.$$ The surface $\Omega$ naturally arises in the study of general properties of degenerate singular points of the following dynamical system (see [@AANS]–[@AANS3]): $$\label{three_equat} \dfrac {dx_1}{dt} = f(x_1,x_2,x_3), \quad \dfrac {dx_2}{dt}=g(x_1,x_2,x_3), \quad \dfrac {dx_3}{dt}=h(x_1,x_2,x_3),$$ where $x_i=x_i(t)>0$ $(i=1,2,3)$, $$\begin{aligned} f(x_1,x_2,x_3)&=&-1-a_1x_1 \left( \dfrac {x_1}{x_2x_3}- \dfrac {x_2}{x_1x_3}- \dfrac {x_3}{x_1x_2} \right)+x_1B,\\ g(x_1,x_2,x_3)&=&-1-a_2x_2 \left( \dfrac {x_2}{x_1x_3}- \dfrac {x_3}{x_1x_2} - \dfrac {x_1}{x_2x_3} \right)+x_2B,\\ h(x_1,x_2,x_3)&=&-1-a_3x_3 \left( \dfrac {x_3}{x_1x_2}- \dfrac {x_1}{x_2x_3}- \dfrac {x_2}{x_1x_3} \right)+x_3B,\end{aligned}$$ $$B:=\left( \dfrac {1}{a_1x_1}+\dfrac {1}{a_2x_2}+\dfrac {1}{a_3x_3}- \left( \dfrac {x_1}{x_2x_3}+ \dfrac {x_2}{x_1x_3}+ \dfrac {x_3}{x_1x_2} \right) \right) \left( \frac{1}{a_1} +\frac{1}{a_2}+ \frac{1}{a_3} \right)^{-1}.$$ $$a_i \in (0,1/2] \quad (i=1,2,3).$$ ![The surface $(0,1/2)^3 \cap \Omega$[]{data-label="singsur_new"}](singsurf_new.eps "fig:") It 
should be noted that the system (\[three\_equat\]) can be obtained from the normalized Ricci flow equation $$\dfrac {\partial}{\partial t} \bold{g}(t) = -2 \operatorname{Ric}_{\bold{g}}+ 2{\bold{g}(t)}\frac{S_{\bold{g}}}{n},$$ where $\bold{g}(t)$ denotes a $1$-parameter family of Riemannian metrics, $\operatorname{Ric}_{\bold{g}}$ is the Ricci tensor and $S_{\bold{g}}$ is the scalar curvature of the Riemannian metric ${\bold{g}}$, considered on a special class of compact homogeneous spaces called three-locally-symmetric or generalized Wallach spaces; see [@Lomshakov1], [@Nikonorov1]. In the recent papers [@CKL] and [@Nikonorov4], the complete classification of these spaces was obtained. More detailed information concerning geometric aspects of this problem and Ricci flows can be found in [@Lomshakov1], [@Nikonorov2], [@ChowKnopf] and [@Topping]. In [@AANS], the authors noted that [*the set $(0,1/2)^3 \cap \Omega$ is connected, and the set $(0,1/2)^3\setminus \Omega$ consists of three connected components $O_1$, $O_2$ and $O_3$ (see Fig. \[singsur\]) containing the points $(1/6,1/6,1/6)$, $(7/15,7/15,7/15)$ and $(1/6, 1/4, 1/3)$ respectively.*]{} The present work is devoted to a detailed proof of this observation. The main result is the following \[main\_thm\] The following assertions hold with respect to the standard topology of $\mathbb{R}^3$: 1. The set $(0,1/2)^3 \cap \Omega$ is connected; 2. The set $(0,1/2)^3\setminus \Omega$ consists of three connected components. We note also the following \[main\_cor\] The assertions of Theorem \[main\_thm\] are preserved if $(0,1/2)^3$ is replaced by $(0,1/2]^3$. \[symm\_Omega\] The symmetry of $Q$ with respect to $a_1,a_2,a_3$ implies the invariance of $\Omega$ under the permutation $a_1\rightarrow a_2 \rightarrow a_3\rightarrow a_1$. 
\[Idea\] The proof of Theorem \[main\_thm\] is based on the idea of Remark 8 in [@AANS2]: one should consider a segment $I$ with one endpoint at $(0,0,0)$ and the second endpoint at an arbitrary point of any facet of the cube $(0,1/2)^3$ containing $(1/2,1/2,1/2)$. According to Remark \[symm\_Omega\], we can assume without loss of generality that $I$ is defined by the following parametric equations $$\label{parametrization} a_1:=at, \quad a_2:=bt, \quad a_3:=t/2,$$ where $t\in [0,1]$, $a,b\in (0,1/2)$. Substituting (\[parametrization\]) into (\[singval2\]) we obtain a polynomial $p(t):=Q(at,bt,t/2)$ in $t$ of degree $12$. Thus the problems under consideration can be reduced to the problem of determining the possible number of roots of $p(t)$ in $[0,1]$ when $(a,b)\in (0,1/2)^2$.

Proof of the main result
========================

Using Maple we obtain the following explicit expression for $p(t)$: $$\begin{aligned} \notag \label{polynom12} p(t)=-256b^2a^2(2a+1)^2(2b+1)^2(b+a)^2t^{12}+32(16b^3a^3\\ \notag +4b^3a^2+2b^3a+2b^3+8b^2a^2+b^2a+4b^2a^3+2ba^3+ba^2+2a^3)\\ \notag (2a+1)(2b+1)(2b+1+2a)(b+a)t^{10}\\ \notag -32(2a+1)(2b+1)(b+a)(16b^3a^3+4b^3a^2+2b^3a+2b^3+8b^2a^2\\ \notag +b^2a+4b^2a^3+2ba^3+ba^2+2a^3)t^9\\ \notag -(72b^2a^2+104ba^3+208b^3a^2+104b^3a+208b^2a^3+52b^4+176a^4b+208b^4a^2\\ \notag +176b^4a+52ba^2+52b^2a+208a^4b^2+52a^4+352b^3a^3+13b^2\\ \notag +13a^2+44a^3+44b^3+22ba)(2b+1+2a)^2t^8\\ \notag +2(2b+1+2a)(72b^2a^2+104ba^3+208b^3a^2+104b^3a+208b^2a^3+52b^4\\ \notag +176a^4b+208b^4a^2+176b^4a+52ba^2+52b^2a+208a^4b^2+52a^4\\ +352b^3a^3+13b^2+13a^2+44a^3+44b^3+22ba)t^7\\ \notag +(600b^2a^2+392ba^3+784b^3a^2+392b^3a+784b^2a^3+108b^4+14b+14a\\ \notag +128a^6+448ba^5+224a^5+528a^4b+432b^4a^2+528b^4a+196ba^2\\ \notag +196b^2a+432a^4b^2+108a^4+288b^3a^3+224b^5+448b^5a+128b^6\\ \notag +2+27b^2+27a^2+36a^3+36b^3+66ba)t^6\\ \notag -6(8b^3+4b^2a+2b^2+8ba+b+4ba^2+2a^2+8a^3+1+a)(2b+1+2a)^2t^5\\ \notag +(2b+1+2a)(40b^3+24ba+5+40a^3)t^4\\ \notag 
+(22b+22a+88ba^2+88b^2a+2+44b^2+44a^2+16a^3+16b^3+80ba)t^3\\ \notag -6(2b+1+2a)^2t^2+(8a+8b+4)t-1.\end{aligned}$$ Consider the following set $$K:=\left\{(a,b)\in \mathbb{R}^2~|~a,b\in (0,1/2)\right\}.$$ \[Discrim\_of\_p(t)\] If $(a,b)\in K$ then the discriminant $D$ of the polynomial $p(t)$ equals zero if and only if $a=b$. Easy calculations show that $D$ is non-negative; moreover, $D$ has the same zeroes as the following polynomial: $$\label{resultant} (2b-1)^{12}(2a-1)^{12}(a-b)^{12}\big(F(a,b)\big)^2,$$ where $$\label{F(a,b)} F(a,b):=40a^3-24a^2b-24ab^2+40b^3-12a^2+12ba-12b^2-6a-6b+5.$$ Denote by $\gamma$ the curve determined by $F(a,b)=0$ (see Fig. \[F\]). We will prove that $\gamma$ has no common point with the square $K$. Changing the variables by the formula $$x-y=a\sqrt{2}, \quad x+y=b\sqrt{2}\,,$$ we get a new equation for $\gamma$, from which we can express $y$ explicitly: $$\label{new_equation} \widetilde{F}(x,y):=36\left(8x-\sqrt{2}\right)y^2+\left(8x+5\sqrt{2}\right)\left(2x-\sqrt{2}\right)^2=0.$$ Note that the point $(x',y')=\left(\sqrt 2/2,0\right)$ belongs to $\gamma$; moreover, it is the unique singular point of $\gamma$. Since $$\widetilde{F}_{xx}\widetilde{F}_{yy}-\widetilde{F}_{xy}^2=3888>0$$ at $(x',y')$, the point $(x',y')$ is isolated, according to a well-known result in the differential geometry of planar curves. It is clear that the point $(a,b)=(1/2,1/2) \notin K$ corresponds to $(x',y')$ in the initial variables. It is obvious that every regular point of $\gamma$ satisfies the condition $x<x_0:=\sqrt{2}/8$. Hence we only need to show that $\gamma$ cannot intersect the part of $K$ described by the conditions $x \in\left (0,x_0\right)$, $-x<y<x$. In fact, it suffices to prove the inequality $x<\varphi (x)$, where $$\varphi (x):= \frac{\sqrt2-2x}{6}\sqrt{\frac{8x+5\sqrt{2}}{\sqrt{2}-8x}}$$ is a function determining a part of the curve $\gamma$ in (\[new\_equation\]). Note that $\lim\limits_{x\rightarrow x_0-0}\varphi(x)=+\infty$. 
It is easy to show that the inequality $x<\varphi (x)$ is equivalent to the inequality $$\psi(x):=320x^3-48\sqrt 2\, x^2-24x+10\sqrt 2>0,$$ which holds for all $x \in\left (0,x_0\right)$, since $\psi(x)$ is positive at $x=x_0$ and decreasing: $$\psi\left(x_0\right)=27\sqrt2/4>0, \qquad \psi'(x)=960x^2-96\sqrt 2x-24<0.$$ Therefore, $F(a,b)\ne 0$ for $(a,b)\in K$. Hence, by (\[resultant\]), $a=b$ is the only possibility for $D=0$ in $K$. \[multiple\_isnot\_extremum\] Let $(a,b)\in K$. Then a point of local extremum of $p(t)$ cannot be a multiple root of $p(t)$. Multiple roots of $p(t)$ are possible only for $a=b$ by Lemma \[Discrim\_of\_p(t)\]. Therefore, we may assume that $b=a$. Then (\[polynom12\]) takes the following form $$p(t)=-(t+1)\,p_2(t)\,p_3^3(t),$$ $$\begin{aligned} p_2(t) &:=& (2+4a)t^2-2(1+2a)t+1, \\ p_3(t) &:=& 8a^2(2a+1)t^3-(1+4a)t+1.\end{aligned}$$ Denote by $D_2$ and $D_3$ the discriminants of $p_2(t)$ and $p_3(t)$ respectively: $$\begin{aligned} D_2&:=&4(2a+1)(2a-1),\\ D_3&:=&-32(2a+1)(2a-1)(22a^2+14a+1)a^2.\end{aligned}$$ Since $D_2<0$ and $D_3>0$ for $a\in (0,1/2)$, it is clear that the polynomial $p(t)$ has exactly three distinct real roots (each of multiplicity $3$) for every such $a$. It follows that there are no points of local extremum of $p(t)$ among the roots of $p(t)$. In what follows we need the curve $\Gamma$ (see Fig. \[pictur1\]), which can be obtained as the intersection of $\Omega$ with the plane $a_3=1/2$ for $0<a_1,a_2\le 1/2$. 
Recall some properties of $\Gamma$ (see details in [@AANS2]): $\Gamma$ is determined by the equality $G(a_1,a_2)=0$, where $$\begin{gathered} \label{G_ab} G(a_1,a_2):=4(a_1+a_2)(4a_1a_2-1)(4a_1a_2-a_1-a_2+1)(4a_1a_2+a_1+a_2+1)\\ +(16a_1^2a_2^2+1)(13a_1^2+22a_1a_2+13a_2^2)-4(a_1^2+a_2^2)(11a_1^2+18a_1a_2+11a_2^2),\end{gathered}$$ $\Gamma$ is homeomorphic to the segment $[0,1]$ with the endpoints $(\sqrt 2/4, 1/2)$, $(1/2, \sqrt 2/4)$ and with a unique singular point (a cusp) at $(a_1,a_2)=(\tilde a, \tilde a)$, where $\tilde a:=(\sqrt5 -1)/4\approx 0.3090169942$. It is easy to check that $\Gamma$ separates $K$ into disjoint connected components $K_1$ and $K_2$ containing the points $$(a',b'):=(3/10,3/10)\quad \mbox{ and }\quad (a'',b''):=(31/100,31/100)$$ respectively. \[number\_ofRoots\_of\_p(t)\] In the segment $[0,1]$, the polynomial $p(t)$ has 1. one root, if $(a,b)\in K_1$; 2. two distinct roots, if $(a,b)\in K_2 \cup \Gamma$. Let $t^\ast\in [0,1]$ be a root of $p(t)$ given by (\[polynom12\]). We say that $t^\ast$ is a robust root of $p(t)$ in $[0,1]$ if small perturbations of the parameters $a$ and $b$ imply a small perturbation of $t^\ast$, keeping it in $(t^\ast-\varepsilon, t^\ast+\varepsilon)\subset[0,1]$ for some small $\varepsilon>0$ (see e.g. [@Bruce] for more details on singularities of curves and some related problems). Now, assume that $t^\ast$ is a non-robust root of $p(t)$. Then there are exactly two possibilities (recall that $t^\ast\in [0,1]$): [*Case 1.*]{} $t^\ast=0$ or $t^\ast=1$; [*Case 2.*]{} $t^\ast$ belongs to the interval $(0,1)$ and provides $p(t)$ a local extremum. We consider these cases separately. [*Case 2.*]{} Assume that $t^\ast$ is a point of local extremum of $p(t)$. Then $t^\ast$ is a multiple root of $p(t)$. This contradicts Lemma \[multiple\_isnot\_extremum\]; hence Case 2 is impossible. [*Case 1.*]{} Since $p(0)=-1$, there exists no pair $(a,b)$ such that $t=0$ is a root of $p(t)$. 
Suppose that $t=1$ is a root of $p(t)$. Since $$p(1)=-4(a+b)^2G(a,b),$$ where $G$ is given by (\[G\_ab\]), the equality $p(1)=0$ is possible if and only if $G(a,b)=0$. Recall that the curve $\Gamma$ is determined by $G(a,b)=0$. Since $p(t)$ has only robust roots for every pair $(a,b)\in K_1\cup K_2$ by our construction, the number of roots of $p(t)$ in $[0,1]$ is constant both in $K_1$ and in $K_2$. Hence, it is sufficient to calculate the number of such roots only for the representative points $(a',b')\in K_1$ and $(a'',b'')\in K_2$. $(1)$ Suppose that $(a,b)=(a',b')\in K_1$. Then (\[polynom12\]) takes the following form $$p(t)= -\frac{1}{9765625}(t+1)(16t^2-16t+5)(144t^3-275t+125)^3.$$ Taking into account Lemma \[multiple\_isnot\_extremum\], we conclude that $p(t)$ has three distinct real roots of multiplicity $3$ besides the root $t=-1$. We do not need the exact values of these roots; their approximate values are: $$-1.569348118, \quad 0.5345099430, \quad 1.034838175.$$ $(2)$ Now, suppose that $(a,b)=(a'', b'')\in K_2$. Then in (\[polynom12\]) we obtain $$p(t) = -\frac{1}{6103515625000000}(t+1)(81t^2-81t+25)(77841t^3-140000t+62500)^3,$$ with the following real roots (of multiplicity $3$): $$-1.524828329\dots,\quad 0.5285082631\dots,\quad 0.9963200660\dots.$$ It is easy to see that for $(a,b) \in \Gamma$ the polynomial (\[polynom12\]) has two roots in $[0,1]$, one of which is $1$ by the definition of $\Gamma$. Hence, in the segment $[0,1]$, the polynomial (\[polynom12\]) has one root for $(a,b)\in K_1$ and two roots for $(a,b)\in K_2 \cup \Gamma$. [**Proof of Theorem \[main\_thm\]**]{} is based on Lemma \[number\_ofRoots\_of\_p(t)\] and Remark \[Idea\]. Let $(a,b)\in K$. Then the number of intersection points of $\Omega$ with the segment $I$ equals $1$ or $2$, depending on the number of roots of the polynomial $p(t)$ (see (\[polynom12\])) contained in $[0,1]$. [**(1)**]{} [*Connectedness of the set $(0,1/2)^3 \cap \Omega$*]{}.
Let $t_1,t_2$ be roots of $p(t)$ such that $0<t_1 < t_2\le 1$. Then, obviously, $t_1$ and $t_2$ correspond to the “lower” and “upper” (see Fig. \[singsur\_new\]) parts of the surface $\Omega \cap (0,1/2)^3$ respectively. These parts of $\Omega$ have a unique common point $(a_1,a_2,a_3)=(1/4,1/4,1/4)$ (an [*elliptic umbilic*]{} of $\Omega$ according to [@AANS]). [**(2)**]{} [*The number of the connected components of the set $(0,1/2)^3 \setminus \Omega$*]{}. Since the maximal number of roots of $p(t)$ in $[0,1]$ is equal to $2$ and $\Omega \cap (0,1/2)^3$ is the union of two surfaces with one common point, the number of connected components of $(0,1/2)^3 \setminus \Omega$ equals $3$. Theorem \[main\_thm\] is proved. In order to prove Corollary \[main\_cor\] we need the following \[number\_ofRoots\_of\_p(t)\_b=1/2\] Let $b=1/2$. Then in the segment $[0,1]$, the polynomial $p(t)$ has 1. one root for $a\in \left(0,\sqrt 2/4\right)$; 2. two roots for $a\in \left[\sqrt 2/4,1/2\right)$; 3. one root (of multiplicity $8$) for $a=1/2$. $(1),(2)$ For $b=1/2$ and $a\in (0,1/2)$ we have $$p(t)=-(2ta+1)\,p_2(t)\,p_3^3(t)$$ in (\[polynom12\]), where $$\begin{aligned} p_2(t) &:=& 4a(2a+1)t^2-2(1+2a)t+1,\\ p_3(t) &:=&2 (1+2a)t^3-2(a+1)t+1.\end{aligned}$$ For the discriminants $D_2$ and $D_3$ of the polynomials $p_2(t)$ and $p_3(t)$ we have $$\begin{aligned} D_2&:=&-4(2a-1)(2a+1)>0,\\ D_3&:=&4(2a+1)(2a-1)(8a^2+28a+11)<0.\end{aligned}$$ Since the cubic polynomial $p_3(t)$ achieves a positive local maximum at the point $t=-\sqrt{\frac{a+1}{6a+3}}<0$, its unique real root must be negative. Therefore, the required roots of $p(t)$ can come only from $p_2(t)$; moreover, the first of them belongs to $[0,1]$ for all $a\in(0,1/2)$, while the second does so only for $a\in \left[\sqrt 2/4,1/2\right)$. $(3)$ The case $b=a=1/2$ reduces (\[polynom12\]) to the polynomial $$p(t)=-(t+1)^4(2t-1)^8$$ with the unique root $t=1/2$ of multiplicity $8$ on $[0,1]$.
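Cases (1) and (2) of the last lemma can be illustrated numerically: for $b=1/2$ the roots of $p(t)$ in $[0,1]$ come from the quadratic factor $p_2(t)$, whose roots are counted in the following floating-point sketch (a small tolerance handles the boundary case $a=\sqrt 2/4$, where $t=1$ is a root; the tolerance value is our choice, not from the text):

```python
import math

# Count the roots of p_2(t) = 4a(2a+1)t^2 - 2(1+2a)t + 1 in [0, 1].
def roots_of_p2_in_unit_interval(a, tol=1e-12):
    A, B = 4*a*(2*a + 1), -2*(1 + 2*a)
    disc = B*B - 4*A                    # equals 4(1+2a)(1-2a) > 0 on (0, 1/2)
    roots = [(-B - math.sqrt(disc)) / (2*A), (-B + math.sqrt(disc)) / (2*A)]
    return sum(1 for r in roots if -tol <= r <= 1 + tol)

threshold = math.sqrt(2) / 4            # ~ 0.35355
assert roots_of_p2_in_unit_interval(0.30) == 1       # a < sqrt(2)/4: one root
assert roots_of_p2_in_unit_interval(0.40) == 2       # a >= sqrt(2)/4: two roots
assert roots_of_p2_in_unit_interval(threshold) == 2  # boundary: t = 1 is a root
```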
It should be noted that we get an elliptic umbilic $(a_1,a_2,a_3)=(1/4,1/4,1/4)$ of the surface $\Omega$ in this case. [**Proof of Corollary \[main\_cor\]**]{}. According to Theorem \[main\_thm\] it is sufficient to consider the case when $a=1/2$ or $b=1/2$. Taking into account Remark \[symm\_Omega\], assume without loss of generality that $b=1/2$. Then the proof of Corollary \[main\_cor\] follows from Lemma \[number\_ofRoots\_of\_p(t)\_b=1/2\] and Remark \[Idea\]. After this paper had been written, the author was informed about the recent preprint [@Batkhin], where a more detailed description of the surface $\Omega$ was obtained without the restriction $(a_1,a_2,a_3)\in (0,1/2)^3$. The author is indebted to Prof. Yu. G. Nikonorov and to Prof. A. Arvanitoyeorgos for helpful discussions concerning this paper. [\[99\]]{} The dynamics of the Ricci flow on generalized Wallach spaces // Differential Geometry and its Applications. (2014), V. 35. P. 26–43. The Ricci flow on some generalized Wallach spaces // Geometry and its Applications. Springer Proceedings in Mathematics. Switzerland: Springer. (2014), V. 72. P. 3–37. The normalized Ricci flow on generalized Wallach spaces // Mathematical Forum, Vol. 8, part 1. Studies on Mathematical Analysis. Vladikavkaz: SMI VSC RAS. (2014), 298 p., P. 25–42 (in Russian). Algorithms in Real Algebraic Geometry. Algorithms and Computation in Mathematics. Berlin: Springer-Verlag. (2006), V. 10, x+662 pp. On investigation of the certain real algebraic surface. Preprint No. 83 of Keldysh Institute of Applied Mathematics RAS. (2014) (in Russian); also available at <http://library.keldysh.ru//preprint.asp?lg=e&id=2014-83>. Curves and singularities. A geometrical introduction to singularity theory. Cambridge University Press, Cambridge. (1984), xii+222 p. Invariant Einstein metrics on three-locally-symmetric spaces. Preprint, arXiv:1411.2694 (2014). The Ricci Flow: an Introduction. Mathematical Surveys and Monographs, V.
110, AMS, Providence, RI. (2004), xii+325 pp. On invariant Einstein metrics on three-locally-symmetric spaces // Doklady Mathematics (2002), V. 66, No. 2, P. 224–227. On a class of homogeneous compact Einstein manifolds // Sibirsk. Mat. Zh. (2000), V. 41, No. 1, P. 200–205 (in Russian); English translation in: Siberian Math. J. (2000), V. 41, No. 1, P. 168–172. Classification of generalized Wallach spaces. Preprint, arXiv:1411.3131 (2014). Geometry of homogeneous Riemannian manifolds // Journal of Mathematical Sciences (New York) (2007), V. 146, No. 7, P. 6313–6390. Real Algebraic Surfaces. Lecture notes in Mathematics, 1392. Berlin: Springer-Verlag. (1989), x+215 p. Lectures on the Ricci flow. London Mathematical Society Lecture Note Series. Vol. 325, Cambridge University Press, Cambridge. (2006), x+113 pp.
--- abstract: 'We prove the McKay conjecture on characters of odd degree. A major step in the proof is the verification of the inductive McKay condition for groups of Lie type and primes $\ell$ such that a Sylow $\ell$-subgroup or its maximal normal abelian subgroup is contained in a maximally split torus by means of a new equivariant version of Harish-Chandra induction. Specifics of characters of odd degree, namely that they only lie in very particular Harish-Chandra series then allow us to deduce from it the McKay conjecture for the prime $2$, hence for characters of odd degree.' address: - 'FB Mathematik, TU Kaiserslautern, Postfach 3049, 67653 Kaiserslautern, Germany.' - 'FB Mathematik, TU Kaiserslautern, Postfach 3049, 67653 Kaiserslautern, Germany' author: - Gunter Malle - Britta Späth title: Characters of odd degree --- [^1] Introduction ============ In his 1972 note [@McK] dedicated to Richard Brauer on the occasion of his 70th birthday John McKay put forward the following conjecture, based on observations on the known character tables of finite simple groups and of symmetric groups: *For a finite simple group $G$, $m_2(G)=m_2({\ensuremath{{\mathrm{N}}}}_G(S_2))$, where\ $S_2$ is a Sylow 2-group of $G$. * Here, for a finite group $H$, $m_2(H)$ denotes the number of complex irreducible characters of $H$ of odd degree. Soon after the appearance of [@McK], this observation was generalised to arbitrary finite groups and primes. The *McKay conjecture* thus claims that for every finite group $G$ and every prime $\ell$ the number of ordinary irreducible characters $\chi\in{\operatorname{Irr}}(G)$ with $\ell\nmid\chi(1)$ is locally determined, namely $$|{\mathrm{Irr}_{\ell'}}(G)|=|{\mathrm{Irr}_{\ell'}}({\ensuremath{{\mathrm{N}}}}_G(P))|,$$ where $P$ is a Sylow $\ell$-subgroup of $G$ and ${\ensuremath{{\mathrm{N}}}}_G(P)$ denotes its normaliser in $G$. The main result of our paper is the proof of that conjecture for *all* finite groups and the prime $2$. 
\[thm:McKayp=2\] Let $G$ be a finite group. Then the numbers of odd degree irreducible characters of $G$ and of the normaliser of a Sylow $2$-subgroup of $G$ agree. McKay’s conjecture had a decisive influence on the development of modern representation theory of finite groups. Its prediction of how local structures like the normaliser of a Sylow $\ell$-subgroup should influence the representation theory of a group gave rise to a whole array of stronger and more far-reaching conjectures, like those of Alperin, of Broué and of Dade. Simultaneously, functors relating the representation theory of certain families of finite (nearly simple) groups with those of suitable subgroups were introduced. For example Harish-Chandra induction and its generalisation by Deligne–Lusztig are key to the parametrisation of characters of groups of Lie type. It was Gabriel Navarro who, in his work and in his talks, insisted that the McKay conjecture lies at the heart of everything. His insights led to the proof of the fundamental result [@IMN] that the McKay conjecture holds for all finite groups at a prime $\ell$, if every finite non-abelian simple group satisfies a set of properties, the now so-called *inductive McKay condition*, for $\ell$. (A streamlined version of this reduction was presented in [@Sp13] while a novel approach to groups with self-normalising Sylow $2$-subgroups was recently devised by Navarro and Tiep [@NT15].) This opens the possibility to solve the conjecture through the classification of finite simple groups. Thanks to the work of several authors this inductive condition has been shown for all but seven infinite series of simple groups of Lie type $S$ at primes $\ell$ different from the defining characteristic of $S$, see [@CS15; @ManonLie; @Sp12].
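As a toy illustration of the counting statement for $\ell=2$ (not taken from the paper itself), one can compare the well-known character degrees of $G=\mathfrak{S}_4$ with those of its Sylow $2$-subgroup, which is dihedral of order $8$ and self-normalising, so that ${\ensuremath{{\mathrm{N}}}}_G(P)=P$:

```python
# Toy check of |Irr_{2'}(G)| = |Irr_{2'}(N_G(P))| for G = S_4 and ell = 2.
# The Sylow 2-subgroup P of S_4 is dihedral of order 8 and self-normalising.
irr_degrees_S4 = [1, 1, 2, 3, 3]   # ordinary irreducible character degrees of S_4
irr_degrees_D8 = [1, 1, 1, 1, 2]   # degrees of the dihedral group of order 8

def m_odd(degrees):
    """Number of irreducible characters of odd degree."""
    return sum(1 for d in degrees if d % 2 == 1)

assert m_odd(irr_degrees_S4) == m_odd(irr_degrees_D8) == 4
```

Both sides of the predicted equality are $4$ here, matching McKay's original observation.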
The second main result of our paper is meant to provide an important step towards verifying the McKay conjecture in the case of odd primes, showing that the inductive McKay condition holds for most simple groups of Lie type in the maximally split case: \[thm:d=1good\] Let ${{\mathbf G}}$ be a simple linear algebraic group of simply connected type defined over ${{\mathbb{F}}}_q$ with respect to the Frobenius endomorphism $F:{{\mathbf G}}\rightarrow {{\mathbf G}}$ such that $S:={{\mathbf G}}^F/{\operatorname Z}({{\mathbf G}}^F)$ is simple. Assume that $S\notin\{{\mathsf D}_{l,{\operatorname{sc}}}(q),{\mathsf E}_{6,{\operatorname{sc}}}(q)\}$ for any prime power $q$. Then the inductive McKay condition from [@IMN §10] holds for $S$ and all primes $\ell $ dividing $q-1$. For many simple groups $S$ of Lie type and primes $\ell$ different from the defining characteristic of $S$ the authors had constructed a bijection satisfying some (but not all) of the required properties from the inductive McKay condition. Moreover, in the cases where the associated algebraic group has connected centre, Cabanes and the second author [@CS13] could then verify the inductive McKay condition. It thus remains to deal with simple groups of Lie type coming from algebraic groups of simply connected type with disconnected centre. One decisive ingredient in our proof is a criterion for the inductive McKay condition tailored to groups of Lie type, see [@Sp12 Thm. 2.12], which we recall here in Theorem \[thm:Sp12\]. It had already been used for groups of type ${\mathsf A}_l$ as well as in the defining characteristic, see [@CS15] and [@Sp12]. 
The main assumption of that theorem on the universal covering group $G$ of a simple group $S$ of Lie type and the prime $\ell$ consists of three requirements: - the global part concerns the stabilisers in the automorphism group and the extendibility of elements in ${\mathrm{Irr}_{\ell'}}(G)$, see assumption \[2\_2glo\]; - for a suitably chosen subgroup $N$ that has properties similar to the normaliser of a Sylow $\ell$-subgroup of $G$, the elements of ${\mathrm{Irr}_{\ell'}}(N)$ have only stabilisers of specific structures and have an analogous property with respect to extendibility, see assumption \[thm2\_2loc\]; - there exists an equivariant global-local bijection between the relevant characters of certain groups containing $G$ and $N$, respectively, see assumption \[thm2\_2bij\]. We successively establish those assumptions in the cases relevant to our Theorems \[thm:McKayp=2\] and \[thm:d=1good\]. In accordance with [@MaH0] we choose $N$ to be the normaliser of a suitable Sylow $d$-torus. Extending earlier results of the second author we derive the required statement about the stabilisers of local characters, see Section \[sec:IrrlN\]. We also establish that a similar result holds in type ${\mathsf C}_l$ for the characters of the normaliser of a certain torus, see Section \[type C\]. Afterwards we study the parametrisation of irreducible characters in terms of Harish-Chandra induction, control how automorphisms act on these characters and express this in terms of their labels, see Theorem \[thm:equiv\_HC\]. The proof requires an equivariant version of Howlett–Lehrer theory describing the decomposition of Harish-Chandra induced cuspidal characters and relies on an extendibility result of Howlett–Lehrer and Lusztig. We then prove that many characters of $G$ have stabilisers of the structure required in the criterion. 
[**Structure of the paper.**]{} After introducing some notation in Section \[sec:Not\], we start by recalling the parametrisation of characters of normalisers of Sylow $d$-tori for $d\in \{1,2\}$ and describe how automorphisms act on the characters and the associated labels in Section \[sec:IrrlN\]. This result has an analogue for the characters of the normaliser of a certain torus in type ${\mathsf C}_l$, see Section \[type C\]. In Section \[sec:HC\], after recalling the basic results on the endomorphism algebra of Harish-Chandra induced modules of $G$, we describe the action of outer automorphisms $\sigma\in{\operatorname{Aut}}(G)$ on such modules. This enables us in Theorem \[thm:bij\] to construct an equivariant local-global bijection given by Harish-Chandra induction. The aforementioned results on stabilisers of characters of the normaliser of a maximally split torus lead to a description of the stabilisers of some characters of $G$ in a similar way, see Corollary \[cor:7\_3\]. The remaining part of the paper is devoted to the completion of the proof of our main Theorems \[thm:McKayp=2\] and \[thm:d=1good\]. First we show that all necessary assumptions of Theorem \[thm:Sp12\] are satisfied for proving Theorem \[thm:d=1good\] and clarify which additional properties need to be proved for obtaining an even more general statement. Then, after the classification of odd degree characters of quasi-simple groups of Lie type in Theorem \[thm:odd degree\], which may be of independent interest, we complete the proof of Theorem \[thm:McKayp=2\]. [**Acknowledgement.**]{} The second author thanks Michel Enguehard for comments on an early draft of results now contained in Section \[sec:HC\]. Background {#sec:Not} ========== In this section we recall the criterion from [@Sp12] for the inductive McKay condition that is the main tool in the proof of our main result.
Afterwards we introduce the groups of Lie type that play a central role in the paper and describe their automorphisms. A criterion for the inductive McKay condition --------------------------------------------- We first introduce some notation. If a group $A$ acts on a finite set $X$ we denote by $A_{x}$ the stabiliser of $x\in X$ in $A$, analogously we denote by $A_{X'}$ the setwise stabiliser of $X'\subseteq X$. For an element $a\in A$ we denote by $o(a)$ the order of $a$. If $A$ acts on a group $G$ by automorphisms, there is a natural action of $A$ on ${\operatorname{Irr}}(G)$ given by $${}^{a^{-1}}\chi (g)=\chi^a(g)=\chi(g^{a^{-1}})\quad \text{ for every } g \in G,\,\, a\in A \text{ and } \chi\in{\operatorname{Irr}}(G).$$ For $P\leq G$ and $\chi\in {\operatorname{Irr}}(H)$ for some $A_P$-stable subgroup $H\leq G$, we denote by $A_{P,\chi}$ the stabiliser of $\chi$ in $A_P$. We denote the restriction of $\chi\in{\operatorname{Irr}}(G)$ to a subgroup $H\leq G$ by $\restr \chi|H$, while $\psi^G$ denotes the character induced from $\psi\in{\operatorname{Irr}}(H)$ to $G$. For $N\lhd G$ and $\chi\in {\operatorname{Irr}}(G)$ we denote by ${\operatorname{Irr}}(N\mid \chi)$ the set of irreducible constituents of the restricted character $\restr\chi|N$, and for $\psi \in {\operatorname{Irr}}(N)$, the set of irreducible constituents of the induced character $\psi^G$ is denoted by ${\operatorname{Irr}}(G\mid \psi)$. For a subset ${{\mathcal N}}\subseteq {\operatorname{Irr}}(N)$ we define $${\operatorname{Irr}}(G\mid {{\mathcal N}}){:=}\bigcup_{\chi\in{{\mathcal N}}}{\operatorname{Irr}}(G\mid \chi).$$ Additionally, for $N\lhd G$ we sometimes identify the characters of $G/N$ with the characters of $G$ whose kernel contains $N$. For a prime $\ell$ we let ${\operatorname{Irr}}_{\ell'}(G){:=}\{\chi\in{\operatorname{Irr}}(G)\mid \ell\nmid\chi(1)\}$. The following criterion was proved in Späth [[@Sp12 Thm.
2.12]]{}: \[thm:Sp12\] Let $S$ be a finite non-abelian simple group and $\ell$ a prime dividing $|S|$. Let $G$ be the universal covering group of $S$ and $Q$ a Sylow $\ell$-subgroup of $G$. Assume there exist groups $A$, ${\widetilde}G\leq A$, $D\leq A$ and $N\lneq G$, such that with ${\ensuremath{{{\widetilde}N}}}{:=}N{\ensuremath{{\mathrm{N}}}}_{{\ensuremath{{{\widetilde}G}}}}(Q)$ the following conditions hold: 1. \[thm2\_2gen\] 1. $G\lhd A$, $G\le{\ensuremath{{{\widetilde}G}}}$ and $A={\ensuremath{{{\widetilde}G}}}\rtimes D$, 2. ${\ensuremath{{{\widetilde}G}}}/G$ is abelian, 3. ${\ensuremath{{\rm{C}}}}_{{\ensuremath{{{\widetilde}G}}}\rtimes D}(G)= {\operatorname Z}({\ensuremath{{{\widetilde}G}}})$ and $A/{\operatorname Z}({\ensuremath{{{\widetilde}G}}})\cong{\operatorname{Aut}}(G)$ by the natural map, 4. $N$ is ${\operatorname{Aut}}(G)_Q$-stable, 5. ${\ensuremath{{\mathrm{N}}}}_G(Q)\leq N$, 6. \[hauptprop\_maxext\_glob\] every $\chi\in{\mathrm{Irr}_{\ell'}}(G)$ extends to its stabiliser ${\widetilde}G_\chi$, 7. \[hauptprop\_maxext\_loc\] every $\psi\in {\mathrm{Irr}_{\ell'}}(N)$ extends to its stabiliser ${\ensuremath{{{\widetilde}N}}}_\psi$. 2. \[2\_2glo\] Let ${{\mathcal G}}{:=}{\operatorname{Irr}}\big ({\widetilde}G\mid {\mathrm{Irr}_{\ell'}}(G)\big)$. For every $\chi\in{{\mathcal G}}$ there exists some $\chi_0\in {\operatorname{Irr}}(G\mid \chi)$ such that 1. \[2\_2glostar\] $({\widetilde}G\rtimes D)_{\chi_0}= {\widetilde}G_{\chi_0}\rtimes D_{\chi_0}$ and 2. \[2\_2gloext\] $\chi_0$ extends to $(G \rtimes D)_{\chi_0}$. 3. \[thm2\_2loc\] Let ${{\mathcal N}}{:=}{\operatorname{Irr}}\big ({\widetilde}N\mid {\mathrm{Irr}_{\ell'}}(N)\big )$. For every $\psi\in {{\mathcal N}}$ there exists some $\psi_0\in {\operatorname{Irr}}(N\mid \psi)$ such that $O{:=}G({\widetilde}G\rtimes D)_{N,\psi_0}$ satisfies 1. \[2\_2locstar\] $O=({\widetilde}G\cap O) \rtimes (D\cap O)$ and 2. \[2\_2loc-ext\] $\psi_0$ extends to $(G\rtimes D)_{N,\psi_0}$. 4. 
\[thm2\_2bij\] There exists a $({\ensuremath{{{\widetilde}G}}}\rtimes D)_Q$-equivariant bijection ${\widetilde}\Omega: {{\mathcal G}}\longrightarrow {{\mathcal N}}$ with 1. ${{\widetilde}\Omega}({{\mathcal G}}\cap{\operatorname{Irr}}({\widetilde}G\mid \nu))={{\mathcal N}}\cap{\operatorname{Irr}}({\widetilde}N\mid \nu)$ for every $\nu \in {\operatorname{Irr}}({\operatorname Z}({\widetilde}G))$, 2. \[Omega\_u\_epsilon\_equiv\] ${{\widetilde}\Omega}(\chi\delta)= {{\widetilde}\Omega}(\chi)\restr\delta|{{\widetilde}N}$ for every $\chi\in {{\mathcal G}}$ and every $\delta\in{\operatorname{Irr}}({\widetilde}G|1_G)$. Then the inductive McKay condition from [@IMN §10] holds for $S$ and $\ell$. Simple groups of Lie type {#ssec2:B} ------------------------- We now introduce the most relevant groups and automorphisms. For the later detailed calculations it is relevant to fix them in a rather precise way. Let ${{\mathbf G}}$ be a simple linear algebraic group of simply connected type over an algebraic closure of ${{\mathbb{F}}}_q$. Let ${{\mathbf B}}$ be a Borel subgroup of ${{\mathbf G}}$ with maximal torus ${{\mathbf T}}$. Let $\Phi,\Phi^+$ and $\Delta$ denote the set of roots, positive roots and simple roots of ${{\mathbf G}}$ that are determined by ${{\mathbf T}}$ and ${{\mathbf B}}$. Let ${{\mathbf N}}:={\ensuremath{{\mathrm{N}}}}_{{\mathbf G}}({{\mathbf T}})$. We denote by $W$ the Weyl group of ${{\mathbf G}}$ and by $\pi:\norm{{\mathbf G}}{{\mathbf T}}\rightarrow W$ the defining epimorphism. For calculations with elements of ${{\mathbf G}}$ we use the Chevalley generators subject to the Steinberg relations as in [@GLS3 Thm. 1.12.1], i.e., the elements $x_\al(t)$, $n_\al(t)$ and $h_\al(t)$ ($t\in {\overline {{\mathbb{F}}}}_q$ and $\al\in \Phi$) defined as there. In the following we describe automorphisms of ${{\mathbf G}}$. 
Let $p$ be the prime with $p\mid q$ and $F_0: {{\mathbf G}}\rightarrow {{\mathbf G}}$ the *field endomorphism* of ${{\mathbf G}}$ given by $$F_0(x_\al(t))= x_\al(t^p) \quad\text{ for every } t \in {\overline {{\mathbb{F}}}}_q \text{ and } \al \in \Phi.$$ Any length-preserving automorphism $\tau$ of the Dynkin diagram associated to $\Delta$ and hence automorphism of $\Phi$ determines a *graph automorphism* $\gamma$ of ${{\mathbf G}}$ given by $$\gamma(x_\al(t))=x_{\tau(\al)}(t) \quad\text{ for every } t \in {\overline {{\mathbb{F}}}}_q \text{ and } \al \in \pm \Delta.$$ Note that any such $\gamma$ commutes with $F_0$. For the construction of diagonal automorphisms of the associated finite groups of Lie type we introduce further groups: Let $r$ be the rank of ${\operatorname Z}({{\mathbf G}})$ (as abelian group) and ${{\mathbf Z}}\cong ({\overline {{\mathbb{F}}}}_q^\times)^r$ a torus of that rank with an embedding of ${\operatorname Z}({{\mathbf G}})$. We set $${\widetilde}{{\mathbf G}}:= {{\mathbf G}}\times_{{\operatorname Z}({{\mathbf G}})} {{\mathbf Z}},$$ the central product of ${{\mathbf G}}$ with ${{\mathbf Z}}$ over ${\operatorname Z}({{\mathbf G}})$. Then ${\widetilde}{{\mathbf G}}$ is a connected reductive group with connected centre and the natural map ${{\mathbf G}}\rightarrow {\widetilde}{{\mathbf G}}$ is a regular embedding, see [@CE04 15.1]. Note that ${\widetilde}{{\mathbf B}}:={{\mathbf B}}{{\mathbf Z}}$ is a Borel subgroup of ${\widetilde}{{\mathbf G}}$ and ${\widetilde}{{\mathbf T}}:={{\mathbf T}}{{\mathbf Z}}$ is a maximal torus therein. Furthermore let ${\widetilde}{{\mathbf N}}:={\ensuremath{{\mathrm{N}}}}_{{\widetilde}{{\mathbf G}}}({\widetilde}{{\mathbf T}})={{\mathbf N}}{{\mathbf Z}}$. 
As $F_0$ acts on $Z({{\mathbf G}})$ via $x\mapsto x^p$ for every $x\in{\operatorname Z}({{\mathbf G}})$ we can extend it to a Frobenius endomorphism $F_0: {\widetilde}{{\mathbf G}}\rightarrow{\widetilde}{{\mathbf G}}$ via $$F_0(g,x):= (F_0(g), x^p) \quad\text{ for every }g\in {{\mathbf G}}\text{ and }x \in {{\mathbf Z}}.$$ Now assume that $\gamma$ is a graph automorphism of ${{\mathbf G}}$. If $\gamma$ acts trivially on ${\operatorname Z}({{\mathbf G}})$ then it extends to an automorphism of ${\widetilde}{{\mathbf G}}$ which we also denote by $\gamma$, via $$\gamma(g,x):= (\gamma(g), x) \quad\text{ for every }g\in {{\mathbf G}}\text{ and }x \in {{\mathbf Z}}.$$ If $\gamma$ acts on ${\operatorname Z}({{\mathbf G}})$ by inversion then it can be extended via $$\gamma(g,x):= (\gamma(g), x^{-1}) \quad\text{ for every }g\in {{\mathbf G}}\text{ and }x \in {{\mathbf Z}}.$$ A similar extension of $\gamma$ is possible in the remaining cases. In any case $F_0$ and $\gamma$ stabilise ${\widetilde}{{\mathbf B}}$ and ${\widetilde}{{\mathbf T}}$. Now consider a Steinberg endomorphism $F:=F_0^m\gamma$, with $\gamma$ a (possibly trivial) graph automorphism of ${{\mathbf G}}$. Then $F$ defines an ${{\mathbb{F}}}_q$-structure on ${\widetilde}{{\mathbf G}}$, where $q=p^m$, and ${{\mathbf B}},{{\mathbf T}},{\widetilde}{{\mathbf B}},{\widetilde}{{\mathbf T}}$ are $F$-stable, so in particular ${{\mathbf T}},{\widetilde}{{\mathbf T}}$ are maximally split tori in ${{\mathbf G}}$, ${\widetilde}{{\mathbf G}}$ respectively. We let $G:={{\mathbf G}}^F$. By construction the order of $F_0$ as automorphism of ${\ensuremath{{{\widetilde}G}}}:={\widetilde}{{\mathbf G}}^F$ coincides with the one of $F_0$ as automorphism of $G$. The analogous statement also holds for any graph automorphism $\gamma$ and the automorphisms of ${\widetilde}G$ associated with it. Let $D$ be the subgroup of ${\operatorname{Aut}}(G)$ generated by $F_0$ and the graph automorphisms commuting with $F$. 
Then ${\widetilde}G\rtimes D$ is well-defined and induces all automorphisms of $G$, see [@GLS3 Thm. 2.5.1]. Moreover $D$ acts naturally on the set of $F$-stable subgroups of ${{\mathbf G}}$. An embedding of the group ${\mathsf D}_{l,sc}(q)$ into ${\mathsf B}_{l,sc}(q)$ {#embed_D_into_B} ------------------------------------------------------------------------------ We recall an embedding of ${\mathsf D}_{l,sc}(q)$ into ${\mathsf B}_{l,sc}(q)$ given explicitly in [@Spaeth2 10.1] in terms of the aforementioned Chevalley generators. Let ${\overline }\Phi$ be a root system of type ${\mathsf B}_l$ with base $\Delta=\{{\overline }\alpha_1, \alpha_2,\ldots, \al_l \}$, where ${\overline }\al_1=e_1$ and $\al_i=e_i-e_{i-1}$ ($i\geq 2$) as in [@GLS3 Rem. 1.8.8]. Let ${\overline }{{\mathbf G}}$ be the associated simple algebraic group of simply connected type over ${\overline }{{\mathbb{F}}}_q$. In analogy to our previous terminology we denote its Chevalley generators by ${\overline }x_\al(t_1)$, ${\overline }n_\al(t_2)$ and ${\overline }h_\al(t_2)$ with ($\al\in{\overline }\Phi$, $t_1\in {\overline }{{\mathbb{F}}}_q$ and $t_2\in{\overline }{{\mathbb{F}}}_q^\times$). Let $\Phi\subseteq {\overline }\Phi$ be the root system consisting of all long roots of ${\overline }\Phi$. Then the group $\spann<x_\al(t)\mid \al \in \Phi,\,t\in{\overline }{{\mathbb{F}}}_q>$ is a simply connected simple group over ${\overline }{{\mathbb{F}}}_q$ with the root system $\Phi$ of type ${\mathsf D}_l$. Whenever $\Phi$ is of type ${\mathsf D}_l$, we identify ${{\mathbf G}}$ with $\spann<{\overline }x_\al(t)\mid \al \in \Phi,t\in{\overline }{{\mathbb{F}}}_q>$ via $\iota_{{\mathsf D}}:{{\mathbf G}}\rightarrow {\overline }{{\mathbf G}}$, $x_\al(t)\mapsto {\overline }x_\al(t)$, and choose the notation of elements in ${{\mathbf G}}$ such that this defines a monomorphism. Let $\zeta\in {\overline }{{\mathbb{F}}}_q$ be a primitive $(2,q-1)^2$th root of unity. 
The graph automorphism of ${{\mathbf G}}$ of order 2 coincides with the map $x\mapsto x^{{\overline }n_{e_1}(1) \prod_{i=2}^l {\overline }h_{e_i}(\zeta)}$, see [@Spaeth2 Lemma 11.2], which because of ${\operatorname Z}({{\mathbf G}})=\spann<{\overline }h_{e_1}(-1), \prod_{i=1}^l {\overline }h_{e_i}(\zeta)>$ (by [@GLS3 Tab. 1.12.6 and Thm 1.12.1(e)]) coincides with $x\mapsto x^{{\overline }n_{e_1}(1) {\overline }h_{e_1}(\zeta)}$. Parametrisation of some local characters {#sec:IrrlN} ======================================== In this section we prove a result on stabilisers of characters that leads to the verification of condition \[thm2\_2loc\] in the cases considered in this paper. These results enable us later in Theorem \[thm:Bij\_wG\] to construct a bijection ${\widetilde}\Omega:{{\mathcal G}}\rightarrow{{\mathcal N}}$ as required in Theorem \[thm2\_2bij\]. The aim of this section is the proof of the following statement that concerns normalisers of Sylow $d$-tori, sometimes also called Sylow $d$-normalisers. Sylow $d$-tori were introduced in [@BM92] under the name of Sylow $\Phi_d$-tori (with $\Phi_d$ denoting the $d$-th cyclotomic polynomial), and play an important role in the study of height $0$ characters, see [@MaH0]. \[thm:IrrN\_autom\] Let $d\in\{1,2\}$, ${{\mathbf S}}_0$ be a Sylow $d$-torus of $({{\mathbf G}},F)$, $N_0{:=}{\ensuremath{{\mathrm{N}}}}_{{\mathbf G}}({{\mathbf S}}_0)^F$, ${\widetilde}N_0{:=}{\ensuremath{{\mathrm{N}}}}_{{\widetilde}{{\mathbf G}}}({{\mathbf S}}_0)^F$ and $\psi\in{\operatorname{Irr}}({\widetilde}N_0)$. There exists some $\psi_0\in {\operatorname{Irr}}(N_0\mid \psi)$ such that 1. $O_0= ({{{{{\widetilde}{{\mathbf G}}}^F}}}\cap O_0) \rtimes (D\cap O_0)$ for $O_0{:=}{{{{{\mathbf G}}^F}}}({{{{{\widetilde}{{\mathbf G}}}^F}}}\rtimes D)_{{{\mathbf S}}_0,\psi_0}$; and 2. $\psi_0$ extends to $({{{{{\mathbf G}}^F}}}\rtimes D)_{{{\mathbf S}}_0,\psi_0}$. 
This statement is related to Theorem 5.1 of [@CS15], where the same assertion was proved for all positive integers $d$ in the case that the root system of ${{\mathbf G}}$ is of type ${\mathsf A}_l$. Accordingly we may and will assume in the following that $\Phi$ is not of type ${\mathsf A}_l$. We verify the statement in five steps mimicking the strategy applied in [@CS15 Sec. 5]. First, in \[sec:3:Transfer\] we replace ${{{{{\mathbf G}}^F}}}$ by an isomorphic group, then for subgroups of this group we construct in \[sec:3:extmap\] an extension map that is compatible with certain automorphisms of ${{{{{\mathbf G}}^F}}}$, which gives in \[sec:3\_param\] a parametrisation of ${\operatorname{Irr}}(N_0)$. In the end, the condition \[2\_2locstar\] on the structure of stabilisers is deduced from properties of characters of relative inertia groups. By what we said before we may and will also assume throughout this section that $D$ is non-trivial and that ${\widetilde}{{\mathbf G}}$ induces non-inner automorphisms on ${{\mathbf G}}$. Accordingly the root system $\Phi$ of ${{\mathbf G}}$ is of type ${\mathsf B}_l$, ${\mathsf C}_l$, ${\mathsf D}_l$, ${\mathsf E}_6$ or ${\mathsf E}_7$ and ${\operatorname Z}({{\mathbf G}}^F)\neq 1$, hence in particular ${{{{{\mathbf G}}^F}}}\neq {{}^3}{\mathsf D}_{4,{\operatorname{sc}}}(q)$. Transfer to twisted groups {#sec:3:Transfer} -------------------------- Recall the notations from Section \[sec:Not\]. We set $V:=\langle n_\al(\pm 1)\mid \alpha \in \Phi\rangle\le {\ensuremath{{\mathrm{N}}}}_{{\mathbf G}}({{\mathbf T}})$, and $H:=V\cap{{\mathbf T}}$. We define $v\in {{\mathbf G}}$ as $$\begin{aligned} v&:=\begin{cases} {\operatorname{id}}_{{{\mathbf G}}}& \text{if }d=1,\\ {{{\widetilde}{\mathbf w_{0}}}}& \text{if }d=2,\end{cases}\end{aligned}$$ where ${{{\widetilde}{\mathbf w_{0}}}}$ is the canonical representative in $V$ of the longest element of $W$ defined as in [@Spaeth2 Def. 3.2]. 
\[lem:3\_3\] The torus ${{\mathbf T}}$ contains a Sylow $d$-torus ${{\mathbf S}}$ of $({{\mathbf G}}, vF)$. Moreover ${{\mathbf T}}={\ensuremath{{\rm{C}}}}_{{\mathbf G}}({{\mathbf S}})$ and $N= T V_1$, where $N:=\norm {{\mathbf G}}{{\mathbf S}}^{vF}$, $T{:=}{{\mathbf T}}^{vF}$ and $V_1:=V^{vF}$. Let $\phi$ denote the automorphism induced by $F$ on $W$. Comparing with the tables in [@Springer Sect. 5 and 6] one sees that $\pi(v)\phi$ is a $d$-regular element of $W\phi$ in the sense of Springer, see [@Springer Sect. 4 and 6]. Hence the centraliser of any Sylow $d$-torus in ${{\mathbf G}}$ is a torus. According to [@Spaeth2 Rem. 3.3 and Lemma 3.4] there exists some Sylow $d$-torus ${{\mathbf S}}\leq {{\mathbf T}}$ of $({{\mathbf G}}, vF)$. If $\Phi$ is of classical type and $F=F_0^m$ then $TV_1=N$ by [@Spaeth2 Rem. 3.3(c)]. For exceptional types this was proven in [@Sp09 Prop. 6.3 and 6.4]. It remains to consider the case where ${{{{{\mathbf G}}^F}}}={{}^2}{\mathsf D}_{l,{\operatorname{sc}}}(q)$. Here for $d=1$ one uses [@Spaeth2 Lemma 11.2] and computes that $H^F$ is an elementary abelian group of rank $l-1$ and that $\pi(V_1)$ is isomorphic to a Coxeter group of type ${\mathsf B}_{l-1}$ and hence to $\cent W \phi$. One can see analogously for $d=2$ and hence $v={{{\widetilde}{\mathbf w_{0}}}}$ that $H^{v F}$ is an elementary abelian group of rank $l-1$, and $\pi(V ^{v F}) = \cent W {\pi(v) \phi}$ if $w_0\in{\operatorname Z}(W)$ and hence $v\in{\operatorname Z}(V)$. If $w_0\notin {\operatorname Z}(W)$ computations in the braid group show that $H^{vF}=H$ and $V^{vF}=V$. \[not:3.4\] Let $e:=o(v)$, the order of $v$. In the following we denote by ${{\mathrm C}}_i$ the cyclic group of order $i$. Let $E_1$ be the subgroup of ${\operatorname{Aut}}({{\mathbf G}})$ generated by graph automorphisms. 
Let $E:= {{\mathrm C}}_{2em} \times E_1$ act on ${{\widetilde}{{\mathbf G}}}^{F_0^{2em}}$ such that the first factor ${{\mathrm C}}_{2em}$ of $E$ acts by $\spann<F_0>$ and the second factor by the group generated by graph automorphisms. Note that this action is faithful. Let ${\widehat}F_0, {\widehat}\gamma, {\widehat}F \in E$ be the elements that act on ${{\widetilde}{{\mathbf G}}}^{F_0^{2em}}$ by $F_0$, $\gamma$ and $F$, respectively. Note that $E$ stabilises $N$, $T$, $V$, $v$ and hence $H$, $V_1$ and $H^{vF}$. \[prop:5\_6\] Let ${{\mathbf S}}$ and $N$ be as in Lemma \[lem:3\_3\], and ${\ensuremath{{{\widetilde}N}}}{:=}\norm {{\widetilde}{{\mathbf G}}} {{\mathbf S}}^{vF}$. Suppose that for every $\chi\in{\operatorname{Irr}}({\widetilde}N)$ there exists some $\chi_0\in{\operatorname{Irr}}(N\mid \chi)$ such that 1. $({\widetilde}N \rtimes E)_{\chi_0} ={\widetilde}N_{\chi_0} \rtimes E_{\chi_0}$; and 2. $\chi_0$ has an extension ${\widetilde}\chi_0\in{\operatorname{Irr}}(N\rtimes E_{\chi_0})$ with $v{\widehat}F\in\ker({\widetilde}\chi_0)$. Then the conclusion of Theorem \[thm:IrrN\_autom\] holds for $({{\mathbf G}},F)$ and $d$. The statement is an analogue of [@CS15 Prop. 5.3]. The proof given there is independent of the underlying type, and is based on the application of Lang’s theorem using that $v$ is $D$- and hence $E$-invariant. It relies on the fact that conjugation by a suitable element of ${{\mathbf G}}$ gives an isomorphism $\iota: {{\mathbf G}}\rightarrow{{\mathbf G}}$ with $\iota({{\mathbf G}}^F)={{\mathbf G}}^{vF}$. Via $\iota$ the automorphisms of ${{\mathbf G}}^F$ induced by ${\widetilde}{{\mathbf G}}^F \rtimes D$ coincide with the ones of ${{\mathbf G}}^{vF}$ induced by ${\widetilde}{{\mathbf G}}^{vF}E/\langle v{\widehat}F\rangle$, and ${\widetilde}{{\mathbf G}}^F\rtimes D \cong {\widetilde}{{\mathbf G}}^{vF}E/\langle v{\widehat}F\rangle$.
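To make the twisting isomorphism $\iota$ explicit we recall the standard Lang–Steinberg argument, in the convention that $vF$ denotes the Steinberg endomorphism $y\mapsto vF(y)v^{-1}$ (a sketch of a well-known computation, included only for the reader's convenience): since ${{\mathbf G}}$ is connected, the Lang–Steinberg theorem provides some $g\in{{\mathbf G}}$ with $g^{-1}F(g)=v$, and one sets $\iota(x):=g^{-1}xg$. For $x\in{{\mathbf G}}^F$ we have $F(g)=gv$ and $F(x)=x$, hence $$(vF)(\iota(x)) = v\,F(g)^{-1}F(x)F(g)\,v^{-1} = v\,(v^{-1}g^{-1})\,x\,(gv)\,v^{-1} = g^{-1}xg = \iota(x),$$ so $\iota({{\mathbf G}}^F)\subseteq {{\mathbf G}}^{vF}$; the analogous computation for the inverse map $x\mapsto gxg^{-1}$ yields the reverse inclusion.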
Extension maps with respect to $H_1\lhd V_1$ {#sec:3:extmap} -------------------------------------------- In order to verify the assumptions of Proposition \[prop:5\_6\] on the characters of $N$ we label these characters via a so-called extension map. \[def3\_6\] Let $Y\lhd X$ and ${{\mathcal Y}}\subseteq {\operatorname{Irr}}(Y)$. We say that *maximal extendibility holds for ${{\mathcal Y}}$ with respect to $Y\lhd X$* if every $\chi \in {{\mathcal Y}}$ extends (as irreducible character) to $X_\chi$. Then, an *extension map for ${{\mathcal Y}}$ with respect to $Y\lhd X$* is a map $$\Lambda: {{\mathcal Y}}\rightarrow \bigcup_{Y\leq I\leq X} {\operatorname{Irr}}(I),$$ such that for every $\chi\in {{\mathcal Y}}$ the character $\Lambda(\chi)\in {\operatorname{Irr}}(X_\chi)$ is an extension of $\chi$. If ${{\mathcal Y}}={\operatorname{Irr}}(Y)$ we also say that there exists *an extension map with respect to $Y\lhd X$*. The following is easily verified: \[lem:notsec3\] Let $X$ be a finite group, $Y\lhd X$ and ${{\mathcal Y}}\subseteq {\operatorname{Irr}}(Y)$ an $X$-stable subset. Assume there exists an extension map for ${{\mathcal Y}}$ with respect to $Y\lhd X$. Then there exists an $X$-equivariant extension map for ${{\mathcal Y}}$ with respect to $Y\lhd X$. In order to prove Theorem \[thm:IrrN\_autom\] in the form suggested by Proposition \[prop:5\_6\] our next goal is to establish the following intermediate step. Recall $V_1=V^{vF}$ and set $H_1:=H^{vF}$. \[thm:very\_good\_twist\] There exists a $V_1E$-equivariant extension map with respect to $H_1\lhd V_1$. The proof will be given in several steps. We first consider the case when $F$ is untwisted and $d=1$. \[prop:very\_good\_twist\_1\] For $\Phi$ not of type ${\mathsf D}_l$ there exists an extension map with respect to $H\lhd V$. According to [@Sp09 Prop. 5.1] we can assume that $\Phi$ is of type ${\mathsf B}_l$ or ${\mathsf C}_l$. Assume first that $q=3$. Then $V={{\mathbf N}}^F$ and $H={{\mathbf T}}^F$.
Maximal extendibility holds with respect to $H={{\mathbf T}}^F\lhd {{\mathbf N}}^F$ according to [@HL Cor. 6.11] or [@Spaeth2 Thm. 1.1]. By assumption $q$ is odd. Then the isomorphism types of $H$ and $V$ are independent of $q$ since $V$ and $H$ can be described as finitely presented groups whose relations are independent of $q$, see [@Tits] and [@Sp_Diss Lemma 2.3.1(b)]. Hence the considerations for $q=3$ already imply the statement. \[prop:3\_10\] For $\Phi$ not of type ${\mathsf D}_l$ there exists a $VE_1$-equivariant extension map with respect to $H\lhd V$. If $\Phi$ has no graph automorphism, Proposition \[prop:very\_good\_twist\_1\] together with Lemma \[lem:notsec3\] proves that a $V$-equivariant extension map with respect to $H\lhd V$ exists. If $\Phi$ is of type ${\mathsf E}_6$ the generator ${\widehat}\gamma$ of $E_1$ corresponds to an automorphism of the associated braid group ${\mathsf B}$, which acts by permuting the generators. The epimorphism $\tau:{\mathsf B}\rightarrow V$ is ${\widehat}\gamma$-equivariant. Let ${\ensuremath{\mathrm{r}}}:W\rightarrow {\mathsf B}$ be the map from [@GP 4.1.1] and $w_0$ the longest element in $W$. Conjugation by ${\ensuremath{\mathrm{r}}}(w_0)={\mathrm w}_0$ acts on ${\mathsf B}$ like ${\widehat}\gamma$ by [@GP Lemma 4.1.9]; analogously, conjugation by ${{{\widetilde}{\mathbf w_{0}}}}$, which is the image of ${\ensuremath{\mathrm{r}}}(w_0)$ under the epimorphism $\tau$, acts on $V$ like ${\widehat}\gamma$. Hence the automorphism induced by ${\widehat}\gamma$ on $V$ is an inner automorphism, so any $V$-equivariant extension map is also $VE_1$-equivariant. \[prop:3\_Dl\] If $\Phi$ is of type ${\mathsf D}_l$ there exists a $VE_1$-equivariant extension map with respect to $H\lhd V$. First let us consider the case where $\Phi$ is of type ${\mathsf D}_l\neq {\mathsf D}_4$.
Let $\iota_{{\mathsf D}}:{{\mathbf G}}\rightarrow {\overline }{{\mathbf G}}$ be the embedding from \[embed\_D\_into\_B\]. Then $\iota_{{\mathsf D}}(V)\leq {\overline }V {:=}\spann<{\overline }n_{\al}(\pm 1)\mid \al \in{\overline }\Phi>$ and $\iota_{{\mathsf D}}(H)={\overline }H{:=}\spann< {\overline }h_{\al}(\pm 1)\mid \al \in {\overline }\Phi>$. Note that ${\overline }H=H$ and hence $V_\la\leq {\overline }V_\la$ for every $\la\in {\operatorname{Irr}}(H)={\operatorname{Irr}}({\overline }H)$. Let $\Lambda_{{\mathsf B}}$ be the ${\overline }V$-equivariant extension map with respect to ${\overline }H\lhd {\overline }V$ from Proposition \[prop:very\_good\_twist\_1\]. As explained in \[embed\_D\_into\_B\], $\gamma(x)=x^{{\overline }n_{e_1}(1) {\overline }h_{e_1}(\zeta)}$ for every $x\in {{\mathbf G}}$, where $\zeta$ is some primitive $8$th root of unity. (Note that because of our initial reductions we can assume that $2\nmid q$.) Let $\zeta'\in{\overline }{{\mathbb{F}}}_q$ be a primitive $8$th root of unity and $t:=\prod_{i=1}^{l} h_{e_i}(\zeta')$. For $n\in V$ we have $$[t,n]= \prod_{j\in J} h_{e_j}(\zeta'^{2})$$ for a set $J\subseteq \{1,\ldots, l\}$ with $2\mid |J|$. This proves $[t,V]\subseteq H$ and hence $V^t=V$. Hence there is a well-defined extension map $\Lambda_0$ given by $$\Lambda_0(\la)=(\restr\Lambda_{{\mathsf B}}(\la)|{V_\la})^t\qquad \text{ for all } \la \in {\operatorname{Irr}}(H).$$ Since $\Lambda_{{\mathsf B}}$ is ${\overline }V$-equivariant, $\Lambda_0$ is ${\overline }V^t$-equivariant. The element ${\overline }n_{\al_1}(1)^t={\overline }n_{\al_1}(1) h_{\al_1}(\zeta'^2)$ and $\gamma$ induce the same automorphism on $V$, according to \[embed\_D\_into\_B\]. Hence $\Lambda_0$ is $V\spann<\gamma>$-equivariant. Now assume that $\Phi$ is of type ${\mathsf D}_4$. According to the above considerations there exists some $V\spann<{\widehat}\gamma_2>$-equivariant extension map with respect to $H\lhd V$ for some ${\widehat}\gamma_2\in E_1$ of order $2$. 
For the proof it is sufficient to show maximal extendibility for some $VE_1$-transversal ${{\mathbb{T}}}\subset {\operatorname{Irr}}(H)$ with respect to $H\lhd VE_1$. We may choose ${{\mathbb{T}}}$ such that for each $\la \in {{\mathbb{T}}}$ some Sylow $2$-subgroup of $(VE_1)_\la$ is contained in $(V\spann<{\widehat}\gamma_2>)_\la$. According to [@Isa Thm. 6.26], every $\la\in {{\mathbb{T}}}$ extends to $(VE_1)_\la$ if $\la$ extends to a Sylow $2$-subgroup of $(VE_1)_\la$. By the choice of ${{\mathbb{T}}}$ we have $(VE_1)_\la=(V\spann<{\widehat}\gamma_2>)_\la$ for every $\la\in {{\mathbb{T}}}$. By the above $\la$ has a $(V\spann<{\widehat}\gamma_2>)$-invariant extension to $V_\la$. Since $(V\spann<{\widehat}\gamma_2>)_\la/V_\la$ is cyclic, $\la$ extends to $(V\spann<{\widehat}\gamma_2>)_\la$ by [@Isa Cor. 11.22]. This proves the claim. In the next step we construct extension maps in the case where the Frobenius endomorphism is twisted. Recall $V_1=V^{vF}$ and $H_1=H^{vF}$. \[lem:A3\] Let $F_0$, $m$ and $\gamma$ be defined as in \[ssec2:B\]. Assume that $\Phi$ is of type ${\mathsf D}_l$, $v={\operatorname{id}}_{{\mathbf G}}$ and $F=\gamma F_0^m$. Then there exists a $V_1 E_1$-equivariant extension map with respect to $H_1\lhd V_1$. By the proof of Lemma \[lem:3\_3\], $V_1/H_1$ is isomorphic to ${{\mathrm C}}_W(\gamma)$. Let $\Delta=\{\al_1,\ldots ,\al_l\}$ be a base of $\Phi$.
For a positive integer $i$ and elements $x,y\in V$ let ${\operatorname{prod}}(x,y, i)$ be defined by $${\operatorname{prod}}(x,y, i)=\underbrace{x\cdot y \cdot x \cdot y \cdots}_i.$$ Following [@Tits], the group $V$ coincides with the extended Weyl group of ${{\mathbf G}}$, that is, the finitely presented group generated by $n_i=n_{\al_i}(1)$ and $h_i=n_i^2$, subject to the relations $$\begin{aligned} h_i h_j&=h_j h_i,& h_i^{2}&=1,\\ {\operatorname{prod}}(n_i,n_j, m_{ij})&={\operatorname{prod}}(n_j,n_i, m_{ij}),& h_i^{n_j}&=h_j^{A_{i,j}}h_i \text{ for all } 1\leq i,j\leq l,\end{aligned}$$ where $m_{ij}$ is the order of $s_{\al_i} s_{\al_j}$ in $W$ and $(A_{i,j})$ is the associated Cartan matrix, see [@Sp_Diss Lemma 2.3.1(b)] for more details. Assume that $\Delta$ is chosen such that the graph automorphism $\gamma$ of order $2$ permutes $\al_1$ and $\al_2$. Straightforward calculations show that the elements $n'_2:=n_{\al_1}(-1) n_{\al_2}(-1)$ and $n'_i:=n_{\al_i}(-1)$ for $i>2$ satisfy the defining relations of an extended Weyl group of type ${\mathsf B}_{l-1}$. As the orders of the groups coincide, they are isomorphic. Together with Proposition \[prop:very\_good\_twist\_1\] this implies the existence of the required extension map. Note that according to Lemma \[lem:notsec3\] the extension map can be chosen to be $V_1$-equivariant. Since by definition $\gamma$ acts trivially on $V_1$ the extension map is also $V_1E_1$-equivariant. The statement follows from the existence of a $V_1E_1$-equivariant extension map since ${\widehat}F_0$ acts trivially on $V$. If $\Phi$ is of type ${\mathsf E}_6$ the claim is implied by [@Sp09 Lemma 8.2]. In the remaining cases Propositions \[prop:3\_10\] and \[prop:3\_Dl\], and Lemma \[lem:A3\] imply the statement if $d=1$. If $d=2$ and $\Phi$ is of type ${\mathsf B}_l$, ${\mathsf C}_l$ or ${\mathsf E}_7$ the proof of [@Sp09 Lemma 6.1] shows that $v\in {\operatorname Z}(V)$. Hence $H=H^{vF}=H_1$ and $V=V^{vF}=V_1$.
Then Proposition \[prop:very\_good\_twist\_1\] yields the claim. The only remaining case is when $\Phi$ is of type ${\mathsf D}_l$ and $d=2$. Computations in $V$ show that either $V_1=V$ or $V_1={{\mathrm C}}_V(\gamma)$, and then the statement on maximal extendibility follows from the observations made for $d=1$ in Proposition \[prop:3\_Dl\] and Lemma \[lem:A3\]. Hence there exists a $V_1E_1$-equivariant extension map with respect to $H_1\lhd V_1$ in all cases. We next state a lemma that helps to construct extensions with specific properties. \[lem:3\_13\] Let $\la\in {\operatorname{Irr}}(H_1)$ and ${\widetilde}\la\in{\operatorname{Irr}}(V_{1,\la})$ a $(V_1E)_\la$-invariant extension of $\la$. Then ${\widetilde}\la$ has an extension ${\widehat}\la\in {\operatorname{Irr}}((V_1 E)_{\la})$ with ${\widehat}\la(v{\widehat}F)=1$. Recall $E_1=\spann<{\widehat}\gamma>\leq E$ when ${{{{{\mathbf G}}^F}}}\neq {\mathsf D}_{4,{\operatorname{sc}}}(q)$ and $E_1=\spann<{\widehat}\gamma_2,{\widehat}\gamma_3>$ otherwise, with ${\widehat}\gamma_i$ of order $i$. Note that ${\widehat}F_0$ is central in $V_1\rtimes E$, i.e., $V_1E=(V_1\rtimes E_1)\times \langle{\widehat}F_0\rangle$. Since all Sylow subgroups of $E_1$ are cyclic, ${\widetilde}\la$ extends to a character $\psi$ of $(V_1E_1)_{{{{\widetilde}\lambda}}}$. Note that $(V_1E_1)_{{\widetilde}\la}=(V_1E_1)_{\la}$. Recall that ${{{{{\mathbf G}}^F}}}\neq {{}^3} {\mathsf D}_{4,{\operatorname{sc}}}(q)$ and $v{\widehat}F=v \kappa {\widehat}F_0^m$ for some $\kappa \in E_1$. We have $o({\widehat}F_0^m)=2o(v)$ by the definition of $E$ and $o(v\kappa)\mid (2o(v))$ since $v$ and $\kappa$ commute. Accordingly there exists some character $\epsilon\in{\operatorname{Irr}}(\langle{\widehat}F_0\rangle)$ with $\psi(1)\epsilon({\widehat}F_0^m)=\psi(v\kappa)^{-1}$. The character ${\widehat}\la= \psi\times \epsilon$ is an extension of ${\widetilde}\la$ with the required properties.
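The existence of the character $\epsilon$ at the end of this proof is elementary character theory of cyclic groups; we spell it out as a reminder (a sketch, not part of the original argument). Since $H_1$ is abelian, $\la$ is linear, hence so are its extensions ${\widetilde}\la$ and $\psi$; in particular $\psi(1)=1$ and $\psi(v\kappa)$ is a root of unity of order dividing $o(v\kappa)\mid 2o(v)=2e$. The linear characters of $\langle{\widehat}F_0\rangle\cong {{\mathrm C}}_{2em}$ are given by $\epsilon_j({\widehat}F_0)=\exp(2\pi \mathrm{i} j/(2em))$ for $0\leq j<2em$, so that $$\epsilon_j({\widehat}F_0^m)=\exp(2\pi \mathrm{i} j/(2e))$$ runs over all $(2e)$-th roots of unity. Choosing $j$ with $\epsilon_j({\widehat}F_0^m)=\psi(v\kappa)^{-1}$ yields the required $\epsilon$.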
Parametrisation of ${\operatorname{Irr}}(N)$ {#sec:3_param} -------------------------------------------- For the later understanding of the characters in ${\operatorname{Irr}}(N)$ we construct an extension map with respect to $T\lhd N$. Recall $N{:=}{\ensuremath{{\mathrm{N}}}}_{{\mathbf G}}({{\mathbf S}})^{vF}$ and $T{:=}{\ensuremath{{\rm{C}}}}_{{{\mathbf G}}}({{\mathbf S}})^{vF}={{\mathbf T}}^{vF}$. \[cor:ker\_delta\_sc\] There exists an extension map $\Lambda$ with respect to $T\lhd N$ such that 1. $\Lambda$ is $N\rtimes E$-equivariant; and 2. for every $\la\in {\operatorname{Irr}}(T)$, there exists some linear ${\widetilde}\la \in {\operatorname{Irr}}\left ((N\rtimes E)_{\la}\mid\Lambda(\la)\right )$ with ${\widetilde}\la(v{\widehat}F)=1$. Note that the existence of $\Lambda$ (without the properties required here) is known from [@HL Cor. 6.11] for $d=1$, and from [@Sp09] and [@Sp12]. According to Lemma \[lem:3\_3\] we have $N=T V_1$. Let $\Lambda_0$ be the $V_1 E$-equivariant extension map with respect to $H_1\lhd V_1$ from Theorem \[thm:very\_good\_twist\]. We obtain an $NE$-equivariant extension map $\Lambda$ by sending $\la \in {\operatorname{Irr}}(T)$ to the common extension of $\la$ and $\restr\Lambda_0(\restr \la|{H_1})|{ V_{1,\la}}$. According to the proof of [@Sp09 Lemma 4.3], $\Lambda$ is then well-defined. For proving (2) let $\la\in{\operatorname{Irr}}(T)$. Then $\la_0{:=}\restr\la|{H_1}$ extends to some ${\widetilde}\la_0\in{\operatorname{Irr}}((V_1 E)_{\la_0})$ with ${\widetilde}\la_0(v{\widehat}F)=1$ by Lemma \[lem:3\_13\]. According to the proof of [@Sp09 Lemma 4.3] there exists a unique common extension ${\widetilde}\la$ of $\Lambda(\la)$ and $\restr {\widetilde}\la_0| {(V_1E)_\la}$ to $(NE)_\la$. Then ${\widetilde}\la(v{\widehat}F)=1$. For later use we describe the action of ${\widetilde}N E$ on the extension map $\Lambda$ from Corollary \[cor:ker\_delta\_sc\].
Recall ${\widetilde}T:={\widetilde}{{\mathbf T}}^{vF}$ and ${\widetilde}N:={\ensuremath{{\mathrm{N}}}}_{{\widetilde}{{\mathbf G}}}({{\mathbf S}})^{vF}$. In the following we set $W(\la):=N_\la/T$ for $\la\in{\operatorname{Irr}}(T)$ and $W({\widetilde}\la):= N_{{{\widetilde}\lambda}}/T$ for ${\widetilde}\la\in{\operatorname{Irr}}({\widetilde}T)$. \[prop:3\_12\] Let $\la\in{\operatorname{Irr}}(T)$, ${\widetilde}\la \in {\operatorname{Irr}}({\widetilde}T|\la)$, $x \in {\widetilde}N E$, and $\Lambda$ the extension map from Corollary \[cor:ker\_delta\_sc\]. Then the character $\delta\in{\operatorname{Irr}}(W(\la)^x)$ with $\delta\Lambda(\la^x)=\Lambda(\la)^x$ satisfies $\ker(\delta)\geq W({\widetilde}\la^x)$. Observe that $\delta$ is well-defined by [@Isa Cor. 6.17]. Since $\Lambda$ is $NE$-equivariant, the character $\delta$ associated with $x$ is trivial whenever $x\in NE$. For $x \in {\widetilde}T$ we have $\la^x=\la$, so $\delta$ has the stated property. Taking these two results together we obtain the claim. As mentioned earlier the extension map constructed above is key to a labelling and understanding of the characters in ${\operatorname{Irr}}(N)$. \[prop:5\_11\_here\] Let $\Lambda$ be the extension map from Corollary \[cor:ker\_delta\_sc\] with respect to $T\lhd N$. Then the map $$\Pi:{{\mathcal P}}=\{(\la,\eta)\mid \la\in{\operatorname{Irr}}(T),\,\eta\in{\operatorname{Irr}}(W(\la))\} \longrightarrow{\operatorname{Irr}}(N),\quad (\la,\eta)\longmapsto (\Lambda(\la)\eta)^{N},$$ is surjective and satisfies 1. $\Pi(\la,\eta)=\Pi(\la',\eta')$ if and only if there exists some $n\in N$ such that ${{}^n}\la=\la'$ and ${{}^n}\eta=\eta'$. 2. ${{}^\si}\Pi(\la,\eta)=\Pi({{}^\si}\la,{{}^\si}\eta)$ for every $\si\in E$. 3. \[param\_diag\] \[prop:5\_11\_here3\] Let $t\in{\widetilde}T$, and $\nu_t\in{\operatorname{Irr}}(N_\la)$ be the linear character given by ${{}^t}\Lambda(\la)=\Lambda(\la)\nu_t$.
Then $N_{{\widetilde}\la}=\ker(\nu_t)$ for any ${\widetilde}\la\in {\operatorname{Irr}}(\spann<T,t>|\la)$. For ${\widetilde}\la_0 \in {\operatorname{Irr}}({\widetilde}T|\la)$ the map ${\widetilde}T \rightarrow{\operatorname{Irr}}(N_\la/N_{{\widetilde}\la_0})$ given by $t\mapsto \nu_t$ is surjective, and ${{}^t}\Pi(\la,\eta)=\Pi(\la,\eta\nu_t)$. The arguments from [@CS15 Prop. 5.11] can be transferred to prove the statement. Straightforward considerations show that the map in \[param\_diag\] is surjective. Maximal extendibility with respect to $W({{{\widetilde}\lambda}})\lhd W(\la)$ {#sec:maxextWwla} ----------------------------------------------------------------------------- Our aim in this subsection is to show that maximal extendibility holds with respect to $W({{{\widetilde}\lambda}})\lhd {\ensuremath{{\mathrm{N}}}}_{W_1E}(W({\widetilde}\la))$ for every ${{{\widetilde}\lambda}}\in{\operatorname{Irr}}(\wT)$ with $W_1{:=}\pi(N)$, where $W({\widetilde}\la){:=}N_{{{\widetilde}\lambda}}/ T$. Two special cases are already known: For $d=1$ maximal extendibility is known to hold with respect to $W({{{\widetilde}\lambda}})\lhd W(\la)$ where $\la=\restr {\widetilde}\la|T$, see Proposition \[prop:max:ext:Wla\] below. Proposition 5.12 of [@CS15] shows the analogue for arbitrary positive integers $d$ assuming that the underlying root system is of type ${\mathsf A}_l$. The statement in Theorem \[thm:3:25\] plays a crucial role in proving Theorem \[thm:stab\] via the parametrisation of characters of $N$ given above. We start by rephrasing the known result for $d=1$. \[prop:max:ext:Wla\] Assume that $d=1$. Let $\la\in{\operatorname{Irr}}(T)$ and ${\widetilde}\la\in{\operatorname{Irr}}({\widetilde}T|\la)$. Then maximal extendibility holds with respect to $W({\widetilde}\la)\lhd W(\la)$.
The quotient $W(\la)/W({\widetilde}\la)$ is abelian, and for every $\eta\in{\operatorname{Irr}}(W(\la))$ every character $\eta_0\in {\operatorname{Irr}}(W({\widetilde}\la)\mid \eta)$ has multiplicity one in the restriction $\restr \eta|{W({\widetilde}\la)}$, see [@Cedric 13.13(a)]. This implies the statement. \[thm:3:25\] Let $\la\in{\operatorname{Irr}}( T)$, ${\widetilde}\la\in{\operatorname{Irr}}({\widetilde}T|\la)$ and $W_1{:=}\pi(N)$. Then every $\eta_0\in{\operatorname{Irr}}(W({\widetilde}\la))$ has an extension $\kappa\in{\operatorname{Irr}}(\norm{W_1 E}{W({{{\widetilde}\lambda}})}_{\eta_0})$ with $v{\widehat}F\in \ker(\kappa)$. We first prove that maximal extendibility holds with respect to $W({{{\widetilde}\lambda}})\lhd {\ensuremath{{\mathrm{N}}}}_{W_1 E}(W({{{\widetilde}\lambda}}))$. Let $({\widetilde}{{\mathbf G}}^*,{\widetilde}{{\mathbf T}}^*, v' F^*)$ be the dual to $({\widetilde}{{\mathbf G}},{\widetilde}{{\mathbf T}},vF)$ constructed as in [@DM Def. 13.10]. Note that because of our particular choice of $v$ the automorphism on $W$ induced by $vF$ coincides with a graph automorphism $\phi'$ on $W$. The character ${\widetilde}\la$ corresponds to a semisimple element $s\in({\widetilde}{{\mathbf T}}^*)^{v'F^*}$ of the dual group $({\widetilde}{{\mathbf G}}^*,v'F^*)$. Let $R({{{\widetilde}\lambda}})$ be the Weyl group of ${\ensuremath{{\rm{C}}}}_{{\widetilde}{{\mathbf G}}^*}(s)$. Since ${\ensuremath{{\rm{C}}}}_{{\widetilde}{{\mathbf G}}^*}(s)$ is connected, $R({\widetilde}\la)$ is a reflection group. We have $W({{{\widetilde}\lambda}})={\ensuremath{{\rm{C}}}}_{R({{{\widetilde}\lambda}})}(v'F^*)={\ensuremath{{\rm{C}}}}_{R({\widetilde}\la)}(\phi')$. Accordingly $W({{{\widetilde}\lambda}})$ is a reflection group and the $\phi'$-orbits on the roots of $R({{{\widetilde}\lambda}})$ form a root system, which we denote by $\Phi(\la)$, see [@MT Thm. C.5]. Straightforward calculations show that $\Phi(\la)$ is already determined by $\la$.
The group $K:={\ensuremath{{\mathrm{N}}}}_{W_1 E}(W({\widetilde}\la))$ acts on $W({\widetilde}\la)$, $R({{{\widetilde}\lambda}})$ and $\Phi(\la)$ by conjugation. Let $\Delta$ be a base of $\Phi(\la)$. Then by the properties of root systems $K=W({\widetilde}\la){\operatorname{Stab}}_K(\Delta)$, even $K=W({\widetilde}\la)\rtimes {\operatorname{Stab}}_K(\Delta)$, where ${\operatorname{Stab}}_K(\Delta)$ denotes the stabiliser of $\Delta$ in $K$. First let us prove that maximal extendibility holds with respect to $W({\widetilde}\la)\lhd W({\widetilde}\la)\rtimes {\operatorname{Aut}}(\Delta)$, where ${\operatorname{Aut}}(\Delta)$ is the group of length-preserving automorphisms of $\Delta$. Whenever $\Delta$ is indecomposable the statement is true, since then all Sylow subgroups of ${\operatorname{Aut}}(\Delta)$ are cyclic. If $\Delta=\Delta_1\sqcup\ldots\sqcup\Delta_r$ with isomorphic indecomposable systems $\Delta_i$, the group ${\operatorname{Aut}}(\Delta)$ is isomorphic to the wreath product ${\operatorname{Aut}}(\Delta_1)\wr {{\operatorname{S}}}_r$. Since maximal extendibility holds with respect to $H^r\lhd H\wr {{\operatorname{S}}}_r$ for any group $H$ according to [@Hu Thm. 25.6] we see that maximal extendibility holds with respect to $W({{{\widetilde}\lambda}})\lhd W({{{\widetilde}\lambda}})\rtimes {\operatorname{Aut}}(\Delta)$ in that case. Since maximal extendibility holds for $H_1\times H_2\lhd G_1\times G_2$ whenever it holds for $H_1\lhd G_1$ and $H_2\lhd G_2$ the above implies that maximal extendibility holds with respect to $W({{{\widetilde}\lambda}}) \lhd W({{{\widetilde}\lambda}})\rtimes {\operatorname{Aut}}(\Delta)$. Now let $C:={\ensuremath{{\rm{C}}}}_K(\Delta)$. By definition $C\lhd K$. Let ${\overline }K:=K/C$. Then maximal extendibility holds with respect to $W({{{\widetilde}\lambda}})\lhd K$ if it holds with respect to ${\overline }R:= W({{{\widetilde}\lambda}})C/C\lhd {\overline }K$.
We see that ${\overline }S:={\operatorname{Stab}}_K(\Delta)/C$ is a subgroup of ${\operatorname{Aut}}(\Delta)$ and by the above maximal extendibility holds with respect to $W({{{\widetilde}\lambda}})\lhd W({{{\widetilde}\lambda}})\rtimes {\overline }S$. But this implies maximal extendibility with respect to $W({{{\widetilde}\lambda}})\lhd K$. This proves the first part of the claim. We finish by constructing the required extension $\kappa$. Let $\eta_0\in{\operatorname{Irr}}(W({{{\widetilde}\lambda}}))$. Recall $E_1$ from \[not:3.4\]. By the above $\eta_0$ extends to some ${\widehat}F_0$-stable $\kappa_1\in{\operatorname{Irr}}({\ensuremath{{\mathrm{N}}}}_{W_1E_1}(W({{{\widetilde}\lambda}}))_{\eta_0})$. Since $\pi(v{\widehat}F {\widehat}F_0^{-m})\in {\operatorname Z}(W_1E)$ and ${\ensuremath{{\mathrm{N}}}}_{W_1E}(W({{{\widetilde}\lambda}}))_{\eta_0}= {\ensuremath{{\mathrm{N}}}}_{W_1E_1}(W({{{\widetilde}\lambda}}))_{\eta_0} \times \langle{\widehat}F_0\rangle$ the considerations from the proof of Lemma \[lem:3\_13\] ensure the existence of $\kappa$, as required. Consequences {#subsec:3:stab} ------------ The previous considerations also allow us to conclude that the considered characters of $N$ have the structure stated in Proposition \[prop:5\_6\]. \[thm:stab\] For every $\chi\in{\operatorname{Irr}}({\widetilde}N)$ there exists some $\chi_0\in{\operatorname{Irr}}(N|\chi)$ with the following properties: 1. $({\widetilde}N \rtimes E)_{\chi_0} ={\widetilde}N_{\chi_0} \rtimes E_{\chi_0}$; and 2. $\chi_0$ has an extension ${\widetilde}\chi_0\in{\operatorname{Irr}}(N\rtimes E_{\chi_0})$ with $v{\widehat}F\in\ker({\widetilde}\chi_0)$. Let $\chi_1\in{\operatorname{Irr}}(N|\chi)$ and $(\lambda,\eta)\in{{\mathcal P}}$ with $\chi_1=\Pi(\la,\eta)$ for the map $\Pi$ from Proposition \[prop:5\_11\_here\]. Let ${\widetilde}\la\in{\operatorname{Irr}}(\wT|\la)$ and $\eta_0\in{\operatorname{Irr}}(W({\widetilde}\la))$ such that $\eta\in{\operatorname{Irr}}(W(\la)|\eta_0)$.
By Clifford correspondence there exists a unique character $\eta_1\in{\operatorname{Irr}}(W(\la)_{\eta_0}|\eta_0)$ such that $\eta=\eta_1^{W(\la)}$. Now since $W(\la)/W({\widetilde}\la)$ is abelian and since, by Proposition \[prop:max:ext:Wla\], maximal extendibility holds with respect to $W({\widetilde}\la)\lhd W(\la)$, the character $\eta_1$ is an extension of $\eta_0$. Let $W_1{:=}\pi(N)$. According to Theorem \[thm:3:25\] there exists an ${\ensuremath{{\mathrm{N}}}}_{W_1E}(W({\widetilde}\la))_{\eta_0}$-invariant extension ${\widetilde}\eta_0\in{\operatorname{Irr}}(W(\la)_{\eta_0})$ of $\eta_0$. The character $\eta':=({\widetilde}\eta_0)^{W(\la)}$ is irreducible. Hence $\chi_0:=\Pi(\la,\eta')$ is a well-defined character of $N$. We show that $\chi_0$ is ${\ensuremath{{{\widetilde}N}}}$-conjugate to $\chi_1$: Since the map $\wT\rightarrow{\operatorname{Irr}}(W(\la)/W({\widetilde}\la))$, $t\mapsto\nu_t$, from Proposition \[prop:5\_11\_here\](3) is surjective, there exists $t\in {\widetilde}T$ such that $\eta'=\eta\nu_t$. This proves ${{}^t}\chi_1= {{}^t} \Pi(\la,\eta)= \Pi(\la,\eta\nu_t)= \chi_0$. For analysing the stabiliser of $\chi_0$ let $t\in\wT$ and $e\in E$ be such that $\chi_0^{te}=\chi_0$. Then there exists some $n\in N$ such that $(\la,\eta')= (\la^{ne},(\eta')^{ne} \nu_t)$. Without loss of generality $n$ can be chosen such that $\pi(n)e\in{\ensuremath{{\mathrm{N}}}}_{W_1E}(W({\widetilde}\la))_{\eta_0}$. By the choice of ${\widetilde}\eta_0$, $\pi(n)e$ stabilises ${\widetilde}\eta_0$, hence $(\la^{ne}, (\eta')^{ne})=(\la, \eta')$ and $\chi_0^{e}=\chi_0$. This proves the equation in (1). By Corollary \[cor:ker\_delta\_sc\] the character $\Lambda(\la)$ has an extension ${\widetilde}\la$ to $(NE)_{\Lambda(\la)\eta'}$ with ${\widetilde}\la(v{\widehat}F)=1$.
On the other hand the character ${\widetilde}\eta_0$ can be chosen to have an extension ${\widehat}\eta_0$ to ${\ensuremath{{\mathrm{N}}}}_{W_1E}(W({\widetilde}\la))_{\eta_0}$ with $v{\widehat}F \in \ker({\widehat}\eta_0)$, see Theorem \[thm:3:25\]. We denote by $\kappa_1$ the lift of $\restr{\widehat}\eta_0|{{\ensuremath{{\mathrm{N}}}}_{W_1E}(W({{{\widetilde}\lambda}}))_{\eta_0,\la}}$ to $(NE)_{\eta_0,\la}$. Then $\kappa_2:=(\kappa_1)^{(NE)_{\la,\eta}}$ is irreducible with $v{\widehat}F\in \ker(\kappa_2)$. The character $({{{\widetilde}\lambda}}\kappa_2)^{(NE)_{\chi_0}}$ is an extension of $\chi_0$ with the required properties. Via Proposition \[prop:5\_6\] the above proves Theorem \[thm:IrrN\_autom\]. For later use two further consequences of our considerations are important. First we give the following interpretation of Theorem \[thm:IrrN\_autom\]. \[lem:3\_21\] Let ${{\mathbf S}}_0$, $N_0$, ${\widetilde}N_0$ and $O_0$ be defined as in Theorem \[thm:IrrN\_autom\]. For $\psi_0\in{\operatorname{Irr}}(N_0)$ the following are equivalent: 1. \[lem3:2i\] $O_0= ({{{{{\widetilde}{{\mathbf G}}}^F}}}\cap O_0) \rtimes (D\cap O_0)$. 2. \[lem3:2ii\] $({\widetilde}G^F D)_{{{\mathbf S}}_0,\psi_0}={\widetilde}N_{0,\psi_0}({{\mathbf G}}^F\rtimes D)_{{{\mathbf S}}_0,\psi_0}$. By the definition of $O_0$ one deduces \[lem3:2ii\] from \[lem3:2i\] by considering the stabiliser of ${{\mathbf S}}_0$: $$\begin{aligned} ({\widetilde}G^F D)_{{{\mathbf S}}_0,\psi_0}&= (O_0)_{{{\mathbf S}}_0} =\left(({{{{{\widetilde}{{\mathbf G}}}^F}}}\cap O_0) \rtimes (D\cap O_0)\right )_{{{\mathbf S}}_0}\\ &=\left(({{{{{\widetilde}{{\mathbf G}}}^F}}}\cap O_0) ({{{{{\mathbf G}}^F}}}D\cap O_0)\right )_{{{\mathbf S}}_0} =({{{{{\widetilde}{{\mathbf G}}}^F}}}\cap O_0)_{{{\mathbf S}}_0} ({{{{{\mathbf G}}^F}}}D\cap O_0)_{{{\mathbf S}}_0}\\ &={\ensuremath{{{\widetilde}N}}}_{0,\psi_0} ({{{{{\mathbf G}}^F}}}D)_{{{\mathbf S}}_0,\psi_0}.\end{aligned}$$ Here, recall that by [@BM92 Thm.
3.4] all Sylow $d$-tori of $({{\mathbf G}},F)$ are ${{{{{\mathbf G}}^F}}}$-conjugate and the $(D\cap O_0)$-conjugates of ${{\mathbf S}}_0$ are Sylow $d$-tori. Multiplying the equation in \[lem3:2ii\] by ${{{{{\mathbf G}}^F}}}$ gives $O_0=({\widetilde}{{\mathbf G}}^F\cap O_0) ( ({{{{{\mathbf G}}^F}}}\rtimes D) \cap O_0)$. Since ${{{{{\mathbf G}}^F}}}\leq ({{{{{\mathbf G}}^F}}}\rtimes D) \cap O_0 $ this gives \[lem3:2i\]. \[wLam\_sec3\] Let ${{\mathbf S}}_0$, $N_0$ and ${\widetilde}N_0$ be defined as in Theorem \[thm:IrrN\_autom\]. Let ${\widetilde}C_0{:=}{\ensuremath{{\rm{C}}}}_{{\widetilde}{{\mathbf G}}^F}({{\mathbf S}}_0)$. Then there exists some ${\ensuremath{{\mathrm{N}}}}_{{\widetilde}{{\mathbf G}}^FD}({{\mathbf S}}_0)$-equivariant extension map ${\widetilde}\Lambda$ with respect to ${\widetilde}C_0 \lhd {\widetilde}N_0$, such that in addition ${\widetilde}\Lambda({\widetilde}\la\restr\delta|{{\widetilde}C_0})={\widetilde}\Lambda({\widetilde}\la)\restr\delta|{{\widetilde}N_0}$ for every ${\widetilde}\la\in{\operatorname{Irr}}({\widetilde}C_0)$ and $\delta\in{\operatorname{Irr}}({\widetilde}{{\mathbf G}}^{F}|1_{{{\mathbf G}}^{F}})$. The considerations from the proof of [@CS15 Cor. 5.14] can be transferred: Applying the isomorphism $\iota$ from Proposition \[prop:5\_6\] shows that it is sufficient to verify that there exists some $NE $-equivariant extension map ${\widetilde}\Lambda$ with respect to ${\widetilde}T\lhd {\widetilde}N$, such that in addition ${\widetilde}\Lambda({\widetilde}\la\restr\delta|{{\widetilde}T})={\widetilde}\Lambda({\widetilde}\la)\restr\delta|{{\widetilde}N}$ for every ${\widetilde}\la\in{\operatorname{Irr}}({\widetilde}T)$ and $\delta\in{\operatorname{Irr}}({\widetilde}{{\mathbf G}}^{vF}|1_{{{\mathbf G}}^{vF}})$.
Let $\Lambda$ be the $ N E$-equivariant extension map with respect to $T \lhd N$ from Corollary \[cor:ker\_delta\_sc\] and $${\widetilde}\Lambda:{\operatorname{Irr}}({\widetilde}T) \rightarrow \bigcup_{{\widetilde}T\leq I\leq {\widetilde}N}{\operatorname{Irr}}(I)$$ be the map sending ${\widetilde}\la\in{\operatorname{Irr}}({\widetilde}T )$ to the unique common extension of ${\widetilde}\lambda$ and $\restr\Lambda(\la)|{N_{{{\widetilde}\lambda}}}$ where $\la{:=}\restr{{{\widetilde}\lambda}}|T$. Then ${\widetilde}\Lambda$ is well-defined according to [@Sp09 Lemma 4.3] and has the required properties. The next statement is later applied to verify assumption \[hauptprop\_maxext\_loc\] in the considered cases. \[cor:maxextNwN\] For the groups $N_0$ and ${\widetilde}N_0$ from Theorem \[thm:IrrN\_autom\] maximal extendibility holds with respect to $N_0\lhd {\widetilde}N_0$. As in the proof of the preceding proposition, the isomorphism $\iota$ from the proof of Proposition \[prop:5\_6\] allows us to prove the statement by establishing that maximal extendibility holds with respect to $N\lhd {\widetilde}N$. Let ${\widetilde}\Lambda$ be the extension map with respect to ${\widetilde}T \lhd {\widetilde}N$ from (the proof of) Proposition \[wLam\_sec3\]. Then every character ${\widetilde}\psi\in {\operatorname{Irr}}({\widetilde}N)$ is of the form $({\widetilde}\Lambda({\widetilde}\la)\eta_0)^{{\widetilde}N}$ for some ${\widetilde}\lambda\in{\operatorname{Irr}}({\widetilde}T )$ and $\eta_0\in{\operatorname{Irr}}(W({{{\widetilde}\lambda}}))$.
Thus $$\begin{aligned} \restr{\widetilde}\psi|{ N }= \restr ({\widetilde}\Lambda({\widetilde}\la)\eta_0)^{{\widetilde}N }|{ N }&= \Big (\restr({\widetilde}\Lambda({\widetilde}\la)\eta_0)|{ N _{{{\widetilde}\lambda}}}\Big)^{ N }= \Big (\restr\Lambda(\la)|{ N _{{{\widetilde}\lambda}}}\eta_0\Big)^{ N }\\ &=\Big((\restr\Lambda(\la)|{ N _{{{\widetilde}\lambda}}}\eta_0)^{ N _\la}\Big)^{ N }= \Big (\Lambda(\la)(\eta_0^{ N _\la})\Big)^{ N },\end{aligned}$$ where $\la:=\restr {\widetilde}\la|{ T }$. According to Theorem \[thm:3:25\] maximal extendibility holds with respect to $W({{{\widetilde}\lambda}})\lhd W(\la)$. Since $W(\la)/W({{{\widetilde}\lambda}})$ is abelian, $\eta_0^{ N _\la}$ and hence $\restr {\widetilde}\psi|N$ is multiplicity-free. This proves the statement. Some non-principal series in symplectic groups {#type C} ============================================== For later applications in the study of some non-principal Harish-Chandra series in type ${\mathsf C}_l$ we have to consider characters of a certain standard Levi subgroup that is not a torus. In this section let ${{\mathbf G}}={{\mathrm C}}_{l,{\operatorname{sc}}}$, so $G={\operatorname{Sp}}_{2l}(q)$ and $\tilde G={\operatorname{CSp}}_{2l}(q)$, and let $L$ be a standard Levi subgroup of $G$ with root system of type ${\mathsf C}_1$. Then for the $F$-stable torus ${{\mathbf T}}_0:={\ensuremath{{\rm{C}}}}_{{\mathbf G}}^\circ(L)$ we have $L={\ensuremath{{\rm{C}}}}_G({{\mathbf T}}_0)$. Additionally let $N={\ensuremath{{\mathrm{N}}}}_G({{\mathbf T}}_0)$, and ${\widetilde}N={\ensuremath{{\mathrm{N}}}}_{{\widetilde}G}({{\mathbf T}}_0)$. Let $\{\al_1,\ldots,\al_l\}$ be the base of the root system of ${{\mathbf G}}$ as introduced in [@Spaeth2 6.1]. Note that $L=\spann<T,X_{\al_1},X_{-\al_1}>$, and $L_0:=\spann<X_{\al_1},X_{-\al_1}>$ is isomorphic to ${\operatorname{SL}}_2(q)$.
The group ${\widetilde}T:={\widetilde}{{\mathbf T}}^F$ induces diagonal automorphisms on ${\operatorname{SL}}_2(q)$ and $F_0$ induces the field automorphism. Let $T_1:=\spann<h_{e_i}(t) \mid i \geq 2\,,\, t\in{{\mathbb{F}}}_q^\times>$. Then an easy calculation gives that $L=T_1\times L_0$ and $N=N_1\times L_0$ where $N_1:=\spann<n_{e_i}(t),n_{\pm e_i \pm e_j}(t)\mid 2\leq i<j\leq l\,,\, t\in{{\mathbb{F}}}_q^\times>$. Let ${\widetilde}L:={\widetilde}T L$. Note that $D$ acts on $N_1$ and on $L_0$. The diagonal automorphism induced by $\tilde G$ acts as diagonal automorphism on $N_1$ and on $L_0$. \[prop:ext\_map\_C\] There exists an $ND$-equivariant extension map with respect to $L\lhd N$. Straightforward computations show that $T_1\lhd N_1$ are a maximally split torus and its normaliser in the group $\spann<X_{e_i},X_{\pm e_i \pm e_j} \mid 2\leq i<j\leq l>$, which is isomorphic to ${\mathsf C}_{l-1,{\operatorname{sc}}}(q)$. Let $\Lambda_1$ be the extension map with respect to $T_1\lhd N_1$ for ${\operatorname{Irr}}(T_1)$ from Corollary \[cor:ker\_delta\_sc\]. Any character $\psi\in{\operatorname{Irr}}(L)$ has the form $\la_1\times \zeta$ with $\la_1\in{\operatorname{Irr}}(T_1)$ and $\zeta\in{\operatorname{Irr}}(L_0)$. The stabiliser $N_\psi$ coincides with $N_{1,\la_1}\times L_0$. Accordingly we can define an extension of $\psi$ as $\Lambda_1(\la_1) \times \zeta$. Then $$\label{eq_def_Lambda_C} \Lambda:{\operatorname{Irr}}(L) \rightarrow \bigcup_{L\leq I \leq N} {\operatorname{Irr}}(I),\quad \la_1\times \zeta \mapsto\Lambda_1(\la_1) \times \zeta,$$ is an extension map as required. Now since $D$ induces field automorphisms on $N_1$, $\Lambda_1$ and hence $\Lambda$ are $D$-equivariant. The action of $N$ on ${\operatorname{Irr}}(I)$ for subgroups $I$ with $L\leq I \leq N$ coincides with the one of $N_1$. This implies by definition that $\Lambda$ is $ND$-equivariant, since $\Lambda_1$ is $N_1D$-equivariant.
As in Section \[sec:IrrlN\] the extension map from Proposition \[prop:ext\_map\_C\] can be used to give a labelling to the characters of $N$ lying above cuspidal characters of $L$. We write ${\operatorname{Irr}}_{{\operatorname{cusp}}}(L)$ for the set of cuspidal characters of $L$ and ${\operatorname{Irr}}_{{\operatorname{cusp}}}(N){:=}{\operatorname{Irr}}(N|{\operatorname{Irr}}_{{\operatorname{cusp}}}(L) )$. For $\la\in{\operatorname{Irr}}(L)$ set ${W(\la)}{:=}N_\la/L$. \[prop:loc\_param\_C\] Let $\Lambda$ be the extension map with respect to $L\lhd N$ from Proposition \[prop:ext\_map\_C\]. Then $$\Pi:{{\mathcal P}}=\{(\la,\eta)\mid\la\in{\operatorname{Irr}}_{{\operatorname{cusp}}}(L),\,\eta\in{\operatorname{Irr}}({W(\la)})\} \longrightarrow{\operatorname{Irr}}_{{\operatorname{cusp}}}(N), \quad (\la,\eta)\longmapsto (\Lambda(\la)\eta)^{N},$$ is surjective and satisfies 1. $\Pi(\la,\eta)=\Pi(\la',\eta')$ if and only if there exists some $n\in N$ such that ${{}^n}\la=\la'$ and ${{}^n}\eta=\eta'$. 2. ${{}^\si}\Pi(\la,\eta)=\Pi({{}^\si}\la,{{}^\si}\eta)$ for every $\si\in D$. 3. Let $t\in{\widetilde}L_\la$, and let $\nu_t\in{\operatorname{Irr}}(N_\la)$ be the linear character given by ${{}^t}\Lambda(\la)=\Lambda({{}^t}\la)\nu_t$. Then $N_{{\widetilde}\la}=\ker(\nu_t)$ for any ${\widetilde}\la\in {\operatorname{Irr}}(\spann<L,t>|\la)$. Let ${\widetilde}\la_0$ be an extension of $\la$ to ${\widetilde}L_\la$. Then the associated map ${\widetilde}L_\la \rightarrow{\operatorname{Irr}}(N_\la/N_{{\widetilde}\la_0})$, $t\mapsto\nu_t$, is surjective and ${{}^t}\Pi(\la,\eta)=\Pi(\la^t,\eta\nu_t)$. The proof of Proposition \[prop:5\_11\_here\] carries over. Let $\la\in{\operatorname{Irr}}(L)$, ${\widetilde}\la\in {\operatorname{Irr}}({\widetilde}L|\la)$ and $x\in {\widetilde}N D$. Then $\delta_{\la,x}$ defined by $\delta_{\la,x}\Lambda({{}^x}\la) ={{}^x}\Lambda(\la)$ satisfies $W({{}^x} {\widetilde}\la)\leq \ker(\delta_{\la,x})$.
Using the result from the previous proposition, the considerations proving Proposition \[prop:3\_12\] imply the statement. \[thm:stab\_C\] Every character $\psi\in{\operatorname{Irr}}(N)$ satisfies $({\widetilde}N \rtimes D)_{\psi} ={\widetilde}N_{\psi} (ND)_{\psi}$. Since $N=N_1\times L_0$ and ${\widetilde}ND$ stabilises $N_1$ and $L_0$ we obtain that $({\widetilde}ND)_{\psi}= ({\widetilde}N D)_{\chi}\cap ({\widetilde}N D)_{\zeta}$ for $\psi=\chi\times\zeta$ with $\chi\in{\operatorname{Irr}}(N_1|\psi)$ and $\zeta\in{\operatorname{Irr}}(L_0|\psi)$. By direct calculations for ${\operatorname{SL}}_2(q)$, $\zeta$ satisfies $({\widetilde}N D)_{\zeta}={\widetilde}N_\zeta \rtimes D_\zeta$, since ${\widetilde}N$ induces diagonal automorphisms of $L_0$ and $D$ field automorphisms. By Theorem \[thm:IrrN\_autom\] together with Lemma \[lem:3\_21\], every character $\chi\in{\operatorname{Irr}}(N_1)$ satisfies $({\widetilde}N D)_\chi=({\widetilde}N)_\chi D_\chi$. Together with the above this implies the claim. The action of ${\operatorname{Aut}}(G)$ on Harish-Chandra induced characters {#sec:HC} ============================================================================ The aim of this section is to verify that the assumptions of Theorem \[thm:Sp12\] concerning the characters of $G$ are satisfied. For this we describe the action of ${\operatorname{Aut}}(G)$ on Harish-Chandra induced characters in terms of their parameters. Thus we first have to recall how one obtains the parametrisation of those characters. We follow here the treatment of the subject given in [@Ca85 Chap. 10], which is based on the results of [@HL] and [@HL83]. We consider the following slightly more general setting. Let $G$ be a finite group with a split $BN$-pair of characteristic $p$. We write $W=N/(N\cap B)$ for the Weyl group of $G$, which we assume to be of crystallographic type.
Then there is a root system $\Phi$ attached to $W$ and we let $\Delta$ denote a base of $\Phi$ corresponding to the simple reflections of $W$. We write $s_\al\in W$ for the reflection along the root $\al\in\Phi$. Let $P\le G$ be a standard parabolic subgroup with standard Levi subgroup $L$ and Levi decomposition $P= U\rtimes L$. Let $N(L):=(N_G(L)\cap N)L$. We choose and fix once and for all an $N(L)$-equivariant extension map for $L\lhd N(L)$, which exists according to [@GeckHC] and [@Lu Thm. 8.6]. Let $\la$ be an irreducible cuspidal character of $L$. Via the Levi decomposition $\la$ can be inflated to a character of $P$. Let $M$ be a left ${{\mathbb{C}}}P$-module affording $\la$ and denote by $\rho$ the corresponding representation. Let ${{\mathfrak F}}(\rho)$ be the vector space of ${{\mathbb{C}}}$-linear maps $f:{{\mathbb{C}}}G\rightarrow M$ with $$f(px)=\rho (p) f(x) \quad\text{ for all }p \in P \text{ and } x \in {{\mathbb{C}}}G.$$ This vector space becomes a ${{\mathbb{C}}}G$-module via $$\label{G_action} (g\star f)(x)=f(xg) \quad\text{ for all $g\in G$, $f\in {{\mathfrak F}}(\rho)$ and $x\in {{\mathbb{C}}}G$}.$$ We denote by ${\mathrm {R}}_L^G(\la)$ the character of $G$ afforded by this module. It is known that ${\mathrm {R}}_L^G(\la)$ only depends on $\la$, not on the choice of $P$ or of $\rho$. The set of constituents of ${\mathrm {R}}_L^G(\la)$ is called the Harish-Chandra series above $(L,\la)$ and will be denoted by ${{\mathcal E}}(G,(L,\la))$. The union of Harish-Chandra series associated with $N\cap B$ and its characters is called the *principal series of* $G$. Actions of automorphisms on the standard basis ---------------------------------------------- Let $\si$ be an automorphism of $G$ stabilising $P$, $L$, and the $BN$-pair. Recall that $\si$ acts on the class functions on $G$ via $\chi\mapsto{}^\si\chi$, where $^\si\chi(g)=\chi(\si^{-1}(g))$ for all $g\in G$.
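Note that this indeed defines a left action: for automorphisms $\si,\tau$ of $G$ and $g\in G$ we have $${}^{\si}({}^{\tau}\chi)(g)={}^{\tau}\chi(\si^{-1}(g))=\chi\big(\tau^{-1}(\si^{-1}(g))\big)=\chi\big((\si\tau)^{-1}(g)\big)={}^{\si\tau}\chi(g),$$ so ${}^{\si}({}^{\tau}\chi)={}^{\si\tau}\chi$.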
It is immediate from the definitions that $(L,{{^\sigma\!\lambda}})$ is again a cuspidal pair of $G$. The character $^\si{\mathrm {R}}_L^G(\la)$ is afforded by the ${{\mathbb{C}}}G$-module $^\si{{\mathfrak F}}(\rho)$ obtained from the vector space ${{\mathfrak F}}(\rho)$ together with the $G$-action $$\label{G_action_si} (g\star_{\si}f)(x)= f(x\si^{-1}(g)) \quad\text{ for all $g\in G$, $f\in {{\mathfrak F}}(\rho)$, and $x\in {{\mathbb{C}}}G$}.$$ One easily sees that ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ and ${\operatorname{End}}_{{{\mathbb{C}}}G}(^\si{{\mathfrak F}}(\rho))$ can be canonically identified via ${}^\si\! B(f):=B(f)$ for $B\in {\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ and $f\in{{\mathfrak F}}(\rho)$. Let ${{\mathfrak F}}({{^\sigma\!\rho}})$ be the coinduced module associated to ${{^\sigma\!\rho}}$ defined as above. Then $\iota:{}^\si{{\mathfrak F}}(\rho) \rightarrow {{\mathfrak F}}({{^\sigma\!\rho}})$ given by $$f\mapsto {}^\si\!f\text{ with } {}^\si\!f(x)=f(\si^{-1}(x)) \text{ for all } x \in {{\mathbb{C}}}G$$ defines a ${{\mathbb{C}}}G$-module isomorphism. Moreover $B\mapsto\iota\circ B\circ\iota^{-1}$ for $B\in{\operatorname{End}}_{{{\mathbb{C}}}G}(^\si{{\mathfrak F}}(\rho))$ induces an isomorphism from ${\operatorname{End}}_{{{\mathbb{C}}}G}(^\si{{\mathfrak F}}(\rho))$ to ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}({{^\sigma\!\rho}}))$. We denote by ${{\widehat}\iota}:{\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))\rightarrow{\operatorname{End}}_{{{\mathbb{C}}}G}(^\si{{\mathfrak F}}(\rho))\rightarrow {\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}({{^\sigma\!\rho}}))$ the composed isomorphism. 
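For later reference we record the (purely formal) verification that $\iota$ is ${{\mathbb{C}}}G$-linear: for $g\in G$, $f\in{}^\si{{\mathfrak F}}(\rho)$ and $x\in{{\mathbb{C}}}G$ we have $$\iota(g\star_{\si}f)(x)=(g\star_{\si}f)(\si^{-1}(x))=f\big(\si^{-1}(x)\si^{-1}(g)\big)=f\big(\si^{-1}(xg)\big)=\big(g\star\iota(f)\big)(x),$$ where $\star$ and $\star_{\si}$ denote the actions from \[G\_action\] and \[G\_action\_si\].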
Since we are interested in the irreducible constituents of ${\mathrm {R}}_L^G(\la)$ and ${\mathrm {R}}_L^G({{^\sigma\!\lambda}})$, which are parametrised by the isomorphism classes of irreducible modules of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ and of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}({{^\sigma\!\rho}}))$ respectively, see [@Ca85 Prop. 10.1.2], we will need to compute ${{\widehat}\iota}(B)$ for some elements $B\in{\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$. We start by determining ${{\widehat}\iota}$ on a natural basis of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$. With $N(L)=(N_G(L)\cap N)L$ let $W_G(L):=N(L)/L$, the *relative Weyl group* of $L$ in $G$, and set $W(\la):=N(L)_\la/L$. For $w\in W(\la)$ we denote by $\dot{w}\in N(L)$ a once and for all chosen preimage under the natural map. We let $\Phi_L\subseteq\Phi$ denote the root system of $W_L$, with simple system $\Delta_L\subseteq\Delta$. Let ${\widetilde}\rho$ be an extension of $\rho$ to $N(L)_\la$ affording the extension $\Lambda(\la)$ from our chosen equivariant extension map $\Lambda$. For $w\in W_G(L)$ let ${{{\mathrm B}}_{w,\rho}\in{\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))}$ be defined by $$({{\mathrm B}}_{w,\rho} f) (x)= {\widetilde}\rho(\dot{w}) f(\dot{w}^{-1}e_U x )\quad \text{ for all }f\in {{\mathfrak F}}(\rho)\text{ and } x \in {{\mathbb{C}}}G,$$ where $e_U:=\frac 1{|U|}\sum_{u\in U} u$ is the idempotent associated to the unipotent radical $U$ of $P$. Note that ${{\mathrm B}}_{w,\rho}$ is independent of the actual choice of $\dot{w}$. Analogously we define ${{\mathrm B}}_{w,{{^\sigma\!\rho}}}$ by using the extension ${\widetilde}\rho'$ of ${{^\sigma\!\rho}}$ affording $\Lambda({{^\sigma\!\lambda}})$. Note that ${\widetilde}\rho'$ and $^\si({\widetilde}\rho)$ then may differ. 
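The independence of ${{\mathrm B}}_{w,\rho}$ of the chosen preimage, noted above, can also be checked directly: any other preimage of $w$ has the form $\dot{w}l$ with $l\in L$, and since ${\widetilde}\rho$ extends $\rho$ and $f\in{{\mathfrak F}}(\rho)$, $${\widetilde}\rho(\dot{w}l)\,f\big(l^{-1}\dot{w}^{-1}e_U x\big)={\widetilde}\rho(\dot{w})\rho(l)\,\rho(l)^{-1}f\big(\dot{w}^{-1}e_U x\big)={\widetilde}\rho(\dot{w})\,f\big(\dot{w}^{-1}e_U x\big).$$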
We denote by $\delta_{\la,\si}\in {\operatorname{Irr}}(W({{^\sigma\!\lambda}}))$ the character of $N(L)_{{{^\sigma\!\lambda}}}$ with $\delta_{\la,\si}\Lambda({{^\sigma\!\lambda}})={}^\si\Lambda(\la)$. This character is well-defined by [@Isa Cor. 6.17]. For $w\in W(\la)$ let ${{{\mathrm B}}_{w,{{^\sigma\!\rho}}}\in {\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}({{^\sigma\!\rho}}))}$ be defined via $$({{\mathrm B}}_{w,{{^\sigma\!\rho}}} f) (x)= {\widetilde}\rho'(\dot{w}) f(\dot{w}^{-1}e_U x)\quad \text{ for all }f\in {{\mathfrak F}}({{^\sigma\!\rho}})\text{ and } x \in {{\mathbb{C}}}G.$$ \[lem:wi(B)\] For all $w\in W(\la)$ we have ${{\widehat}\iota}({{\mathrm B}}_{w,\rho})= \delta_{\la,\si}(\si(w))\,{{\mathrm B}}_{\si(w),{{^\sigma\!\rho}}}$. Indeed, for $f\in{{\mathfrak F}}(^\si\rho)$ and $x \in {{\mathbb{C}}}G$ we have $${{\widehat}\iota}({{\mathrm B}}_{w,\rho})(f)(x) ={\widetilde}\rho(\dot{w})f\big(\sigma(\dot{w}^{-1}e_U\sigma^{-1}(x))\big),$$ which agrees with $$\delta_{\la,\si}(\si(w))\,{{\mathrm B}}_{\si(w),{{^\sigma\!\rho}}}(f)(x) = {\widetilde}\rho(\dot{w})f(\sigma(\dot{w})^{-1}e_Ux)$$ as $\si(e_U)=e_U$.
We write $p_{\al,\la}\geq 1$ for the ratio between the degrees of the two different constituents of ${\mathrm {R}}_L^{L_\al}(\la)$. Let $$\Phi_\la:=\{\al\in\Omega\mid s_\al\in W(\la),\,p_{\al,\la}\neq 1\},$$ a root system with set of simple roots $\Delta_\la\subseteq\Phi_\la\cap\Phi^+$, and let $R(\la):=\langle s_\al\mid \al\in\Phi_\la\rangle$ be its Weyl group. Then $W(\la)$ satisfies $W(\la)=R(\la)\rtimes C(\la)$, where the group $C(\la)$ is the stabiliser of $\Delta_\la$ in $W(\la)$, see [@Ca85 Prop. 10.6.3]. \[lem:p\] We have $p_{\al,\la}=p_{\si(\al),{{^\sigma\!\lambda}}}$ for all $\al\in\Phi_\la$ and hence $R({{^\sigma\!\lambda}})=\si(R(\la))$ and $C({{^\sigma\!\lambda}})=\si(C(\la))$. By definition we have $^\si{\mathrm {R}}_L^{L_\alpha}(\la)= {\mathrm {R}}_L^{L_{\si(\al)} }({{^\sigma\!\lambda}})$ since $\si$ stabilises $U$. This implies $p_{\al,\la}=p_{\si(\al),{{^\sigma\!\lambda}}}$ by its definition. For $w\in W$ we set ${\operatorname{ind}}(w):=|U_0\cap (U_0)^{w_0w}|$, where $U_0$ is the unipotent radical of $B$ and $w_0\in W$ is the longest element. Also, for $\al\in\Delta_\la$ a simple root of $\Phi_\la$ we define $\eps_{\al,\la}\in \{\pm1\}$ by $$\label{def_ep} {{\mathrm B}}_{s_\al,\rho}^2= {\operatorname{ind}}(s_\al)\,{\operatorname{id}}+ \eps_{\al,\la} \frac{p_{\al,\la}-1}{\sqrt{{\operatorname{ind}}(s_\al)p_{\al,\la} }} {{\mathrm B}}_{s_\al,\rho}$$ (see [@Ca85 Prop. 10.7.9]). Here, the square root is always taken to be positive. \[lem:eps\] If $R({{^\sigma\!\lambda}})\leq \ker(\delta_{\la,\si})$ then ${\operatorname{ind}}(\si(s_\al))={\operatorname{ind}}(s_\al)$ and $\eps_{\si(\al),{{^\sigma\!\lambda}}}=\eps_{\al,\la}$ for all $\al\in\Delta_\la$. Let $\al\in \Delta_\la$ and set $s:=s_\al$, $\al':=\si(\al)$, $s':=s_{\al'}$, $\la':={{^\sigma\!\lambda}}$, $\rho':={{^\sigma\!\rho}}$.
Applying ${{\widehat}\iota}$ to Equation \[def\_ep\] we obtain $${{\widehat}\iota}({{\mathrm B}}_{s,\rho}^2)={\operatorname{ind}}(s)\,{\operatorname{id}}+\eps_{\al,\la} \frac{p_{\al,\la}-1}{\sqrt{{\operatorname{ind}}(s)p_{\al,\la}}}{{\widehat}\iota}({{\mathrm B}}_{s,\rho}).$$ Now $p_{\al',\la'}=p_{\al,\la}$ by Lemma \[lem:p\], and since $\si$ stabilises $U_0$ and $w_0$ we also have ${\operatorname{ind}}(s')={\operatorname{ind}}(s)$. Then Lemma \[lem:wi(B)\] yields $$\delta_{\la,\si}(s')^2\, {{\mathrm B}}_{s',\rho'}^2= {\operatorname{ind}}(s')\,{\operatorname{id}}+ \eps_{\al,\la} \frac{p_{\al',\la'}-1} {\sqrt{ {\operatorname{ind}}(s') p_{\al',\la'}}} \delta_{\la,\si}(s')\,{{\mathrm B}}_{s',\rho'}.$$ Since $s'\in \si(R(\la))=R({{^\sigma\!\lambda}})$ the assumption $R(\la')\leq \ker(\delta_{\la,\si})$ allows us to simplify this to $${{\mathrm B}}_{s',\rho'}^2={\operatorname{ind}}(s')\,{\operatorname{id}}+ \eps_{\al,\la}\frac{p_{\al',\la'}-1} {\sqrt{{\operatorname{ind}}(s')p_{\al',\la'}}}\,{{\mathrm B}}_{s',\rho'}.$$ The claim follows by comparison with \[def\_ep\] for ${{\mathrm B}}_{s',\rho'}$. Now for $\al\in\Delta_\la$ set ${\mathrm {T}}_{s_\al,\rho}:= \eps_{\al,\la}\,\sqrt{{\operatorname{ind}}(s_\al)p_{\al,\la}}\,{{\mathrm B}}_{s_\al,\rho}$; for $w\in R(\la)$ with a reduced expression $w=s_1\cdots s_r$ with $s_i=s_{\al_i}$ simple reflections (so $\al_i\in\Delta_\la$) let ${\mathrm {T}}_{w,\rho}:={\mathrm {T}}_{s_1,\rho}\cdots {\mathrm {T}}_{s_r,\rho}$; for $w\in C(\la)$ define ${\mathrm {T}}_{w,\rho}:=\sqrt{{\operatorname{ind}}(w)}\,{{\mathrm B}}_{w,\rho}$, and then for $w\in W(\la)$ with $w=w_1w_2$ where $w_1\in C(\la)$, $w_2\in R(\la)$, let ${\mathrm {T}}_{w,\rho}:={\mathrm {T}}_{w_1,\rho}{\mathrm {T}}_{w_2,\rho}$. This does not depend on the choice of reduced expressions, see [@Ca85 Prop. 10.8.2].
Then we have: \[prop:wi(T)\] If $R({{^\sigma\!\lambda}})\leq \ker(\delta_{\la,\si})$ then for all $w\in W(\la)$ we have $${{\widehat}\iota}({\mathrm {T}}_{w,\rho})=\delta_{\la,\si}(\si(w))\, {\mathrm {T}}_{\si(w),{{^\sigma\!\rho}}}.$$ First assume that $w=s_\al=:s$ for some $\al\in\Delta_\la$. Then $$\begin{aligned} {{\widehat}\iota}({\mathrm {T}}_{s,\rho}) &={{\widehat}\iota}\big(\eps_{\al,\la}\,\sqrt{{\operatorname{ind}}(s)p_{\al,\la}}\,{{\mathrm B}}_{s,\rho}\big)\\ &=\eps_{\al,\la}\,\sqrt{{\operatorname{ind}}(s)p_{\al,\la}}\,\,{{\widehat}\iota}({{\mathrm B}}_{s,\rho}) =\eps_{\al,\la}\,\sqrt{{\operatorname{ind}}(s)p_{\al,\la}}\,\, \delta_{\la,\si}(s'){{\mathrm B}}_{s',{{^\sigma\!\rho}}} \end{aligned}$$ by Lemma \[lem:wi(B)\], where $s'=\si(s)$, $\al'=\si(\al)$. From Lemmas \[lem:p\] and \[lem:eps\] we know $p_{\al',\la'}=p_{\al,\la}$, ${\operatorname{ind}}(s')={\operatorname{ind}}(s)$ and $\eps_{\al',\la'}=\eps_{\al,\la}$. So indeed $${{\widehat}\iota}({\mathrm {T}}_{s,\rho}) =\delta_{\la,\si}(s')\,\eps_{\al',\la'}\,\sqrt{{\operatorname{ind}}(s')p_{\al',\la'}}\, {{\mathrm B}}_{s',{{^\sigma\!\rho}}}=\delta_{\la,\si}(s') {\mathrm {T}}_{s',{{^\sigma\!\rho}}}.$$ Next, if $w\in C(\la)$ then $${{\widehat}\iota}({\mathrm {T}}_{w,\rho})=\sqrt{{\operatorname{ind}}(w)}\,{{\widehat}\iota}({{\mathrm B}}_{w,\rho}) =\delta_{\la,\si}(\si(w))\sqrt{{\operatorname{ind}}(w)}{{\mathrm B}}_{\si(w),{{^\sigma\!\rho}}} =\delta_{\la,\si}(\si(w))\,{\mathrm {T}}_{\si(w),{{^\sigma\!\rho}}}.$$ In the general case, let $w\in W(\la)$ with $w=w_1w_2$ where $w_1\in C(\la)$, and $w_2\in R(\la)$ has a reduced expression $w_2=s_1\cdots s_r$. 
Then by the above we get $$\begin{aligned} {{\widehat}\iota}({\mathrm {T}}_{w,\rho}) &={{\widehat}\iota}({\mathrm {T}}_{w_1,\rho}){{\widehat}\iota}({\mathrm {T}}_{s_1,\rho})\cdots {{\widehat}\iota}({\mathrm {T}}_{s_r,\rho})\\ &=\delta_{\la,\si}(\si(w_1))\Big(\prod_{i=1}^r\delta_{\la,\si}(\si(s_i))\Big)\, {\mathrm {T}}_{\si(w_1),{{^\sigma\!\rho}}}{\mathrm {T}}_{\si(s_1),{{^\sigma\!\rho}}}\cdots {\mathrm {T}}_{\si(s_r),{{^\sigma\!\rho}}}\\ &=\delta_{\la,\si}(\si(w_1s_1\cdots s_r))\, {\mathrm {T}}_{\si(w_1),{{^\sigma\!\rho}}}{\mathrm {T}}_{\si(s_1\cdots s_r),{{^\sigma\!\rho}}} =\delta_{\la,\si}(\si(w))\, {\mathrm {T}}_{\si(w),{{^\sigma\!\rho}}} \end{aligned}$$ as claimed. Central-primitive idempotents of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ ------------------------------------------------------------------------------------------------ Next we describe the central-primitive idempotents of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$. It is well-known (see e.g. [@Ca85 Prop. 10.9.2]) that ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ is a symmetric algebra with symmetrising trace defined by the linear map $\tau_\rho:{\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))\rightarrow {{\mathbb{C}}}$ with $$\tau_\rho({\mathrm {T}}_{w,\rho})=\begin{cases} 1 & w=1, \\ 0& w\neq 1.\end{cases}$$ Let us denote by $\{{\mathrm {T}}_{w,\rho}^\vee\}$ the basis dual to $\{{\mathrm {T}}_{w,\rho}\}$ with respect to the bilinear form associated with $\tau_\rho$. Thus $${\mathrm {T}}^\vee_{w,\rho}=p_{w,\la}^{-1} {\mathrm {T}}_{w^{-1},\rho}\qquad \text{for }w\in W(\la)$$ where $p_{w,\la}:=\prod_{\al\in \Phi_\la^+, \,\, w(\al)<0} p_{\al,\la}$ (see [@Ca85 p. 349] or [@GP 8.1.1]). 
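For example, for a simple reflection $w=s_\al$ with $\al\in\Delta_\la$ the only root of $\Phi_\la^+$ sent to a negative root is $\al$ itself, so $p_{s_\al,\la}=p_{\al,\la}$ and, using $s_\al^{-1}=s_\al$, the formula for the dual basis specialises to $${\mathrm {T}}^\vee_{s_\al,\rho}=p_{\al,\la}^{-1}\,{\mathrm {T}}_{s_\al,\rho}.$$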
Note that via ${{\widehat}\iota}$, $\tau_\rho$ defines a symmetrising trace $\tau_{{{^\sigma\!\rho}}}$ on ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}({{^\sigma\!\rho}}))$ with $$\tau_{{{^\sigma\!\rho}}}({\mathrm {T}}_{w,{{^\sigma\!\rho}}})=\begin{cases} 1 & w=1, \\ 0& w\neq 1.\end{cases}$$ Let $M$ be a simple ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$-module and $\eta$ its character. It can be considered as a submodule of $e_{\eta,\rho} {{\mathfrak F}}(\rho)$, where $$e_{\eta,\rho}:=\frac 1 {c_{\eta ,\rho}} \sum_{w\in W(\la)} \eta({\mathrm {T}}_{w,\rho})\, {\mathrm {T}}_{w,\rho}^\vee$$ denotes the central-primitive idempotent of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ corresponding to $M$ (see [@GP 7.2.7(c)]). Here, $c_{\eta,\rho}$ is the Schur element associated to $\eta$ as in [@GP Thm. 7.2.1]. \[prop:wi(e)\] Assume that $R({{^\sigma\!\lambda}})\leq \ker(\delta_{\la,\si})$. There exists a simple ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}({{^\sigma\!\rho}}))$-module with character $\eta'$ such that $$\eta'({\mathrm {T}}_{\si(w),{{^\sigma\!\rho}}})= \delta_{\la,\si}^{-1}(w)\,\eta({\mathrm {T}}_{w,\rho}) \quad\text{ for all }w\in W(\la).$$ The ${{\mathbb{C}}}G$-modules $e_{\eta,\rho}\,{{\mathfrak F}}(\rho)$ and $(e_{\eta',{{^\sigma\!\rho}}}\,{{\mathfrak F}}({{^\sigma\!\rho}}))^\si$ are isomorphic. Since ${{\widehat}\iota}$ is an isomorphism of algebras we see from Proposition \[prop:wi(T)\] that if $R({{^\sigma\!\lambda}})\leq \ker(\delta_{\la,\si})$ then $${{\widehat}\iota}(e_{\eta,\rho})=\frac{1}{c_{\eta,\rho}}\sum_{w\in W(\la)} \eta({\mathrm {T}}_{w,\rho})\,\delta_{\la,\si}(\si(w^{-1}))\,{\mathrm {T}}_{\si(w),{{^\sigma\!\rho}}}^\vee$$ is a central-primitive idempotent of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}({{^\sigma\!\rho}}))$. Let $\eta'$ be the character of the associated simple ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}({{^\sigma\!\rho}}))$-module.
Then comparison of coefficients between ${{\widehat}\iota}(e_{\eta,\rho})$ and $$e_{\eta',{{^\sigma\!\rho}}}=\frac{1}{c_{\eta',{{^\sigma\!\rho}}}} \sum_{w\in W({{^\sigma\!\lambda}})} \eta'({\mathrm {T}}_{w,{{^\sigma\!\rho}}})\,{\mathrm {T}}_{w,{{^\sigma\!\rho}}}^\vee$$ at $w=1$ gives $$\frac{\eta({\mathrm {T}}_{1,\rho})}{c_{\eta,\rho}} =\frac{\eta'({\mathrm {T}}_{1,{{^\sigma\!\rho}}})}{c_{\eta',{{^\sigma\!\rho}}}}.$$ Since ${\mathrm {T}}_{1,\rho}$ is the identity element of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$, and $\eta,\eta'$ have the same degree, this implies $c_{\eta,\rho}=c_{\eta',{{^\sigma\!\rho}}}$. Then comparison of coefficients at arbitrary $w\in W(\la)$ gives the first statement. The second is also clear as $\iota$ is a ${{\mathbb{C}}}G$-module isomorphism. The generic algebra ${{\mathcal H}}$ {#sec:labelling} ------------------------------------ We next analyse in more detail the bijection between ${\operatorname{Irr}}({\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho)))$ and ${\operatorname{Irr}}(W(\la))$ using the approach presented in [@HL83 Sec. 4]. The main idea is to introduce a generic algebra over a polynomial ring ${{\mathbb{C}}}[ u_\al\mid \al\in \Delta_\la]$. One specialisation then gives the endomorphism algebra ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ and another specialisation gives the group algebra ${{\mathbb{C}}}W(\la)$. Application of these specialisations to the irreducible characters defines a parametrisation of the constituents of ${\mathrm {R}}_L^G(\la)$ by ${\operatorname{Irr}}(W(\la))$. Let ${{\mathbf u}}=(u_\al \mid \alpha \in \Delta_\la)$ be indeterminates with $u_\al=u_\beta$ if and only if $\al$ and $\beta$ are conjugate under $W(\la)$. Let $K$ be an algebraic closure of the quotient field of the Laurent polynomial ring $A_0={{\mathbb{C}}}[{{\mathbf u}}^{\pm1}]$, and let $A$ be the integral closure of $A_0$ in $K$.
Let ${{\mathcal H}}$ be the free $A$-module with basis $\{a_w\mid w\in W(\la)\}$. According to [@HL83 4.1] one can define a unique $A$-bilinear associative multiplication on ${{\mathcal H}}$ such that for all $x\in C(\la)$, $w\in W(\la)$ and $\al\in \Delta_\la$ one has $$\begin{aligned} a_xa_w&=&a_{xw} \text{ and } a_wa_x=a_{wx},\\ a_{s_\al}a_w&=&\begin{cases} a_{s_\alpha w}& \text{ if } w^{-1} \alpha \in \Phi_\la^+,\\ u_\alpha a_{s_\alpha w} +(u_\alpha -1) a_w& \text{ if } w^{-1} \alpha \notin \Phi_\la^+, \end{cases}\\ a_wa_{s_\al}&=&\begin{cases} a_{ws_\al} &\text{ if } w \alpha \in \Phi_\la^+,\\ u_\alpha a_{ws_\al} +(u_\alpha -1) a_w& \text{ if } w \alpha \notin \Phi_\la^+.\end{cases}\end{aligned}$$ Any homomorphism $f: A \rightarrow {{\mathbb{C}}}$ induces a right $A$-module structure on the field ${{\mathbb{C}}}$, so we obtain from ${{\mathcal H}}$ a ${{\mathbb{C}}}$-algebra ${{\mathcal H}}^f:={{\mathbb{C}}}\otimes_A {{\mathcal H}}$ with ${{\mathbb{C}}}$-vector space basis $\Set{1 \otimes a_w |w \in W(\la)}$. The structure constants of ${{\mathcal H}}^f$ are obtained from the ones of ${{\mathcal H}}$ by applying $f$. By [@HL83 4.2] the morphisms $f_0,g_0: A_0 \rightarrow {{\mathbb{C}}}$ defined by $f_0(u_\al)=p_{\al,\la}$ and $g_0(u_\al)=1$ for $\alpha \in \Delta_\la$ can be extended to morphisms $f,g: A \rightarrow {{\mathbb{C}}}$. Then ${{\mathcal H}}^f$ is isomorphic to ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ via $1\otimes a_w\mapsto {\mathrm {T}}_{w,\rho}$ and ${{\mathcal H}}^g$ is isomorphic to ${{\mathbb{C}}}W(\la)$ via $1\otimes a_w\mapsto w$. By [@HL83 4.7] the map $\eta\mapsto \eta^f$ with $\eta^{f}(1\otimes a_w):=f(\eta(a_w))$ defines a bijection between the set of $K$-characters associated to simple $K\otimes_A {{\mathcal H}}$-modules and the characters associated to simple ${{\mathcal H}}^f$-modules. The analogous result holds for ${{\mathcal H}}^g$. 
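To illustrate the two specialisations in the smallest non-trivial case, suppose $W(\la)=\{1,s_\al\}$ with $C(\la)=1$. Since $s_\al^{-1}(\al)=-\al\notin\Phi_\la^+$, the multiplication rules above reduce to the single relation $$a_{s_\al}^2=u_\al\,a_1+(u_\al-1)\,a_{s_\al}.$$ Under $g_0(u_\al)=1$ this becomes $a_{s_\al}^2=a_1$, the defining relation of ${{\mathbb{C}}}W(\la)$, while under $f_0(u_\al)=p_{\al,\la}$ it becomes the quadratic relation satisfied by ${\mathrm {T}}_{s_\al,\rho}$ in ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$.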
This combines to give a bijection between ${\operatorname{Irr}}(W(\la))$ and ${\operatorname{Irr}}({\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho)))$ and thus provides a labelling of the irreducible constituents of ${\mathrm {R}}_L^G(\la)$ by ${\operatorname{Irr}}(W(\la))$: for $\eta\in{\operatorname{Irr}}(W(\la))$ we denote by ${\mathrm {R}}_L^G(\la)_{\eta}$ the irreducible character of $G$ occurring in $e_{\eta'^{f},\rho}\, {{\mathfrak F}}(\rho)$, where $\eta'$ is the $K$-character of ${{\mathcal H}}$ with $\eta'^{g}=\eta$. Together with Proposition \[prop:wi(e)\] this proves: \[thm:equiv\_HC\] If $R({{^\sigma\!\lambda}})\leq \ker(\delta_{\la,\sigma})$, then for $\eta\in {\operatorname{Irr}}(W(\la))$ we have $$^\si({\mathrm {R}}_L^G(\la)_\eta)={\mathrm {R}}_L^G({{^\sigma\!\lambda}})_{\eta'} \label{equiv_equation}$$ with $\eta':={{^\sigma\!\eta}}\delta_{\la,\si}^{-1}$. Uniqueness of parametrisation {#subsec:unique} ----------------------------- So far, our parametrisation of constituents of ${\mathrm {R}}_L^G(\la)$ and hence also the assertion of Theorem \[thm:equiv\_HC\] depends on the choice of the parabolic subgroup $P$ containing $L$. The following result, which extends [@McGovern Thm. 2.12] from the case of a torus to an arbitrary Levi subgroup, allows us to control that dependency. \[thm:action N\] \[equiv\_HC\] Let $n\in N(L)$. Assume that the parametrisation of the constituents of ${\mathrm {R}}_L^G(\la)$ and ${\mathrm {R}}_L^G(^n\la)$ is obtained using the same parabolic subgroup $P$ of $G$ with $L\le P$ and extensions of $\la$ and $^n\la$ given by an $N(L)$-equivariant extension map. Then $${\mathrm {R}}_L^G(\la)_\eta={\mathrm {R}}_L^G(^n\la)_{^n\eta},$$ where $^n\eta\in{\operatorname{Irr}}(W(^n\la))$ is the character with $^n\eta(^nx)=\eta(x)$ for $x\in W(\la)$. Write $w$ for the image of $n$ in $W$.
Note that by multiplying $n$ by elements of $L$ we may assume that $w$ fixes $\Delta_L$, and also that $w$ preserves the set of positive roots $\Phi_\la^+$ (by multiplying with a suitable element from $R(\la)$). By [@Ca85 10.1.3], for $v\in W$ the map $$\theta_v:{{\mathfrak F}}(\rho)\rightarrow{{\mathfrak F}}(^v\!\rho),\quad \theta_v(f)(x):=f(\dot{v}e_Ux)\quad\text{for $f\in{{\mathfrak F}}(\rho)$, $x\in G$},$$ is a homomorphism of $G$-modules. Moreover, it is invertible by [@Ca85 10.5.1, 10.5.3]. Now let $v\in W(\la)$ and set $v':=wvw^{-1}$. It then follows by [@Ca85 10.7.5] that $$\theta_w\theta_v=\sqrt{\frac{{\operatorname{ind}}(wv)}{{\operatorname{ind}}(w){\operatorname{ind}}(v)}}\theta_{wv}\quad \text{and}\quad \theta_{v'}\theta_w=\sqrt{\frac{{\operatorname{ind}}(v'w)}{{\operatorname{ind}}(w){\operatorname{ind}}(v')}}\theta_{v'w},$$ so that $$\theta_w\theta_v\theta_w^{-1}=\sqrt{\frac{{\operatorname{ind}}(v')}{{\operatorname{ind}}(v)}}\theta_{v'}.$$ Now we have that ${{\mathrm B}}_{v,\rho}={\widetilde}\rho(\dot{v})\circ\theta_v$, and that $\theta_w\circ{\widetilde}\rho(\dot{v})={\widetilde}\rho(\dot{v})\circ\theta_w$ by the argument given in the proof of [@Ca85 Prop. 10.2.4]. Thus $$\begin{aligned} \theta_w\circ{{\mathrm B}}_{v,\rho}\circ\theta_w^{-1} =&\theta_w\circ{\widetilde}\rho(\dot{v})\circ\theta_v\circ\theta_w^{-1} ={\widetilde}\rho(\dot{v})\circ\theta_w\circ\theta_v\circ\theta_w^{-1}\\ =&{\widetilde}\rho(\dot{v})\circ\sqrt{\frac{{\operatorname{ind}}(v')}{{\operatorname{ind}}(v)}}\theta_{v'} =\sqrt{\frac{{\operatorname{ind}}(v')}{{\operatorname{ind}}(v)}}{}^n\!{\widetilde}\rho(\dot{v'})\circ\theta_{v'} =\sqrt{\frac{{\operatorname{ind}}(v')}{{\operatorname{ind}}(v)}}{{\mathrm B}}_{v',{}^n\!\rho}. \end{aligned}$$ Comparing the quadratic polynomials satisfied by ${{\mathrm B}}_{s_\al,\rho}$ and ${{\mathrm B}}_{{^w}s_\al,{}^n\!\rho}$ we see that $\eps_{\al,\la}$ and $\eps_{w(\al),{}^w\!\la}$ agree for $\al\in\Delta_\la$.
Also, as conjugation by $n$ does not change the degrees of the two constituents of ${\mathrm {R}}_L^{L_\al}(\la)$ we have $p_{\al,\la}=p_{w(\al),{}^w\!\la}$. Thus, the isomorphism of $G$-modules $\theta_w$ sends the standard generators ${\mathrm {T}}_{v,\rho}$ of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ to the generators ${\mathrm {T}}_{{^n}v,{}^n\!\rho}$ of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(^n\rho))$. It then follows from our construction of the central primitive idempotents and the specialisation argument as in the proof of Theorem \[thm:equiv\_HC\] that conjugation by $n$ sends the character of ${\operatorname{End}}_{{{\mathbb{C}}}G}({{\mathfrak F}}(\rho))$ parametrised by $\eta\in{\operatorname{Irr}}(W(\la))$ to the character parametrised by $^n\eta\in{\operatorname{Irr}}(W(^n\la))$. The stabilisers of some Harish-Chandra induced characters ========================================================= Using Proposition \[prop:5\_11\_here\] for characters lying in the principal Harish-Chandra series and Proposition \[prop:loc\_param\_C\] together with the results from Section \[sec:HC\] we can show that Harish-Chandra induction induces an equivariant map between certain local characters and suitable characters of $G={{{{{\mathbf G}}^F}}}$. For this we determine the stabilisers of characters $\chi\in{\operatorname{Irr}}(G)$ lying in a Harish-Chandra series ${{\mathcal E}}(G,(L,\la))$ where $L$ is either - a maximally split torus, or - the standard Levi subgroup of type ${\mathsf C}_1$ in ${{\mathbf G}}$ of type ${\mathsf C}_l$ from Section \[type C\]. In order to ensure that the assumptions made in Section \[sec:HC\] are satisfied we first describe the groups $R(\la)$. Recall that ${{\mathbf B}}^F$ and ${\ensuremath{{\mathrm{N}}}}_{{\mathbf G}}({{\mathbf T}})^F$ form a split $BN$-pair in $G$, see [@MT Thm. 24.10] for example. Let $\Phi_1$ be the associated root system of $G$ and $X_\al\leq G$ ($\al\in \Phi_1$) the associated root subgroups.
In the following we freely use the notation around Harish-Chandra induction introduced in the preceding section. \[lem:Rla\] Assume that ${{\mathbf G}}$ is not of type ${\mathsf A}_l$. Let $\la\in{\operatorname{Irr}}({{\mathbf T}}^F)$, $\al\in\Phi_1$ and let $\Phi_\la$ be defined as in \[subsec:p alpha\]. Then $\al\in \Phi_\la$ if and only if $\la({{\mathbf T}}^F\cap \spann<X_{\al},X_{-\al}>)=1$. Moreover $R(\la)\leq W({\widetilde}\la)$ for any ${\widetilde}\la\in{\operatorname{Irr}}({\widetilde}{{\mathbf T}}^F|\la)$. Assume first that $F$ is a standard Frobenius endomorphism. Since $T={{\mathbf T}}^{F}$ is a torus, the integer $p_{\al,\la}$ from Section \[subsec:p alpha\] is determined inside the standard Levi subgroup $L_\al:=\langle T,X_{\pm\al}\rangle$. As ${{\mathbf G}}$ is of simply connected type, the group $K:=\langle X_{\pm\al}\rangle$ is isomorphic to ${\operatorname{SL}}_2(q)$. Moreover $T_0:={{\mathbf T}}\cap\spann<X_{\pm\al}>=\spann<h_\al(t)\mid t\in {{\mathbb{F}}}_q^\times>$. Let $\la_0:=\restr \la| {T_0}$. From the situation in ${\operatorname{SL}}_2(q)$ we know that ${\mathrm {R}}_{T_0}^K(\la_0)$ splits into two constituents of different degrees if $\la_0$ is trivial. Hence in that case $p_{\al,\la}\neq 1$. If $\la_0$ is not trivial then either the character ${\mathrm {R}}_{T_0}^K(\la_0)$ and hence ${\mathrm {R}}_T^{L_\al}(\la)$ is irreducible or it is the sum of two characters of the same degree. The above argument remains valid for twisted Frobenius endomorphisms, possibly replacing ${\operatorname{SL}}_2(q)$ by ${\operatorname{SL}}_2(q^m)$ with $m$ the order of the graph automorphism induced by $F$. Let ${\widetilde}\la\in{\operatorname{Irr}}({\widetilde}{{\mathbf T}}^F|\la)$.
If $\al\in \Phi_\la$ and $s_\al\in \pi({\ensuremath{{\mathrm{N}}}}_G({{\mathbf T}}))$ is the reflection associated with $\al$ the Steinberg relations imply $$[{\widetilde}{{\mathbf T}}^{F},s_\al]\subseteq {{\mathbf T}}^F\cap \spann<X_{\al},X_{-\al}>.$$ Together with the above we conclude that $s_\al\in W({{{\widetilde}\lambda}})$ if $\al\in\Phi_\la$. Since $R(\la)$ is generated by the elements $s_\al$ ($\al\in\Phi_\la$) this proves the statement. \[lem:C(la)=1\] If $\Phi$ is of type ${\mathsf C}_l$ and $L$ is a standard Levi subgroup of type ${\mathsf C}_1$ then for every $\la\in{\operatorname{Irr}}(L)$ and ${{{\widetilde}\lambda}}\in{\operatorname{Irr}}({\widetilde}L|\la)$ we have $R(\la)\leq W({{{\widetilde}\lambda}})$. Let $\la\in{\operatorname{Irr}}(L)$, so $\la=\la_1\times\zeta$ with $\la_1\in{\operatorname{Irr}}(T_1)$ and $\zeta\in{\operatorname{Irr}}(L_0)$ where $T_1$ and $L_0$ are as defined in Section \[type C\]. For $N$ defined as there $W(\la)=N_\la/T\cong W(\la_1)$, where $W(\la_1)$ is the relative Weyl group of $\la_1$ in the subgroup of type ${\mathsf C}_{l-1,{\operatorname{sc}}}(q)$ centralising $L_0$. Let ${{{\widetilde}\lambda}}\in{\operatorname{Irr}}({\widetilde}L|\la)$. Then by the direct product structure of $N$ we see that $W({{{\widetilde}\lambda}})=W({{{\widetilde}\lambda}}_1)$ for some ${{{\widetilde}\lambda}}_1\in {\operatorname{Irr}}({\widetilde}T_1|\la_1)$. Thus we are in the situation described in Lemma \[lem:Rla\], but with respect to the torus $T_1$ in type ${\mathsf C}_{l-1}$, whence $W({{{\widetilde}\lambda}})=W({{{\widetilde}\lambda}}_1)\geq R(\la_1)$. Let $\{\pm\al_1\}$ denote the root system of the standard Levi subgroup $L$. Let $\Omega\subseteq \Phi\setminus\{\pm\al_1\}$ be defined as in \[subsec:p alpha\] and $\al\in\Omega$. Computations in $W$ show that $\al$ is orthogonal to $\al_1$. Let $L_\al$ be the standard Levi subgroup of $G$ corresponding to the simple system $\{\al_1,\al\}$. 
Then ${\mathrm {R}}_L^{L_\al}(\la)={\mathrm {R}}_{T_1}^{\spann<T_1,X_{\pm \al}>}(\la_1)\times \zeta$. Accordingly $\al\in \Phi_\la$ if and only if $\al\in \Phi_{\la_1}$, where $\Phi_{\la_1}$ is associated with $\la_1$ as in \[subsec:p alpha\]. This proves the statement, since $R(\la)=\spann<s_\al\mid\al\in\Phi_\la>$ and $R(\la_1)=\spann<s_\al\mid\al\in\Phi_{\la_1}>$. \[thm:bij\] Let $L$ be either a maximally split torus of $({{\mathbf G}},F)$ or the Levi subgroup from Section \[type C\] in type ${\mathsf C}_l$. Let $N{:=}{\ensuremath{{\mathrm{N}}}}_G({{\mathbf S}})$ and ${\widetilde}N{:=}{\ensuremath{{\mathrm{N}}}}_{\ensuremath{{{\widetilde}G}}}({{\mathbf S}})$ where ${{\mathbf S}}$ is a torus of ${{\mathbf G}}$ such that ${\ensuremath{{\rm{C}}}}_{G}({{\mathbf S}})=L$. Then there exists an ${\widetilde}N D$-equivariant bijection $${\operatorname{Irr}}_{{\operatorname{cusp}}}(N)\longrightarrow \bigcup_{\la\in {\operatorname{Irr}}_{{\operatorname{cusp}}}(L)}{{\mathcal E}}(G,(L,\la)),$$ where ${\operatorname{Irr}}_{{\operatorname{cusp}}}(L)$ is the set of cuspidal characters of $L$, ${\operatorname{Irr}}_{{\operatorname{cusp}}}(N){:=}{\operatorname{Irr}}(N|{\operatorname{Irr}}_{{\operatorname{cusp}}}(L))$ and ${{\mathcal E}}(G,(L,\la))$ denotes the set of constituents of ${\mathrm {R}}_L^G(\la)$. In Propositions \[prop:5\_11\_here\] and \[prop:loc\_param\_C\] we gave a parametrisation of the set ${\operatorname{Irr}}_{{\operatorname{cusp}}}(N)$. Let us first assume that $L$ is a maximally split torus and hence $N={{\mathbf N}}^F$. In this case ${\operatorname{Irr}}_{{\operatorname{cusp}}}(L)={\operatorname{Irr}}(L)$ and hence ${\operatorname{Irr}}_{{\operatorname{cusp}}}(N)={\operatorname{Irr}}(N)$. Let $\Lambda$ be the $ND$-equivariant extension map from Corollary \[cor:ker\_delta\_sc\] applied with $d=1$ and $v=1$. 
Then Proposition \[prop:5\_11\_here\] yields a map $$\Pi:{{\mathcal P}}\longrightarrow{\operatorname{Irr}}(N),\qquad (\la,\eta)\longmapsto (\Lambda(\la)\eta)^{{{\mathbf N}}^{F}},$$ with ${{\mathcal P}}=\{(\la,\eta)\mid\la\in{\operatorname{Irr}}(L),\,\eta\in{\operatorname{Irr}}(W(\la))\}$. On the other hand let $$\Pi':{{\mathcal P}}\longrightarrow \bigcup_{\la\in {\operatorname{Irr}}_{{\operatorname{cusp}}}(L)} {{\mathcal E}}(G,(L,\la)), \qquad (\la,\eta)\longmapsto {\mathrm {R}}^G_L(\la)_\eta,$$ where ${\mathrm {R}}^G_L(\la)_\eta $ is defined using $\Lambda$. The maps $\Pi$ and $\Pi'$ induce bijections between the set of $N$-orbits in ${{\mathcal P}}$ and the characters in ${\operatorname{Irr}}_{{\operatorname{cusp}}}(N)$ and $\bigcup_{\la\in{\operatorname{Irr}}_{{\operatorname{cusp}}}(L)} {{\mathcal E}}(G,(L,\la))$ respectively; see Proposition \[prop:5\_11\_here\](1) for $\Pi$ and Theorem \[thm:action N\] for the statement about $\Pi'$. Accordingly the composition $\Pi'\circ\Pi^{-1}$ gives the required bijection $${\operatorname{Irr}}_{{\operatorname{cusp}}}(N) \rightarrow \bigcup_{\la\in {\operatorname{Irr}}_{{\operatorname{cusp}}}(L)} {{\mathcal E}}(G,(L,\la)), \qquad \Pi(\la,\eta) \mapsto \Pi'(\la,\eta).$$ Now the action of ${\widetilde}N D$ on ${\operatorname{Irr}}(N)$ has been described in Proposition \[prop:5\_11\_here\] in terms of the associated labels. Analogously Theorem \[thm:equiv\_HC\] determines the action of ${\widetilde}T D$ on the sets ${{\mathcal E}}(G,(L,\la))$ in terms of the associated labels. (Note that by Proposition \[prop:3\_12\] the map $\Lambda$ satisfies the requirements made in Theorem \[thm:equiv\_HC\].) Comparing the induced actions on ${{\mathcal P}}$ we see that the bijection is ${\widetilde}N D$-equivariant. 
Similarly, when $L$ is as in Section \[type C\], then according to the results of Propositions \[prop:ext\_map\_C\] and \[prop:loc\_param\_C\] the above construction again determines an equivariant bijection, using that the assumption of Theorem \[thm:equiv\_HC\] is satisfied by Lemma \[lem:C(la)=1\]. \[cor:7\_3\] Let $(L,\la)$ be a cuspidal pair as in Theorem \[thm:bij\]. Then for every character $\chi_0\in{{\mathcal E}}(G,(L,\la))$ there exists some ${\widetilde}G$-conjugate $\chi$ such that $$({\widetilde}G D)_\chi={\widetilde}G_\chi D_\chi.$$ Via the bijection from Theorem \[thm:bij\] the character $\chi_0$ corresponds to a character $\psi_0\in{\operatorname{Irr}}_{{\operatorname{cusp}}}(N)$. Some ${\widetilde}N$-conjugate $\psi$ of $\psi_0$ satisfies $({\widetilde}N D)_\psi={\widetilde}N_\psi(ND)_\psi$, see Theorems \[thm:IrrN\_autom\] and \[thm:stab\_C\] together with Lemma \[lem:3\_21\]. Since the bijection from Theorem \[thm:bij\] is ${\widetilde}N D$-equivariant, this implies $$({\widetilde}G D)_\chi= G({\widetilde}N D)_\chi= G({\widetilde}N D)_\psi =G ({\widetilde}N_\psi (ND)_\psi) = {\widetilde}G_\chi (N D_\chi) = {\widetilde}G_\chi D_\chi$$ where $\chi$ corresponds to $\psi$ via Theorem \[thm:bij\], so is ${\widetilde}G$-conjugate to $\chi_0$. Towards the inductive McKay condition {#sec:indMcK} ===================================== In this section we collect the previous results to prove Theorem \[thm:d=1good\] on the inductive McKay condition for primes $\ell$ with $d_\ell(q)=1$, whenever ${{{{{\mathbf G}}^F}}}\notin\{{\mathsf D}_{l,{\operatorname{sc}}}(q),{\mathsf E}_{6,{\operatorname{sc}}}(q)\}$. Here $d_\ell(q)$ denotes the order of $q$ modulo $\ell$ if $\ell>2$, respectively the order of $q$ modulo 4 if $\ell=2$. We describe under which additional assumptions this result can be extended to primes with $d_\ell(q)=2$ and to the missing types. We first collect some cases in which the assertion of Theorem \[thm:d=1good\] had already been proven. 
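Since the case distinctions below hinge on the value of $d_\ell(q)$, the following small numerical sketch (illustrative only; the function name `d` is our own) computes it directly from the definition just given:

```python
# Illustrative sketch (not from the paper): d_ell(q) is the
# multiplicative order of q modulo ell for odd primes ell, and the
# order of q modulo 4 for ell = 2.

def d(ell, q):
    """Return d_ell(q); assumes ell is prime and does not divide q."""
    mod = 4 if ell == 2 else ell
    k, power = 1, q % mod
    while power != 1:
        power = (power * q) % mod
        k += 1
    return k

# d_ell(q) = 1 iff ell | q - 1 (odd ell), d_ell(q) = 2 iff ell | q + 1;
# for ell = 2 the two cases correspond to q = 1, 3 (mod 4).
assert d(3, 7) == 1    # 3 divides 7 - 1
assert d(3, 5) == 2    # 3 divides 5 + 1
assert d(2, 5) == 1    # 5 = 1 (mod 4)
assert d(2, 7) == 2    # 7 = 3 (mod 4)
assert d(5, 7) == 4    # 7 has order 4 modulo 5
```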
\[prop:exc\] Assume that $S:={{\mathbf G}}^F/{\operatorname Z}({{\mathbf G}}^F)$ is simple, and let $\ell$ be a prime dividing $|S|$. The inductive McKay condition holds for $S$ and $\ell$ if one of the following is satisfied: - $\ell=p$, - ${\operatorname Z}({{\mathbf G}}^F)=1$, - $\Phi$ is of type ${\mathsf A}_l$, or - $\ell=2$ and ${{\mathbf G}}^F={\mathsf C}_{l,{\operatorname{sc}}}(q)$, where $q$ is an odd power of an odd prime. The case where $\ell$ is the defining characteristic has been settled in [@Sp12 Thm. 1.1]. The case ${\operatorname Z}({{\mathbf G}}^F)=1$ has been considered in [@CS13 Thm. A, Prop. 5.2]. If $\Phi$ is of type ${\mathsf A}_l$ the statement follows from [@CS15 Thm. A]. According to [@MaExt Thm. 4.11] the inductive McKay condition is satisfied for $S$ and $\ell=2$ whenever ${{\mathbf G}}^F={\mathsf C}_{l,{\operatorname{sc}}}(q)$ for an odd power $q$ of an odd prime. \[prop:Malle\] Let $({{\mathbf G}},F)$ be as in Section \[sec:Not\] and $\ell$ a prime different from the defining characteristic with $d=d_\ell(q)\in\{1,2\}$. Assume that ${{\mathbf G}}^F$ and $\ell$ are not as in Proposition \[prop:exc\]. Let ${{\mathbf S}}$ be a Sylow $d$-torus of $({{\mathbf G}},F)$. Then Assumption \[thm2\_2gen\] is satisfied for $G:={{{{{\mathbf G}}^F}}}$, ${\widetilde}G:={\widetilde}{{\mathbf G}}^F$, $D$, $N:={\ensuremath{{\mathrm{N}}}}_{{{{{\mathbf G}}^F}}}({{\mathbf S}})$ and some Sylow $\ell$-subgroup $Q$ of $G$ such that ${\ensuremath{{\mathrm{N}}}}_{{\widetilde}{{\mathbf G}}^F}({{\mathbf S}})={\ensuremath{{\mathrm{N}}}}_{{\widetilde}{{\mathbf G}}^F}(Q)N$. We can argue as in the proof of Lemma 7.1 of [@CS15]. According to [@MaH0 Thms. 
5.14 and 5.19] since ${{{{{\mathbf G}}^F}}}$ and $\ell$ are not as in Proposition \[prop:exc\] there exists some Sylow $\ell$-subgroup $Q$ of ${{{{{\mathbf G}}^F}}}$ with ${\ensuremath{{\mathrm{N}}}}_{{{{{\mathbf G}}^F}}}(Q)\leq {\ensuremath{{\mathrm{N}}}}_{{{{{\mathbf G}}^F}}}({{\mathbf S}})\lneq {{{{{\mathbf G}}^F}}}$. Since all Sylow $d$-tori of $({{\mathbf G}},F)$ are ${{{{{\mathbf G}}^F}}}$-conjugate, we can conclude that ${\ensuremath{{\mathrm{N}}}}_{{{{{\mathbf G}}^F}}}({{\mathbf S}})$ is ${\operatorname{Aut}}({{{{{\mathbf G}}^F}}})_{Q}$-stable, see also [@CS13 Sect. 2.5]. Maximal extendibility for ${{{{{\mathbf G}}^F}}}\lhd {\widetilde}{{\mathbf G}}^F$ as required in \[hauptprop\_maxext\_glob\] was shown by Lusztig, see [@LuDis Prop. 10] or [@CE04 Thm. 15.11]. The maximal extendibility with respect to $N\lhd {\widetilde}N$ as required in \[hauptprop\_maxext\_loc\] has been proven in Corollary \[cor:maxextNwN\]. In our next step we establish the existence of a bijection as required in \[thm2\_2bij\]. \[thm:Bij\_wG\] Let $\ell$ be a prime such that $d=d_\ell(q)\in\{1,2\}$. Let ${{\mathbf S}}$ be a Sylow $d$-torus of $({{\mathbf G}},F)$, $N{:=}{\ensuremath{{\mathrm{N}}}}_G({{\mathbf S}})$ and ${\widetilde}N{:=}{\ensuremath{{\mathrm{N}}}}_{\ensuremath{{{\widetilde}G}}}({{\mathbf S}})$. Let ${{\mathcal G}}{:=}{\operatorname{Irr}}\big ({\widetilde}G\mid {\mathrm{Irr}_{\ell'}}(G)\big)$ and ${{\mathcal N}}{:=}{\operatorname{Irr}}\big ({\widetilde}N\mid {\mathrm{Irr}_{\ell'}}(N)\big )$. 
Then there is a $({\ensuremath{{{\widetilde}G}}}\rtimes D)_{{{\mathbf S}}}$-equivariant bijection $${\widetilde}\Omega: {{\mathcal G}}\longrightarrow {{\mathcal N}}$$ with ${\widetilde}\Omega({{\mathcal G}}\cap{\operatorname{Irr}}({\widetilde}G\mid \nu))= {{\mathcal N}}\cap{\operatorname{Irr}}({\widetilde}N\mid \nu)$ for every $\nu\in{\operatorname{Irr}}({\operatorname Z}({\widetilde}G))$, and ${\widetilde}\Omega(\chi\delta)={\widetilde}\Omega(\chi)\restr\delta|{{\widetilde}N}$ for every $\delta \in {\operatorname{Irr}}({\widetilde}G\mid 1_G)$ and $\chi\in{{\mathcal G}}$. According to Corollary \[cor:ker\_delta\_sc\] there exists an ${\ensuremath{{\mathrm{N}}}}_{{\widetilde}G D}({{\mathbf S}})$-equivariant extension map $\Lambda$ with respect to ${\ensuremath{{\rm{C}}}}_{{\widetilde}G}({{\mathbf S}})\lhd {\widetilde}N$ that is compatible with multiplication by linear characters of ${\widetilde}G$. The considerations made in Section 6 of [@CS15] for groups of type ${\mathsf A}_l$ apply in our more general situation as well. Using our map $\Lambda$ the construction presented there gives the required bijection. \[thm:8\_1\] Let $({{\mathbf G}},F)$ be as in Section \[sec:Not\] and $\ell$ a prime different from the defining characteristic of ${{\mathbf G}}$ with $d=d_\ell(q)\in\{1,2\}$. Assume that ${{\mathbf G}}^F$ and $\ell$ are not as in Proposition \[prop:exc\], and that ${{\mathbf G}}^F$ is the universal covering group of $S={{\mathbf G}}^F/{\operatorname Z}({{\mathbf G}}^F)$. Then the inductive McKay condition holds for $S$ and $\ell$ in any of the following cases: 1. $({{\mathbf G}},F)$ is of type ${\mathsf B}_l$, ${\mathsf C}_l$, ${{}^2} {\mathsf D}_l$, ${{}^2}{\mathsf E}_6$ or ${\mathsf E}_7$ and $d=1$; 2. $({{\mathbf G}},F)$ is of type ${\mathsf D}_l$ or ${\mathsf E}_6$, $d=1$, and \[2\_2gloext\] holds; 3. 
$({{\mathbf G}},F)$ is of type ${\mathsf B}_l$, ${\mathsf C}_l$, ${{}^2}{\mathsf D}_l$, ${{}^2}{\mathsf E}_6$ or ${\mathsf E}_7$, $d=2$ and \[2\_2glostar\] holds; or 4. $({{\mathbf G}},F)$ is of type ${\mathsf D}_l$ or ${\mathsf E}_6$, $d=2$, and \[2\_2glostar\] and \[2\_2gloext\] hold. This is proven by an application of Theorem \[thm:Sp12\]. We successively ensure that the necessary assumptions are satisfied. The groups $G:={{{{{\mathbf G}}^F}}}$, ${\widetilde}G:={\widetilde}{{\mathbf G}}^F$, $D$, $N$ and $Q$ are chosen as in Proposition \[prop:Malle\] and accordingly satisfy the assumptions made in \[thm2\_2gen\]. For this group $N$ the characters satisfy assumption \[thm2\_2loc\] according to Theorem \[thm:IrrN\_autom\]. Let $\chi\in{\operatorname{Irr}}({{{{{\mathbf G}}^F}}})$ lie in a Harish-Chandra series ${{\mathcal E}}({{{{{\mathbf G}}^F}}},({{\mathbf T}}^F,\la))$ for some character $\la\in{\operatorname{Irr}}({{\mathbf T}}^F)$. Then \[2\_2glostar\] holds for $\chi$ (after suitable ${{{{{\widetilde}{{\mathbf G}}}^F}}}$-conjugation) according to Corollary \[cor:7\_3\]. Now assume that $d=1$. Then according to [@MaH0 Prop. 7.3] each character in ${\mathrm{Irr}_{\ell'}}({{{{{\mathbf G}}^F}}})$ lies in a Harish-Chandra series ${{\mathcal E}}({{{{{\mathbf G}}^F}}},({{\mathbf T}}^F,\la))$ for some character $\la\in{\operatorname{Irr}}({{\mathbf T}}^F)$. So assumption \[2\_2glostar\] holds again by Corollary \[cor:7\_3\]. On the other hand \[2\_2gloext\] holds by assumption, or automatically whenever $D$ is cyclic. If $({{\mathbf G}},F)$ is of type ${\mathsf B}_l$, ${\mathsf C}_l$, ${{}^2} {\mathsf D}_l$, ${{}^2}{\mathsf E}_6$ or ${\mathsf E}_7$ then $D$ is cyclic. Whenever $d\in\{1,2\}$ the bijection from Theorem \[thm:Bij\_wG\] has the properties required in \[thm2\_2bij\]. Altogether this proves the above statements. 
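The inductive McKay condition refines the plain McKay equality $|{\operatorname{Irr}}_{\ell'}(G)|=|{\operatorname{Irr}}_{\ell'}({\ensuremath{{\mathrm{N}}}}_G(P))|$ for a Sylow $\ell$-subgroup $P$. As a toy sanity check of that underlying equality (character degrees are hard-coded from the well-known character tables; nothing here is part of the proof):

```python
# Toy illustration of the (non-inductive) McKay equality
# |Irr_{ell'}(G)| = |Irr_{ell'}(N_G(P))| for G = SL_2(3), ell = 3,
# P a Sylow 3-subgroup of G.

ell = 3
degrees_G = [1, 1, 1, 2, 2, 2, 3]   # degrees of Irr(SL_2(3))
degrees_N = [1, 1, 1, 1, 1, 1]      # N_G(P) is cyclic of order 6

# sanity: degrees of Irr(G) square-sum to |SL_2(3)| = 24
assert sum(x * x for x in degrees_G) == 24

count_G = sum(1 for x in degrees_G if x % ell != 0)
count_N = sum(1 for x in degrees_N if x % ell != 0)
assert count_G == count_N == 6
```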
Note that the equation given in \[2\_2glostar\] only has to be checked for $\ell'$-characters of $G$ that are not ${\widetilde}G$-invariant since every $\chi\in{\operatorname{Irr}}(G)$ with ${\widetilde}G_\chi={\widetilde}G$ satisfies $({\widetilde}G \rtimes D)_\chi={\widetilde}G\rtimes D_\chi$. In particular only characters in Lusztig rational series ${{\mathcal E}}(G,s)$ have to be considered where the centraliser of $s$ in the dual group ${{\mathbf G}}^*$ is not connected, since characters in Lusztig series corresponding to elements with connected centralisers are ${\widetilde}G$-invariant according to [@LuDis Prop. 5.1], see also [@CE04 Cor. 15.14]. Furthermore \[2\_2glo\] holds whenever ${\operatorname{Aut}}(S)/S$ is cyclic since then $({\ensuremath{{{\widetilde}G}}}\rtimes D)_\chi/(G{\operatorname Z}({\ensuremath{{{\widetilde}G}}}))$ is cyclic and hence coincides with $({\ensuremath{{{\widetilde}G}}}_\chi \rtimes D_\chi)/(G{\operatorname Z}({\ensuremath{{{\widetilde}G}}}))$. Theorem \[thm:d=1good\] in the case where ${{\mathbf G}}^F$ is a universal covering group and $d_\ell(q)=1$ is now part (a) of the preceding theorem. For odd primes $\ell$ the following completes the proof of Theorem \[thm:d=1good\]. The cases where the universal covering group of a simple group $S$ is not of the form ${{\mathbf G}}^F$ can be determined by Table 6.1.4 of [@GLS3]. In these cases the Schur multiplier of $S$ is said to have a non-trivial exceptional part, see [@GLS3 Sec. 6.1]. \[lem:excSchur\] Let $S$ be a simple group of Lie type with a non-trivial exceptional part of the Schur multiplier and let $\ell$ be a prime dividing $|S|$. Assume that $\ell$ is the defining characteristic or $d_\ell(q)\in \{1,2\}$. Then the inductive McKay condition holds for $S$ and $\ell$. If $S$ is a Suzuki or Ree group the result is known from [@IMN Thm. 16.1] and [@CS13 Thm. A]. Otherwise $S\cong{{\mathbf G}}^F/Z({{\mathbf G}}^F)$ for some pair $({{\mathbf G}},F)$ as in Section \[sec:Not\]. 
If $\ell$ is the defining characteristic of ${{\mathbf G}}$ the statement follows from [@Sp12 Thm. 1.11]. If $({{\mathbf G}},F)$ is of type ${\mathsf A}_l$, ${{}^2}{\mathsf A}_l$, ${\mathsf F}_4$, or ${\mathsf G}_2$ then the claim is known by [@CS13 Thm. A] and [@CS15 Thm. A]. In the other cases the considerations from [@CS15 Sec. 7] can be transferred: the inductive McKay condition holds for $S$ if it holds for any pair $(S,Z)$ in the sense of [@CS15 Def. 7.3], where $Z$ is a cyclic $\ell'$-quotient of the Schur multiplier of $S$, see [@CS15 Lemma 7.4(a)]. Taking into account [@ManonLie Thm. 1.1] it is sufficient to prove the claim in the cases where $Z$ is a quotient of the non-exceptional Schur multiplier of $S$. According to [@GLS3 Table 6.1.3] all Sylow subgroups of $D$ are cyclic and every automorphism of ${{\mathbf G}}^F$ is induced by ${\widetilde}{{\mathbf G}}^F D$ for groups ${\widetilde}{{\mathbf G}}^F$ and $D$ defined as in Section \[sec:Not\]. Further in those cases any Sylow subgroup of the outer automorphism group of $S$ is cyclic and hence \[2\_2glo\] holds. The proofs of Theorem \[thm:8\_1\] and [@Sp12 Thm. 2.12] imply that the inductive McKay condition holds for $(S,Z)$ in those missing cases if $d=d_\ell(q)\in \{1,2\}$. The McKay conjecture for $\ell=2$ {#sec:l=2} ================================= In this section we prove Theorem \[thm:McKayp=2\] from the introduction. Let ${{\mathbf G}}$ be a connected reductive linear algebraic group over an algebraically closed field of characteristic $p$ and $F:{{\mathbf G}}\rightarrow{{\mathbf G}}$ a Steinberg endomorphism defining an ${{\mathbb{F}}}_q$-structure on ${{\mathbf G}}$ such that $G:={{\mathbf G}}^F$ has no component of Suzuki or Ree type. Degree polynomials {#subsec:degpol} ------------------ We begin by defining degree polynomials for the irreducible characters of $G={{\mathbf G}}^F$, which play a crucial role in our arguments. 
These are probably known to (some) experts, but we have not been able to find a convenient reference. Let ${{\mathbb{G}}}$ be the complete root datum of $({{\mathbf G}},F)$. Then there is a monic integral polynomial $|{{\mathbb{G}}}|\in{{\mathbb{Z}}}[X]$ such that $|{{\mathbf G}}^{F^m}|=|{{\mathbb{G}}}|(q^m)$ for all natural numbers $m$ prime to the order of the automorphism induced by $F$ on the Weyl group $W$ of ${{\mathbf G}}$ (see [@BM92 1C]). The same then also holds for any connected reductive $F$-stable subgroup of ${{\mathbf G}}$, like maximal tori or connected components of centralisers. Furthermore, by work of Lusztig the unipotent characters of any finite reductive group with complete root datum ${{\mathbb{G}}}$ are parameterised uniformly, and the degree of a unipotent character $\chi$ of ${{\mathbf G}}^F$ is given by $f_\chi(q)$ for a suitable polynomial $f_\chi\in{{\mathbb{Q}}}[X]$ depending only on the parameter of $\chi$, see [@BMM §1B]. Now let $\chi\in{\operatorname{Irr}}(G)$ be arbitrary. Then $\chi$ lies in the Lusztig series ${{\mathcal E}}(G,s)$ of a semisimple element $s$ of the dual group $G^*={{\mathbf G}}^{*F}$, and Lusztig’s Jordan decomposition of characters gives a bijection $$\begin{aligned} \Psi:{{\mathcal E}}(G,s)\rightarrow {{\mathcal E}}({\ensuremath{{\rm{C}}}}_{G^*}(s),1)\quad\text{ such that }\quad \chi(1)=|G^*\co {\ensuremath{{\rm{C}}}}_{G^*}(s)|_{p'}\,\Psi(\chi)(1),\label{eq:Jordan dec}\end{aligned}$$ where ${{\mathcal E}}({\ensuremath{{\rm{C}}}}_{G^*}(s),1)$ denotes the unipotent characters of ${\ensuremath{{\rm{C}}}}_{G^*}(s)$, see [@DM Thm. 13.25]. While this bijection is not defined canonically, the formula in loc. cit. for scalar products with Deligne–Lusztig characters shows that its uniform projection is, and hence in particular so is the correspondence of degrees. Moreover, by the description in [@LuDis Prop. 
5.1], the multiplicities of unipotent characters of ${\ensuremath{{\rm{C}}}}_{G^*}^\circ(s)$ in those of ${\ensuremath{{\rm{C}}}}_{G^*}(s)$ are determined by the complete root datum of $({{\mathbf G}},F)$, hence generic, so there exist well-defined degree polynomials $f_\psi$ for the unipotent characters $\psi$ of the possibly disconnected group ${\ensuremath{{\rm{C}}}}_{G^*}(s)$. Thus, denoting by $|{{\mathbb{G}}}_s|$ the order polynomial of ${\ensuremath{{\rm{C}}}}_{G^*}(s)$, we can define from  the *degree polynomial* $f_\chi:=(|{{\mathbb{G}}}|/|{{\mathbb{G}}}_s|)_{X'}f_{\Psi(\chi)}\in{{\mathbb{Q}}}[X]$ of $\chi$. \[lem:degHCseries\] Let $G={{\mathbf G}}^F$ be as above. If $\chi\in{\operatorname{Irr}}(G)$ lies in the Harish-Chandra series of a cuspidal character of a Levi subgroup of $G$ of semisimple ${{\mathbb{F}}}_q$-rank $r$ then its degree polynomial $f_\chi$ is divisible by $(X-1)^r$. Let ${{\mathbf L}}\le{{\mathbf G}}$ be an $F$-stable Levi subgroup such that $\chi$ lies in the Harish-Chandra series ${{\mathcal E}}(G,(L,\la))$, where $L={{\mathbf L}}^F$. Let ${{\mathbf T}}\le{{\mathbf G}}$ be an $F$-stable maximal torus of ${{\mathbf G}}$, with Sylow 1-torus ${{\mathbf T}}_1\le{{\mathbf T}}$. Then ${{\mathbf M}}={\ensuremath{{\rm{C}}}}_{{\mathbf G}}({{\mathbf T}}_1)$ is a (1-split) Levi subgroup of ${{\mathbf G}}$, and $$\langle {\mathrm {R}}_T^G(\theta),\chi\rangle=\langle {\mathrm {R}}_M^G(\mu),\chi\rangle$$ with $\mu={\mathrm {R}}_T^M(\theta)$ a (virtual) character of $M={{\mathbf M}}^F$, where $T={{\mathbf T}}^F$. 
Thus, if $\langle {\mathrm {R}}_T^G(\theta),\chi\rangle\ne0$ then by disjointness of Harish-Chandra series we must have ${{\mathbf M}}\ge {{\mathbf L}}$ up to conjugation, whence ${{\mathbf T}}_1\le {\operatorname Z}({{\mathbf M}})_{\Phi_1}\le {\operatorname Z}({{\mathbf L}})_{\Phi_1}$, where ${\operatorname Z}({{\mathbf M}})_{\Phi_1}$ and ${\operatorname Z}({{\mathbf L}})_{\Phi_1}$ denote the Sylow $1$-tori of the groups ${\operatorname Z}({{\mathbf M}})$ and ${\operatorname Z}({{\mathbf L}})$. Now we have $\chi(1)=\langle {\operatorname{reg}}_G,\chi\rangle$, where the regular character ${\operatorname{reg}}_G$ of $G$ is given by $${\operatorname{reg}}_G=\frac{1}{|W|}\sum_{w\in W}|G\co T_w|_{p'}\, {\mathrm {R}}_{T_w}^G({\operatorname{reg}}_{T_w}),$$ where $W$ is the Weyl group of ${{\mathbf G}}$, $T_w={{\mathbf T}}_w^F$ is a maximal torus of $G$ of type $w$, and ${\operatorname{reg}}_{T_w}$ denotes the regular character of $T_w$ (see [@DM Cor. 12.14]). Let $W_0$ denote the set of elements $w\in W$ satisfying $\dim({{\mathbf T}}_w)_{\Phi_1}\le\dim ({\operatorname Z}({{\mathbf L}})_{\Phi_1})$. Our above considerations then yield $$\chi(1)= \frac{1}{|W|}\sum_{w\in W_0}|G\co T_w|_{p'}\, \langle {\mathrm {R}}_{T_w}^G({\operatorname{reg}}_{T_w}),\chi\rangle.$$ (Note that this is generic, as by [@Lu Thm. 4.23] the multiplicities $\langle {\mathrm {R}}_{T_w}^G({\operatorname{reg}}_{T_w}),\chi\rangle$ only depend on the unipotent Jordan correspondent of $\chi$.) For any $F$-stable reductive subgroup ${{\mathbf H}}$ of ${{\mathbf G}}$ in the following we write ${{\mathbf H}}_{\Phi_1}$ for a Sylow $1$-torus of $({{\mathbf H}},F)$. For $w\in W_0$ we have $$\dim({{\mathbf G}}_{\Phi_1})-\dim(({{{\mathbf T}}_w})_{\Phi_1})\ge \dim({{\mathbf L}}_{\Phi_1})-\dim ({\operatorname Z}({{\mathbf L}})_{\Phi_1})=r,$$ where $r$ is the semisimple ${{\mathbb{F}}}_q$-rank of ${{\mathbf L}}$, so the degree polynomial $f_\chi$ of $\chi$ is divisible by $(X-1)^r$. 
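Order polynomials of complete root data, on which the degree-polynomial formalism above rests, can be checked numerically in small rank. A minimal sketch (the rank-two symplectic case; the function name is our own, and the stated polynomial is the standard order formula $|{\operatorname{Sp}}_4(q)|=q^4(q^2-1)(q^4-1)$):

```python
# Numerical sketch (illustrative, not from the paper): the order
# polynomial of the complete root datum of Sp_4 is
# |G|(X) = X^4 (X^2 - 1)(X^4 - 1); evaluating at prime powers q
# recovers the finite group orders, illustrating genericity.

def order_poly_sp4(q):
    return q**4 * (q**2 - 1) * (q**4 - 1)

assert order_poly_sp4(2) == 720      # Sp_4(2) has order 720 (= |S_6|)
assert order_poly_sp4(3) == 51840    # |Sp_4(3)| = 51840

# The Steinberg character has degree polynomial X^4, the full power of X
# in |G|(X); its value at q is the p-part of the group order.
for q in (2, 3, 5, 7):
    n = order_poly_sp4(q)
    while n % q == 0:
        n //= q                      # strip the q-part
    assert q**4 * n == order_poly_sp4(q)
```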
We also recall the following facts from ordinary Harish-Chandra theory (see e.g. [@Ca85 Thm. 10.11.5]). Let $L\le G$ be a Levi subgroup with a cuspidal character $\la\in{\operatorname{Irr}}(L)$, and let $W(\la)$ denote the relative Weyl group of this cuspidal pair. Assume that $\chi\in{\operatorname{Irr}}(G)$ lies in the Harish-Chandra series above $(L,\la)$. Let $\eta\in{\operatorname{Irr}}(W(\la))$ be the character associated to $\chi$ and $D_\chi\in{{\mathbb{Q}}}(X)$ the inverse of the Schur element of $\eta$ of the corresponding generic Hecke algebra, so numerator and denominator of $D_\chi$ are prime to $X-1$. Then $$\begin{aligned} \chi(1)&=|G\co L|_{p'}\,D_\chi(q)\,\la(1)\qquad\text{and}\quad D_\chi(1)=\eta(1)/|W(\la)|.\label{eq:HLdeg}\end{aligned}$$ With the degree polynomial $f_\la$ of $\la$ we define a *degree function* $f_\chi'\in{{\mathbb{Q}}}(X)$ for $\chi$ as $f_\chi'=(|{{\mathbb{G}}}|/|{{\mathbb{L}}}|)_{X'} D_\chi f_\la$, where ${{\mathbb{L}}}$ denotes the complete root datum associated to the standard Levi subgroup $({{\mathbf L}},F)$ with ${{\mathbf L}}^F=L$. The following is shown in [@BMM Thm. 3.2]: \[lem:deguni\] Let $G={{\mathbf G}}^F$ be as above. If $\chi\in{\operatorname{Irr}}(G)$ is unipotent, then $f_\chi=f_\chi'$. Unipotent characters of odd degree ---------------------------------- From now on and for the rest of this section assume that $p$ and hence $q$ *is odd*. \[prop:cusp\] Let $G={{\mathbf G}}^F$ be as above. Then every non-trivial cuspidal unipotent character $\chi$ of $G$ has even degree. More precisely, if $\chi$ has degree polynomial $a(X-1)^mf$, with $a\in{{\mathbb{Q}}}$, $m\ge0$ and $f\in{{\mathbb{Z}}}[X]$ monic and prime to $X-1$, then $a(q-1)^m$ is even. First note that unipotent characters of $G$ restrict irreducibly to unipotent characters of $[{{\mathbf G}},{{\mathbf G}}]^F$, so we may assume that ${{\mathbf G}}$ is semisimple. 
Furthermore, degrees of unipotent characters are insensitive to the isogeny type of ${{\mathbf G}}$, whence we may assume that ${{\mathbf G}}$ is of simply connected type and hence a direct product of simple algebraic groups. As unipotent characters of a direct product are the exterior products of the unipotent characters of the factors, we may reduce to the case that ${{\mathbf G}}$ is a direct product of $r$ isomorphic simple groups ${{\mathbf H}}_i\cong{{\mathbf H}}$, $1\le i\le r$, transitively permuted by $F$. But then ${{\mathbf G}}^F\cong{{\mathbf H}}^{F^r}$, and $f(X^r)$ is divisible by the same power of $X-1$ as $f(X)$, so that finally we may assume that ${{\mathbf G}}$ is simple. We then use Lusztig’s classification of cuspidal unipotent characters. In fact, when $q\equiv1\pmod4$ the first claim is already proved in [@MaH0 Prop. 6.5]. But a quick check of that argument shows that it only relies on the fact that the degree of $\chi$ is divisible by a sufficiently high power of the even number $q-1$. It thus also works for $q\equiv3\pmod4$ and even yields the second assertion. \[prop:unip\] Let $G={{\mathbf G}}^F$ be as above. Then all unipotent characters of $G$ of odd degree lie in the principal series of $G$. We distinguish two cases. If $q\equiv1\pmod4$ then our claim is contained in [@MaH0 Cor. 6.6]. So for the rest of the proof we may suppose that $q\equiv3\pmod4$. Assume for a contradiction that $\chi$ is a unipotent character of $G$ of odd degree and not lying in the principal series. So $\chi$ lies above a cuspidal unipotent character $\la\ne 1_L$ of a Levi subgroup $L\le G$. Let $f_\chi,f_\la\in{{\mathbb{Q}}}[X]$ denote the degree polynomials of $\chi,\la$ respectively. As $\chi(1)$ is odd and $4|(q+1)$ we have that $\chi$ must lie in the principal 2-series of $G$ by [@MaH0 Cor. 6.6] applied with $d=2$. Thus $f_\chi$ is prime to $X+1$ according to [@BMM Prop. 2.4] (an analogue of our Lemma \[lem:degHCseries\]). 
Now by what we recalled before Lemma \[lem:deguni\] there exists a rational function $g\in{{\mathbb{Q}}}(X)$ with numerator and denominator products of cyclotomic polynomials times an integer, both prime to $X-1$, such that $f_\chi= g\cdot f_\la$, and such that $g(1)$ is the degree of an irreducible character of the relative Weyl group $W(\la)$ of $(L,\la)$ in $G$. Write $f_\la=(X+1)^kf_1$ with a non-negative integer $k$ such that $f_1\in{{\mathbb{Q}}}[X]$ is prime to $X+1$. Then by our observations on $f_\chi$ and $f_\la$ there exists a rational function $g_1$ such that $g=g_1/(X+1)^k$ and both numerator and denominator of $g_1$ are prime to $X^2-1$. Then $f_\chi=g_1\cdot f_1$. Now let $\Phi_i$ be a cyclotomic polynomial dividing $g_1$. Then $\Phi_i(q)$ is odd unless $i=2^{j+1}$ for some $j\ge1$, in which case we have $\Phi_i(q)_2=(q^{2^j}+1)_2=2=\Phi_i(1)_2$. Thus, $g_1(q)$ is divisible by the same $2$-power as $g_1(1)$, which is an integer. Since $f_1(q)$ is even by Proposition \[prop:cusp\] we conclude that $$\chi(1)=g(q)\cdot f_\la(q)=g_1(q)\cdot f_1(q) \equiv g_1(1)\cdot f_1(q) \pmod 2$$ is even as well, a contradiction. Characters of odd degree and the principal series ------------------------------------------------- \[lem:centSyl\] Let ${{\mathbf H}}$ be simple of adjoint type ${\mathsf B}_l$ ($l\ge1$), ${\mathsf C}_l$ ($l\ge2)$, ${\mathsf D}_{2l}$ ($l\ge2$) or ${\mathsf E}_7$, and $F:{{\mathbf H}}\rightarrow{{\mathbf H}}$ a Steinberg endomorphism. Let $s\in {{\mathbf H}}^F$ be semisimple centralising a Sylow 2-subgroup of ${{\mathbf H}}^F$. Then $s^2=1$. This was observed in [@MaExt Lemma 4.1] for type ${\mathsf B}_l$; the proof given there carries over word by word, since in all listed cases the longest element of the Weyl group acts by inversion on a maximal torus. Recall that an element of a connected reductive algebraic group ${{\mathbf H}}$ is called *quasi-isolated* if its centraliser is not contained in any proper Levi subgroup of ${{\mathbf H}}$. 
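The parity argument in the proof of Proposition \[prop:unip\] above rests on elementary 2-adic properties of cyclotomic values at odd $q$, which can be checked numerically. A short sketch (illustrative only; helper names are ours, and $\Phi_n(q)$ is computed via the Möbius-inversion formula $\Phi_n(X)=\prod_{d\mid n}(X^d-1)^{\mu(n/d)}$):

```python
# Numerical check of the parity facts used above: for odd q and i >= 3,
# Phi_i(q) is odd unless i = 2^(j+1) with j >= 1, in which case its
# 2-part equals Phi_i(1)_2 = 2.

def mobius(n):
    """Moebius function via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def phi_value(n, q):
    """Phi_n(q) via Phi_n(X) = prod_{d | n} (X^d - 1)^{mu(n/d)}."""
    num = den = 1
    for d in range(1, n + 1):
        if n % d == 0:
            m = mobius(n // d)
            if m == 1:
                num *= q**d - 1
            elif m == -1:
                den *= q**d - 1
    return num // den             # exact: Phi_n has integer coefficients

def two_part(n):
    t = 1
    while n % 2 == 0:
        n //= 2
        t *= 2
    return t

for q in (3, 7, 11):              # odd q, in particular q = 3 (mod 4)
    for i in range(3, 25):
        if i & (i - 1) == 0:      # i is a 2-power, i.e. i = 2^(j+1), j >= 1
            assert two_part(phi_value(i, q)) == 2
        else:
            assert phi_value(i, q) % 2 == 1
```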
\[lem:2-central\] Let ${{\mathbf H}}$ be simple of adjoint type ${\mathsf B}_l$ ($l\ge2$), ${\mathsf C}_l$ ($l\ge3$) or ${\mathsf D}_l$ ($l\ge4$), and $F:{{\mathbf H}}\rightarrow{{\mathbf H}}$ a Frobenius endomorphism defining an ${{\mathbb{F}}}_q$-rational structure such that $q\equiv3\pmod4$. Let $s\in{{\mathbf H}}$ be semisimple quasi-isolated with disconnected centraliser ${{\mathbf C}}={\ensuremath{{\rm{C}}}}_{{{\mathbf H}}}(s)$ such that ${{\mathbf C}}^F$ contains a Sylow 2-subgroup of ${{\mathbf H}}^F$. Then ${{\mathbf C}}^F$ is as in Table \[tab:2-central\]. $\begin{array}{|l|l|l|} \hline {{\mathbf H}}^F& {{\mathbf C}}^F& \text{conditions}\cr \hline {\mathsf B}_l(q)& {\mathsf B}_{l-2k}(q)\cdot {\mathsf D}_{2k}(q).2& 1\le k\le l/2\cr {\mathsf B}_{2l+1}(q)& {\mathsf B}_{2k}(q)\cdot{{}^2}{\mathsf D}_{2(l-k)+1}(q).2& 0\le k\le l\cr \hline {\mathsf C}_{2l}(q)& ({\mathsf C}_l(q)\cdot {\mathsf C}_l(q)).2& \cr \hline {\mathsf D}_l(q)& ({\mathsf D}_k(q)\cdot {\mathsf D}_{l-k}(q)).2& 1\le k< l/2\cr {\mathsf D}_{4l}(q)& ({\mathsf D}_{2l}(q)\cdot {\mathsf D}_{2l}(q)).4& \cr \hline {{}^2}{\mathsf D}_l(q)& ({\mathsf D}_k(q)\cdot {{}^2}{\mathsf D}_{l-k}(q)).2& 2\le k\le l-1,\ k\ne l/2\cr \hline \end{array}$ Here ${\mathsf D}_1(q)$, ${{}^2}{\mathsf D}_1(q)$ are to be interpreted as tori of order $q-1$, $q+1$ respectively. The conjugacy classes of quasi-isolated elements $s$ in classical groups of adjoint type were classified in [@Bo05 Tab. 2]. From that list, we may exclude those $s$ with connected centraliser. It then remains to determine the various rational forms of ${{\mathbf H}}$ and ${{\mathbf C}}$ and to decide when $s$ is 2-central. We treat the cases individually. For ${{\mathbf H}}$ of adjoint type ${\mathsf B}_l$, the table contains all examples from loc. cit. For ${{\mathbf H}}$ of type ${\mathsf C}_l$, an easy calculation shows that only the listed case occurs. 
Similarly, it can be checked in type ${\mathsf D}_l$ from the order formulas that only the listed types of disconnected centralisers can possibly contain a Sylow 2-subgroup. We thus obtain the following classification of characters of odd degree: \[thm:odd degree\] Let ${{\mathbf G}}$ be simple, of simply connected type, not of type ${\mathsf A}_l$, with $F$, ${{\mathbf G}}^*$ as introduced in Section \[ssec2:B\]. Let $\chi\in{\operatorname{Irr}}_{2'}(G)$. Then either $\chi$ lies in the principal series of $G$, or $q\equiv3\pmod4$, $G={\operatorname{Sp}}_{2l}(q)$ with $l\ge1$ odd, $\chi\in{{\mathcal E}}(G,s)$ with ${\ensuremath{{\rm{C}}}}_{G^*}(s)={\mathsf B}_{2k}(q)\cdot{{}^2}{\mathsf D}_{l-2k}(q).2$ where $0\le k\le (l-3)/2$, and $\chi$ lies in the Harish-Chandra series of a cuspidal character of degree $\frac{1}{2}(q-1)$ of a Levi subgroup ${\operatorname{Sp}}_2(q)\times(q-1)^{l-1}$. We follow the line of arguments in [@MaH0 §7]. Let $\chi\in{\operatorname{Irr}}(G)$ be a character of odd degree and not lying in the principal series of $G$. Then the degree polynomial of $\chi$ is divisible by $X-1$, by Lemma \[lem:degHCseries\]. Let $s\in G^*$ be semisimple such that $\chi\in{{\mathcal E}}(G,s)$ and set ${{\mathbf C}}:={\ensuremath{{\rm{C}}}}_{{{\mathbf G}}^*}(s)$, $C:={{\mathbf C}}^F$ and $C^\circ:={{{\mathbf C}}^\circ}^F$. Let $\Psi(\chi)\in{{\mathcal E}}(C,1)$ denote the unipotent Jordan correspondent of $\chi$. Then by Lusztig’s Jordan decomposition $\chi(1)=|G^*:C|_{p'}\,\Psi(\chi)(1)$ (see ), so both $|G^*:C|$ and $\Psi(\chi)(1)$ have to be odd. Thus $\Psi(\chi)$ lies above a unipotent character of ${{{\mathbf C}}^\circ}^F$ of odd degree, and hence in the principal series of $C$ by Proposition \[prop:unip\]. So its degree polynomial is prime to $X-1$ by Lemma \[lem:deguni\] and . Hence the order polynomial of $|G^*:C|$ must be divisible by $X-1$ by our assumption. On the other hand, as $|G^*:C|$ is odd, $C$ contains a Sylow 2-subgroup of $G^*$. 
If $q\equiv1\pmod4$ then we may argue as follows. By [@MaH0 Thm. 5.9], ${{\mathbf C}}$ must contain a Sylow 1-torus of ${{\mathbf G}}^*$. But then the order polynomial of $|G^*:C|$ cannot be divisible by $X-1$, a contradiction. So now assume that $q\equiv3\pmod4$. Then again by [@MaH0 Thm. 5.9], ${{\mathbf C}}$ must contain a Sylow 2-torus of ${{\mathbf G}}^*$. The order $|C|$ is given by a polynomial in $q$ of the form $cf(q)$, where $c=|C:C^\circ|$ and $f\in{{\mathbb{Z}}}[X]$ is monic. Note that ${{\mathbf C}}/{{\mathbf C}}^\circ$ is isomorphic to a subgroup of the fundamental group of ${{\mathbf G}}$, hence of the center of ${{\mathbf G}}$ (see [@MT Prop. 14.20]). In particular, as ${{\mathbf G}}$ is simple and not of type ${\mathsf A}_l$ we have $|C:C^\circ|_2\le4$, and in fact $|C:C^\circ|_2\le2$ unless ${{\mathbf G}}$ is of type ${\mathsf D}_l$. As $X-1$ divides $f$, we are done if either $G$ has odd order center, or if ${{\mathbf C}}$ is connected. So ${{\mathbf G}}$ is of type ${\mathsf B}_l,{\mathsf C}_l,{\mathsf D}_l$ or ${\mathsf E}_7$. For ${{\mathbf G}}$ not of type ${\mathsf D}_l$ with $l$ odd we know by Lemma \[lem:centSyl\] applied to ${{\mathbf H}}:={{\mathbf G}}^*$ that $s$ must be an involution. For ${{\mathbf G}}$ of type ${\mathsf E}_7$, the 2-central involutions of $G^*$ have centraliser of type ${\mathsf D}_6(q){\mathsf A}_1(q)$, whose order polynomial is divisible by the full power $(X-1)^7$ of $X-1$ occurring in the polynomial order of $G^*$, contrary to what we showed. For the classical type groups, let us first observe that ${{\mathbf C}}$ cannot be contained inside a proper $F$-stable Levi subgroup ${{\mathbf L}}$ of ${{\mathbf G}}^*$, because $L={{\mathbf L}}^F$ has even index in $G^*$. Indeed, a Sylow 2-subgroup of $G^*$, or $L$, is contained in the normaliser of a Sylow 2-torus of ${{\mathbf G}}^*$ (see [@MT Cor. 25.17]), respectively of ${{\mathbf L}}$. 
But this normaliser is an extension of that Sylow 2-torus by the Weyl group of $G^*$, $L$ respectively. The claim then follows since any proper parabolic subgroup of a Weyl group $W$ of type ${\mathsf B}_l$ or ${\mathsf D}_l$ has even index in $W$. Thus, $s$ is quasi-isolated in ${{\mathbf G}}^*$, and hence occurs in Table \[tab:2-central\]. For ${{\mathbf G}}$ of type ${\mathsf B}_l$ the dual group is of adjoint type ${\mathsf C}_l$, and there the listed centraliser does contain a Sylow 1-torus. For ${{\mathbf G}}$ of type ${\mathsf C}_l$ and so ${{\mathbf G}}^*$ of type ${\mathsf B}_l$ the listed centralisers either contain a Sylow 1-torus or are given in the statement with $l$ odd. As the order polynomial of ${{\mathbf C}}^\circ$ is divisible by $(X-1)^{l-1}$ in these cases, the degree polynomial of $\chi$ is divisible by $X-1$ just once, so by Lemma \[lem:degHCseries\], $\chi$ lies in the Harish-Chandra series of a cuspidal character of a Levi subgroup $L$ of $G$ of rank 1, hence a Levi subgroup of type ${\mathsf A}_1$. Now $G$ has two conjugacy classes of such Levi subgroups, one with connected center lying in the stabiliser ${\operatorname{GL}}_l(q)$ of a maximally isotropic subspace, the other isomorphic to ${\operatorname{Sp}}_2(q)\times(q-1)^{l-1}$. The degrees of cuspidal characters of these two types of subgroups are $q-1$, and also $\frac{1}{2}(q-1)$ for the second type. The latter ones are thus the only ones of odd degree. An easy variation of the proof of Proposition \[prop:unip\], using that $X-1$ is prime to $X+1$ now shows that if $\chi$ has odd degree, it must lie above the cuspidal characters of ${\operatorname{Sp}}_2(q)\times(q-1)^{l-1}$ of degree $\frac{1}{2}(q-1)$. So finally assume that ${{\mathbf G}}$ is of type ${\mathsf D}_l$. Recall that ${{\mathbf C}}^\circ$ cannot contain a Sylow 1-torus of ${{\mathbf G}}^*$. 
The only centralisers in Table \[tab:2-central\] not containing a Sylow 1-torus of the ambient group are ${\mathsf D}_k(q)\cdot{{}^2}{\mathsf D}_{l-k}(q).2$ with $l$ even and $k$ odd inside ${{}^2}{\mathsf D}_l(q)$. But these do not contain a Sylow 2-torus, contrary to what we know has to happen. So we get no example in type ${\mathsf D}_l$. The precise conditions on $k$ for ${\mathsf B}_{2k}(q)\cdot{{}^2}{\mathsf D}_{l-2k}(q)$ in Theorem \[thm:odd degree\] to contain a Sylow 2-subgroup of $G^*={\operatorname{SO}}_{2l+1}(q)$ are worked out in [@MaExt Prop. 4.2]. In fact, in [@MaExt Thms. 4.10 and 4.11] it is shown that $G={\operatorname{Sp}}_{2l}(q)$ satisfies the inductive McKay condition for the prime 2 if $q$ is an odd power of $p$. The following consequence will be used in the proof of Proposition \[prop:9\_2\]: \[lem:label odd chars\] Let ${{\mathbf G}}$ be simple, of simply connected type, not of type ${\mathsf A}_l$ or ${\mathsf C}_l$. Let $\chi\in{\operatorname{Irr}}_{2'}(G)$. Then $\chi={\mathrm {R}}_T^G(\lambda)_\eta$, where $T$ is a maximally split torus of $G$, $\lambda\in {\operatorname{Irr}}(T)$ is such that $2\nmid |W:W(\lambda)|$ and $\eta\in{\operatorname{Irr}}(W(\lambda))$ is of odd degree. By Theorem \[thm:odd degree\] every $\chi\in{\operatorname{Irr}}_{2'}(G)$ occurs in the principal series, that is, it lies in the Harish-Chandra series of a linear character $\la\in{\operatorname{Irr}}(T)$. According to our remarks before Lemma \[lem:deguni\] we have that $\chi(1)=|G:T|_{p'}\,D_\chi(q)\,\la(1)$, where $\la(1)=1$. Write $f_\chi\in{{\mathbb{Q}}}[X]$ for the degree polynomial of $\chi$. Since $\Phi_i(q)_2\ge\Phi_i(1)_2$ for all $i\ge2$ it follows that $f_\chi(1)$ is odd. Now replacing $|G:T|_{p'}$ by the order polynomial and then specialising at $q=1$ we obtain that $|W|\,D_\chi(1)=|W|\,\eta(1)/|W(\la)|$ is odd, for $\eta\in{\operatorname{Irr}}(W(\la))$ the label of $\chi$, whence the two integers $\eta(1)$ and $|W:W(\la)|$ are odd.
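The specialisation at $q=1$ in the last step can be displayed as a single chain of equalities (using, as above, that the order polynomial of $|G:T|_{p'}$ takes the value $|W|$ at $X=1$, and that $D_\chi(1)=\eta(1)/|W(\lambda)|$):
$$f_\chi(1)\;=\;|W|\,D_\chi(1)\;=\;|W|\cdot\frac{\eta(1)}{|W(\lambda)|}\;=\;|W:W(\lambda)|\,\eta(1),$$
so the oddness of $f_\chi(1)$ forces both integer factors $|W:W(\lambda)|$ and $\eta(1)$ to be odd.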
Proof of Theorem \[thm:McKayp=2\] --------------------------------- We now combine the above results on Harish-Chandra induction and on characters of odd degree to complete the proof that every simple group satisfies the inductive McKay condition for $\ell=2$, and thus Theorem \[thm:McKayp=2\] holds. We have seen in Theorem \[thm:8\_1\] that our results are sufficient to prove that the inductive McKay condition holds for the prime $2$, whenever $4|(q-1)$ and $\Phi$ is of type ${\mathsf B}_l$, ${\mathsf C}_l$ or ${\mathsf E}_7$. The result even applies to the simple groups which are the quotients of ${{}^2}{\mathsf D}_l(q)$ or ${{}^2} {\mathsf E}_6(q)$. Taking into account earlier results summarised in Proposition \[prop:exc\] the only cases that are left to consider are the simple groups associated with ${\mathsf D}_{l,{\operatorname{sc}}}(q)$ and ${\mathsf E}_{6,{\operatorname{sc}}}(q)$. The following specific considerations are tailored to the case where $\ell=2$. \[prop:9\_2\] Let $G={{\mathbf G}}^F$ be as above. Let $\chi\in {\operatorname{Irr}}_{2'}(G)$. Then $\chi$ extends to its inertia group in $GD$. Thus the assumption \[2\_2gloext\] holds. The statement is trivial whenever $D_\chi$ is cyclic. If $G\not \cong {\mathsf D}_{4,{\operatorname{sc}}}(q)$ the Sylow $r$-subgroups of $D$ are cyclic for any odd prime $r$. Note that $\det \chi$ is trivial since $G$ is perfect. Then [@Isa Thm. 6.25] shows that $\chi$ extends to its inertia group in $G D_2$. According to [@Isa (11.31)] this implies that $\chi$ extends to $G D_\chi$. It remains to consider the case where $G \cong {\mathsf D}_{4,{\operatorname{sc}}}(q)$. Following the considerations above we have to show that $\chi$ extends to its inertia group in $G D_3$ for any Sylow $3$-subgroup $D_3$ of $D$. Assume that $D_\chi$ has a non-cyclic Sylow $3$-subgroup. 
According to Theorem \[thm:odd degree\] there exists a character $\lambda\in{\operatorname{Irr}}(T)$, where $T$ is a maximally split maximal torus of $G$, such that $\chi$ is a constituent of ${\mathrm {R}}_T^G(\lambda)$. By Lemma \[lem:label odd chars\], $\chi$ corresponds to some character $\eta\in{\operatorname{Irr}}(W(\lambda))$ of odd degree such that $\chi$ has multiplicity $\eta(1)$ in ${\mathrm {R}}_T^G(\lambda)$. Moreover $2\nmid |W:W(\lambda)|$. Let $\gamma \in D$, $F'\in \langle F_0\rangle$ be such that $\langle\gamma,F'\rangle$ is the Sylow $3$-subgroup of $D_\chi$. Direct computations show that any Sylow $2$-subgroup of $W$ is self-normalising in $W$ and can be chosen to be $\gamma$-stable. Let $P$ be such a $\gamma$-stable Sylow $2$-subgroup of $W$. Then after some $N$-conjugation of $\la$ we can assume that $W(\lambda)$ contains $P$. Thus there exist elements $n,n'\in N$ such that $n\gamma$ and $n'F'$ stabilise $\lambda$ and $P$. Since $P$ is $\langle \gamma, F'\rangle$-invariant, the elements $\pi(n),\pi(n')$ are contained in ${\ensuremath{{\mathrm{N}}}}_W(P)=P$. As a linear character, $\lambda$ has an extension ${\widetilde}\lambda$ to $\langle T,F',\gamma\rangle$. Since $\gamma$ and $F'$ stabilise the unipotent radical $U$ of the Borel subgroup $B$ of $G$, ${\widetilde}\lambda$ lifts to a character $\hat\la\in {\operatorname{Irr}}(\langle B,F',\gamma\rangle)$. The induced character $\Gamma=\hat\la^{\langle G,F',\gamma\rangle}$ is then an extension of the character ${\mathrm {R}}_T^G(\lambda)$. Hence $\Gamma|_G$ has $\chi$ as a constituent with odd multiplicity $\eta(1)$. Since $|W:W(\la)|$ is odd and $P$ has index $3$ in $W$, we either have $W(\lambda)=W$ or $W(\lambda)=P$. In the first case, $\la=1$ since $W$ acts fixed-point-freely on ${\operatorname{Irr}}(T)$, and so $\chi$ is unipotent, in which case the statement is an easy consequence of [@MaExt Thm. 2.4].
Else, the character $\eta$ of $W(\lambda)$ of odd degree is linear since $W(\lambda)$ is a $2$-group, and so $\chi$ has multiplicity one in ${\mathrm {R}}_T^G(\lambda)$. Hence $\Gamma$ has a unique constituent ${\widetilde}\chi$ that is an extension of $\chi$. This completes the proof. Together with Theorem \[thm:8\_1\] and Lemma \[lem:excSchur\], this completes the proof of Theorem \[thm:d=1good\]. By [@IMN Thm. B] it is sufficient to show that all non-abelian simple groups $S$ satisfy the inductive McKay condition. For the simple groups not of Lie type this is known by [@ManonLie Thm. 1.1]. So now assume that $S$ is of Lie type, and not as in Proposition \[prop:exc\]. Observe that $\ell=2$ implies that $d_\ell(q)\in\{1,2\}$. So it suffices to verify the assumptions in Theorem \[thm:8\_1\](c) and (d). Condition \[2\_2glostar\] holds since all $2'$-characters lie only in very specific Harish-Chandra series by Theorem \[thm:odd degree\], and the characters in those series (more precisely, the structure of their stabilisers) have been studied in Corollary \[cor:7\_3\]. The requirement \[2\_2gloext\] is satisfied for characters in ${\operatorname{Irr}}_{2'}(G)$ thanks to Proposition \[prop:9\_2\]. [Ma08b]{} , Quasi-isolated elements in reductive groups. *Comm. Algebra **33*** (2005), 2315–2337. , Sur les caractères des groupes réductifs finis à centre non connexe: applications aux groupes spéciaux linéaires et unitaires. *Astérisque **306*** (2006), vi+165 pp. , Théorèmes de Sylow génériques pour les groupes réductifs sur les corps finis. *Math. Ann. **292*** (1992), 241–262. , Generic blocks of finite reductive groups. *Astérisque **212*** (1993), 7–92. , *Representation Theory of Finite Reductive Groups*. Volume 1 of *New Mathematical Monographs*, Cambridge University Press, Cambridge, 2004. , Equivariance and extendibility in finite reductive groups with connected center. *Math. Z. **275*** (2013), 689–713.
, Equivariant character correspondences and inductive McKay condition for type A. To appear in *J. Reine Angew. Math.*, 2015. , *Finite Groups of Lie Type*. John Wiley & Sons Inc., New York, 1985. , *Representations of finite groups of Lie type*. London Math. Soc. Student Texts 21, Cambridge University Press, 1991. , A note on Harish-Chandra induction, *Manuscripta Math.* [**80**]{} (1993), 393–401. , *Characters of Finite Coxeter Groups and Iwahori-Hecke Algebras*. Volume 21 of *London Mathematical Society Monographs. New Series*, The Clarendon Press Oxford University Press, New York, 2000. , *The Classification of the Finite Simple Groups. Number 3*. American Mathematical Society, Providence, RI, 1998. , Induced cuspidal representations and generalised Hecke rings. *Invent. Math. **58*** (1980), 37–64. , Representations of generic algebras and finite groups of Lie type. *Trans. Amer. Math. Soc. **280*** (1983), 753–779. , *Character Theory of Finite Groups*. De Gruyter, Berlin, 1998. , *Character Theory of Finite Groups*. Academic Press, New York, (1976). , A reduction theorem for the McKay conjecture. *Invent. Math. **170*** (2007), 33–101. , *Characters of Reductive Groups over a Finite Field*. Ann. Math. Studies, [**107**]{}, Princeton University Press, 1984. , On the representations of reductive groups with disconnected center. *Astérisque **168*** (1988), 157–166. , Height 0 characters of finite groups of Lie type. *Represent. Theory **11*** (2007), 192–220. , The inductive McKay condition for simple groups not of Lie type. *Comm. Algebra **36*** (2008), 455–463. , Extensions of unipotent characters and the inductive McKay condition. *J. Algebra **320*** (2008), 2963–2980. , *Linear Algebraic Groups and Finite Groups of Lie Type*. Cambridge Studies in Advanced Mathematics, 133, Cambridge University Press, 2011. , Multiplicities of principal series representations of finite groups with split [$(B,N)$]{}-pairs. *J. Algebra **77*** (1982), 419–442. 
, Irreducible representations of odd degree. *J. Algebra **20*** (1972), 416–418. , Irreducible representations of odd degree. Preprint 2014. , *Die McKay-Vermutung für quasi-einfache Gruppen vom Lie-Typ*. Dissertation. TU Kaiserslautern, [http://kluedo.ub.uni-kl.de/volltexte/2007/2073/pdf/Spaeth\_McKay.pdf]{}. , The McKay conjecture for exceptional groups and odd primes. *Math. Z. **261*** (2009), 571–595. , A reduction theorem for the Alperin–McKay conjecture. *J. Reine Angew. Math. **680*** (2013), 153–189. , Sylow $d$-tori of classical groups and the McKay conjecture, [I]{}. *J. Algebra **323*** (2010), 2469–2493. , Inductive McKay condition in defining characteristic. *Bull. London Math. Soc. **44*** (2012), 426–438. , Regular elements of finite reflection groups. *Invent. Math. **25*** (1974), 159–198. , Normalisateurs de tores. I. Groupes de Coxeter étendus. *J. Algebra **4*** (1966), 96–116. [^1]: The authors gratefully acknowledge financial support by ERC Advanced Grant 291512.
--- abstract: 'In situ boundary arrays have been installed in the North Atlantic to measure the large-scale ocean circulation. Here, we use measurements at the western edge of the North Atlantic at $16^\circ$N and $26^\circ$N to investigate low-frequency variations in deep densities and their associated influence on ocean transports. At both latitudes, deep waters (below 1100 dbar) at the western boundary are becoming fresher and less dense. The associated change in geopotential thickness is about $0.15$ $\mbox{m}^2\mbox{s}^{-2}$ between 2004–2009 and 2010–2014, with the shift occurring between 2009–2010 and earlier at $26^\circ$N than $16^\circ$N. Without a similar density change on the east of the Atlantic, a mid-depth reduction in water density at the west drives an increase in the shear between the upper and lower layers of North Atlantic Deep Water of about 2.6 Sv at $26^\circ$N and 3.9 Sv at $16^\circ$N. While these transport anomalies result in an intensifying tendency in the meridional overturning circulation (MOC) estimate at $16^\circ$N, the method of applying a zero net mass transport constraint at $26^\circ$N results in an opposing (reducing) tendency of the MOC.' author: - 'E. Frajka-Williams[^1], M. Lankhorst, J. Koelling, and U. Send[^2]' title: Coherent changes of the circulation in the deep North Atlantic from moored transport arrays --- - Southward flow weakened below 3 km relative to above by 2.6–3.9 Sv. - The shift occurred between 2009–2010, and earlier at $26^\circ$N than $16^\circ$N. - From $26^\circ$N observations, geostrophic reference level methods can influence transport trends. Introduction ============ The large-scale ocean circulation is often displayed in schematics with ribbons of red and blue indicating warm and cold transports at different depths. 
These schematics capture several key aspects of the meridional overturning circulation (MOC): that it includes warm thermocline waters flowing northwards in the top 1000 m of the Atlantic and colder waters at depth moving generally southwards. The thermocline waters carry heat northwards, while the deep waters, recently formed through interaction with the atmosphere at the surface, store carbon and other properties at depth. Zonally-averaging this circulation across the Atlantic basin from east-to-west, the meridional flow (flow in the north-south direction) shows “overturning” with surface waters moving northwards, deepening, then returning southwards at depth [@Danabasoglu-etal-2014]. The strength of the overturning then refers to the total northward flow in the top $\sim$1000 m of the Atlantic, which is equal and opposite to the southward flow below. This overturning is typically about 17 Sv (1 Sv = 1,000,000 $\mbox{m}^3\mbox{s}^{-1}$). Schematics of overturning, while capturing some of the salient features, also connote a circulation that is simple and laminar, and when referred to as a “Great conveyor” suggest a conveyor belt moving at similar speeds everywhere. While time-mean circulation shows a continuous northward flow across the tropics to mid-latitudes in the Atlantic, variations in the strength of overturning at different latitudes may not be simultaneous. A long simulation (1000 years) of the time-varying overturning circulation identified lower frequency fluctuations in the subpolar regions and interannual variations in subtropical regions [@Zhang-2010]. In particular, the subtropical transport magnitude exhibited variations of the same sign as those in the subpolar regions, but at some time delay. 
More realistic simulations investigating the coherence of the overturning find that across the subtropics, fluctuations are relatively coherent, meaning instantaneously correlated, on interannual timescales ($r>0.6$ between $0$–$40^\circ$N) [@Bingham-etal-2007]. Differences in the strength of overturning between latitudes may result in local convergences or divergences of heat [@Cunningham-etal-2014; @Kelly-etal-2014] which may in turn drive heat fluxes into or out of the atmosphere. Moored estimates of the time-varying transports in the Atlantic show substantial interannual and sub-annual variability [@FrajkaWilliams-etal-2016; @Send-etal-2011; @Toole-etal-2011]. However, efforts to link observations between distant individual latitudes have shown limited meridional coherence of the MOC [@Elipot-etal-2014; @Mielke-etal-2013]. On sub-annual time scales, fluctuations between latitudes appear to be out-of-phase. @Mielke-etal-2013 showed that the seasonal cycles of the non-Ekman component of the overturning were $180^\circ$ out-of-phase between $26^\circ$N and $41^\circ$N, though the phasing of the observed seasonal cycle at $41^\circ$N did not agree with the modeled seasonal cycle. @Elipot-etal-2014 also identified an out-of-phase relationship between the large-scale transport fluctuations at different latitudes but using only the western boundary density signals to compute transports. Some of these fluctuations in transport have been found to have a fixed relationship to two modes of wind stress variability over the Atlantic [@Elipot-etal-2017], with locations at $16^\circ$N and $26^\circ$N related to the first mode of variability, and the more northerly regions ($\sim40^\circ$N) to the North Atlantic Oscillation pattern of wind forcing. 
On longer (interannual-to-one-decade) timescales, where the overturning circulation may be expected to represent larger-scale basin-wide fluctuations in ocean circulation, the MOC at $26^\circ$N has been shown to have a declining trend [@Smeed-etal-2014] while the transports estimated at $16^\circ$N [@Send-etal-2011] show a different tendency. Transports at both latitudes are monitored using a boundary mooring approach, where temperature and salinity profiles are measured continuously at the western and eastern edges, spanning great swaths of the ocean. The method of calculating transports relies on the thermal wind relation between meridional shear in transports and zonal density gradients. However, thermal wind only determines the velocity shear relative to a level of no or known motion. The methods used to compute transports differ between the two latitudes in their application of a choice of reference level. Observed low-frequency variations of transports at $16^\circ$N showed a weakening overturning from 2000–2010, resulting primarily from changes at the Mid-Atlantic Ridge [@Send-etal-2011]. From 2010 to 2014, transports strengthened, consistent with an intensification of the MOC at $16^\circ$N. This is in apparent contradiction with the transport fluctuations estimated at $26^\circ$N, where a reduction in the lower NADW layer (3000–5000 m) was identified over the 2004–2014 period [@Smeed-etal-2014; @FrajkaWilliams-etal-2016]. In this paper, we explore whether or not the meridional overturning circulation is coherent (similar sign/magnitude changes) at $16^\circ$N and $26^\circ$N, and the influence of the method of calculation on the transport estimates. In section 2, the data and methods are described. In section 3, we detail the hydrographic properties and tendencies over the 11 and 16 years of observations at the two latitudes.
In section 4, we use the hydrographic data to construct transport shear estimates from dynamic height, and discuss how the observed variations influence transbasin transport estimates. Finally, in section 5 we conclude and highlight the key issue of the choice of reference level for transport estimates. Data & Methods ============== Data used here are from two mooring arrays in the Atlantic: the RAPID Climate Change (RAPID) and Meridional Overturning Circulation and Heat transport Array (MOCHA) moored observations at $26.5^\circ$N from 2004–2015 and the Meridional Overturning Variability Experiment (MOVE) moored observations at $16^\circ$N from 2004–2016 (Fig. 1a). Both arrays were designed to estimate the strength of the overturning circulation using boundary measurements (Fig. 1b). Two major differences exist between the two arrays. First, at $16^\circ$N, the array extends eastward only to the Mid-Atlantic Ridge while at $26^\circ$N, it extends eastward to Africa. For longer timescales (4+ years), observing system simulation experiments (OSSEs) identified that transport fluctuations are primarily due to density changes at the western boundary at $16^\circ$N [@Kanzow-2004]. This is confirmed by observations at $26^\circ$N, which show that the western boundary dominates transport variability on interannual and longer timescales [@FrajkaWilliams-etal-2016]. There is some uncertainty resulting from ocean transports that may not be captured at $16^\circ$N east of the Mid-Atlantic Ridge, and so we follow the method of @Elipot-etal-2014 in focusing on the dynamic height variations at the western boundary only. The second difference is in the application of a choice of reference level. Traditionally, geostrophic shear is referenced to a level of no motion, where the integration (described further below) is referenced to a level where flow is weak or absent. At $16^\circ$N, this is applied by choosing a deep reference level of no motion.
At $26^\circ$N, a deep reference level is chosen, but transports are then corrected with a barotropic velocity profile (as a hypsometric compensation term, $T_{ext}$) which is determined to ensure zero net mass transport across the latitude section. This second difference, and its influence on the computed MOC transports, is explored in more detail in §4.2. ![Moored observations at RAPID 26$^\circ$N and MOVE 16$^\circ$N. Bathymetry is shaded in color.[]{data-label="figone"}](arrays_move.png){width="20pc"} RAPID $26^\circ$N observations ------------------------------ At the western boundary at $26^\circ$N, the primary dynamic height observations are from a full-depth mooring in 4000 m at $26.5^\circ$N, $76.75^\circ$W (WB2). Below 4000 m, instrument records are taken from nearby (within 25 km) moorings. Temperature and salinity records from individual instruments are vertically interpolated to form a western profile of hydrographic data as described in @McCarthy-etal-2015. From November 2005 to March 2006, the WB2 mooring failed, and so during this period, data from the WB3 mooring at $26.5^\circ$N, $76.5^\circ$W were substituted. Typical instrument configurations on this mooring include 18 MicroCAT (Seabird Electronics, Bellevue, WA) records between 50 and 4800 dbar, though specific instrument locations and sampling intervals have varied over the 10 years of observations. Field calibrations are carried out on individual instruments by mounting them to the conductivity-temperature-depth (CTD) rosette for pre-deployment and post-deployment casts. MicroCAT measurements are compared to those from the CTD at bottle stops, with drifts between the pre-deployment and post-deployment casts used to offset the time series observations. Individual instrument records are filtered with a 2-day low-pass filter to remove the tides before gridding vertically to 20 m resolution. Full details of the data processing can be found in @McCarthy-etal-2015.
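The effect of a 2-day low-pass on hourly mooring data can be illustrated with a simple symmetric moving average (a stand-in only; the filter actually applied to the RAPID records is described in @McCarthy-etal-2015). A 49-point boxcar suppresses the semidiurnal tide by a factor of roughly 70 while leaving monthly variability nearly untouched:

```python
import numpy as np

def lowpass_boxcar(x, window=49):
    """Symmetric moving average; 49 points ~ 2 days of hourly samples."""
    w = np.ones(window) / window
    return np.convolve(x, w, mode="same")

t = np.arange(2400.0)                     # 100 days of hourly samples
tide = np.sin(2 * np.pi * t / 12.42)      # M2 semidiurnal tide (12.42 h)
slow = np.sin(2 * np.pi * t / 720.0)      # ~monthly signal
y = lowpass_boxcar(tide + slow)

# Away from the record edges the tide is suppressed (boxcar gain ~0.014
# at the M2 period) while the slow signal passes almost unchanged:
resid = y[100:-100] - slow[100:-100]
print(float(np.max(np.abs(resid))) < 0.1)  # prints True
```

Any filter with a 2-day cutoff behaves qualitatively the same way here, since the tidal and monthly periods sit far on either side of the cutoff.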
MOVE $16^\circ$N observations ----------------------------- At $16^\circ$N, data from the MOVE1 and MOVE3 moorings are used here. MOVE3 is a single, sub-surface mooring that was initially deployed in early 2000 and has been in operation ever since. The location is approximately $16.3^\circ$N, $60.5^\circ$W, at 5000 m water depth, a short distance east of Guadeloupe, while MOVE1 is on the western flank of the Mid-Atlantic Ridge at $51.5^\circ$W. Measurements of temperature, salinity, and currents are made from this platform [@Kanzow-etal-2006]. Instrumentation has varied over the years; the present configuration has 21 MicroCAT instruments for temperature and salinity covering the depth range from 50 m to the seafloor. Earlier deployments only covered the deeper layers below 1000 m. Removal of sensor drift is performed as for the $26^\circ$N array, using CTD casts before deployment and after recovery [@Kanzow-etal-2006]. The calibrated, quality-controlled data are made publicly available through the OceanSITES data portals (www.oceansites.org). The data available at OceanSITES also include six additional sites where MOVE has made observations, two of which are still in operation. Time series processing ---------------------- Data from both $26^\circ$N and $16^\circ$N were bin-averaged into monthly time series. In order to focus on interannual and longer-term variations, a seasonal climatology was removed and time series were filtered with an 8-month Tukey window. While some sub-annual variations remain, the $<1$-year filter window permits better identification of the timing of changes. In calculating correlations between time series, statistical significance was based on two-tailed t-tests where the numbers of degrees of freedom were determined from the integral time scale of decorrelation (Emery and Thomson, 2004).
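The last step, estimating degrees of freedom from the integral time scale of decorrelation, can be sketched as follows. This is a textbook construction in the spirit of Emery and Thomson; truncating the autocorrelation sum at its first zero crossing is one common convention and not necessarily the exact implementation used for these arrays:

```python
import numpy as np

def autocorr(x):
    """Sample autocorrelation of a 1-D record at lags 0..N-1."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

def integral_timescale(x, dt=1.0):
    """Integral time scale: dt * (1 + 2 * sum of positive-lag
    autocorrelations), truncated at the first zero crossing."""
    r = autocorr(x)
    tau = dt
    for rk in r[1:]:
        if rk <= 0.0:
            break
        tau += 2.0 * dt * rk
    return tau

def effective_dof(x, dt=1.0):
    """Effective degrees of freedom: record length / integral time scale."""
    return len(x) * dt / integral_timescale(x, dt)

# A slowly varying record contributes far fewer degrees of freedom
# to a t-test than its raw length suggests:
monthly = np.sin(2 * np.pi * np.arange(120.0) / 60.0)  # 10 yr of monthly data
print(effective_dof(monthly) < 30.0)  # prints True
```

For the smoothed monthly series used here, the effective sample size entering the t-test is therefore much smaller than the number of monthly values.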
Hydrographic changes at $16^\circ$N and $26^\circ$N =================================================== Temperature-salinity (TS) diagrams of water mass properties at $26^\circ$N and $16^\circ$N show variations from warm and salty in the thermocline to cold and fresh at depth, with only a modest change in slope of the T-S relationship around 2000 m (Fig. 2). This bend in the curve corresponds to the transition between central Labrador Sea Water (cLSW) and Iceland-Scotland overflow water (ISOW, @Sebille-etal-2011). At $26^\circ$N, in the recent 10 years, the waters below 1100 m have tended towards cold and/or fresh on an isopycnal, except at 2000 m (cLSW) where the properties have remained the same. At $16^\circ$N, properties at all depths have tended towards cold and fresh on an isopycnal. These changes are consistent with, but smaller in amplitude than, the cooling and freshening observed at $26^\circ$N from hydrographic sections from 1984–2010 [@Sebille-etal-2011]. ![3-year averages of the monthly binned conservative temperature and absolute salinity, between Oct 2000 and Oct 2015, where 00/03 indicates the period Oct 2000 through Sep 2003. For the RAPID array at $26^\circ$N, the first averaging period $04/06^*$ represents the shorter period Apr 2004 through Sep 2006. Average depths are indicated by the black lines.[]{data-label="figtwo"}](j_fig2.png){width="30pc"} Temperature and salinity changes on depth surfaces -------------------------------------------------- Both the $16^\circ$N and $26^\circ$N arrays are designed to capture transport variability. From this perspective, density changes on depth surfaces influence transport more than property changes on isopycnals. Temperatures at both latitudes decrease to a minimum at depth of about $1.8^\circ$C at $26^\circ$N and $1.9^\circ$C at $16^\circ$N (Fig. 3). Salinities at both latitudes are fresher at depth than in the thermocline.
At mid-depths, warmer, saltier waters are found (around $3^\circ$C and $35.1$ near 2000 m). Over the 11-year RAPID deployment and 16-year MOVE deployment, the waters at depth (below 1000 m) have tended towards fresher water, though the temperature changes on depth surfaces are more ambiguous. ![(a) Absolute salinity and (b) conservative temperature profiles from the western boundary of the RAPID 26$^\circ$N and MOVE 16$^\circ$N, in red and blue tones, respectively. Darker colours indicate later 3-year averages as in Fig. \[figtwo\]. (c) and (d) are insets of salinity and temperature for the depth ranges 2250–2750 and 4000–4500 dbar, respectively. []{data-label="figthree"}](j_fig3.png){width="30pc"} Property changes can be better seen in depth-time diagrams of the anomalies from the time-mean profile (Figs. 4–7). Over the 11 years of continuous RAPID deployments, temperatures below 2000 m have shifted from generally cooler to warmer, though the changes are neither monotonic nor depth-independent. In contrast, salinity changes are more monotonic in this 8-month smoothed data, transitioning from relatively salty to relatively fresh at all depths below 2000 m. At $16^\circ$N at the west, temperature anomalies between 1500 and 3500 m show some warming from 2002–2013 (Fig. 5), while salinities have moved more monotonically from relatively salty in 2000 to relatively fresh in 2015. The transition here does have a temporary reversal of the freshening tendency visible at mid-depth (1500–4000 dbar) during the 2009–10 winter, possibly associated with isopycnal heave. ![Property anomalies at the western boundary composite profile from RAPID $26^\circ$N (around $26.5^\circ$N, $76.75^\circ$W). (a) Temperature anomalies are calculated relative to the mean profile over 2004–2014, with somewhat cooler temperatures prior to 2009. (b) Salinity anomalies relative to the mean profile over 2004–2014, with relatively fresher anomalies since 2008.
Properties at each depth have been smoothed with an 8-month Tukey filter.[]{data-label="figfour"}](j_fig4.png){width="20pc"} ![As for Fig. 4, but for the western boundary of MOVE $16^\circ$N (mooring MOVE3, around $16^\circ$N, $60^\circ$W).[]{data-label="figfive"}](j_fig5.png){width="20pc"} ![As for Fig. 4, but for the eastern boundary composite profile from RAPID $26^\circ$N, east of the Mid-Atlantic Ridge up to the Canary Islands.[]{data-label="figsix"}](j_fig6.png){width="20pc"} ![As for Fig. 4, but for the eastern mooring from MOVE $16^\circ$N (MOVE1, around $15.5^\circ$N, $51.5^\circ$W).[]{data-label="figseven"}](j_fig7.png){width="20pc"} Comparing the average properties between 1200 and 4650 dbar between the two complete five-year periods of observations, Apr 2004–Mar 2009 and Apr 2009–Mar 2014, temperatures warmed at $26^\circ$N by $0.023^\circ$C, from $2.77\pm0.03$ to $2.79\pm0.03^\circ$C, while salinities freshened by $0.002$, from $35.104\pm0.002$ to $35.103\pm0.002$. (Here, standard deviations are calculated on the annual means.) At $16^\circ$N, over the same two five-year periods, temperatures between 1200–4650 dbar warmed by $0.018^\circ$C, from $2.81\pm0.03$ to $2.83\pm0.03^\circ$C. Salinities freshened by about $0.003$, from $35.111\pm0.002$ to $35.108\pm0.003$. At both latitudes, the freshening results in lighter (less dense) waters at the western boundary at depth. While observed changes are near the estimated accuracy of measurements [@McCarthy-etal-2015], the shift in properties between the two periods is statistically significant. We will see below that this change contributes to a shift in the estimated transport anomalies. For completeness, we also show the property changes at the east (Fig. 6 and 7). In both cases, below 1200 dbar, anomalies are less coherent in time, with no apparent trend in temperature or salinity. 
This is consistent with previous estimates and simulations indicating a reduced role for variations at the eastern boundary in controlling low-frequency transport fluctuations [@Kanzow-etal-2008; @FrajkaWilliams-2015]. Overall, property changes at depths below 2000 m show a tendency towards freshening and lighter water at both $16^\circ$N and $26^\circ$N. These observations alone cannot easily distinguish whether the changes are due to watermass changes (a different class of Labrador Sea Water) or to a vertical shift, or heave, of density surfaces with unchanged TS properties. The freshening is clear in TS space (Fig. \[figtwo\]), though whether it explains the density anomaly exclusively, or vertical heave is important, is left for future investigation. Initial attempts to separately diagnose property changes on density surfaces proved complicated, due to small changes in salinity or temperature which affect both the calculation of the density surface and the property on that surface. Density changes --------------- Density anomalies, rather than property changes, can directly result in changes in circulation. A density change on the western boundary of a basin, without compensating changes at the eastern boundary, will change the slope of the isopycnals across the basin. Through the thermal wind relation, a change in the zonal slope of isopycnals contributes to vertical shear in the meridional velocities. This is the fundamental principle behind the design of both observing arrays at $16^\circ$N and $26^\circ$N. Here, we investigate the density changes associated with the aforementioned property changes at the western boundary of the Atlantic. We first investigate the vertical coherence of density changes at a single latitude. At each depth, we compute a time series of density anomalies from the time mean. Time series are then correlated with each other to identify covariability between density anomalies at different depths. Fig.
8 shows the correlation coefficient between density anomalies at one depth (x-axis) with those at another depth (y-axis). Since density anomalies at the same depth are perfectly correlated (correlation coefficient of 1), the correlation along the 1-1 axis from upper left to lower right is exactly 1. Broader patches of high correlation around this axis are found for $26^\circ$N in the depth range 1100–4800 m and for $16^\circ$N in the range 1200–4650 dbar. This means, for example, that density anomalies at 2000 m co-vary with density anomalies at 4000 m. Overall, it suggests that density anomalies everywhere below 1200 m co-vary, or similarly, that the time series of density anomalies at a single depth will represent the variability for the whole deep layer. In contrast, in the thermocline above 1200 m, the red areas of co-variability contract back towards the 1-1 diagonal (Fig. 8), indicating that density anomalies above 1200 m do not co-vary with density anomalies below 1200 m. ![Correlation between density anomalies at each depth from (a) the western boundary of RAPID $26^\circ$N, and (b) the western boundary of MOVE $16^\circ$N. Red colours indicate positive correlation (coherent variations) while blue colours indicate negative correlation (anti-phase variations).[]{data-label="figeight"}](j_fig8.png){width="30pc"} The high degree of covariability below 1200 m simplifies a comparison between latitudes because the density fluctuations everywhere below 1200 m will not depend strongly on the particular choice of depth. It also suggests that a layered approximation of the ocean may be appropriate in the subtropical North Atlantic, with a small number of layers explaining a large fraction of the observed density variations.
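The depth-to-depth correlation analysis described above can be sketched numerically. `depth_correlation` below is a hypothetical helper, not the processing code used for Fig. 8, and the synthetic input mimics one vertically coherent signal below a "1200 m" boundary with independent noise above:

```python
import numpy as np

def depth_correlation(rho):
    """Correlation matrix of density-anomaly time series between depths.

    rho : array of shape (n_time, n_depth) on a regular depth grid.
    Entry (i, j) is the correlation between anomalies at depths i and j,
    so the diagonal (the 1-1 axis of Fig. 8) is exactly 1.
    """
    anom = rho - rho.mean(axis=0)           # anomaly from the time mean
    return np.corrcoef(anom, rowvar=False)  # columns (depths) as variables

# Synthetic check: depth levels 10-29 share one signal; shallower levels
# are independent noise, so they decorrelate from the deep layer.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
shared = np.sin(2.0 * np.pi * t / 5.0)
rho = rng.normal(0.0, 0.1, (200, 30))
rho[:, 10:] += shared[:, None]
C = depth_correlation(rho)
print(C[15, 25])  # high: both deep levels carry the shared signal
print(C[2, 25])   # near zero: the shallow level is independent
```

Plotting `C` with a diverging colormap reproduces the structure of Fig. 8: a block of high correlation among the "deep" levels and a narrow band along the diagonal elsewhere.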
Timing of density changes ------------------------- While the time series of observations are relatively short for investigating interannual variations, and data have been smoothed with an 8-month window, we briefly investigate the relative timing of changes at the two latitudes. Given the vertical coherence of density variations at individual latitudes, we now compare the timing of density fluctuations between latitudes at the same depths. From visual inspection of the depth-time plots of property anomalies (Fig. 4 and 5), properties at $26^\circ$N shifted around 2009, while the shift at $16^\circ$N occurred around either 2008 or 2010. Using density anomalies at each depth and latitude, the time series of density anomalies at, e.g., 2000 m at $26^\circ$N can be lag-correlated with the time series of density anomalies at 2000 m at $16^\circ$N. This lag correlation can be used to identify whether fluctuations in density at $26^\circ$N typically occur before or after fluctuations at $16^\circ$N. Lag correlations between density anomalies at $26^\circ$N and $16^\circ$N are computed for each depth (Fig. \[fignine\]a). Above 1200 m, there is little to no relationship between density anomalies at $26^\circ$N and $16^\circ$N. Between 1200 m and 4650 m, density anomalies are correlated, with anomalies at $16^\circ$N tending to occur simultaneously with or after those at $26^\circ$N. Highest correlations are for RAPID leading MOVE by less than 1 year, with a secondary peak around 24 months (also with RAPID leading MOVE). Strongest correlations occur around 1500 dbar and 3500–4000 dbar. Fig. \[fignine\]b shows an example of a time series of density anomalies at 3800 dbar from RAPID $26^\circ$N and MOVE $16^\circ$N, with the time series from $26^\circ$N shifted forward by 7 months. Both latitudes show a shift from relatively dense to relatively light waters, at the end of 2009 at $16^\circ$N (and 7 months earlier at $26^\circ$N).
These shifts are of the same sign and similar magnitude at the two latitudes. In the absence of compensating density changes on the eastern boundary, density anomalies at the west would represent large-scale coherent changes in geostrophic shear, and hence transport, in the subtropical North Atlantic, which we explore in the next section. ![Lag correlation between density anomalies at different latitudes but the same depth. (a) Correlation coefficient between density anomalies at the western boundaries of MOVE $16^\circ$N and RAPID $26^\circ$N, as a function of depth (y-axis) and lag in months (x-axis). (b) Time series of density anomalies at the two latitudes, at 3820 dbar. The density time series from RAPID 26$^\circ$N has been shifted forward in time by 7 months. Positive lag corresponds to $26^\circ$N leading $16^\circ$N.[]{data-label="fignine"}](j_fig9.png){width="20pc"} Dynamic height and transports ============================= Transports from boundary arrays are calculated both from current meter measurements very near the west and from dynamic height differences between the west and east. Several previous investigations have separated the transport anomalies due to changes at the west from those in the east, to better identify the dynamic cause of those changes (see, for example, @Kanzow-etal-2010, @Duchez-etal-2014, and @Elipot-etal-2014). Here, we focus on the western boundary variations only, in order to compare like with like and because they have the greatest influence on low-frequency, deep transport variability. We neglect transport fluctuations below 4820 dbar, including any northward flowing Antarctic Bottom Water. These transports are expected to be small ($\sim$1 Sv at $26^\circ$N) with small variations (standard deviation of 0.4 Sv over 6 months) [@FrajkaWilliams-etal-2011; @McCarthy-etal-2015].
At $26^\circ$N, the geostrophic transport-per-unit-depth between the Bahamas and Canary Islands is derived from the thermal wind relation as $$T_{int}(p)=\frac{\Phi_{east}(p)-\Phi_{west}(p)}{f}$$ where $f$ is the Coriolis parameter, and $\Phi_{east}$ and $\Phi_{west}$ the dynamic height anomalies relative to zero at 4820 dbar at the east and west of the Atlantic, respectively. Full details of the calculation can be found in @McCarthy-etal-2015. Dynamic height is estimated from measured density profiles as $$\Phi(p)=\frac{1}{\rho_0}\int_{4820}^p\delta(p')\,\mathrm{d}p'$$ where $\rho_0$ is a constant reference density, and $\delta$ the specific volume anomaly ($1/\rho$). A different choice of reference level at $26^\circ$N can result in a different vertical structure of transports [@Roberts-etal-2013; @Sinha-etal-2017]. At $16^\circ$N, numerical simulations suggested that the overturning transports could be recovered using dynamic height anomalies referenced to zero at depth [@Kanzow-etal-2008]. Dynamic height changes ---------------------- In considering western boundary variations only, we do not directly address the issue of the choice of reference level. Instead, we consider dynamic height anomalies relative to a deep reference level: 4820 dbar. Referenced to 4820 dbar, there is a clear shift from relatively low dynamic height anomalies to relatively higher dynamic height anomalies at both latitudes (Fig. 10). Similar to the time series of density anomalies (Fig. 9), this transition occurs around 2009 at $26^\circ$N, and some months later at $16^\circ$N. By construction, dynamic height anomalies are zero at 4820 dbar and increase upwards, with a further increase in the amplitude of anomalies across the depth range 2000–3500 dbar. Since dynamic height anomalies are constructed as a depth integral of inverse density anomalies (equation 2), the density anomalies in the range 2000–3500 dbar are responsible for the changes in shear in transports through equation (1).
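A minimal numerical sketch of equations (1) and (2) follows. The profiles, constants, and grid are illustrative assumptions, not the mooring data, and pressure is treated schematically in dbar rather than converted to Pa:

```python
import numpy as np

RHO0 = 1025.0   # assumed constant reference density (kg m^-3)
F_26N = 6.5e-5  # approximate Coriolis parameter near 26N (s^-1)

def dynamic_height(p, delta, p_ref=4820.0):
    """Phi(p) = (1/rho0) * integral from p_ref to p of delta dp' (eq. 2).

    Computed as a cumulative trapezoidal integral along the pressure grid,
    then re-referenced so that Phi(p_ref) = 0 by construction.
    """
    cum = np.concatenate(
        [[0.0], np.cumsum(0.5 * (delta[1:] + delta[:-1]) * np.diff(p))]
    ) / RHO0
    return cum - np.interp(p_ref, p, cum)

p = np.arange(0.0, 4821.0, 10.0)          # pressure grid (dbar)
delta_west = 1e-7 * np.exp(-p / 1000.0)   # toy specific volume anomaly, west
delta_east = 1e-7 * np.exp(-p / 1200.0)   # toy specific volume anomaly, east
phi_west = dynamic_height(p, delta_west)
phi_east = dynamic_height(p, delta_east)
T_int = (phi_east - phi_west) / F_26N     # transport per unit depth (eq. 1)
```

By construction the dynamic height anomaly vanishes at the 4820 dbar reference level, so `T_int` is zero there as well, and all variability in the geostrophic estimate comes from the east-west difference of the two profiles above that level.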
In comparison, dynamic height anomalies at the eastern boundary of $26^\circ$N (off the Canary Islands) show markedly weak interannual variability (Fig. 11a), indicating little to no change in shear. At the western flank of the Mid-Atlantic Ridge at $16^\circ$N, the fluctuations are still weaker than at the western boundary of $16^\circ$N, indicating that even between the western boundary and the Mid-Atlantic Ridge, shear changes are more strongly governed by changes at the western boundary. ![Dynamic height anomalies at the western boundary of (a) RAPID $26^\circ$N and (b) MOVE $16^\circ$N, referenced to zero at 4820 dbar. A transition from negative (green) to positive (pink) dynamic height anomaly at 1000 dbar indicates a relative strengthening of the southward upper NADW (1000–3000 dbar) relative to the southward lower NADW (3000–5000 dbar).[]{data-label="figten"}](j_fig10.png){width="20pc"} ![As for Fig. 10, but for (a) the eastern boundary profile from RAPID $26^\circ$N and (b) the eastern mooring of MOVE $16^\circ$N (mooring MOVE1, west of the Mid-Atlantic Ridge).[]{data-label="figeleven"}](j_fig11.png){width="20pc"} To quantify the fluctuations in shear at both latitudes, independent of the choice of reference level, we calculate the dynamic or geopotential thickness between two depths (Fig. 12). Over the 15 years of observations at $16^\circ$N, the thickness anomaly has shifted from negative to positive. At $26^\circ$N, the shift is of similar magnitude but only documented over the past 10 years. The dynamic thickness anomaly is also calculated at the east (off Africa for $26^\circ$N and at the western flank of the MAR for $16^\circ$N, see Fig. 12b). At $26^\circ$N, the deep dynamic height variability is negligible in the east. At $16^\circ$N, there are a few larger variations in 2007 and 2009, but for the most part, since 2004, the eastern boundary dynamic thickness anomaly has been relatively small.
Comparing the two 5-year periods (2004–2009 and 2009–2014), the dynamic thickness of this layer at $16^\circ$N changed from $12.22\pm0.07$ to $12.37\pm0.08$ $\mbox{m}^2\mbox{s}^{-2}$. At $26^\circ$N, the change was from $12.32\pm0.05$ to $12.47\pm0.07$ $\mbox{m}^2\mbox{s}^{-2}$. At both latitudes, the geopotential thickness change, or shear, increased by about 0.15 $\mbox{m}^2\mbox{s}^{-2}$. ![Dynamic thickness anomaly time series at (a) the western boundary of RAPID $26^\circ$N and MOVE $16^\circ$N and (b) the eastern boundary of RAPID and the eastern mooring of MOVE (west of the Mid-Atlantic Ridge).[]{data-label="figtwelve"}](j_fig12.png){width="15pc"} To put the dynamic height changes back into more familiar physical quantities, we can estimate the velocity profile due to dynamic height variations at the west only (Fig. \[figthirteen\]). Here we can clearly see that, with a deep reference level, the change from earlier time periods to more recent periods is accompanied by a relative strengthening of the southward flow in the upper layer of transports (upper NADW, 1100–3000 dbar). Shear in the transport due to dynamic height anomalies at the west ($\Phi'_{west}$) can also be estimated between the two layers (1100–3000 m and 3000–5000 m) by dividing by $f$ and integrating in depth as $$\mathbf{V}_z=\int_{3000}^{1100}\frac{-\Phi'_{west}(p)}{f}\,\mathrm{d}p - \int_{5000}^{3000} \frac{-\Phi'_{west}(p)}{f}\,\mathrm{d}p\label{shear}$$ where the first integral represents the transport contribution from the intermediate layer (upper NADW), and the second integral from the lower layer (lower NADW). Computing $\mathbf{V}_z$ at both latitudes gives a sense of the change of the circulation in units of Sv, where a positive value represents a strengthening of the upper NADW transports relative to the lower NADW transports (Fig. \[figfourteen\]).
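The two-layer shear diagnostic of equation (\[shear\]) can be sketched as follows. `layer_shear` is a hypothetical helper with schematic units; the layer bounds and the approximate Coriolis value follow the text, while the dynamic height anomaly profile is invented for illustration:

```python
import numpy as np

F_16N = 4.0e-5  # approximate Coriolis parameter near 16N (s^-1)

def layer_shear(p, phi_west, f,
                upper=(1100.0, 3000.0), lower=(3000.0, 5000.0)):
    """V_z from equation ([shear]): upper-NADW minus lower-NADW transport.

    Each integral is evaluated over increasing pressure; the paper's
    deep-to-shallow limits flip the sign of the upper-layer term.
    """
    def layer_int(lo, hi):
        mask = (p >= lo) & (p <= hi)
        x, y = p[mask], -phi_west[mask] / f
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule
    return -layer_int(*upper) + layer_int(*lower)

p = np.arange(0.0, 5001.0, 10.0)
phi_anom = -0.05 * np.ones_like(p)  # toy uniform dynamic height anomaly
V_z = layer_shear(p, phi_anom, F_16N)
# positive V_z: upper NADW strengthened relative to lower NADW
```

Because the lower layer (2000 dbar thick) is slightly thicker than the upper layer (1900 dbar), even a depth-uniform anomaly yields a small nonzero `V_z`; in practice it is the vertical structure of the anomaly between 2000 and 3500 dbar that dominates.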
Between the two five year periods, both latitudes showed an increase in the shear transport of 3.9 Sv (MOVE $16^\circ$N) and 2.7 Sv (RAPID $26^\circ$N). Note that while the geopotential thickness anomaly at the two latitudes was similar, $f$ is smaller at $16^\circ$N resulting in a larger transport anomaly. ![Velocity estimates derived from dynamic height anomalies calculated at the western boundary profiles from (a) RAPID $26^\circ$N and (b) MOVE $16^\circ$N, following [@Send-etal-2011]. Dynamic height anomalies were integrated relative to a deep reference level.[]{data-label="figthirteen"}](plot_velocity_profiles_by_time_period_rapid_rel4800_west_to_constantmarwest.png "fig:"){width="17pc"}![Velocity estimates derived from dynamic height anomalies calculated at the western boundary profiles from (a) RAPID $26^\circ$N and (b) MOVE $16^\circ$N, following [@Send-etal-2011]. Dynamic height anomalies were integrated relative to a deep reference level.[]{data-label="figthirteen"}](plot_velocity_profiles_by_time_period_move_rel4950_move3_to_constantmove1.png "fig:"){width="17pc"} ![Shear anomaly due to western boundary dynamic height changes as in (\[shear\]) for MOVE at $16^\circ$N (black) and RAPID at $26^\circ$N (red). The observed dynamic height anomalies represent an increase in $\mathbf{V}_z$ by about 2.4 Sv (MOVE) and 2.7 Sv (RAPID) between the two 5-year periods, 2004–2009 and 2009–2014.[]{data-label="figfourteen"}](j_fig13.png){width="15pc"} These results show that the observed density changes at the western boundary of the Atlantic are consistent in tendency (towards lighter water below 1200 dbar) and timing (between 2009–2010) at both latitudes. This results in the same sign effect on changes to the geostrophic transport, $T_{int}$, relative to a deep level of no motion. The effect of these changes is to intensify the shear between the lower and upper NADW layers (Fig. \[figthirteen\] & \[figfourteen\]). 
At both latitudes, there is a notable change in the shear between these two layers between the earlier and latter parts of the transport observations. Relationship between dynamic height and the MOC ----------------------------------------------- At $16^\circ$N, the strengthening of the upper NADW layer has been associated with an intensification of the deep southward flowing limb of the MOC. With a reference level of zero velocity at 5000 m, the lower layer transports (3000–5000 m) are relatively constant over the 15 years, while the shear results in an intensification of the southward flow in the upper NADW layer (1100–3000 m, Fig. \[figthirteen\]). At $26^\circ$N, however, the transports are derived from the geostrophic interior flow ($T_{int}$) as well as the compensation term, so that the overturning ($\Psi$) includes contributions from multiple components as $$\Psi(z)=\int^z\left(T_{gs}+T_{ek}+T_{wbw}+T_{int}+T_{ext}\right)\,\mathrm{d}z'$$ where $T_{gs}$ and $T_{ek}$ are the transports of the Florida Current and the surface Ekman transport, respectively, and $T_{wbw}$ is from direct current meter observations in the western wedge. $T_{gs}$ and $T_{ek}$ have little interannual variability over the 2004–2015 period (Fig. \[figfifteen\]a). The change in dynamic height anomaly at the west results in an intensification of the southward interior flow ($T_{int}$) in the deep NADW layer (Fig. \[figfifteen\]c), while the thermocline shows a similar intensification of southward flow (Fig. \[figfifteen\]b). These are similar to the changes estimated at $16^\circ$N. However, the RAPID method applies a constraint of zero net mass transport across the section at $26^\circ$N. This allows the compensation velocity (akin to the reference level velocity at depth) to change in time, and is encapsulated in the $T_{ext}$ term.
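The role of the compensation term can be illustrated with a toy budget. The transport values below are invented for illustration and follow the component names in the equation above; this is a sketch of the zero-net-mass idea, not the RAPID calculation itself:

```python
def compensation_velocity(T_gs, T_ek, T_wbw, T_int_total, depth_total):
    """Uniform transport-per-unit-depth that closes the mass budget,
    i.e. the T_ext contribution per metre of depth."""
    net = T_gs + T_ek + T_wbw + T_int_total   # m^3/s, positive northward
    return -net / depth_total

H = 5000.0                                    # section depth (m)
# Toy components (m^3/s): Florida Current, Ekman, western wedge, interior.
v_ext = compensation_velocity(31e6, 3e6, -18e6, -20e6, H)
T_ext_deep = v_ext * (H - 1100.0)             # share over 1100 m to bottom
T_ext_thermocline = v_ext * 1100.0            # share over 0-1100 m
# Because the compensation is depth-uniform, the thick deep layer receives
# the larger share of the northward compensation, opposing the southward
# T_int there -- the mechanism discussed in the text.
```

With these numbers the directly estimated components sum to a net southward flow, so the compensation is northward, and roughly three quarters of it lands in the 1100 m-to-bottom layer simply because that layer is thicker.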
Due to the intensification of southward flow over all layers with time and weak changes in the other components ($T_{gs}$, $T_{ek}$, and $T_{wbw}$), the compensation term must supply an intensifying northward flow (Fig. \[figfifteen\]a). The compensation is constructed as a depth-uniform northward velocity, since any vertically sheared flow would have been measured by the boundary arrays. This means that when $T_{ext}$ is integrated over the thick NADW layer (1100–5000 dbar), it has a larger contribution to the total transport in that layer than in the thermocline layer (0–1100 dbar). The result is that the strengthening northward flow indicated by $T_{ext}$ dominates over the strengthening southward flow of $T_{int}$ in the deep layer, so that there is an overall weakening of the overturning circulation (southward flow below 1100 dbar). The southward flow in the thermocline is still dominated by the increasing southward flow of $T_{int}$, and so over the 11-year record, there is a relative intensification of the southward flow in the thermocline in spite of a northward tendency in $T_{ext}$. This explains the origin of the opposing tendencies in the MOC estimated at $26^\circ$N, when compared to transports calculated at $26^\circ$N using a fixed, deep level of no motion. ![Total transports at $26^\circ$N, applying mass compensation. (a) Florida Current (blue), Ekman (black), western boundary current meter estimates (green), geostrophic estimates relative to 0 at 4820 dbar (magenta), and external or compensation transport (black dashed). The sum of these is zero at all times. (b) Geostrophic transport in the thermocline (0–1100 m, magenta) relative to 0 at 4820 dbar, and the compensation applied over the 0–1100 m layer (black dashed).
(c) Geostrophic transport in the deep layer (1100 m–bottom) and the compensation applied over this layer (black dashed).[]{data-label="figfifteen"}](j_fig15.png){width="15pc"} Conclusions =========== Observations of the large-scale circulation at individual latitudes have been revolutionary for our understanding of variations in the overturning circulation [@Srokosz-Bryden-2015]. However, efforts to relate the variability observed at different latitudes via different measurement designs have proved challenging [@Elipot-etal-2013; @Mielke-etal-2013; @Elipot-etal-2014]. While the transport observations at $26^\circ$N and $16^\circ$N both rely on thermal wind (measuring density in order to calculate geostrophic shear), choosing an appropriate reference level to translate shear into velocity remains a challenge. At $26^\circ$N, the reference level is applied as a barotropic compensation by assuming no net transport across the section on timescales longer than 10 days. At $16^\circ$N, the choice of no motion near 5000 m was determined by OSSEs and is consistent with bottom pressure measurements on sub-annual time scales. Here, we have used the approach most consistent with the RAPID and MOVE methodologies to investigate observed quantities (properties, density, dynamic height) which vary coherently or distinctly between the two latitudes. Overall, changes observed at the western boundary at $16^\circ$N and $26^\circ$N show consistent tendencies (towards freshening, lightening and an increase in deep shear of the southward flows), with similar magnitudes and a particular shift at both latitudes around 2009–2010. The methods employed at these latitudes to determine the overturning differ from those used in other boundary arrays (e.g., @Toole-etal-2011, where the transport changes are identified in density space). We find low-frequency changes in density/dynamic height, occurring first at $26^\circ$N and less than a year later at $16^\circ$N.
These changes are computed directly from density observations, but may arise from either property changes (Fig. 2) or thickness/volume changes of a particular layer. Due to complications with determining property changes on an isopycnal from a mooring with fixed point measurements, we leave that investigation for the future. This work highlights that the choice of reference level is a key element of estimating the overturning circulation through thermal wind balance. At $26^\circ$N, application of a zero net mass transport constraint resulted in a reversal in the estimated deep transport tendencies (from a strengthening southward deep flow, consistent with a strengthening MOC, to a weakening southward deep flow and a reducing MOC). It is possible that the calculated transports at the two latitudes, and their tendencies, are both correct. Estimates of the barotropic transport variability from PIES (pressure inverted echo sounders) at $16^\circ$N support the use of a deep reference level, showing that even when incorporating deep pressure gradient fluctuations, the tendency of transport variability on timescales up to 2 years is not affected. Due to limitations of measuring long records of pressure in the ocean, the barotropic flow cannot be evaluated over longer timescales. The result of this analysis is that the baroclinic changes driven by the western boundary densities are consistent between the two latitudes; it is possible that variability in the barotropic transport is distinct at the two latitudes. Assuming that the MOC at $16^\circ$N is strengthening while the transport at $26^\circ$N is weakening, one could envision several ways for this to occur, including through changes to the largely unmeasured Antarctic Bottom Water flow or deep volume storage/release.
It is beyond the scope of this paper to evaluate the method for determining the reference level at individual latitudes, or whether the practice of estimating overturning from dynamic height mooring arrays needs to be adjusted to eliminate or incorporate the uncertainties due to reference level choices. New investigations into using satellite-based estimates of ocean bottom pressure [@Bentel-etal-2015; @Landerer-etal-2015] show promise at providing independent estimates of deep ocean transport variability, but may also have trends in the data that are not related to circulation changes. New investigations are also underway to determine uncertainties in the RAPID $26^\circ$N method of calculating overturning transports [@Sinha-etal-2017]. We conclude that, among the robust observable changes over the 11- and 16-year moored observations, the southward flow in the subtropical North Atlantic below 3 km weakened relative to the southward flow above, and that a shift occurred between 2009–2010. These results also highlight a critical area of uncertainty in estimating large-scale ocean transports from boundary measurements: how best to incorporate a choice of reference level in the geostrophic shear method. Acknowledgments {#acknowledgments .unnumbered} --------------- EFW was funded by a Leverhulme Trust Research Fellowship. Data from the RAPID Climate Change (RAPID)/Meridional overturning circulation and heat flux array (MOCHA) projects are funded by the Natural Environment Research Council (NERC) and National Science Foundation (NSF, OCE1332978). Data are freely available from www.rapid.ac.uk. MOVE was also funded by the National Oceanic and Atmospheric Administration (NOAA), the Climate Program Office–Climate Observation Division, and initially by the German Bundesministerium für Bildung und Forschung. MOVE is part of the international OceanSITES program (www.oceansites.org).
Florida Current transports are funded by the NOAA and are available from www.aoml.noaa.gov/phod/floridacurrent. Special thanks to the captains, crews, and technicians, who have been invaluable in the measurement of the MOC in the Atlantic over the past 15 years. Bentel, K., Landerer, F. W., and Boening, C. (2015). Monitoring [A]{}tlantic overturning circulation variability with GRACE-type ocean bottom pressure observations: a sensitivity study. , 12(4):1765–1791. Bingham, R. J., Hughes, C. W., Roussenov, V., and Williams, R. G. (2007). Meridional coherence of the [N]{}orth [A]{}tlantic meridional overturning circulation. , 34:L23606. Cunningham, S. A., Roberts, C., Frajka-Williams, E., Johns, W. E., Hobbs, W., Palmer, M. D., Rayner, D., Smeed, D. A., and [McCarthy]{}, G. D. (2014). Atlantic [MOC]{} slowdown cooled the subtropical ocean. , 40:6202–6207. Danabasoglu, G., Yaeger, S., et al. (2014). North [Atlantic]{} simulations in [Coordinated Ocean-ice Reference Experiments]{}, phase [II]{} ([CORE-II]{}): [P]{}art [I]{}: [M]{}ean states. , 73:76–107. Duchez, A., Frajka-Williams, E., Castro, N., Hirschi, J. J.-M., and Coward, A. (2014). Seasonal to interannual variability in density around the [Canary Islands]{} and their influence on the [AMOC]{} at $26^\circ$[N]{}. , 119:1843–1860. Elipot, S., Frajka-Williams, E., Hughes, C., Olhede, S., and Lankhorst, M. (submitted). Observed basin-scale response of the [North]{} [Atlantic]{} meridional overturning circulation to wind stress forcing. . Elipot, S., Frajka-Williams, E., Hughes, C., and Willis, J. (2014). The observed [North]{} [Atlantic]{} [MOC]{}, its meridional coherence and ocean bottom pressure. , 44:517–537. Elipot, S., Hughes, C., Olhede, S., and Toole, J. (2013). Coherence of western boundary pressure at the [RAPID WAVE]{} array: [Boundary]{} wave adjustments or deep western boundary current advection? , 43:744–765. Frajka-Williams, E. (2015).
Estimating the [Atlantic]{} [MOC]{} at $26^\circ$[N]{} using satellite altimetry and cable measurements. , 42:3458–3464. Frajka-Williams, E., Eriksen, C. C., Rhines, P. B., and Harcourt, R. R. (2011). Determining vertical velocities from [S]{}eaglider. , 28:1641–1656. Frajka-Williams, E., Meinen, C. S., Johns, W. E., Smeed, D. A., Duchez, A. D., Lawrence, A. J., Cuthbertson, D. A., Bryden, H. L., McCarthy, G. D., Baringer, M. O., Rayner, D., and Moat, B. I. (2016). Compensation between meridional flow components of the [Atlantic]{} [MOC]{} at $26^\circ$[N]{}. , 12:481–493. Kanzow, T. (2004). . PhD thesis, Christian-Albrechts-Universität, Kiel, Germany. Kanzow, T., Cunningham, S. A., Johns, W. E., Hirschi, J. J.-M., Marotzke, J., Baringer, M. O., Meinen, C. S., Chidichimo, M. P., Atkinson, C., Beal, L. M., Bryden, H. L., and Collins, J. (2010). Seasonal variability of the [A]{}tlantic meridional overturning circulation at $26.5^\circ$[N]{}. , 23:5678–5698. Kanzow, T., Send, U., and [McCartney]{}, M. (2008). On the variability of the deep meridional transports in the tropical [North]{} [Atlantic]{}. , 55:1601–1623. Kanzow, T., Send, U., Zenk, W., Rhein, M., and Chave, A. (2006). Monitoring the deep integrated meridional flow in the tropical [North]{} [Atlantic]{}: [Long]{}-term performance of a geostrophic array. , 53:528–546. Kelly, K. A., Thompson, L., and Lyman, J. (2014). The coherence and impact of meridional heat transport anomalies in the [Atlantic Ocean]{} inferred from observations. , 27:1469–1487. Landerer, F. W., Wiese, D. N., Bentel, K., Boening, C., and Watkins, M. M. (2015). North [Atlantic]{} meridional overturning circulation variations from [GRACE]{} ocean bottom pressure anomalies. McCarthy, G. D., Smeed, D. A., Johns, W. E., Frajka-Williams, E., Moat, B. I., Rayner, D., Baringer, M. O., Meinen, C. S., and Bryden, H. L. (2015). Measuring the [Atlantic]{} meridional overturning circulation at $26^\circ$[N]{}. , 130:91–111.
Mielke, C., Frajka-Williams, E., and Baehr, J. (2013). Observed and simulated variability of the [AMOC]{} at $26^\circ$[N]{} and 41$^\circ$[N]{}. , 40:1159–1164. Roberts, C. D., Waters, J., Peterson, K. A., Palmer, M., McCarthy, G. D., Frajka-Williams, E., Haines, K., Lea, D. J., Martin, M. J., Storkey, D., Blockley, E. W., and Zuo, H. (2013). Atmosphere drives observed interannual variability of the [Atlantic]{} meridional overturning circulation at $26.5^\circ$[N]{}. , 40:5164–5170. Send, U., Lankhorst, M., and Kanzow, T. (2011). Observation of decadal change in the [Atlantic]{} meridional overturning circulation using 10 years of continuous transport data. Sinha, B., Smeed, D., McCarthy, G., Moat, B., Josey, S., Hirschi, J. J.-M., Frajka-Williams, E., Blaker, A., and Madec, G. (in prep). The accuracy of estimates of the overturning circulation based on basinwide mooring arrays. , pages 1–17. Smeed, D. A., McCarthy, G., Cunningham, S. A., Frajka-Williams, E., Rayner, D., Johns, W. E., Meinen, C. S., Baringer, M. O., Moat, B. I., Duchez, A., and Bryden, H. L. (2014). Observed decline of the [Atlantic]{} meridional overturning circulation 2004 to 2012. , 10:29–38. Srokosz, M. A. and Bryden, H. L. (2015). Observing the [Atlantic]{} meridional overturning circulation yields a decade of inevitable surprises. , 348:1255575. Toole, J. M., Curry, R. G., Joyce, T. M., McCartney, M., and Pe[ñ]{}a-Molino, B. (2011). Transport of the [North Atlantic]{} deep western boundary current about 39$^\circ$[N]{}, 70$^\circ$[W]{}: 2004–2008. , 38:1768–1780. van Sebille, E., Baringer, M. O., Johns, W. E., Meinen, C. S., Beal, L. M., de Jong, M. F., and van Aken, H. M. (2011). Propagation pathways of classical [L]{}abrador [S]{}ea water from its source region to 26$^\circ$[N]{}. , 116:C12027. Zhang, R. (2010). Latitudinal dependence of [Atlantic]{} meridional overturning circulation variations. , 37:L16703.
[^1]: Ocean and Earth Science, University of Southampton, Southampton, SO14 3ZH [^2]: Scripps Institution of Oceanography, University of California San Diego, La Jolla, USA
--- abstract: 'We prove a conjecture of Kollár stating that the local fundamental group of a klt singularity $x$ is finite. In fact, we prove a stronger statement, namely that the fundamental group of the smooth locus of a neighbourhood of $x$ is finite. We call this the *regional fundamental group*. As the proof goes via a local-to-global induction, we simultaneously confirm finiteness of the orbifold fundamental group of the smooth locus of a weakly Fano pair.' address: 'Mathematisches Institut, Albert-Ludwigs-Universität Freiburg, Ernst-Zermelo-Strasse 1, 79104 Freiburg im Breisgau, Germany' author: - Lukas Braun bibliography: - 'lukasbib.bib' title: The local fundamental group of a Kawamata log terminal singularity is finite --- Introduction {#introduction .unnumbered} ============ We work over the field $\CC$ of complex numbers. A *Kawamata log terminal* or *klt singularity* is a point $x \in X$ of an algebraic log pair $(X,\Delta=\sum(1-1/m_i)\Delta_i)$, such that for a log resolution $f:Y \to X$, locally around $x$, the discrepancies $a_i$, namely the coefficients of the exceptional divisors $E_i$ in the formula $$K_Y + f^{-1}_*\Delta \sim_\QQ f^*(K_X+\Delta)+\sum a_i E_i$$ satisfy $a_i > -1$. We call a log pair $(X,\Delta)$ *weakly Fano*, if it has only klt singularities and $-(K_X+\Delta)$ is big and nef. The *local fundamental group* of a normal singularity $x \in X$ is $$\pi_1^\loc(X,x):=\pi_1(B \setminus x) =\pi_1(\Link(x)),$$ where $B$ is the intersection of $X$ with a small euclidean ball around $x$ and the *link* $\Link(x)$ is the boundary $\del B$. It is a deformation retract of $B \setminus x$ and so $\pi_1^\loc$ is well defined. The following conjecture is due to Kollár [@KollarEx; @KollarSing]. \[con:local\] Let $x \in (X,\Delta)$ be a klt singularity. Then the local fundamental group $\pi_1^\loc(X,x)$ is finite. 
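A standard example (added here for illustration) in which both the discrepancy condition and the finiteness prediction can be checked by hand is the cyclic quotient singularity of type $\frac{1}{m}(1,1)$, the cone over the rational normal curve of degree $m$:

```latex
Let $\mu_m$ act on $\CC^2$ by $\zeta \cdot (x,y) = (\zeta x, \zeta y)$ and
set $X = \CC^2/\mu_m$ with $\Delta = 0$. Blowing up the image of the origin
yields a resolution $f \colon Y \to X$ with a single exceptional curve $E$
and
$$K_Y \sim_\QQ f^*K_X + \left(\tfrac{2}{m}-1\right)E,$$
so the discrepancy $a_E = \tfrac{2}{m}-1 > -1$ and $x \in X$ is klt. Since
$\CC^2 \setminus \{0\}$ is simply connected, the link of $x$ is
$S^3/\mu_m$ and $\pi_1^\loc(X,x) \cong \mu_m$ is finite, as
Conjecture \[con:local\] predicts.
```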
In the case of a weakly Fano pair $(X,\Delta)$, one can consider the smooth locus of $X$ and state the following global conjecture [@Zhang; @AIM]. \[con:global\] Let $(X,\Delta)$ be a weakly Fano pair. Then the fundamental group $\pi_1(X_\sm)$ of the smooth locus is finite. We prove generalized versions of these conjectures in the present paper. Firstly, a log pair $(X,\Delta)$ can be seen as a complex orbifold ${\mathcal{X}}=(X,\Delta)$, see Section \[sec:orbifold\]. Then one can consider the *orbifold fundamental group* of the smooth locus, denoted by $\pi_1(X_\sm,\Delta)$. This group is defined to be $\pi_1(X_\sm \setminus \supp (\Delta))/N$, where $N$ is the normal subgroup generated by the $\gamma_i^{m_i}$, where $\gamma_i$ is a small loop around $\Delta_i$. Similarly to the global case, in the local setting we can consider the fundamental group $\pi_1(B_\sm)=\pi_1(B \setminus X_\sing)$ of the smooth locus of $B$, instead of the local fundamental group. We call this group the *regional fundamental group* and denote it by $\pi_1^\reg(X,x)$. It is also possible to define the orbifold fundamental group of $(B_\sm,\left. \Delta \right|_{B_\sm})$, which we denote by $\pi_1^\reg(X,\Delta,x)$. Our two main theorems are the following. \[thm:regional\] Let $x \in (X,\Delta)$ be a klt singularity. Then the regional fundamental groups $\pi_1^\reg(X,x)$ and $\pi_1^\reg(X,\Delta,x)$ are finite. \[thm:global\] Let $(X,\Delta)$ be a weakly Fano pair. Then the orbifold fundamental group $\pi_1(X_\sm,\Delta)$ of the smooth locus is finite. Before sketching the structure of the (simultaneous) proof of these theorems, we give a short overview of related results and state some consequences. 
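As a minimal illustration of the orbifold fundamental group defined above (a standard computation, not from the paper): for $X=\CC$ with $\Delta=(1-1/m)\cdot\{0\}$, the group $\pi_1(X_\sm\setminus\supp(\Delta))=\pi_1(\CC\setminus\{0\})\cong\ZZ$ is generated by a small loop $\gamma$ around the origin, so

```latex
\[
  \pi_1(X_\sm,\Delta)
  \;=\;
  \pi_1\big(\CC\setminus\{0\}\big)\big/\langle\langle\gamma^{m}\rangle\rangle
  \;\cong\;
  \ZZ/m\ZZ,
\]
```

which is the group one expects for the cyclic quotient orbifold $\CC/(\ZZ/m\ZZ)$.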
Fundamental groups of the whole space {#fundamental-groups-of-the-whole-space .unnumbered} ------------------------------------- Fano *manifolds* are known to be simply connected, and there are several proofs of this fact, relying for example on Atiyah’s $L^2$-index theorem or rational connectedness. Generalizing the smooth case, it was shown by Takayama [@Takayama] that weakly Fano varieties are also simply connected. In fact, Takayama proves finiteness of the fundamental group of a log resolution. The corresponding local statement - simple connectedness of the preimage of a small neighbourhood of $x$ under a log resolution - was proven by Kollár [@KollarShaf2] for quotient singularities and by Takayama for klt singularities [@TakayamaLocalSimple]. The proof in [@TakayamaLocalSimple] is similar to that of [@Takayama], but the latter manages to avoid the $L^2$-index theorem, which turns out to be very important to us. Simple connectedness also holds true for *log canonical* pairs $(X,\Delta)$ with ample $-(K_X+\Delta)$, see [@Fujino]. Étale fundamental groups {#étale-fundamental-groups .unnumbered} ------------------------ Conjecture \[con:local\] and Theorem \[thm:global\] have been confirmed by Xu for *étale fundamental groups* $\hat{\pi}_1$ in [@Xu Thms. 1, 2], see also [@GrebKebPet Thm. 1.13]. The étale or algebraic fundamental group is just the profinite completion of the topological fundamental group. Building on Xu’s results, Greb, Kebekus, and Peternell showed in [@GrebKebPet Thm. 1.5] that a quasiprojective klt variety $X$ admits a finite quasi-étale cover $Y \to X$ such that $\hat{\pi}_1(Y_\sm)$ is isomorphic to $\hat{\pi}_1(Y)$. Analogous statements for $F$-regular singularities and strongly $F$-regular schemes in characteristic $p$ can be found in [@FundGroupFreg1; @FundGroupFreg2]. It is also possible to deduce the statements in characteristic zero from the ones in characteristic $p$ [@FundRedModP].
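Since finiteness results for $\hat{\pi}_1$ are a priori weaker than for $\pi_1$, it may help to recall the standard definition (a general fact, not specific to the paper):

```latex
% Profinite completion of a group G:
\[
  \hat{G} \;=\; \varprojlim_{N \trianglelefteq G,\ [G:N] < \infty} G/N,
  \qquad\text{e.g.}\qquad
  \widehat{\ZZ} \;\cong\; \prod_{p \text{ prime}} \ZZ_p .
\]
% A finite group equals its own profinite completion, so finiteness of
% \pi_1 implies finiteness of \hat{\pi}_1, while the converse fails in
% general; this is why statements about \pi_1 strengthen the étale ones.
```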
Conjecture \[con:local\] was also confirmed for log terminal singularities with a good torus action [@LafaceLiendoMoraga]. In general, it is possible that $\pi_1$ is infinite while $\hat{\pi}_1$ is trivial. We give an example of such a group (the Thompson group $T$) later in this introduction. Regional fundamental groups {#regional-fundamental-groups .unnumbered} --------------------------- The regional fundamental group $\pi_1^\reg$ has already been considered - though not under this name - in [@Kumar; @TianXu; @Stibitz]. Of course, if $x$ is isolated, the local and regional fundamental groups coincide. We think that $\pi_1^\reg$ is the more natural notion for non-isolated $x$. In particular, the proof of our two main theorems would not be possible considering only $\pi_1^\loc$. In [@StibDiss Thm. 2.2.6], Theorem \[thm:regional\] was proven for the profinite completion $\hat{\pi}_1^\reg$. Stibitz also gave an example [@Stibitz Ex. 2] of a non-isolated (non-klt) singularity, where all $\hat{\pi}_1^\loc$ are finite, but $\hat{\pi}_1^\reg$ is infinite. Consequences (and non-consequences) of our main theorems {#consequences-and-non-consequences-of-our-main-theorems .unnumbered} -------------------------------------------------------- We already mentioned that building on the results of [@Xu], in [@GrebKebPet Thm. 1.5] it was shown that a quasiprojective klt pair $(X,\Delta)$ admits a finite quasi-étale cover $Y \to X$ such that $\hat{\pi}_1(Y_\sm)$ is isomorphic to $\hat{\pi}_1(Y)$. One can ask if our results imply the same statement for the topological fundamental group. Unfortunately, this is *not* true.
The reason is very simple: since the topological fundamental group $\pi_1(X_\sm)$ is not necessarily profinite, it can happen that there is *no* finite index normal subgroup intersecting the (finite) images of the regional fundamental groups $\pi_1^\reg(X,x)$ of points $x$ in $(X,\Delta)$ nontrivially, in contrast to the case of étale fundamental groups in [@Stibitz (i)$\Rightarrow$(ii),p. 7]. So even if $\pi_1(X)$ and $\hat{\pi}_1(X_\sm)$ are both trivial, it can happen that $\pi_1(X_\sm)$ is infinite. On the other hand, by [@TianXu Prop. 3.6], we obtain the following direct consequence of Theorem \[thm:regional\], which can be seen as an *infinite* version of [@GrebKebPet Thm. 1.5] (note that [@TianXu Prop. 3.6] requires finiteness of the regional fundamental group). Let $(X,\Delta)$ be a quasiprojective klt pair. Then every étale Galois orbifold cover of the orbifold $(X_\sm,\left. \Delta \right|_{X_\sm})$ - that is, every (possibly infinite) cover of $X_\sm$ coming from a quotient of the orbifold fundamental group $\pi_1(X_\sm,\left. \Delta \right|_{X_\sm})$ by some normal subgroup - extends to a Galois orbifold cover of the orbifold $(X,\Delta)$. In particular, there is a (possibly infinite) Galois orbifold cover $(X',\Delta') \to (X,\Delta)$, étale (as orbifold cover) over $X_\sm$, such that the orbifold fundamental groups $\pi_1(X',\Delta')$ and $\pi_1(X'_\sm,\left. \Delta' \right|_{X'_\sm})$ are isomorphic. Note that it is also possible to deduce from Theorem \[thm:regional\], by the same arguments as in [@GrebKebPet Part II,Sec. 6.1], an *infinite* version of [@GrebKebPet Thm. 1.1]: if $(X,\Delta)$ is a quasiprojective klt pair, then in any tower $X=X_0 \xleftarrow{\phi_1} X_1 \xleftarrow{\phi_2} X_2 \xleftarrow{\phi_3} \cdots$ of *possibly infinite* quasi-étale covers $\phi_i$, such that $\phi_1 \circ \ldots \circ \phi_i \colon X_i \to X$ is Galois for every $i\geq 1$, all but finitely many of the $\phi_i$ are étale.
In contrast to the finite versions from [@GrebKebPet], we have no idea if these statements could be of any use. We come to another unrelated consequence. It is known that the *Cox ring* of a weakly Fano variety is finitely generated [@BCHM], and analogously, this holds for a klt quasicone [@GorICR]. In particular, the divisor class group $\Cl(X)$ of any such object $X$ is finitely generated of the form $\ZZ^m \times \Cl(X)_\fin$ with $\Cl(X)_\fin$ a finite abelian group. The Cox ring is graded by $\Cl(X)$, which yields a quasi-étale quotient $\hat{X} \to \hat{X}/H=X$, where $\hat{X}$ is a quasiaffine variety, the so-called characteristic space, and $H$ is a linear algebraic group of the form $(\CC^*)^m\times \Cl(X)_\fin$ [@coxrings Sec. I.6.1]. The quotient of $\hat{X}$ by the torus $(\CC^*)^m$ yields a quasi-étale finite abelian Galois cover of $X$, which is universal with this property by [@GorICR Prop. 2.2(iii)]. That means it factors through all quasi-étale finite abelian Galois covers of $X$. So we have the following consequence of Theorems \[thm:regional\] and \[thm:global\] - immediate from the previous discussion, which tells us that $\Cl(X)_\fin$ is the abelianization of $\pi_1(X_\sm)$. Let $X$ be a weakly Fano variety or a klt quasicone. Then the finite part of the divisor class group of $X$ is isomorphic to the first homology group of the smooth locus of $X$: $$\Cl(X)_\fin \cong H_1(X_\sm,\ZZ).$$ The following corollary, which uses only the definition of the regional fundamental group, is inspired by a result of Serre [@Serre Prop. 15] saying that any finite group is the fundamental group of a smooth projective variety. Let $G$ be a finite group. Then $G$ has a complex linear representation with no pseudoreflections. In particular, there exists a quotient singularity $(X,x)=\CC^n/G$, such that $\pi_1^\reg(X,x)=G$. Let $G$ be a finite group and $V$ a faithful complex linear representation. If $V$ has a reflection, consider the sum $V+V$.
If this representation of $G$ has a reflection $g$, then consider the restricted representation $\left.V\right|_{\langle g \rangle}+\left.V\right|_{\langle g \rangle}$ of the subgroup $\langle g \rangle$, containing a pointwise fixed hyperplane $H$. Since this representation is reducible, by [@RedFinRef Thm. 1], one of the copies of $\left.V\right|_{\langle g \rangle}$ is contained in $H$. This is a contradiction, since $V$ was faithful. The quotient $(X,x):=(V+V)/G$ is thus ramified only in codimension two, so $\pi_1^\reg(X,x)=G$. The proof of Theorems \[thm:regional\] and \[thm:global\] {#the-proof-of-theoremsthmregional-andthmglobal .unnumbered} --------------------------------------------------------- As done by [@Xu] in the case of the étale fundamental group, we will prove Theorems \[thm:regional\] and \[thm:global\] simultaneously by a local-to-global induction. One induction step is represented by the following two theorems. \[thm:loctoglob\] Let $(Y,D)$ be an $n$-dimensional weakly Fano pair. Assume that $n$-dimensional klt singularities $x \in (X,\Delta)$ have finite regional fundamental group. Then the orbifold fundamental group $\pi_1(Y_\sm,\left.D\right|_{Y_\sm})$ is finite. \[thm:globtoloc\] Let $x \in (X,\Delta)$ be an $(n+1)$-dimensional klt singularity. Assume that the orbifold fundamental group $\pi_1(Y_\sm,\left.D\right|_{Y_\sm})$ of $n$-dimensional weakly Fano pairs $(Y,D)$ is finite. Then $\pi_1^\reg(X,\Delta,x)$ is finite. It is clear that proving these two theorems yields a simultaneous proof of Theorems \[thm:regional\] and \[thm:global\]. The global-to-local part Theorem \[thm:globtoloc\] has been proven by Tian and Xu in [@TianXu Le. 3.1,3.2] for $\pi_1^\loc$. Then in [@TianXu Le. 3.4], they deduce finiteness of $\pi_1^\reg$ of a klt singularity from finiteness of $\pi_1^\loc$ for all lower dimensional klt singularities. 
Unfortunately, there is a small gap in the proof, when the Seifert-van Kampen theorem is applied to certain tubular neighbourhoods of a Whitney stratification. A careful analysis is carried out in Section \[sec:WorkTianXu\] of the present paper. In fact, it turns out that this task is as hard as trying to prove Theorem \[thm:loctoglob\] with the same methods and assuming only finiteness of $\pi_1^\loc$ instead of $\pi_1^\reg$. So in order for the induction to work, we really need the $\pi_1^\reg$-version of Theorem \[thm:globtoloc\]. When we realized that we cannot use [@TianXu Le. 3.4], Tian and Xu suggested that we modify [@TianXu Le. 3.1] for a direct proof avoiding their Lemma 3.4. After analyzing Lemma 3.1 in Section \[sec:WorkTianXu\], we carry out this modification in Section \[sec:Le31mod\] and thus are able to prove Theorem \[thm:globtoloc\] in full generality. The main part of the present paper is the proof of Theorem \[thm:loctoglob\]. So we have to prove finiteness of an orbifold fundamental group $\pi_1(Y_\sm,\left.D\right|_{Y_\sm})$. In contrast to the proofs of simple connectedness of Fano manifolds using Atiyah’s $L^2$-index theorem, we encounter two main difficulties. Firstly, $Y_\sm$ is not compact. Secondly, the orbifold version of the $L^2$-index theorem is more subtle, since for a universal orbifold cover $\widetilde{{\mathcal{X}}} \to {\mathcal{X}}$, the $L^2$-index on $\widetilde{{\mathcal{X}}}$ is *not* equal to the Euler characteristic on ${\mathcal{X}}$, as there are contributions from orbifold points, see [@TianXu Sec. 4.1]. The problems can be seen in Tian and Xu’s proof of Theorem \[thm:loctoglob\] in the special case of $3$-dimensional Fano varieties with canonical singularities [@TianXu Thm. 4.1]. Thus we are mildly sceptical about the possibility of proving Theorem \[thm:loctoglob\] in full generality using the orbifold $L^2$-index theorem. The solution is the following.
As we already mentioned, the proof of simple connectedness of weakly Fano varieties $X$ by Takayama [@Takayama] manages to avoid the $L^2$-index theorem and instead relies on the so-called *$\Gamma$-reduction* or *Shafarevich map*, independently constructed by Campana and Kollár in [@CampGamma1] and [@KollarShaf2] for compact Kähler manifolds and normal proper varieties, respectively. Roughly speaking, it parameterizes maximal subvarieties of $X$ with finite fundamental group. Takayama uses it to construct an $L^2$-section of a certain line bundle on the universal cover of $X$. By the work of Gromov [@GromovKaehler], the existence of such a section means that if $\pi_1(X)$ is infinite, there are many sections of the corresponding line bundle on $X$. Fortunately, the *$\Gamma$-reduction* is also available for orbifolds due to Claudon [@claudon]. But then we still have the problem that $Y_\sm$ is not compact. This is where the hypothesis of Theorem \[thm:loctoglob\] comes into play (*and* thus the very reason why we cannot directly prove Theorem \[thm:global\] but have to carry out the induction). Consider a log resolution $f:X \to Y$ of the $n$-dimensional weakly Fano pair $(Y,D)$ with exceptional prime divisors $E_i$. Then a very small loop $\gamma_i$ around a general point $e_i$ of $E_i$ can be pushed forward to $Y_\sm$ and there it lies in the smooth locus of a very small neighbourhood of the image of $e_i$, which is a klt singularity. Thus by the hypothesis saying that the regional fundamental groups of klt singularities of dimension $n$ are finite, we know that $\gamma_i$ is of finite order $m_i$ in $f^{-1}(Y_\sm \setminus \supp(D)) = X \setminus ( \bigcup_i E_i \cup \supp(f_*^{-1}D))$. So the normal subgroup of $\pi_1(f^{-1}(Y_\sm \setminus \supp(D)))$ generated by all $\gamma_i^{m_i}$ is trivial. Thus $\pi_1(Y_\sm,D)$ is isomorphic to the orbifold fundamental group of the smooth compact orbifold $(X,f_*^{-1}D+\sum (1-1/m_i)E_i)$.
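The reduction described above can be condensed into one display (this merely restates the argument; the isomorphism is the one established in the text):

```latex
\[
  \pi_1\big(Y_\sm,\left.D\right|_{Y_\sm}\big)
  \;\cong\;
  \pi_1\Big(X \setminus \big(\textstyle\bigcup_i E_i
    \cup \supp(f_*^{-1}D)\big)\Big) \Big/\, N,
\]
% where N is the normal subgroup generated by the \gamma_i^{m_i} (and
% by the corresponding powers of loops around the components of
% f_*^{-1}D), i.e. the right-hand side is the orbifold fundamental
% group of the smooth compact orbifold (X, f_*^{-1}D + \sum (1-1/m_i)E_i).
```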
Then the remaining task in order to prove finiteness of the latter is to transfer the techniques of [@Takayama] to the orbifold setting, which is done in Part \[part:loctoglob\] of the present paper. Possible alternative ways of proof {#possible-alternative-ways-of-proof .unnumbered} ---------------------------------- We consider two alternative approaches to prove Theorems \[thm:regional\] and \[thm:global\]. As we mentioned before, simple connectedness of Fano manifolds can be proven by showing that they are rationally connected, from which it follows that their fundamental group is finite. The notion of rational connectedness can also be formulated for orbifolds, and here as well, rational connectedness of a smooth orbifold ${\mathcal{X}}=(X, \Delta)$ (in the sense of Campana) implies finiteness of the orbifold fundamental group of ${\mathcal{X}}$ [@CampSecOrbi Cor. 12.25]. So rational connectedness of the orbifold $(X,f_*^{-1}D+\sum (1-1/m_i)E_i)$ supported on a log resolution of a weakly Fano pair $(Y,D)$ would yield an alternative proof of Theorem \[thm:loctoglob\]. But the definition of rational connectedness for orbifolds is subtle [@CampSecOrbi Déf. 6.11, Rem. 6.12] and we have no idea how to prove it for the orbifold $(X,f_*^{-1}D+\sum (1-1/m_i)E_i)$. A different approach - which would yield a direct induction-free proof of Theorem \[thm:global\] in any dimension - is the following. In Proposition \[prop:fundfin\], we prove finiteness of the orbifold fundamental group of $(X,f_*^{-1}D+\sum (1-1/m_i)E_i)$ *for any choice of $m_i$*, supported on a log resolution $X$ of a weakly Fano pair $(Y,D)$.
Instead of arguing with the induction hypothesis of finiteness of the regional fundamental group of klt singularities, one could also try to show that if $\pi_1(X \setminus \supp(f_*^{-1}D + \sum E_i))/\langle\langle \gamma_1^{m_1},\ldots,\gamma_k^{m_k}\rangle\rangle$ is finite for every choice of $m_i$, then $\pi_1(X \setminus \supp(f_*^{-1}D + \sum E_i))$ is already finite. It is known that there are finitely presented infinite groups with trivial profinite completion, but our situation is slightly different. By passing to some ramified finite cover of $(Y,D)$ (which is still weakly Fano), we can assume that $\hat{\pi}_1(Y_\sm,\left. D\right|_{Y_\sm})$ is trivial, which means that $\pi_1(Y_\sm,\left. D\right|_{Y_\sm})$ has no proper normal subgroups of finite index. So the normal subgroup $\langle\langle \gamma_1^{m_1},\ldots,\gamma_k^{m_k}\rangle\rangle$ is the whole group $\pi_1(X \setminus \supp(f_*^{-1}D + \sum E_i))$ for any choice of $m_i$. This seems to be a strong property and one could ask if infinite finitely presented groups of this kind even exist. But they do. Mark Sapir sent us an example: the Thompson group $T$, which is simple, finitely presented, and infinite. It is generated by three elements, and two of them have infinite order, so all normal subgroups generated by any choice of powers of these elements are the whole group $T$. We want to remark that $\pi_1(X \setminus \supp(f_*^{-1}D + \sum E_i))$ is a so-called *quasiprojective group*, that is, the fundamental group of a smooth quasiprojective variety. These groups satisfy some strong properties, see e.g. [@JumpLoci Sec. 1.5]. We do not know if it is possible to show that the negation of the above property is among them. Structure of the paper {#structure-of-the-paper .unnumbered} ---------------------- In Part \[part:loctoglob\] of the paper, we prove Theorem \[thm:loctoglob\].
While the proof itself happens in Section \[sec:loctoglob\], in Sections \[sec:orbifold\] to \[sec:maxsubsp\] we review the definitions of complex orbifolds and basic related notions - e.g. of orbibundles, orbisheaves, and orbimetrics - but also transfer more sophisticated concepts for complex manifolds to the orbifold case. In Part \[part:globtoloc\], we prove Theorem \[thm:globtoloc\]. After briefly recalling the notion of Whitney stratifications in Section \[sec:Whit\], we carefully analyze Lemmata 3.1 and 3.4 of [@TianXu] in Section \[sec:WorkTianXu\]. In the last Section \[sec:Le31mod\], we prove Theorem \[thm:globtoloc\] by modifying Lemma 3.1 appropriately. Acknowledgements {#acknowledgements .unnumbered} ---------------- The author wishes to thank Stefan Kebekus and Joaquín Moraga for several discussions related to the topics of this paper. He is grateful to Chenyang Xu and Zhiyu Tian for suggesting the modification of [@TianXu Le. 3.1]. Thanks also go to Mark Sapir for providing the example of the Thompson group $T$, and to Andreas Demleitner for a discussion on the same topic. \[part:loctoglob\] Complex orbifolds and orbimaps {#sec:orbifold} ============================== The definition of an *orbifold* - under the name of *$V$-manifold* - goes back to Satake [@Satake] in the real and Baily [@Baily] in the complex case. The notion was then rediscovered by Thurston [@Thurston], who finally gave it the name *orbifold*. Complex orbifolds are *locally* - but not necessarily globally - quotients of smooth complex manifolds, which makes them complex analytic spaces with an additional local quotient structure. We will use the following definition, see e.g. [@Comar Sec. 2.1]. \[def:orbifold\] Let $X$ be a complex analytic space of dimension $n$.
An *orbifold chart* on $X$ is a tuple $(U',G,\varphi,U)$, where $U' \subseteq \CC^n$ is a connected open complex analytic subspace, $G$ is a finite subgroup of the automorphism group of $U'$, and $\varphi\colon U' \to U \subseteq X$ is a proper and finite holomorphic map to the open subspace $U \subseteq X$, such that $\varphi \circ g =\varphi$ for every $g \in G$. We require the induced map $U'/G \to U$ to be a homeomorphism. An *injection* between two orbifold charts $(U',G,\varphi,U)$ and $(V',H,\psi,V)$ is a holomorphic embedding $\lambda:U' \to V'$, such that $\psi \circ \lambda = \varphi$. An *orbifold atlas* on $X$ is a family ${\mathcal{U}}=\{(U_i',G_i,\varphi_i,U_i)\}$, such that $X=\bigcup_{i} U_i$, and, for two charts $(U_i',G_i,\varphi_i,U_i)$ and $(U_j',G_j,\varphi_j,U_j)$, and any $x \in U_i \cap U_j$, there is a third chart $(U_k',G_k,\varphi_k,U_k)$, such that $x \in U_k \subseteq U_i \cap U_j$, and there are injections $\lambda_{ik}:U_k'\to U_i'$ and $\lambda_{jk}:U_k'\to U_j'$. An atlas ${\mathcal{U}}$ is a *refinement* of another atlas ${\mathcal{V}}$, if for every chart $V'$ of ${\mathcal{V}}$, there is an injection $U' \to V'$ from a chart of ${\mathcal{U}}$. An atlas ${\mathcal{U}}$ is *maximal*, if it has no nontrivial refinement. Let ${\mathcal{U}}$ be a maximal orbifold atlas on $X$. Then we call the pair ${\mathcal{X}}:=(X,{\mathcal{U}})$ a (complex) *orbifold*. We will sometimes call $U' \to U$ a *local uniformization* and $G$ the *local uniformizing group*. By the slice theorem, there is always an atlas consisting of *linear charts* $(\CC^n, G, \varphi,U)$, such that $G$ is a subgroup of the unitary group $U(n)$ [@MoerPronk Rem. (5)]. The actions of the local uniformizing groups $G \subset \Aut(U')$ and the injections $\lambda:U'\to V'$ behave well with respect to each other. Consider for example a chart $(U',G,\varphi,U)$ and an element $g \in G$; then since $\varphi \circ g =\varphi$ holds, $g \colon U' \to U'$ is an injection.
Moreover, the following holds [@MoerPronk Rem. (3), Prop. A.1]. \[le:injgroup\] Let ${\mathcal{X}}:=(X,{\mathcal{U}})$ be a complex orbifold and $(U',G,\varphi,U)$, $(V',H,\psi,V)$ two orbifold charts on ${\mathcal{X}}$. Let $\lambda, \mu \colon U' \to V'$ be two injections between the charts. Then there is a unique $h \in H$, such that $h \circ \lambda=\mu$. In particular, for $g \in G$ the composition $\mu:=\lambda \circ g$ defines an injection $U' \to V'$. The unique $h \in H$ with $\lambda \circ g = h \circ \lambda$ is denoted by $\lambda(g)$. The induced map $\lambda:G \to H$ is an injective group homomorphism. The following is a direct consequence of Lemma \[le:injgroup\], which we haven’t found in the literature. Let ${\mathcal{X}}:=(X,{\mathcal{U}})$ be a complex orbifold and $(U',G,\varphi,U)$ an orbifold chart on ${\mathcal{X}}$. Let $\lambda \colon U' \to U'$ be an injection from $(U',G,\varphi,U)$ to itself. Then there is a $g \in G$, such that $\lambda=g$. Let $(U',G,\varphi,U)$ be an orbifold chart around $x \in U \subseteq X$, and $p \in \varphi^{-1}(x)$. Up to conjugacy, the isotropy subgroup $G_p$ is determined by $x$. Moreover, according to Lemma \[le:injgroup\], if $(V',H,\psi,V)$ is another chart around $x$ and $q \in \psi^{-1}(x)$, then $G_p \cong H_q$, so the following is well defined up to isomorphism [@sasakian Def. 4.1.2]. Let ${\mathcal{X}}=(X,{\mathcal{U}})$ be an orbifold and $x \in X$. For an orbifold chart $(U',G,\varphi,U)$ around $x$ and $p \in \varphi^{-1}(x)$, we call $G_x:=G_p$ the *isotropy group of $x$*. We call those $x \in X$ with $G_x=\{ e_G\}$ *orbifold regular points*, and all points with $G_x\neq \{ e_G\}$ *orbifold singular points*. Note that the singular points (in the usual sense) of the complex analytic space $X$ are a subset of the orbifold singular points of ${\mathcal{X}}=(X,{\mathcal{U}})$.
In particular, an orbifold singular point $x$ is a smooth point of $X$ if and only if $G_x$ is a *reflection group* (for some and in consequence for all local uniformizations around $x$). A direct consequence of the well-definedness of the isotropy group of points of ${\mathcal{X}}$ is the following stricter version of the already mentioned [@MoerPronk Rem. (5)], which again we haven’t found in the literature. Let ${\mathcal{X}}:=(X,{\mathcal{U}})$ be a complex orbifold and $x \in X$ with isotropy group $G_x$. Then there is an orbifold chart $(\CC^n,G_x,\phi,U)$ around $x$, such that $\phi^{-1}(x)=0 \in \CC^n$ and $G_x$ acts as a subgroup of $U(n)$. Campana in [@CampFirstOrbi] introduced another notion of orbifold for pairs $(X,\Delta)$, where $\Delta$ is a certain divisor on a complex analytic space $X$. We will see that under certain conditions - which we will encounter in our setting - his notion is equivalent to that of a complex orbifold we gave in Definition \[def:orbifold\]. In order to distinguish between the two notions, we will call such pairs $(X,\Delta)$ *geometric orbifolds*, following [@CampSchier]. A *geometric orbifold* is a pair $(X,\Delta)$, where $X$ is a complex analytic space and $\Delta$ a divisor of the form $$\sum_{i \in I} \left(1-\frac{1}{m_i}\right) \Delta_i,$$ where we assume that the $m_i$ are integers greater than zero and the $\Delta_i$ are prime divisors. We say that the geometric orbifold $(X,\Delta)$ is *smooth*, if $X$ is a smooth complex manifold and $\supp(\Delta)$ is a simple normal crossing divisor. \[rem:geomorb\] A smooth geometric $n$-dimensional orbifold $(X,\Delta)$ can always be represented by a complex orbifold in the sense of Definition \[def:orbifold\]. Consider a local chart $\CC^n \to V \subset X$ of $X$ as an analytic space.
Then after suitable adjustment, in this chart, $\Delta$ is given by $$\prod_{i=1}^{k} x_i^{1-1/m_i}.$$ So we have a *local uniformization* $$\begin{aligned} \CC^n &\to \CC^n \\ (x_1,\ldots,x_n) &\mapsto (x_1^{m_1},\ldots,x_k^{m_k},x_{k+1},\ldots,x_n),\end{aligned}$$ which is nothing but the quotient map for the diagonal action of $\ZZ/m_1\ZZ \times \ldots \times \ZZ/m_k\ZZ$ on $\CC^n$ by roots of unity. If a local analytic chart of $X$ does not intersect $\Delta$, we can take it as an orbifold chart. The compatibility of these charts is straightforward. We call this the *canonical orbifold structure* of a smooth geometric orbifold. The local uniformizing subgroups of the canonical orbifold structure of a smooth geometric orbifold are reflection groups. This can be seen from the fact that the analytic space $X$ is smooth or directly from the explicit orbifold charts in Remark \[rem:geomorb\]. The analogy between geometric orbifolds $(X,\Delta)$ and complex orbifolds ${\mathcal{X}}$ actually goes further [@BoyGalKoll Sec. 2], but we are only interested in the particular case of *smooth geometric orbifolds* here. We close this section with the definition of *orbimaps*. The original definitions from [@Satake; @Baily] do not in general induce morphisms of *orbibundles* and *orbisheaves* - which we will define in Sections \[sec:orbibundles\] and \[sec:orbisheaves\] respectively. This has been realized in [@MoerPronk] and additional compatibility criteria have been introduced to remedy this problem. This led to the equivalent notions of ’strong’ [@MoerPronk] and ’good’ [@ChenRuan] orbifold maps. Since we will work with orbibundles and -sheaves, for us the definition of a *holomorphic orbimap* includes the additional compatibility criteria. That is to say, our maps are always ’strong’/’good’, compare [@sasakian Def. 4.1.8]. \[def:orbimap\] Let ${\mathcal{X}}=(X,{\mathcal{U}})$ and ${\mathcal{Y}}=(Y,{\mathcal{V}})$ be two complex orbifolds.
A map $f:X \to Y$ is called a *holomorphic orbimap* if the following hold: 1. For any $x \in X$, there are orbifold charts $(U_i',G_i,\varphi_i,U_i)$ of ${\mathcal{X}}$ around $x$ and $(V_i',H_i,\psi_i,V_i)$ of ${\mathcal{Y}}$ around $f(x)$, such that 1. $f(U_i) \subseteq V_i$ and 2. there is a holomorphic map $f'_i:U_i' \to V_i'$ satisfying $\psi_i \circ f'_i = f \circ \varphi_i$. 2. For any pair of charts $(U_i',G_i,\varphi_i,U_i)$ and $(U_j',G_j,\varphi_j,U_j)$ on ${\mathcal{X}}$, any corresponding pair $(V_i',H_i,\psi_i,V_i)$ and $(V_j',H_j,\psi_j,V_j)$ of charts on ${\mathcal{Y}}$ in the sense of item (1), and any injection $\lambda_{ji}\colon U_i' \to U_j'$, there is an injection $\mu_{ji}\colon V_i' \to V_j'$, such that 1. $f'_j \circ \lambda_{ji} = \mu_{ji} \circ f'_i$ and 2. if $(U'_k,G_k,\varphi_k,U_k)$ is another chart on ${\mathcal{X}}$ with an injection $\lambda_{ki}=\lambda_{kj} \circ \lambda_{ji}\colon U_i' \to U_k'$, and $(V'_k,H_k,\psi_k,V_k)$ the corresponding chart on ${\mathcal{Y}}$, then $\mu_{kj} \circ \mu_{ji} = \mu_{ki}$. In the setting of Definition \[def:orbimap\], consider an injection $\lambda_{ji}\colon U_i' \to U_j'$ and two different injections $\mu_{ji}\colon V_i' \to V_j'$ and $\mu^*_{ji}\colon V_i' \to V_j'$ both meeting the requirements of Item (2), (a). Then according to Lemma \[le:injgroup\], there is a *unique* $h \in H_j$, such that $\mu_{ji}= h \circ \mu_{ji}^*$. So the $\mu_{ji}$ are determined only up to multiplication with elements of $H_j$. Now let $i=j$ and consider an injection $\lambda_{ji}=g \colon U_i' \to U_i'$ given by an element $g \in G_i$. *In contrast* to the second statement of Lemma \[le:injgroup\], now there is not necessarily a *unique* $h \in H_i$, such that $f'_i \circ g = h \circ f'_i$.
But if we fix an assignment $\lambda_{ji} \mapsto \mu_{ji}$ between injections on ${\mathcal{X}}$ and ${\mathcal{Y}}$ fulfilling the requirements of Definition \[def:orbimap\], then we get *group homomorphisms* $G_i \to H_i$ for all $i$ [@OrbiGromovWitten Sec. 4.4]. A system of charts on ${\mathcal{X}}$ and ${\mathcal{Y}}$ fulfilling Item (1) of Definition \[def:orbimap\] together with an assignment $\lambda_{ji} \mapsto \mu_{ji}$ between injections of such charts is called a *compatible system* in [@OrbiGromovWitten Sec. 4.4]. If a map between orbifolds admits a compatible system, it is called ’good’ [@OrbiGromovWitten Def. 4.4.1]. The problem is that one map may admit different compatible systems, as the following easy example shows [@OrbiGromovWitten Ex. 4.4.2b]. \[ex:diffcompsys\] Consider ${\mathcal{X}}=(\CC,{\mathcal{U}})$ with ${\mathcal{U}}=\{(\CC,\ZZ/2\ZZ,\{x \mapsto x^2\},\CC)\}$ and ${\mathcal{Y}}=(\CC^2,{\mathcal{V}})$ with ${\mathcal{V}}=\{(\CC^2,(\ZZ/2\ZZ)^2,\{(x,y) \mapsto (x^2,y^2)\},\CC^2)\}$. Both ${\mathcal{X}}$ and ${\mathcal{Y}}$ are smooth orbifolds. Consider the map $f: x \mapsto (x,0)$ between the underlying spaces. Then it is clear that the two possible lifts of $f$ in the orbifold charts are $x \mapsto (x,0)$ and $x \mapsto (-x,0)$. But there are also essentially different compatible systems. For $g=1 \in \ZZ/2\ZZ$, acting on $\CC$ by $x \mapsto -x$, it is possible to choose $h=(1,0) \in (\ZZ/2\ZZ)^2$, acting by $(x,y) \mapsto (-x,y)$, or $h'=(1,1) \in (\ZZ/2\ZZ)^2$, acting by $(x,y) \mapsto (-x,-y)$. Both choices meet the requirements of Definition \[def:orbimap\]. As [@OrbiGromovWitten Le. 4.4.3] states, for any compatible system, there is a unique pullback of orbibundles. But for different compatible systems as in Example \[ex:diffcompsys\], these pullbacks may differ. Nevertheless, the only holomorphic orbimaps we encounter are *orbifold (universal) covers*.
These always have a unique compatible system, since they are locally trivial in the orbifold sense. Orbibundles {#sec:orbibundles} =========== In this section, we define orbifold vector bundles or *orbibundles* as a reasonable generalization of (complex) vector bundles over (complex) manifolds. Probably the most important notion is that of the *orbifold tangent space* $T{\mathcal{X}}$ and related constructions. \[def:orbibundle\] Let ${\mathcal{X}}=(X,{\mathcal{U}})$ be a complex orbifold. An *orbifold vector bundle* or *orbibundle* of rank $k$ over ${\mathcal{X}}$ is a collection of vector bundles $\pi'_i \colon E'_{i} \to U_i'$ with fiber $\CC^k$ for each orbifold chart $(U_i',G_i,\varphi_i,U_i)$ of ${\mathcal{X}}$, together with an action of $G_i$ on $E'_{i}$ by (ordinary) vector bundle maps, such that: 1. Each $\pi'_i$ is $G_i$-equivariant, so that the following diagram is commutative for any $g \in G_i$: $$\xymatrix{ E'_{i} \ar[r]^{g} \ar[d]^{\pi'_i} & E'_i \ar[d]^{\pi'_i} \\ U_i' \ar[r]^{g} & U_i'. }$$ 2. For any injection $\lambda_{ji}:U_i' \to U_j'$ of charts on ${\mathcal{X}}$, there is a bundle isomorphism $\lambda_{ji}':E_i' \to \left.E_j'\right|_{\im(\lambda_{ji})}$, such that $\lambda_{ji}' \circ g = \lambda_{ji}(g) \circ \lambda_{ji}'$, where by $\lambda_{ji}: G_i \to G_j$ we denote the injective group homomorphism from Lemma \[le:injgroup\] as well. 3. For two injections $\lambda_{ji}:U_i' \to U_j'$ and $\lambda_{kj}:U_j' \to U_k'$, we have $(\lambda_{kj} \circ \lambda_{ji})'= \lambda_{kj}' \circ \lambda_{ji}'$. \[rem:loctriv\] The total space $E$ of an orbibundle is obtained from the local bundles $E'_i$ in the following way [@Comar Sec. 2.2]. Choosing small enough orbifold charts on ${\mathcal{X}}$, we can assume that $E_i' \cong U_i' \times \CC^k$ and the action of $G_i$ on $U_i' \times \CC^k$ is diagonal, acting as a subgroup of $\GL(k)$ on the second factor.
Then since $\pi_i'$ is equivariant, setting $E_i:=E_i'/G_i$, we have a unique `projection' $\pi_i$, so that the following diagram commutes: $$\xymatrix{ E'_{i} \ar[r] \ar[d]^{\pi'_i} & E_i \ar[d]^{\pi_i} \\ U_i' \ar[r]^{\varphi_i} & U_i. }$$ Now we can glue the sets $E_i$ in the following way, stemming from the gluing condition on ${\mathcal{X}}$: let $x \in U_i \cap U_j \neq \emptyset$. Then according to Definition \[def:orbifold\] there is a chart $x \in U_k$ with injections $\lambda_{ik}:U'_k \to U_i'$ and $\lambda_{jk}:U'_k \to U_j'$, which by Definition \[def:orbibundle\] (2) induce bundle isomorphisms $\lambda_{jk}':E_k' \to \left.E_j'\right|_{\im(\lambda_{jk})}$ and $\lambda_{ik}':E_k' \to \left.E_i'\right|_{\im(\lambda_{ik})}$. Gluing $E_i$ and $E_j$ according to this data results in an orbifold ${\mathcal{E}}$ with underlying space $E$ and an orbimap $\pi \colon {\mathcal{E}} \to {\mathcal{X}}$, which is locally given by the equivariant projections $\pi_i' \colon E_i' \to U_i'$ [@Comar Sec. 2.2]. \[ex:trivline\] Probably the easiest, but still important, example of an orbibundle is the *trivial line bundle*, given by trivial line bundles $E_i':=U_i' \times \CC$ on each chart $U_i'$ *together* with a *trivial* action of $G_i$ on the second factor. Then clearly $E_i \cong U_i \times \CC$ and the total space ${\mathcal{E}}$ is just ${\mathcal{X}} \times \CC$. Another very important example is that of the *tangent orbibundle* $T{\mathcal{X}}$. It can be constructed in the following natural way [@sasakian Ex. 4.2.10]. On a chart $U_i'$, take the tangent bundle $TU_i'\cong U_i' \times \CC^n$ and for any injection $\lambda_{ji}:U_i' \to U_j'$ of charts on ${\mathcal{X}}$, the bundle isomorphism $\lambda_{ji}':E_i' \to \left.E_j'\right|_{\im(\lambda_{ji})}$ is given by $\lambda_{ji}$ on the first factor and the *Jacobian* $\Jac (\lambda_{ji})$ on the second one.
This construction obviously generalizes to the *cotangent orbibundle* $T^*{\mathcal{X}}$, (symmetric, antisymmetric) tensor orbibundles et cetera [@OrbifoldSurvey]. Locally around $x \in X$, the fiber $\pi^{-1}(x) \subseteq T{\mathcal{X}}$ is not isomorphic to $\CC^n$, but is biholomorphic to a small neighbourhood of $x \in X$, because in a local chart, the actions of $g \in G_i$ on $U_i'$ and of $\Jac(g)$ on $T_{\varphi_i^{-1}(x)}U_i'$ are essentially the same. On the other hand, even if $(X,\Delta)$ is a smooth geometric orbifold with canonical orbifold structure ${\mathcal{X}}$, the underlying space of $T{\mathcal{X}}$ is *not necessarily* the ordinary tangent space $TX$. Now having defined orbibundles, we have to ask ourselves what is a reasonable definition of (holomorphic) *sections* of these. Obviously for an orbibundle $\pi: {\mathcal{E}} \to {\mathcal{X}}$, a section of ${\mathcal{E}}$ should be a holomorphic orbimap $s:{\mathcal{X}} \to {\mathcal{E}}$ satisfying $\pi \circ s = \id_{\mathcal{X}}$. But what does this mean on a local chart $\pi'_i\colon E'_i \to U'_i$? As we have an action of $G_i$ on $U_i'$ and $E_i'$, $s$ locally corresponds to an *equivariant* holomorphic section $s_i:U_i' \to E_i'$, meaning $g \circ s_i = s_i \circ g$ for any $g \in G_i$. Of course the local sections must be compatible with injections as well, so that we arrive at the following definition [@sasakian Def. 4.2.9]. \[def:orbsecdiff\] Let $\pi: {\mathcal{E}} \to {\mathcal{X}}$ be an orbibundle. Then a holomorphic *section* of ${\mathcal{E}}$ is given by either of the following two equivalent definitions: 1. $s:{\mathcal{X}} \to {\mathcal{E}}$ is a holomorphic orbimap satisfying $\pi \circ s = \id_{\mathcal{X}}$. 2.
A collection of holomorphic sections $s_i:U_i' \to E_i'$ of the local bundles over charts of ${\mathcal{X}}$, such that for any injection $\lambda_{ji}\colon U_i' \to U_j'$, the following diagram commutes: $$\xymatrix@C=3em{ E'_{i} \ar[r]^{\lambda_{ji}'} & \left.E'_j\right|_{\im(\lambda_{ji})} \\ U_i' \ar[r]^{\lambda_{ji}} \ar[u]_{s_i} & \im(\lambda_{ji}) \ar[u]_{\left.s_j\right|_{\im(\lambda_{ji})}}. }$$ *Equivariance* of the local sections $s_i$ obviously is the right requirement, since otherwise they would not glue to a global section $s:{\mathcal{X}} \to {\mathcal{E}}$. When (locally) the action of $G_i$ on the fiber is *trivial*, then of course *equivariance* means nothing else than *invariance*, as is the case for the trivial line bundle from Example \[ex:trivline\]. Sections of this bundle clearly are in a one-to-one correspondence with holomorphic orbimaps from ${\mathcal{X}}$ to $\CC$ endowed with the trivial orbifold structure. So they are a good candidate for a *structure orbisheaf* on ${\mathcal{X}}$, see Section \[sec:orbisheaves\]. Nonetheless, in order to get *coherent sheaves* on the underlying space $X$, we have to deal with *invariant* sections of line bundles $E_i' \to U_i'$ or sheaves $\mathcal{F}'$ on the local uniformizations $U_i'$. The other way round works just as well. If the underlying space $X$ of a complex orbifold is a manifold, then line bundles and Weil divisors coincide and we can pull them back to the local uniformizations, so they give orbibundles on ${\mathcal{X}}$. Now for example, we can ask ourselves which divisor on $X$ gives the canonical orbibundle $K_{{\mathcal{X}}}$.
\[ex:kandiv\] To answer this question, we just have to pull back a top differential form in a local uniformization $$\begin{aligned} \varphi\colon \CC^n &\to \CC^n \\ (x_1,\ldots,x_n) &\mapsto (x_1^{m_1},\ldots,x_k^{m_k},x_{k+1},\ldots,x_n).\end{aligned}$$ We clearly have $$\varphi^* (dz_1 \wedge \ldots \wedge dz_n) = \prod_{i=1}^k m_i x_i^{m_i-1} dx_1 \wedge \ldots \wedge dx_n.$$ Thus we have to multiply with functions that along a ramification divisor $x_i=0$ are allowed to have poles of order at most $m_i-1$. On $X$, this means we have to multiply with functions that on the branch divisors $z_i=0$ have poles of order at most $\frac{m_i-1}{m_i}$. So $K_{\mathcal{X}}$ locally is the pullback of the *$\QQ$-divisor* $K_X + \Delta$ [@sasakian Prop. 4.4.15]. Orbisheaves {#sec:orbisheaves} =========== We first introduce the notion of an orbisheaf following [@MoerPronk] and [@sasakian Def. 4.2.1]. Let ${\mathcal{X}}=(X,{\mathcal{U}})$ be a complex orbifold. An *orbisheaf* $\mathcal{F}$ on ${\mathcal{X}}$ consists of a sheaf ${\mathcal{F}}_i'$ over $U_i'$ for each orbifold chart $(U_i',G_i,\varphi_i,U_i)$ of ${\mathcal{X}}$, such that for each injection $\lambda_{ji}:U_i' \to U_j'$ there is an isomorphism of sheaves ${\mathcal{F}}(\lambda_{ji})\colon {\mathcal{F}}_i' \to \lambda_{ji}^* {\mathcal{F}}_j'$, which is functorial. We are mainly interested in sheaves of modules over a reasonable *structure sheaf*, so first, we have to define such a structure sheaf, see [@sasakian Def. 4.2.2]. The *structure orbisheaf* ${\mathcal{O}}_{{\mathcal{X}}}$ is the orbisheaf consisting of the structure sheaves ${\mathcal{O}}_{U_i'}$ on each orbifold chart $U_i'$. On a complex orbifold ${\mathcal{X}}$, by ${\mathcal{O}}_{{\mathcal{X}}}$ we will always denote the structure sheaf of holomorphic functions.
It is clear that this definition will neither give us a sheaf on the underlying space $X$ nor coincide with the holomorphic sections in the sense of Definition \[def:orbsecdiff\] of the trivial orbibundle, see Remark \[rem:loctriv\]. We have to use local $G_i$-invariant sections of such sheaves and glue them together over $X$ [@sasakian Lemma 4.2.4]. We will often work with these invariant sections of orbisheaves (or *invariant* local sections of orbibundles, which do *not in general* coincide with the *equivariant* sections from Definition \[def:orbsecdiff\]). We will always denote sheaves on $X$ coming from invariant local sections of orbisheaves $\mathcal{F}$ by ${\mathcal{F}}_X$. In particular $\left({\mathcal{O}}_{\mathcal{X}}\right)_X \cong {\mathcal{O}}_X$ holds for the structure sheaves. Now recall that the functor $V \mapsto V^G$ taking a vector space with an action of a *finite* group $G$ to its $G$-invariant subspace is exact. This means in particular that for a *coherent* orbisheaf ${\mathcal{F}}$ of ${\mathcal{O}}_{\mathcal{X}}$-modules, the sheaf ${\mathcal{F}}_X$ made up of (locally) $G_i$-invariant sections is a *coherent* sheaf of ${\mathcal{O}}_X$-modules. As exact sequences are preserved, it also makes sense to formulate orbisheaf cohomology, orbifold Dolbeault cohomology et cetera, see Section \[sec:L2\]. Orbimetrics {#sec:orbimetrics} =========== In this section, we consider metrics on orbifolds, or *orbimetrics*. By the preceding considerations, it is clear that these should be (invariant) metrics on the local uniformizations $U_i'$ of an orbifold ${\mathcal{X}}=(X,{\mathcal{U}})$. Let ${\mathcal{X}}=(X,{\mathcal{U}})$ be a complex orbifold and ${\mathcal{E}} \to {\mathcal{X}}$ an orbibundle. A *Hermitian orbimetric* on ${\mathcal{E}}$ is a collection of Hermitian metrics $h_i'$ on the local uniformizations $E_i' \to U_i'$, such that all $h_i'$ are $G_i$-invariant and all injections are Hermitian isometries.
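To give a minimal example (an illustration added here, not taken from the cited sources): consider the trivial line orbibundle of Example \[ex:trivline\] on a chart $(U_i',G_i,\varphi_i,U_i)$ with $U_i' \subseteq \CC^n$ and $G_i$ acting linearly by unitary maps. Then $$h_i'\left((x,v),(x,w)\right) := e^{-\|x\|^2}\, v \bar{w}$$ is a $G_i$-invariant Hermitian metric on $E_i' = U_i' \times \CC$, since the action preserves $\|x\|$ and is trivial on the fiber. A Hermitian orbimetric additionally requires such local choices to be compatible with all injections; for the constant metric $v\bar{w}$ this is automatic.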
We similarly can define Riemannian orbimetrics [@sasakian Def. 4.2.11], Kähler orbiforms, Kähler orbifolds [@MaMarinescu Def. 5.4.7], positive line orbibundles [@MaMarinescu Prop. 5.4.8], Hodge orbifolds et cetera, all as $G_i$-invariant objects on the local uniformizations $U_i'$ by the usual definitions. On the other hand, if the underlying space $X$ of a complex orbifold ${\mathcal{X}}=(X,{\mathcal{U}})$ is smooth, then (usual) divisors or line bundles on $X$ can be pulled back to the local uniformizations and thus define orbibundles, as we have seen in Example \[ex:kandiv\] in the case of the canonical divisor. Now when the underlying space $X$ is even a Kähler manifold $(X,\omega)$ with Kähler form $\omega$ (in the usual sense), then Claudon [@claudon Prop. 2.1] has constructed a Kähler *orbiform* $\omega'$ out of $\omega$ in the following way. \[ex:orbmet\] Let ${\mathcal{X}}=(X,\Delta=\sum_{j=1}^m (1-1/m_j)\Delta_j)$ be a complex orbifold, such that $(X,\omega)$ is a Kähler manifold for some $(1,1)$-form $\omega$ on $X$. In a local uniformization $\varphi \colon \CC^n \cong U_i' \to U_i \cong \CC^n$, we can assume that on $U_i$ with coordinates $z_1,\ldots,z_n$, the Kähler form $\omega$ is given by $$\sum_{j=1}^n i \del \delbar \left|z_j \right|^{2} = i \sum_{j=1}^n \dif z_j \wedge \dif \zbar_j.$$ Analogously to Example \[ex:kandiv\], the pullback under $\varphi$ is $$\varphi^*(\omega) = i \sum_{j=1}^n m_j^2\left|x_j\right|^{2(m_j-1)} \dif x_j \wedge \dif \xbar_j,$$ where $m_j=1$ if $x_j=0$ is not the restriction of a divisor $\Delta_j$. This form is clearly degenerate at the origin if $\Delta \cap U_i \neq \emptyset$. In particular, it is not a Kähler orbiform. Now consider the global $(1,1)$-form $\omega_\Delta$ with values in ${\mathcal{O}}_X(2\Delta)$ given by $$\omega_\Delta = \sum_{j=1}^k i \del \delbar \left|s_j \right|^{2/m_j}$$ where $s_j \in {\mathcal{O}}_X(\Delta_j)$ is a section defining $\Delta_j$.
Locally we can assume that $s_j$ is just given by $z_j$, so the pullback by $\varphi$ is $$\varphi^*(\omega_\Delta) = i \sum_{j=1}^k \dif x_j \wedge \dif \xbar_j.$$ In general, $k < n$, so this form is degenerate as well. Now combine these two into the form $\omega'=\omega+\omega_\Delta$. Then on the one hand, $\omega'$ is smooth on $X \setminus \supp(\Delta)$, and for $c \in \RR_{>0}$ small enough, $\omega' \geq c\omega$ as currents. On the other hand, the pullback $$\varphi^*(\omega') = i \sum_{j=1}^k (1+m_j^2\left|x_j\right|^{2(m_j-1)}) \dif x_j \wedge \dif \xbar_j + i \sum_{j=k+1}^n \dif x_j \wedge \dif \xbar_j$$ is a true Kähler form in the local uniformization $U_i'\cong \CC^n$. What we need here is a stronger result. Consider the following situation: ${\mathcal{X}}=(X,\Delta)$ is a complex orbifold with $X$ a manifold. Let $L$ be an ample line bundle on the manifold $X$. Then according to [@sasakian Thm. 4.3.14] and the preceding paragraph therein, the *first orbifold Chern class* of $L$ is just the usual first Chern class with respect to $X$. Thus $L$ (or the pullback to local uniformizations) defines an ample (or positive) line orbibundle. Now given a Hermitian positive line bundle $(L,h)$ on $X$ with curvature form $\Theta(L,h)$, such that $\omega = i\Theta(L,h)$ is a Kähler form, we want to *explicitly construct* an orbimetric $H$ on $L$ as an orbibundle, such that $i\Theta(L,H)$ is a Kähler *orbiform*. This directly leads to the notion of *singular Hermitian metrics*, introduced in [@DemaillySingPos Def. 2.1]. Let $X$ be a complex manifold and $(L,h)$ a Hermitian line bundle on $X$. A *singular Hermitian metric* $H$ is a metric on $L$, given in a local trivialization $L \supseteq V \cong U \times \CC$ by $H=e^{-\phi}h$, where $\phi \in L^1_{\loc}(U,\RR)$ is a locally integrable function on $U$. We call $(L,H)$ a *singular Hermitian line bundle*. Due to [@MaMarinescu Def.
2.3.2], the *curvature current* of $(L,H)$ is given by $$\Theta(L,H)=\Theta(L,h)+ \del \delbar \phi.$$ Thus we have the following. \[prop:kaehlerorbi\] Let ${\mathcal{X}}=(X,\Delta=\sum_{j=1}^m (1-1/m_j)\Delta_j)$ be a complex orbifold, where the underlying space $X$ is a manifold. For any $j=1,\ldots,m$, let $s_j \in {\mathcal{O}}_X(\Delta_j)$ be a section defining $\Delta_j$. Let $(L,h)$ be a positive Hermitian line bundle on $X$. Then $(L,H)$ is a positive line orbibundle, where $H=e^{-\phi}h$ is given by $$\phi=\sum_{j=1}^{m} \left|s_j\right|^{2/m_j}.$$ In particular, the form $\omega'$ given by $$\omega':= i \Theta(L,H)= i \Theta(L,h) + i\del \delbar \phi$$ is a Kähler orbiform. Since $(L,h)$ is positive, the form $\omega=i \Theta(L,h)$ is a Kähler form on the complex manifold $X$. On the other hand, $\phi$ is chosen in such a way that $i\del \delbar \phi$ coincides with $\omega_\Delta$ from Example \[ex:orbmet\]. Thus the computations from Example \[ex:orbmet\] verify the claim. Finally note that we can integrate top-degree forms by a partition of unity and by setting $$\int_{U_i} \sigma := \frac{1}{|G_i|} \int_{U_i'} \varphi^*(\sigma)$$ in a local uniformization $(U_i',G_i,\varphi_i,U_i)$, see e.g. [@sasakian Eq. (4.2.2)]. Thus if $({\mathcal{L}},h)$ is a Hermitian orbibundle on a complete Kähler orbifold $({\mathcal{X}},\omega')$, we have a scalar product $$\left\langle s_1, s_2 \right\rangle := \int_X \left\langle s_1, s_2 \right\rangle_{h} dV_{\omega'}$$ for sections of ${\mathcal{L}}$ and an associated $L^2$-norm $\left|\cdot\right|_h$, see [@MaMarinescu Sec. 5.4.2].
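The local computations behind Example \[ex:orbmet\] and Proposition \[prop:kaehlerorbi\], together with the normalization in the integration formula, admit a quick symbolic sanity check. The following sketch is added here and is not part of the cited sources; it uses the identity $4\,\partial_x\partial_{\bar{x}} f = \Delta_{u,v} f$ for $x = u+iv$ and a sample ramification order $m=3$:

```python
import sympy as sp

u, v, r = sp.symbols('u v r', real=True)
m = 3  # sample ramification order; any m >= 2 shows the same behaviour

# Coefficient of i dx /\ dxbar in i*del*delbar(f) equals Laplacian(f)/4.
lap = lambda f: sp.diff(f, u, 2) + sp.diff(f, v, 2)
rho = u**2 + v**2  # |x|^2 in the local uniformization, x = u + i*v

# phi^*(i del delbar |z|^2) has coefficient m^2 |x|^(2(m-1)): degenerate at 0.
assert sp.simplify(lap(rho**m) / 4 - m**2 * rho**(m - 1)) == 0

# phi^* pulls |z|^(2/m) back to |x|^2, so the coefficient is 1:
# a genuine (local) Kaehler form, matching phi^*(omega_Delta).
assert sp.simplify(lap(rho) / 4) == 1

# Integration normalization: for the chart (B_1(0), Z/mZ, x -> x^m, B_1(0))
# and the Euclidean area form sigma on the quotient, (1/|G_i|) int phi^*(sigma)
# recovers the area pi of the unit disk.
area = sp.integrate(2 * sp.pi * m**2 * r**(2 * m - 1), (r, 0, 1)) / m
assert sp.simplify(area - sp.pi) == 0
```

The first assertion reproduces the degenerate coefficient $m_j^2\left|x_j\right|^{2(m_j-1)}$ of $\varphi^*(\omega)$, and the last one reflects that $\varphi \colon x \mapsto x^m$ is an $m$-to-one cover away from the origin.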
The orbifold universal cover and the $\Gamma$-reduction {#sec:orbicover} ======================================================= \[def:orbifund\] The *orbifold fundamental group* of a geometric orbifold ${\mathcal{X}}=(X,\Delta)$ is the quotient $$\pi_1(X,\Delta):=\pi_1(X \setminus \supp(\Delta)) / \langle \gamma_i^{m_i}, i \in I \rangle,$$ where for each $i \in I$, $\gamma_i$ is a small loop around a general point of the divisor $\Delta_i$. Associated to the orbifold fundamental group, there is the notion of the *orbifold universal cover* $\pi \colon \widetilde{{\mathcal{X}}} \to {\mathcal{X}}$. It is a ramified Galois cover between complex analytic spaces, étale over $X \setminus \supp(\Delta)$. Let ${\mathcal{X}}=(X,\Delta)$ be a smooth geometric orbifold. Then over a (sufficiently small) orbifold chart $(U_i',G_i,\varphi_i,U_i)$ of $X$ as in Remark \[rem:geomorb\], the preimage under the orbifold universal cover $\pi \colon \widetilde{{\mathcal{X}}} \to {\mathcal{X}}$ has connected components $V_i$, such that $V_i$ has a local uniformization $(V_i',H_i,\psi_i,V_i)$ with $H_i$ a subgroup of $G_i$. In particular, since $G_i$ is abelian, $H_i$ is so as well and $V_i$ only has toric singularities. Locally, $\left. \pi \right|_{V_i} \colon V_i \to U_i$ is a quotient by $G_i / H_i$ and the lift $V_i' \to U_i'$ is just the identity [@claudon Rem. 1.2]. So in a sense, the universal cover is locally trivial, as we expect from a cover. As we mentioned before, the analogy between geometric and classical orbifolds holds not only if the underlying space is smooth. In particular, on the underlying space $X$ of a classical complex orbifold ${\mathcal{X}}=(X,{\mathcal{U}})$ one can always define a divisor $\Delta$, such that the geometric orbifold $(X,\Delta)$ has the canonical orbifold structure ${\mathcal{X}}=(X,{\mathcal{U}})$, see [@BoyGalKoll p. 561]. In particular, this holds for the orbifold universal cover $\widetilde{{\mathcal{X}}}$.
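As an illustration of Definition \[def:orbifund\] (a standard example, added here and not taken from the references above), consider $\mathbb{P}^1$ with $\Delta = (1-1/p)\{0\} + (1-1/q)\{\infty\}$ for integers $p,q \geq 2$. Since $\pi_1(\mathbb{P}^1 \setminus \{0,\infty\}) = \pi_1(\CC^*) \cong \ZZ$ is generated by a small loop $\gamma$ around $0$, and a small loop around $\infty$ is homotopic to $\gamma^{-1}$, we get $$\pi_1(\mathbb{P}^1,\Delta) \cong \ZZ / \langle \gamma^p, \gamma^q \rangle \cong \ZZ/\gcd(p,q)\ZZ.$$ In particular, for coprime $p \neq q$ this *football orbifold* has trivial orbifold fundamental group, although it is not a manifold.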
But we do not need the structure of a geometric orbifold on $\widetilde{{\mathcal{X}}}$ here. An important observation for us will be that if $X$ is a complex analytic space, $\Delta_1,\ldots,\Delta_m$ are smooth prime divisors on $X$ with normal crossings, and small loops $\gamma_i$ around general points of $\Delta_i$ are of finite order $m_i$ in $\pi_1(X \setminus(\Delta_1 \cup \ldots \cup \Delta_m))$, then $$\pi_1(X \setminus(\Delta_1 \cup \ldots \cup \Delta_m)) = \pi_1\left(X,\sum_{i=1}^m \left(1-\frac{1}{m_i}\right)\Delta_i\right).$$ Note that by the Hopf-Rinow theorem for orbifolds [@Caramello Thm. 4.2.2], the orbifold covers of a complete orbifold (with respect to an orbimetric $\omega'$, cf. Section \[sec:orbimetrics\]) are complete with respect to the pullback metric (since orbifold geodesics can be lifted). In particular, the orbifold universal cover of a compact orbifold with a Hermitian orbimetric is complete with respect to the pullback orbimetric. An important ingredient for us is the *$\Gamma$-reduction* or *Shafarevich map*. This construction has been introduced by Kollár for proper normal projective varieties [@KollarShaf2 Def. 1.4] and independently by Campana for compact Kähler manifolds [@CampGamma1 Thm. 3.5, Def. 3.8]. Formulated on the universal cover $\widetilde{X}$ of a compact Kähler manifold $X$, it says that there is a unique almost holomorphic fibration $\widetilde{\gamma} \colon \widetilde{X} \dasharrow \Gamma(\widetilde{X})$, such that any compact irreducible subvariety of $\widetilde{X}$ through a *very general point* $x \in \widetilde{X}$ is contained in the fiber $\widetilde{\gamma}^{-1}(\widetilde{\gamma}(x))$. The general fibers of $\widetilde{\gamma}$ are exactly the maximal compact subvarieties of $\widetilde{X}$.
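A basic illustration (added here; with empty orbifold divisor): let $X = C \times \mathbb{P}^1$ for a smooth compact curve $C$ of genus at least $2$, so that $\widetilde{X} = \mathbb{D} \times \mathbb{P}^1$ with $\mathbb{D} \subset \CC$ the unit disk. A compact irreducible subvariety of $\widetilde{X}$ projects to a compact analytic subset of $\mathbb{D}$, hence to a point, so the maximal compact subvarieties are exactly the fibers $\{\tau\} \times \mathbb{P}^1$. Accordingly, $\widetilde{\gamma}$ is the projection $\mathbb{D} \times \mathbb{P}^1 \to \mathbb{D} = \Gamma(\widetilde{X})$ and $\gamma$ is the projection $X \to C = \Gamma(X)$.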
The action of $\pi_1(X)$ on $\widetilde{X}$ descends to $\Gamma(\widetilde{X})$ and thus by quotienting induces an almost holomorphic fibration $\gamma \colon X \dasharrow \Gamma(X)$, whose fibers are the maximal subvarieties whose fundamental group has *finite* image in $\pi_1(X)$. In turn, the connected components of the preimages of such fibers are exactly the fibers of $\widetilde{\gamma}$. This concept has been generalized by Claudon in [@claudon] to smooth geometric orbifolds (using Kähler orbiforms as in Example \[ex:orbmet\]), see also [@CampSecOrbi Sec. 12.5]. We have the following [@claudon Thm. 0.2]. \[thm:G-red\] Let ${\mathcal{X}}=(X,\Delta)$ be a compact smooth geometric Kähler orbifold and $\pi \colon \widetilde{{\mathcal{X}}} \to {\mathcal{X}}$ its orbifold universal cover. There are almost holomorphic fibrations $\widetilde{\gamma} \colon \widetilde{{\mathcal{X}}} \dasharrow \Gamma(\widetilde{{\mathcal{X}}})$ and $\gamma \colon {\mathcal{X}} \dasharrow \Gamma({\mathcal{X}})$, such that the diagram $$\xymatrix@C=3em{ \widetilde{{\mathcal{X}}} \ar@{-->}[r]^{\widetilde{\gamma}} \ar[d]^{/ \pi_1({\mathcal{X}})} & \Gamma(\widetilde{{\mathcal{X}}}) \ar[d]^{/ \pi_1({\mathcal{X}})} \\ {\mathcal{X}} \ar@{-->}[r]_{\gamma} & \Gamma({\mathcal{X}}) }$$ commutes and the following hold: 1. If $V \subseteq X$ is a smooth subvariety meeting $\Delta$ transversally, such that the image of $\pi_1(V,\left.\Delta\right|_V)$ in $\pi_1(X,\Delta)$ is finite, and $V$ meets the fiber of $\gamma$ through a very general point, then $V$ is contained in this fiber. 2. Every compact irreducible subvariety of $\widetilde{{\mathcal{X}}}$ through a very general point $x \in \widetilde{{\mathcal{X}}}$ is contained in the fiber $\widetilde{\gamma}^{-1}(\widetilde{\gamma}(x))$. 3. There exist open subsets $X^0 \subset X$ and $\Gamma({\mathcal{X}})^0 \subset \Gamma({\mathcal{X}})$, such that $\left.
\gamma\right|_{X^0} \colon X^0 \to \Gamma({\mathcal{X}})^0$ is a proper holomorphic, topologically locally trivial fibration. Theorem 0.2 of [@claudon] is formulated only on the universal cover, while [@CampSecOrbi Thm. 12.23] is formulated on the orbifold ${\mathcal{X}}$ itself. The connection between the two is [@claudon Le. 2.2]. The third item has not been formulated in the orbifold case, but the argument at the end of the proof of Proposition 2.4 in [@KollarShaf2] works here as well. Dolbeault and $L^2$-cohomology for Kähler orbifolds {#sec:L2} =================================================== Following [@OrbifoldSurvey Sec. 5], we can define orbifold Dolbeault cohomology for complete Kähler orbifolds $({\mathcal{X}}=(X,{\mathcal{U}}),\omega)$ in the following way. Denote by $\Omega_{\mathcal{X}}^{p,q}$ the orbisheaf of $(p,q)$-orbiforms, defined by the usual $(p,q)$-forms on the local uniformizations. The locally invariant sections define the sheaf $\Omega^{p,q}_X$ on the underlying space $X$ and we denote the space of global sections by $\Omega^{p,q}_X(X)$. The exterior derivative $\dif= \del + \delbar$ and the Dolbeault operators are well defined, with $$\del\colon \Omega^{p,q}_X \to \Omega^{p+1,q}_X, \qquad \delbar\colon \Omega^{p,q}_X \to \Omega^{p,q+1}_X.$$ The *$(p,q)$-th orbifold Dolbeault cohomology group* is defined by $$H^{p,q}(X) := \frac{\ker(\delbar\colon \Omega_X^{p,q}(X) \to \Omega_X^{p,q+1}(X))}{\im(\delbar\colon \Omega_X^{p,q-1}(X) \to \Omega_X^{p,q}(X))}.$$ If ${\mathcal{E}} \to {\mathcal{X}}$ is a holomorphic orbibundle, then one can similarly define the Dolbeault complex $(\Omega_X^{p,q}(X,{\mathcal{E}}),\delbar^{\mathcal{E}})$ of $(p,q)$-orbiforms with values in ${\mathcal{E}}$ as well as Dolbeault cohomology groups $H^{p,q}(X,{\mathcal{E}})$. Then the *Dolbeault isomorphism for orbifolds* holds, see [@MaMarinescu Sec. 5.4.2]. Now let ${\mathcal{E}}$ be endowed with a (smooth or singular) Hermitian orbimetric $h$. Following [@MaMarinescu Eq.
(B.4.12)], we define the $L^2$-spaces $$L^2_{p,q}(X,{\mathcal{E}}):=\{s \in \Omega_X^{p,q}(X,{\mathcal{E}});~ \int_X \left| s\right|^2_h dV_\omega < \infty \},$$ and the $L^2$-Dolbeault cohomology groups by $$H_{(2)}^{p,q}(X,{\mathcal{E}}) := \frac{\ker(\delbar^{\mathcal{E}}) \cap L^2_{p,q}(X,{\mathcal{E}})}{\im(\delbar^{\mathcal{E}}) \cap L^2_{p,q}(X,{\mathcal{E}}) }.$$ Well-definedness follows from [@Ballmann Sec. C.3], which can be directly transferred to complete Kähler orbifolds. $L^2$-vanishing for orbifolds {#sec:orbivan} ============================= The singular Hermitian metrics from Section \[sec:orbimetrics\] will be more useful to us than just for constructing Kähler orbiforms from positive line bundles on the underlying space. For a singular Hermitian line bundle $(L,H)$ with $H=e^{-\phi}h$ on a complex manifold $X$, there is the notion of the *$L^2$-sheaf* ${\mathcal{L}}^2(L,H)$ of locally square-integrable sections with respect to $H$, given by $${\mathcal{L}}^2(L,H)(U) = \{ \sigma \in \Gamma(U,L);~ \left|\sigma\right|_h^2 e^{-\phi} \in L_{\loc}^1(U)\},$$ see [@TakayamaNonvanishing Eq. (3.1)]. In particular, the function $\phi$ defines a singular Hermitian metric $e^{-\phi} z \zbar$ on the trivial line bundle $X \times \CC$. This leads us to the definition of the *multiplier ideal sheaf* ${\mathcal{I}}(\phi):={\mathcal{L}}^2(X \times \CC,e^{-\phi})$. In particular, ${\mathcal{L}}^2(L,H) = L \otimes {\mathcal{I}}(\phi)$. Note that the functions $\phi$ may only be given locally, so in this notation $\phi$ can rather be seen as a collection of locally defined functions. On the other hand, it may still be possible to express $\phi$ globally by certain sections as e.g. in Proposition \[prop:kaehlerorbi\]. A *plurisubharmonic* (*psh* for short) function is defined by certain semicontinuity properties, see e.g. [@DemL2Van Def. (1.4)]. We will use the following characterization from [@MaMarinescu Prop.
B.2.10, B.2.16], which is much more immediate in our setting. Let $X$ be a complex analytic manifold. A function $\phi: X \to \RR$ is called *plurisubharmonic* or *psh*, if $i\del \delbar \phi$ is a semipositive form. It is called *strictly psh*, if $\phi \in L^1_\loc(X)$ and $i\del \delbar \phi$ is (strictly) positive. The point is that obviously on the one hand, a singular Hermitian metric $H$ on a positive Hermitian line bundle $(L,h)$ defined by a psh function $\phi$ gives a *positive* $(1,1)$-current $\omega=i\Theta(L,H)$. On the other hand, we have the *Nadel coherence theorem* [@DemL2Van Prop. (5.7)], stating that ${\mathcal{I}}(\phi)$ is a *coherent sheaf* if $\phi$ is psh. This transfers easily to the orbifold setting. First, let us define the analogue of the multiplier ideal sheaf following [@sasakian Def. 5.2.9]. Let ${\mathcal{X}}=(X,\Delta)$ be a complex orbifold and let $(L,H=he^{-\phi})$ be a singular Hermitian orbibundle on ${\mathcal{X}}$. The *multiplier ideal orbisheaf* ${\mathcal{I}}_X(\phi)$ is the orbisheaf defined on local uniformizations $U_i' \to U_i$ by $${\mathcal{I}}_X(\phi)(U_i')=\left\lbrace f \in {\mathcal{O}}_{{\mathcal{X}}}^{G_i}(U_i');~ |f|^2e^{-\phi} \in L^1_{\loc}(U_i')\right\rbrace.$$ The orbifold version of Nadel's coherence theorem follows from the standard version since the functor taking $G_i$-invariant sections is exact by finiteness of $G_i$. Thus we have: Let ${\mathcal{X}}=(X,\Delta)$ be a complex orbifold and $(L,H=he^{-\phi})$ be a singular Hermitian orbibundle on ${\mathcal{X}}$. Then the (pushforward of the) multiplier ideal orbisheaf ${\mathcal{I}}_X(\phi)$ is a coherent sheaf of ${\mathcal{O}}_X$-modules on $X$. The next step is the *Nadel vanishing theorem*. The orbifold version is the following [@DemKoll Thm. 6.5]. \[thm:nadelvan\] Let $({\mathcal{X}},\omega')$ be a Kähler orbifold, that is a complex orbifold ${\mathcal{X}}=(X,\Delta)$ with a Kähler orbiform $\omega'$.
Let $(L,H=he^{-\phi})$ be a singular Hermitian orbibundle on ${\mathcal{X}}$, where $h$ is a smooth Hermitian orbimetric on $L$. Assume that there exists a constant $c \in \RR_{>0}$, such that $i\Theta(L,H)\geq c\omega'$. If $K_{\mathcal{X}} \otimes L$ is an *invertible* sheaf on $X$, then $$H^q(X,K_{\mathcal{X}} \otimes L \otimes {\mathcal{I}}_X(\phi))=0 ~\mathrm{for}~ q \geq 1.$$ Several things have to be noted. First, as Kollár and Demailly stress in the paragraph after [@DemKoll Thm. 6.5], for the orbifold version, it is really necessary that $K_{\mathcal{X}} \otimes L$ is an invertible sheaf on $X$. This is because on the one hand, the above tensor product $K_{\mathcal{X}} \otimes L \otimes {\mathcal{I}}_X(\phi)$ means first taking the usual tensor product on local uniformizations $U_i'$, then taking $G_i$-invariant sections, and finally taking the direct image sheaves on $U_i$. On the other hand, the statement is obtained by $L^2$-estimates (with respect to the weight $e^{-\phi}$) of sections of $K_{\mathcal{X}} \otimes L$ on $X \setminus \supp(\Delta)$. It turns out that for us these $L^2$-estimates are even more important than the statement of Theorem \[thm:nadelvan\]. As they are not explicitly stated in [@DemKoll], we refer to [@DemL2Van Cor. (5.3)]. See also [@TianXu Thm. 4.4] and the subsequent paragraph therein for the orbifold case. \[prop:delbarest\] Let $({\mathcal{X}},\omega')$ be a complete Kähler orbifold. Let $(L,H=he^{-\phi})$ be a singular Hermitian orbibundle on ${\mathcal{X}}$, where $h$ is a smooth Hermitian orbimetric on $L$. Assume that there exists a constant $c \in \RR_{>0}$, such that $i\Theta(L,H)\geq c\omega'$.
Then for any $\delbar$-closed form $g \in L_{p,q}^2({\mathcal{X}}, L)$, there is a form $f \in L_{p,q-1}^2({\mathcal{X}},L)$, with $\delbar f=g$ and $$\int_X \left|f\right|_H^2 \dif V_{\omega'} \leq \frac{1}{qc} \int_X \left|g\right|_H^2 \dif V_{\omega'}.$$ Maximal compact subspaces of orbifold universal covers {#sec:maxsubsp} ====================================================== This section is merely a translation of [@TakayamaNonvanishing Sec. 4] to the orbifold case. Following [@TakayamaNonvanishing Sec. 3B], for any complex analytic space $X$, by a *subvariety* $W \subseteq X$, we mean an irreducible reduced complex subspace. By a *maximal compact subspace* $Z \subseteq X$, we mean a (not necessarily reduced or irreducible) compact subspace, such that every subvariety $W \subseteq X$ with $Z \cap W \neq \emptyset$ is contained in $Z$. We have the following (compare [@TakayamaNonvanishing Prop. 4.1]). \[prop:compsubs\] Let ${\mathcal{X}}=(X,\Delta)$ be a smooth complex compact orbifold and $\widetilde{{\mathcal{X}}}$ its orbifold universal cover with underlying space denoted by $\widetilde{X}$. Let $(L,h)$ be a positive Hermitian line orbibundle on ${\mathcal{X}}$, such that $({\mathcal{X}},\omega=i \Theta(L,h))$ becomes a complete Kähler orbifold. Denote the pullbacks of $L$, $h$, and $\omega$ by $\widetilde{L}$, $\widetilde{h}$, and $\widetilde{\omega}$, respectively. Moreover, let $Z\subseteq \widetilde{X}$ be a connected maximal compact subspace and $N \in \ZZ_{\geq 1}$. Then there exists a singular Hermitian metric $H$ on $\widetilde{L}$ with the following properties: 1. $i\Theta(\widetilde{L},H) \geq (1-1/N)\widetilde{\omega}$ as currents. 2. There exists an open neighbourhood $U$ of $Z$, such that $$U \cap \supp (\mathcal{O}_{\widetilde{X}}/\mathcal{I}_{\widetilde{X}}(H))= Z$$ and $\left.\mathcal{I}_{\widetilde{X}}(H)\right|_{U} \subset \left.\mathcal{I}_{Z}\right|_{U}$. 3. There exists a positive constant $c_0$, such that $h \leq c_0 H$.
To prove the proposition, we need the following three lemmata. \[le:firstle\] Let ${\mathcal{X}}$, $\widetilde{{\mathcal{X}}}$, and $(L,h)$ be as in Proposition \[prop:compsubs\]. Let $\{x_i\}_{i \in \NN}$ be a discrete sequence of points in $\widetilde{X}$ with no accumulation point. Then there exists a subsequence $\{x_{i_k}\}_{k \in \NN}$ and a positive integer $m_0$, such that for any $m \in \ZZ_{>m_0}$ and $\ell\in \ZZ_{>0}$, the evaluation map $$H^0_{(2)}\left(\widetilde{X},\widetilde{L}^{\otimes m}\right) \to \bigoplus_{k=1}^\ell \mathcal{O}_{\widetilde{X}} / \mathcal{M}_{\widetilde{X},x_{i_k}}$$ is surjective. This is basically the proof of [@TakayamaNonvanishing Le. 4.2] translated to the orbifold setting. Since $\{x_i\}_{i \in \NN}$ has no accumulation point, we can take a subsequence, which by abuse of notation we again denote by $\{x_i\}_{i \in \NN}$, such that there exists $\delta >0$ with $\dist_{\omega}(x_i,x_j)> \delta \diam(X,\omega)$ for $i \neq j$. Now take $\epsilon \in \RR_{>0}$ and consider local uniformizations $(U_i'= B_\epsilon(0) \subseteq \CC^n,G_i,\varphi_i,U_i)$ around $x_i$, such that $\varphi_i(0)=x_i$ for any $i \in \NN$. By the *bounded geometry* of $\widetilde{{\mathcal{X}}}$ as orbifold cover of the compact orbifold ${\mathcal{X}}$, compare [@claudon Le. 2.1], there is a constant $c \in \RR_{>0}$ such that for any $i$ and the standard metric $g_i$ on $U_i'$, we have $$\frac{1}{c}g_i < \omega < c g_i.$$ Now take a smooth $U(n)$-invariant cutoff function $\rho:B_\epsilon(0) \to [0,1]$ with compact support satisfying $\rho \equiv 1$ on $B_{\epsilon/3}(0)$ and $\rho \equiv 0$ on $B_{\epsilon}(0) \setminus B_{2\epsilon/3}(0)$. Define $$\phi:= \sum_{i \in \NN} n \rho(z) \log \sum_{j=1}^n |z_j|^2 \in L_\loc^1(\widetilde{{\mathcal{X}}},\RR),$$ where $z=(z_1,\ldots,z_n)$ denote the coordinates on the chart $U_i'$ around $x_i$. It is obvious that $\phi$ is $U(n)$- and thus $G_i$-invariant for all $i \in \NN$.
The multiplier ideal orbisheaf ${\mathcal{I}}_{\widetilde{X}}(\phi)$ defines a complex subspace of $\widetilde{X}$, which is exactly $\{x_i\}_{i \in \NN}$. Now since $(\widetilde{L},\widetilde{h})$ is positive, there is $a_0 \in \ZZ_{>1}$, such that $i \del \delbar \log \widetilde{\omega}^n + a_0 \widetilde{\omega}$ is positive, that is $K_{\widetilde{{\mathcal{X}}}}^{\otimes(-1)} \otimes \widetilde{L}^{\otimes a_0}$ is positive. Moreover, due to the definition of $\phi$ and the bounded geometry property from above, there is $b_0 \in \ZZ_{>1}$, such that $-b_0 \widetilde{\omega} < i \del \delbar \phi < b_0 \widetilde{\omega}$, see [@TakRems Le. 2.3]. The space $$H_{(2)}^1(\widetilde{X}, {\mathcal{L}}^2(\widetilde{L}^{\otimes m},e^{-\phi}\widetilde{h}^{\otimes m}))=H_{(2)}^{1}(\widetilde{X}, K_{\widetilde{{\mathcal{X}}}} \otimes K_{\widetilde{{\mathcal{X}}}}^{\otimes(-1)} \otimes \widetilde{L}^{\otimes m} \otimes {\mathcal{I}}_{\widetilde{X}}(\phi))$$ vanishes for every $m>m_0:=a_0+b_0$ due to Proposition \[prop:delbarest\]. This means that the map $$H^0_{(2)}\left(\widetilde{X},\widetilde{L}^{\otimes m}\right) \to \bigoplus_{k=1}^\ell \mathcal{O}_{\widetilde{X}} / \mathcal{M}_{\widetilde{X},x_{i_k}}$$ indeed is surjective for all $\ell\in \ZZ_{>0}$. \[le:secondle\] Let ${\mathcal{X}}$, $\widetilde{{\mathcal{X}}}$, and $(L,h)$ be as in Proposition \[prop:compsubs\]. Let $Z \subseteq \widetilde{X}$ be a compact complex subspace, $Y \subseteq \widetilde{X}$ be a positive-dimensional non-compact subvariety, and $N$ a positive integer. Then there exist a positive integer $m$ and an $L^2$-section $s \in H^0_{(2)}\left(\widetilde{X},\widetilde{L}^{\otimes m} \otimes \mathcal{I}_Z^{mN}\right)$, such that $\left.s\right|_Y \neq 0$. Since $Y$ is non-compact, we can take a sequence of points $\{x_i\}_{i \in \NN}$ in $Y$ with no accumulation points in $\widetilde{X}$ (since $Y$ is closed).
By Lemma \[le:firstle\], we can take a subsequence, which again we denote by $\{x_i\}_{i \in \NN}$, such that there exists $m \in \NN$ for which the map $ H^0_{(2)}\left(\widetilde{X},\widetilde{L}^{\otimes m}\right) \to \bigoplus_{i=1}^\ell \mathcal{O}_{\widetilde{X}} / \mathcal{M}_{\widetilde{X},x_{i}} $ is surjective for all $\ell \in \NN$. We consider now the exact sequence $$\xymatrix@C=1.5em{ 0 \ar[r] & H^0_{(2)}\left(\widetilde{X}, \widetilde{L}^{\otimes m} \otimes \mathcal{I}_Z^{mN}\right) \ar[r] & H^0_{(2)}\left(\widetilde{X}, \widetilde{L}^{\otimes m}\right) \ar[r] & H^0\left(\widetilde{X}, \widetilde{L}^{\otimes m} \otimes \mathcal{O}_{\widetilde{X}} / \mathcal{I}_Z^{mN}\right).}$$ The last term has finite dimension $d \in \ZZ_{\geq 0}$, since $Z$ is compact. Using Lemma \[le:firstle\], we choose $\ell \in \ZZ_{>d}$ and $L^2$-sections $\{s_i\}_{i=1}^{\ell} \subset H^0_{(2)}\left(\widetilde{X}, \widetilde{L}^{\otimes m}\right)$ such that $s_i(x_i) \neq 0$ and $s_i(x_j)=0$ for $1\leq i \neq j \leq \ell$. Since $\ell > d$, the images of the $s_i$ in the last term are linearly dependent, so some nontrivial linear combination $s$ of the $s_i$’s is a nonzero $L^2$-section in $H^0_{(2)}\left(\widetilde{X}, \widetilde{L}^{\otimes m} \otimes \mathcal{I}_Z^{mN}\right)$. Also $\left. s\right|_Y$ is not the zero section over $Y$: if the coefficient of $s_i$ in $s$ is nonzero, then $s(x_i)\neq 0$ by the choice of the $s_i$’s. Let $\alpha \in \QQ_{>0}$. Following [@TakayamaNonvanishing], by a *multivalued $L^2$-section* of $\widetilde{L}^{\otimes \alpha}$ we mean a section $s$ of $\widetilde{L}^{\otimes \alpha}$, such that there is $p \in \ZZ_{>0}$ with $p\alpha \in \ZZ$ and $s^p \in H_{(2)}^0(\widetilde{X},\widetilde{L}^{\otimes p\alpha})$. We can then define the pointwise length $$\left|s\right|_{\widetilde{h}}:=\left(\widetilde{h}^{\otimes p\alpha}(s^p,s^p)\right)^{1/(2p)}$$ and the zero locus $(s)_0=(s^p)_0$ of such sections. If $k \in \ZZ_{>0}$ and $s=\{s_i\}_{i=1,\ldots,k}$ is a finite number of multivalued $L^2$-sections of $\widetilde{L}^{\otimes \alpha}$, we denote $\left|s\right|^2:= \sum_{i=1}^k \left|s_i\right|_{\widetilde{h}}^2$ and $(s)_0:= \bigcap_{i=1}^k (s_i)_0$.
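The dimension count used above ($\ell > d$ sections must admit a nontrivial combination mapping to zero in the $d$-dimensional last term) is elementary linear algebra; the following numerical sketch (with a random matrix as a stand-in for the restriction map, purely for illustration) makes it concrete:

```python
import numpy as np

# Stand-in for the restriction map: ell sections, each evaluated in a
# d-dimensional target (d < ell), written as the columns of a d x ell matrix.
rng = np.random.default_rng(0)
d, ell = 5, 7
M = rng.standard_normal((d, ell))

# A nontrivial kernel vector exists because rank(M) <= d < ell; the last
# row of Vt in a full SVD spans part of the null space of M.
_, _, Vt = np.linalg.svd(M)
c = Vt[-1]  # coefficients of the combination s = sum_i c_i s_i

assert np.linalg.norm(c) > 0               # the combination is nontrivial ...
assert np.allclose(M @ c, 0, atol=1e-10)   # ... and maps to zero in the last term
```

In the proof, $M$ is the evaluation-at-$Z$ map and the kernel vector produces the section $s$ vanishing along $Z$ to order $mN$.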
Moreover, we define a multiplier ideal sheaf for $s$ by $${\mathcal{I}}(s):={\mathcal{L}}\left({\mathcal{O}}_{\widetilde{{\mathcal{X}}}},(\left|s\right|^2)^{-1}\right).$$ By [@TakRems Le. 2.4], for $k \in \ZZ_{>0}$, the pointwise length of an $L^2$-section $s \in H_{(2)}^0(\widetilde{X},\widetilde{L}^{\otimes k})$ tends to zero at infinity. In particular, in the above setting, if $s^p \in H_{(2)}^0(\widetilde{X},\widetilde{L}^{\otimes p\alpha})$, then $s^q \in H_{(2)}^0(\widetilde{X},\widetilde{L}^{\otimes q\alpha})$ as well for every $q \in \ZZ_{\geq p}$ with $q\alpha \in \ZZ$. \[le:thirdle\] Let ${\mathcal{X}}$, $\widetilde{{\mathcal{X}}}$, and $(L,h)$ be as in Proposition \[prop:compsubs\]. Let $Z \subseteq \widetilde{X}$ be a compact complex subspace, $U \subseteq \widetilde{X}$ a relatively compact open subset, and $N$ a positive integer. Then there exist some $k \in \ZZ_{>0}$ and multivalued $L^2$-sections $s=\{s_i\}_{i=1,\ldots,k}$ of $\widetilde{L}^{\otimes 1/N}$ such that the following hold: 1. The set of common zeros $(s)_0$ of the $s_i$ has no non-compact irreducible component that intersects $U$. 2. The multiplier ideal sheaf $\mathcal{I}(s)$ is contained in the ideal sheaf $\mathcal{I}_{Z}$. First, note that there exists a positive integer $q$, such that, for every $m \in \ZZ_{>0}$, we have $${\mathcal{L}}\left({\mathcal{O}}_{\widetilde{{\mathcal{X}}}},(\left|s\right|^2)^{-1/(mN)}\right) \subset \mathcal{I}_Z$$ for any set of sections $s=\{s_i\}_{i=1,\ldots,k} \subset H^0_{(2)}(\widetilde{X}, \widetilde{L}^{\otimes m} \otimes \mathcal{I}_{\red Z}^{mqN})$, where $\red Z$ is the reduced structure of $Z$. Fix such an integer $q$. By Lemma \[le:secondle\], there are $m_1 \in \ZZ_{>0}$ and a nonzero $L^2$-section $s_1' \in H^0_{(2)}(\widetilde{X}, \widetilde{L}^{\otimes m_1} \otimes \mathcal{I}_{\red Z}^{m_1qN})$. We set $s_1:={s'_1}^{1/(m_1N)}$, which is a multivalued section of $\widetilde{L}^{\otimes 1/N} \otimes \mathcal{I}_{\red Z}^{q}$.
Now if there is no non-compact irreducible component $Y$ of $(s_1)_0$ intersecting $U$, set $s:=\{s_1\}$. If there *is* a non-compact irreducible component $Y$ of $(s_1)_0$ intersecting $U$, then apply Lemma \[le:secondle\] to $Z$ and this $Y$. It follows that there is a multivalued $L^2$-section $s_2$ of $\widetilde{L}^{\otimes 1/N}$ with $s_2^{m_2N} \in H^0_{(2)}(\widetilde{X}, \widetilde{L}^{\otimes m_2} \otimes \mathcal{I}_{\red Z}^{m_2qN})$ for a positive integer $m_2$, such that $\left.s_2\right|_Y$ is not the zero section. Now we pass to $(s_1)_0 \cap (s_2)_0$ and check if there is a non-compact irreducible component intersecting $U$. If yes, proceed again with Lemma \[le:secondle\] to obtain a section $s_3$ et cetera. Since $U$ is relatively compact, after a finite number of steps, we have multivalued $L^2$-sections $s_1,\ldots,s_k$ of $\widetilde{L}^{\otimes 1/N}$ satisfying the requirements of the lemma. First let $U$ be a relatively compact open neighbourhood of the connected maximal compact subspace $Z$. Apply Lemma \[le:thirdle\] to this $U$ and the $Z$, $N$ from the proposition. We use the multivalued $L^2$-sections $s=\{s_i\}_{i=1}^{m}$ of $\widetilde{L}^{\otimes 1/N}$ from the lemma to construct a singular Hermitian metric $$H:=\widetilde{h}^{\otimes (1-1/N)} \frac{\widetilde{h}^{\otimes 1/N}}{\left| s \right|^2}$$ of $\widetilde{L}$, having the properties: 1. $i\Theta(\widetilde{L},H) = i\Theta(\widetilde{L}^{\otimes (1-1/N)}, \widetilde{h}^{\otimes (1-1/N)}) + i\Theta(\widetilde{h}^{\otimes 1/N}(\left| s \right|^2)^{-1}) \geq (1-1/N)\widetilde{\omega}$. 2. $\mathcal{I}_{\widetilde{X}}(H) =\mathcal{I}(s)$, so $\left.\mathcal{I}_{\widetilde{X}}(H)\right|_U$ defines a compact complex subspace of $U$ containing $Z$. 3. There is an upper bound for $\left|s\right|^2$, since by [@TakRems Le. 2.4], $\left|s\right|$ tends to zero at infinity. So there is a positive constant $c_0$ such that $\widetilde{h} \leq c_0 H$.
Since $Z$ is a *maximal* compact subspace, we have $ U \cap \supp (\mathcal{O}_{\widetilde{X}}/\mathcal{I}_{\widetilde{X}}(H))= Z $, and the proposition is proven. Proof of Theorem \[thm:loctoglob\] {#sec:loctoglob} ================================== In this section we prove Theorem \[thm:loctoglob\], which makes up the local-to-global part of the induction in the proof of our main theorems. First we recall the necessary definitions. Definitions {#definitions .unnumbered} ----------- We call a pair $(Y,D)$ of a normal complex algebraic variety $Y$ and an effective $\QQ$-divisor $D=\sum d_j D_j$ on $Y$ with $K_Y+D$ being $\QQ$-Cartier a *log pair*. We say that a birational divisorial contraction $f: X \to Y$ is a *log resolution* of the pair $(Y,D)$, if $X$ is smooth and $f_*^{-1}\supp(D) \cup E_1 \cup \ldots \cup E_k$ is a simple normal crossing divisor, where $E_i$, $i=1,\ldots,k$, are the $f$-exceptional prime divisors. We call a log pair $(Y,D)$ *Kawamata log terminal*, or *klt* for short, if $0<d_j<1$ for all $j$ and there exists a log resolution $f: X \to Y$, such that we can write $$K_X + f_*^{-1} (D) + \sum E_i = f^*(K_Y + D) + \sum a_i E_i,$$ where the $a_i$, which we call *log-discrepancies*, are greater than zero. Note that $f_*^{-1} (D)=\sum d_j f_*^{-1} (D_j)$. We call a projective variety $Y$ *weakly Fano*, if there exists an effective $\QQ$-divisor $D=\sum d_j D_j$ on $Y$, such that $(Y,D)$ is klt and $-(K_Y+D)$ is big and nef. The statement of Theorem \[thm:loctoglob\] to prove now is the following: if $n$-dimensional klt singularities have *finite regional fundamental group*, then $n$-dimensional weakly Fano pairs $(Y,D)$ have *finite orbifold fundamental group* $\pi_1(Y_\sm,D)$.
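As a sanity check of the log-discrepancy convention above (an illustrative example not taken from the source): for the blow-up $f \colon X \to Y$ of a smooth surface point with $D=0$ and exceptional curve $E_1$, the usual ramification formula $K_X = f^*K_Y + E_1$ gives

```latex
K_X + f_*^{-1}(D) + E_1 = f^*K_Y + 2E_1 = f^*(K_Y + D) + 2E_1,
```

so $a_1 = 2 > 0$: smooth points are klt, with log-discrepancy $2$ along $E_1$.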
Compact orbifolds supported on a log resolution {#compact-orbifolds-supported-on-a-log-resolution .unnumbered} ----------------------------------------------- The proof of the above statement relies on the following two propositions, which essentially say that for a log resolution $f:X \to Y$ of a weakly Fano pair $(Y,D)$ with exceptional divisor $E$, for any admissible $\QQ$-divisor $\Delta$ supported on $\supp(E) \cup \supp(f_*^{-1} D)$, the smooth geometric orbifold $(X,\Delta)$ has finite fundamental group. \[prop:nontrivsec\] Let $(Y,D=\sum_{j=1}^l d_j D_j)$ be a weakly Fano variety, with $d_j=1-1/e_j$ for some $e_j \in \ZZ_{>0}$. Let $f:X \to Y$ be a log resolution with exceptional prime divisors $E_1,\ldots,E_k$. For arbitrary $m_i \in \ZZ_{>0}$, consider the smooth geometric orbifold ${\mathcal{X}}:=(X,\Delta:=\sum d_i f_*^{-1} (D_i) + \sum (1-1/m_i)E_i)$. Define a divisor $$L:= -f^*(K_Y + D) + \sum_{0 <\frac{1}{m_i}-a_i<1} \left(\frac{1}{m_i}-a_i\right) E_i + \sum_{\frac{1}{m_i} \leq a_i} \left(\left\lceil a_i-\frac{1}{m_i} \right\rceil +\frac{1}{m_i}-a_i\right) E_i,$$ where the $a_i > 0$ are the log-discrepancies. Consider $L$ as an orbibundle on ${\mathcal{X}}$. Then the orbifold universal cover $\pi:\widetilde{{\mathcal{X}}} \to {\mathcal{X}}$ has a nontrivial $L^2$-section $$\nu \in H_{(2)}^0(\widetilde{{\mathcal{X}}},K_{\widetilde{{\mathcal{X}}}} \otimes \pi^*L).$$ Consider a pair $(Y,D)$, a log resolution $f:X \to Y$ and arbitrary $m_i \in \ZZ_{>0}$ as in the proposition. Set $\Delta:=\sum d_i f_*^{-1} (D_i) + \sum (1-1/m_i)E_i$. Consider the smooth geometric projective orbifold ${\mathcal{X}}=(X,\Delta)$. Then the orbifold canonical divisor of ${\mathcal{X}}$ is defined by $K_{{\mathcal{X}}}:= K_X +\Delta$, see Section \[sec:orbibundles\]. By the above ramification formula, we can write $$K_{{\mathcal{X}}}= f^*(K_Y + D) + \sum c_i E_i,$$ where $c_i:=a_i-1/m_i>-1$ and $-f^*(K_Y + D)$ is big and nef. 
Now define $$\Delta':=\sum_{-1 < c_i <0} (-c_i) E_i + \sum_{0 \leq c_i } (\left\lceil c_i \right\rceil -c_i) E_i, \quad E:=\sum_{0 \leq c_i } \left\lceil c_i \right\rceil E_i,$$ and $L:= -f^*(K_Y + D) + \Delta'$. With these definitions, the ramification formula becomes $ K_{{\mathcal{X}}} + L = E $. Now since $L$ is the sum of a big and nef $\QQ$-divisor and a simple normal crossing $\QQ$-divisor with coefficients strictly between $0$ and $1$, we can write $L=A+\Delta''$, where $A$ is an ample $\QQ$-divisor and $\Delta''=\Delta'+N$ for a very small effective $\QQ$-divisor $N$. This means in particular that the pair $(X,\Delta'')$ is klt. Since $A$ is an ample $\QQ$-divisor, there is a positive integer $a$, such that, by Proposition \[prop:kaehlerorbi\], $A^{\otimes a}$ is a positive line *orbibundle* with orbimetric $h_{A^{\otimes a}}$. Denote by $\omega:=i\Theta(A^{\otimes a},h_{A^{\otimes a}})$ the corresponding Kähler orbiform. Then $({\mathcal{X}}, \omega)$ is a compact Kähler orbifold. Now following [@TakayamaNonvanishing Sec. 3B], take a multivalued canonical section $\sigma_{\Delta''}$ of ${\mathcal{O}}_X(\Delta'')$, that is, $m\Delta''$ is an integral effective divisor for some positive integer $m$ and $\divisor(\sigma_{\Delta''}^m)=m\Delta''$. Take in addition a Hermitian metric $h_{m\Delta''}$ of ${\mathcal{O}}_X(m\Delta'')$ and define a function $\left|\Delta''\right|:=\left|\sigma_{\Delta''}^m\right|_{h_{m\Delta''}}^{1/m}$. Since the pair $(X,\Delta'')$ is klt, we have $${\mathcal{L}}\left( {\mathcal{O}}_{{\mathcal{X}}}, \left|\Delta''\right|^{-2}\right)={\mathcal{O}}_{{\mathcal{X}}}$$ by [@sasakian Def. 5.2.13], compare also [@KollarShaf Prop. 10.7]. Now consider the orbifold universal cover $\pi \colon \widetilde{{\mathcal{X}}} \to {\mathcal{X}}$ of ${\mathcal{X}}=(X,\Delta)$ and the $\Gamma$-reduction $\gamma\colon {\mathcal{X}} \dasharrow \Gamma({\mathcal{X}})$ from Theorem \[thm:G-red\]. Let $F$ be a very general fiber of the restriction $\left.
\gamma\right|_{X^0} \colon X^0 \to \Gamma({\mathcal{X}})^0$. The preimage $\pi^{-1}(F)$ is a countable disjoint union of copies of a finite cover of $F$. So the restriction of $\pi$ to a connected component $\widetilde{F}$ of this preimage is a finite cover $\left.\pi\right|_{\widetilde{F}}\colon \widetilde{F} \to F$, étale over $F \setminus \supp(\Delta)$. Denote by $\widetilde{L}$, $\widetilde{A}$, and $\widetilde{\Delta''}$ the pullbacks (as orbibundles) of $L$, $A$, and $\Delta''$ by $\pi$ respectively. In the same manner, denote the pullback of functions, orbimetrics, and orbiforms by a tilde. In particular, $(\widetilde{{\mathcal{X}}},\widetilde{\omega}:=\pi^*\omega)$ is a complete Kähler orbifold. If $\gamma(F) \subset U \subset \Gamma({\mathcal{X}})^0$ is a sufficiently small neighbourhood of $\gamma(F)$ biholomorphic to the unit ball $B_1(0) \subseteq \CC^n$, then the connected component $\widetilde{U}$ of $\pi^{-1} \circ \gamma^{-1}(U)$ containing $\widetilde{F}$ is a relatively compact open neighbourhood of $\widetilde{F}$ biholomorphic to $\widetilde{F} \times U$. Since $E$ is effective, there is a nonzero section $\sigma \in H^0(X,{\mathcal{O}}_X(E)) \cong H^0(X,K_{{\mathcal{X}}} \otimes L)$. We can pull back $\left. \sigma \right|_{\gamma^{-1}(U)}$ via $\left.\pi\right|_{\widetilde{F}}$ to get a section $\widetilde{\sigma} \in H^0(\widetilde{U}, K_{\widetilde{X}}\otimes \widetilde{L})$. Let $\rho:U \to [0,1]$ be a smooth cutoff function with $\rho \equiv 1$ on a neighbourhood of $\gamma(F)$. Then $(\rho \circ \gamma \circ \pi) \cdot \widetilde{\sigma}$ is a smooth $\widetilde{L}$-valued $(n,0)$-form on $\widetilde{{\mathcal{X}}}$. Take a positive integer $N>a$. By Lemma \[le:thirdle\], we obtain $k \in \ZZ_{>0}$ and multivalued $L^2$-sections $s=\{s_i\}_{i=1,\ldots,k}$ of $\widetilde{A}^{\otimes a/N}$, such that their set of zeros $\widetilde{U} \cap (s)_0$ is compact and ${\mathcal{I}}(s)$ is contained in the ideal sheaf ${\mathcal{I}}_{\widetilde{F}}$.
By shrinking $U$ if necessary, we can assume that $\widetilde{U} \cap (s)_0 =\widetilde{F}$. Now we want to define a singular Hermitian metric on $\widetilde{L}=\widetilde{A} \otimes \widetilde{\Delta''}$. Recall that we have a Hermitian metric $h_{m\Delta''}$ of the line bundle ${\mathcal{O}}_X(m\Delta'')$. Define $$H_s:= \widetilde{h_{A^{\otimes a}}}^{\otimes 1/a-1/N} \times \frac{ \widetilde{h_{A^{\otimes a}}}^{\otimes 1/N}}{\left|s\right|^2} \times \frac{ \widetilde{h_{m\Delta''}}^{\otimes 1/m}}{\widetilde{\left|\Delta''\right|}^2}.$$ This is a singular Hermitian metric of $\widetilde{L}$. Since $N>a$, the curvature $i\Theta(\widetilde{L},H_s) \geq (1/a-1/N)\widetilde{\omega}$ is positive. By klt-ness of $(X,\Delta'')$, we have $${\mathcal{L}}\left( {\mathcal{O}}_{\widetilde{{\mathcal{X}}}}, \widetilde{\left|\Delta''\right|}^{-2}\right)={\mathcal{O}}_{\widetilde{{\mathcal{X}}}},$$ so $\mathcal{I}(H_s) =\mathcal{I}(s)$ as in the proof of Proposition \[prop:compsubs\]. Now consider the $(n,0)$-form $(\rho \circ \gamma \circ \pi) \cdot \widetilde{\sigma}$ from above. The $(n,1)$-form $\tau:=\delbar \left((\rho \circ \gamma \circ \pi) \cdot \widetilde{\sigma}\right) = \widetilde{\sigma}\, \delbar (\rho \circ \gamma \circ \pi)$ is $\delbar$-closed and square-integrable with respect to $H_s$ and $\widetilde{\omega}$, because its support lies in the relatively compact $\widetilde{U} \setminus \widetilde{F}$ and the poles of $H_s$ lie in $\widetilde{F}$. By Proposition \[prop:delbarest\], there is an $\widetilde{L}$-valued $(n,0)$-form $\upsilon$ on $\widetilde{{\mathcal{X}}}$, with $\delbar \upsilon = \tau$, again square-integrable with respect to $H_s$ and $\widetilde{\omega}$. Now set $\nu:=(\rho \circ \gamma \circ \pi) \cdot \widetilde{\sigma} - \upsilon$. Applying $\delbar$, we see that $\nu$ is holomorphic, and since $\upsilon$ is square-integrable with respect to $H_s$, we have $\left.\upsilon\right|_{\widetilde{F}} \equiv 0$ and thus $\left.\nu \right|_{\widetilde{F}}$ is not trivial.
As we know from the proof of Proposition \[prop:compsubs\], $\left|s\right|^2$ is bounded. So there is a positive constant $c$, such that $$\int_{\widetilde{{\mathcal{X}}}} \left|\upsilon\right|_{\widetilde{h_{A^{\otimes a}}}}^2 \dif V_{\widetilde{\omega}} \leq c \int_{\widetilde{{\mathcal{X}}}} \left|\upsilon\right|_{H_s}^2 \dif V_{\widetilde{\omega}}.$$ Moreover, since $(\rho \circ \gamma \circ \pi) \cdot \widetilde{\sigma}$ is supported on $\widetilde{U}$, it is square-integrable with respect to $\widetilde{h_{A^{\otimes a}}}$ and $\widetilde{\omega}$ as well. So $\nu$ is a nontrivial section of $H^0_{(2)}(\widetilde{{\mathcal{X}}},K_{\widetilde{{\mathcal{X}}}} \otimes \widetilde{L})$. \[prop:fundfin\] Let $(Y,D=\sum d_i D_i)$ be a weakly Fano variety, $f:X \to Y$ a log resolution with exceptional prime divisors $E_i$, and $m_i \in \ZZ_{>0}$ arbitrary. Then the smooth geometric orbifold ${\mathcal{X}}:=(X,\Delta=f_*^{-1} D + \sum (1-1/m_i)E_i)$ has *finite orbifold fundamental group* $$\left|\pi_1({\mathcal{X}})\right|<\infty.$$ Define the divisor $L$ as in Proposition \[prop:nontrivsec\]. Then by Proposition \[prop:nontrivsec\], there is a nontrivial section $\nu \in H^0_{(2)}(\widetilde{{\mathcal{X}}},K_{\widetilde{{\mathcal{X}}}} \otimes \widetilde{L})$. For any $k \in \ZZ_{>0}$, the power $\nu^{\otimes 2k}$ is a global $L^1$-section of $(K_{\widetilde{{\mathcal{X}}}} \otimes \widetilde{L})^{\otimes 2k}$. This is due to the fact that ${\mathcal{X}}=\widetilde{{\mathcal{X}}}/\pi_1({\mathcal{X}})$ is compact, see [@GromovKaehler p. 286]. Here it is not necessary that $\pi_1({\mathcal{X}})$ acts freely on the complex analytic space $\widetilde{X}$. The Poincaré series $$P(\nu^{\otimes 2k}):=\sum_{g \in \pi_1({\mathcal{X}})} g^*\nu^{\otimes 2k}$$ converges and defines a holomorphic $\pi_1({\mathcal{X}})$-invariant section of $(K_{\widetilde{{\mathcal{X}}}} \otimes \widetilde{L})^{\otimes 2k}$. 
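The $\pi_1({\mathcal{X}})$-invariance of the Poincaré series is a direct reindexing (a sketch): for any $h \in \pi_1({\mathcal{X}})$,

```latex
h^* P(\nu^{\otimes 2k})
  = \sum_{g \in \pi_1({\mathcal{X}})} (g h)^* \nu^{\otimes 2k}
  = \sum_{g' \in \pi_1({\mathcal{X}})} (g')^* \nu^{\otimes 2k}
  = P(\nu^{\otimes 2k}),
```

since $h^* g^* = (g \circ h)^*$ and $g \mapsto gh$ is a bijection of $\pi_1({\mathcal{X}})$.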
Now consider products $\bigotimes_{k_i} P(\nu^{\otimes 2k_i})$ of these sections. Then [@GromovKaehler Prop. 3.2.A] says that *if $\pi_1({\mathcal{X}})$ is infinite* there exist at least one $\kappa$ and at least two partitions $\kappa=\sum k_i$ and $\kappa=\sum k'_i$, such that $$\frac{\bigotimes_{k_i} P(\nu^{\otimes 2k_i})}{\bigotimes_{k'_i} P(\nu^{\otimes 2k'_i})}$$ is a nonconstant meromorphic $\pi_1({\mathcal{X}})$-invariant function on $\widetilde{{\mathcal{X}}}$. Thus $\bigotimes_{k_i} P(\nu^{\otimes 2k_i})$ and $\bigotimes_{k'_i} P(\nu^{\otimes 2k'_i})$ define two linearly independent sections of $H^0(X,(K_{{\mathcal{X}}} \otimes L)^{\otimes 2\kappa})$. But on the other hand, since $E$ and thus also $2\kappa E$ is effective and $f$-exceptional, we have $\dim H^0(X,{\mathcal{O}}_X(2\kappa E))=1$. But we have seen that $E$ is linearly equivalent to $K_{{\mathcal{X}}}+L$ (seen as a divisor). This is a contradiction, so $\pi_1({\mathcal{X}})$ is finite. Proof of Theorem \[thm:loctoglob\] {#proof-of-theoremthmloctoglob .unnumbered} ---------------------------------- Now by the induction hypothesis, for an $n$-dimensional weakly Fano pair $(Y,D)$, we can relate finiteness of $\pi_1(Y_\sm,D)$ to the finiteness of the fundamental group of a compact orbifold supported on a log resolution, which finishes our proof. Let $(Y,D)$ be an $n$-dimensional weakly Fano pair and assume that $n$-dimensional klt singularities have finite regional fundamental group. Consider a log resolution $f:X \to Y$ with exceptional divisor $E=\bigcup E_i$, where the $E_i$, $i \in I$, are prime. Then $$\pi_1(Y_\sm,D) \cong \pi_1(X \setminus E,\left.f_*^{-1}D\right|_{X \setminus E}).$$ Now let $\gamma_i$ be a very small loop around a general point $e_i$ of $E_i$. Then $\gamma_i$ can be pushed forward to $Y_\sm$ and there it lies in the smooth locus of a very small neighbourhood of the image of $e_i$, which is a klt singularity $y_i$.
So by the induction hypothesis, $f_* \gamma_i$ has finite order $m_i$ in $\pi_1^{\reg}(Y,y_i)$. Therefore, $\gamma_i^{m_i}$ is trivial in $\pi_1(Y_\sm,D)\cong \pi_1(X \setminus E,\left.f_*^{-1}D\right|_{X \setminus E})$. Thus $\langle \gamma_i^{m_i}, i \in I \rangle$ is trivial and by Definition \[def:orbifund\] of the orbifold fundamental group $$\begin{aligned} \pi_1(X \setminus E,\left.f_*^{-1}D\right|_{X \setminus E}) &= \pi_1(X \setminus E,\left.f_*^{-1}D\right|_{X \setminus E}) / \langle \gamma_i^{m_i}, i \in I \rangle \\ &= \pi_1\left(X,f_*^{-1}D +\sum \left(1-\frac{1}{m_i}\right)E_i\right).\end{aligned}$$ But the latter is finite by Proposition \[prop:fundfin\]. Thus $\pi_1(Y_\sm,D)$ is finite as well and we are done. \[part:globtoloc\] In order to complete the induction, we have to show in this part that if $(n-1)$-dimensional weakly Fano pairs $(Y,D)$ have finite orbifold fundamental group $\pi_1(Y_{\sm},\left.D\right|_{Y_{\sm}})$, then $n$-dimensional klt singularities have finite *regional* fundamental group. We do this by modifying an argument of [@TianXu]. First let us briefly recall the notions related to Whitney stratifications and their systems of tubular neighbourhoods. Whitney stratifications {#sec:Whit} ======================= We refer to [@Goresky Sec. 2] for the following definitions. Let $X$ be a complex analytic space of dimension $n$ embedded in a smooth complex manifold $M$. In our context, we can always assume $M \cong \PP_m(\CC)$ for some $m\geq n$. To a submanifold $N$ of $M$, we associate a *tubular neighbourhood* $T_N$ in the following way: choose a Riemannian metric $h$ on the normal bundle $E \to N$ and fix $\delta \in \RR_{>0}$. Then $T_N$ is the image of a smooth embedding $\phi \colon E_{\delta} \to M$, where $E_{\delta}:=\{ v \in E;~\left|v\right|_h<\delta\}$ and $\phi$ takes the zero section of $E$ identically to $N$.
For $0<\varepsilon<\delta$, we define $T_N(\varepsilon):=\phi(\{ v \in E;~\left|v\right|_h<\varepsilon\})$ and its boundary $S_N(\varepsilon):=\phi(\{ v \in E;~\left|v\right|_h=\varepsilon\})$. We write $S_N:=S_N(\delta)$. We have a *tubular distance function* $\rho_N(x):=\left|\phi^{-1}(x)\right|_h$ and a *projection* $\pi_N(x):=\phi \circ \pi \circ \phi^{-1}(x)$, both defined on $T_N$. A *Whitney stratification* of $X \subseteq M$ is a filtration by closed subsets $X_0 \subset X_1 \subset \ldots \subset X_n=X$, such that the connected components of $X_i \setminus X_{i-1}$ are locally closed $i$-dimensional submanifolds of $M$, the *$i$-dimensional strata*. If $A$ and $B$ are strata with $A \cap \overline{B} \neq \emptyset$, then $A \subset \overline{B}$ and we write $A < B$. Any Whitney stratification admits a system of compatible tubular neighbourhoods of the strata, so-called *control data*. For two strata $A < B$, the tubular distance functions and projections from above have to satisfy $\pi_A \circ \pi_B = \pi_A$ and $\rho_A \circ \pi_B = \rho_A$. Moreover, for sufficiently small $\varepsilon>0$, the boundaries $S_N(\varepsilon)$ have to satisfy certain transversality properties. Namely, if $A_1,\ldots,A_\mu$ and $B_1,\ldots,B_\eta$ are two disjoint collections of strata, then $S_{A_1}(\varepsilon) \cap \ldots \cap S_{A_\mu}(\varepsilon)$ and $S_{B_1}(\varepsilon) \cap \ldots \cap S_{B_\eta}(\varepsilon)$ are transversal and they are also transversal to any other stratum $C$. One can check these properties easily in the pictures of the next section. The work of Tian and Xu {#sec:WorkTianXu} ======================= It was shown in Lemmata 3.1 and 3.2 of [@TianXu] that if $(n-1)$-dimensional weakly Fano pairs $(Y,D)$ have finite orbifold fundamental group $\pi_1(Y_{\sm},\left.D\right|_{Y_{\sm}})$, then $n$-dimensional klt singularities have finite *local* fundamental group. Tian and Xu’s Lemma 3.4 {#tian-and-xus-lemma-3.4 .unnumbered} ----------------------- In [@TianXu Le.
3.4], finiteness of the *regional* fundamental group of an $n$-dimensional klt singularity $(X,x)$ is deduced from finiteness of the *local* fundamental group of klt singularities of dimension $k \leq n$. Unfortunately, there is a gap in the proof, as described in the following. The proof uses a Whitney stratification of a neighbourhood of the singularity, together with a system of tubular neighbourhoods of the strata. Such a tubular neighbourhood minus the stratum itself is a fiber bundle over the stratum and the fiber over a point has finite fundamental group by assumption, since it is (homeomorphic to) a slice through a pointed neighbourhood of the point, which is klt. Then the Seifert-van Kampen theorem is invoked to merge all these tubular neighbourhoods together into a neighbourhood of $x$ with the whole singular locus removed. The way the tubular neighbourhoods fit together is depicted below. [Figure: the strata $\{x\}$ and $A$ of $X_\sing$ with their tubular neighbourhoods $T_x$ and $T_A$ and the transversal boundaries $S_x$ and $S_A$.] Here the singular locus $X_\sing=A \cup \{x\}$ has the zero-dimensional stratum $\{x\}$ and the one-dimensional stratum $A$, together with their tubular neighbourhoods $T_x$ and $T_A$. Note that their boundaries $S_x$ and $S_A$ are transversal. The regional fundamental group of $x$ is nothing else than the fundamental group of $U_x:=T_x \setminus X_\sing$. We have $T_x^0= U_x \cup (T_A \cap T_x)$. The intersection of $U_x$ and $T_A \cap T_x$ is just $T_A^0 \cap T_x$. Thus we have canonical group homomorphisms $h_1 \colon \pi_1(T_A^0 \cap T_x) \to \pi_1(U_x)$ and $h_2 \colon \pi_1(T_A^0 \cap T_x) \to \pi_1(T_A \cap T_x)$.
But since $T_A^0 \cap T_x$ and $T_A \cap T_x$ both are fiber bundles over $A \cap T_x$, the fiber of the former having finite fundamental group due to the assumption and that of the latter having trivial fundamental group, $h_2$ has finite kernel. Now the Seifert-van Kampen theorem says that $\pi_1(T_x^0)$, which is finite by assumption, is the quotient of the free product $\pi_1(U_x) * \pi_1(T_A \cap T_x)$ by the normal subgroup $N$ generated by all elements $h_1(g)h_2(g)^{-1}$, where $g \in \pi_1(T_A^0 \cap T_x)$. The intersection of $N$ with $\pi_1(U_x) \subseteq \pi_1(U_x) * \pi_1(T_A \cap T_x)$ contains the image under $h_1$ of the kernel of $h_2$. Now in the proof of [@TianXu Le. 3.4], it is argued that since $h_2$ has finite kernel, $\pi_1(U_x)$ can not be infinite. This is not necessarily true, since $h_1$ does not have to be surjective; in particular, $h_1(\ker h_2)$ does not have to be a normal subgroup of $\pi_1(U_x)$, and its normal closure can be infinite. In fact, it is not hard to see that deducing finiteness of the regional fundamental group from finiteness of the local fundamental group is just as hard as deducing finiteness of $\pi_1(Y_{\sm},D)$ for weakly Fano pairs $(Y,D)$ from it. This is because $Y$ has a Whitney stratification as well, and since we know that $Y$ is simply connected, the above arguments could be applied in exactly the same manner. But as Tian and Xu suggested, one could try to modify Lemma 3.1 of [@TianXu] to directly prove finiteness of the regional fundamental group. So let us have a closer look at this lemma. Tian and Xu’s Lemma 3.1 {#tian-and-xus-lemma-3.1 .unnumbered} ----------------------- In [@TianXu Le. 3.1], a klt singularity $(X,x)$ is blown up once such that the only exceptional prime divisor $E$ (the so-called Kollár component) admits a divisor $\Delta_E$, such that $(E,\Delta_E)$ is weakly Fano.
Then it is shown that the fundamental group of a certain open subset $V^0$ of a neighbourhood $U$ of $E$ surjects onto the fundamental group of $U^0:=U \setminus E$, which is nothing else but the local fundamental group of $x$. Then in [@TianXu Le. 3.2], it is shown that finiteness of $\pi_1(E_{\sm},\Delta_E)$ implies finiteness of $\pi_1(V^0)$. Now if we could show that $\pi_1(V^0)$ even surjects onto $\pi_1(U_{\sm} \setminus E)$, which is nothing but the regional fundamental group of $x$, we would be done. So what does the proof of [@TianXu Le. 3.1] look like, and how can it be modified? After the blowup $f: Y \to X$, extracting the Kollár component $E= f^{-1}(x)$, a Whitney stratification of $E$ is chosen, with biggest stratum $E_0 := E_{\sm}$. Choose $\varepsilon>0$. From the tubular neighbourhoods of the strata, a neighbourhood $U(\varepsilon)$ of $E$ in $Y$ is defined as follows, after [@Goresky Def. 7.1]: $$U(\varepsilon):= \bigcup_{S \subseteq E \mathrm{~stratum}} T_S(\varepsilon).$$ Note that strictly speaking, we have to embed $Y$ in a smooth manifold $M$ and the tubular neighbourhoods are neighbourhoods in $M$, not in $Y$. So as Goresky points out, the closure $\overline{U(\varepsilon)}$ in $M$ is a *manifold with corners*, the corners being the intersections $S_{A_1}(\varepsilon) \cap \ldots \cap S_{A_k}(\varepsilon)$, where $A_1 < \ldots < A_k$ are incident strata. Nevertheless, we will denote the intersections of all these objects with $Y$ in the same way. We can draw a similar picture as before to depict the situation. [Figure: the neighbourhood $U(\varepsilon)$ of $E$, built from the tubular neighbourhoods $T_x(\varepsilon)$ and $T_{E_0}(\varepsilon)$.] Here $x \in E$ is the only stratum apart from $E_0$.
The closure $\overline{U(\varepsilon)}$ has boundary $(S_{E_0}(\varepsilon) \cup S_x(\varepsilon)) \setminus (T_{E_0}(\varepsilon) \cup T_x(\varepsilon))$ and corners $S_{E_0}(\varepsilon) \cap S_x(\varepsilon)$. Now following [@Goresky Sec. 7], we construct a deformation retraction $\psi:U(\varepsilon) \to E$ as follows. First, for every stratum consider a retraction $r_A \colon T_A(2\varepsilon) \setminus A \to S_A(2\varepsilon)$, such that the following hold whenever $A < B$ are incident strata: $$r_A \circ r_B = r_B \circ r_A, \quad \rho_A \circ r_B = \rho_A, \quad \rho_B \circ r_A = \rho_B, \quad \pi_A \circ r_A = \pi_A, \quad \pi_B \circ r_B = \pi_B.$$ These retractions have been constructed in [@GorFamLin Sec. 2] under the name *families of lines*. From these one can define homeomorphisms $h_A \colon T_A(2\varepsilon) \setminus A \to S_A(2\varepsilon) \times (0,2\varepsilon)$, where $h_A(p)=(r_A(p),\rho_A(p))$. Now fix a smooth nondecreasing function $q$ with $q(t)=0$ for $t \leq \varepsilon$, $q(t)>0$ for $t > \varepsilon$ and $q(t)=t$ for $t \geq 2\varepsilon$. Now define $$H_A(p):= \left\lbrace \begin{matrix} p & \mathrm{if~} p \notin T_A(2\varepsilon) \setminus A \\ h_A^{-1}(r_A(p),q(\rho_A(p))) & \mathrm{if~} p \in T_A(2\varepsilon) \setminus A. \\ \end{matrix} \right.$$ Thus $H_A(\overline{T_A(\varepsilon)})=A$ and $H_A(T_A \setminus \overline{T_A(\varepsilon)})=T_A$. Now define $\tilde{\psi}: U(2\varepsilon) \to U(2\varepsilon)$ by $\tilde{\psi}:= H_{A_1} \circ \ldots \circ H_{A_N}$, where $A_1,\ldots,A_N$ are the strata of $E$ in any order. Let $\psi:=\left.\tilde{\psi}\right|_{U(\varepsilon)}$. The restriction of $\psi$ to $E$ is homotopic to the identity [@GM p. 220]. For each stratum $A$ and $\eta >0$ define the $\eta$-interior $$A^\eta:= A \setminus \bigcup_{B < A} \overline{T_B(\eta)}$$ as in [@GM83 p. 180]. With this definition, we see that $\psi(\pi_A^{-1}(A^\varepsilon))=A$ and $\psi(A \setminus A^\varepsilon) \subseteq \bigcup_{B < A} B$.
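A gluing function $q$ with the stated properties can be written down explicitly; the following numerical sketch (with $\varepsilon = 1$, an assumption for illustration) builds it from the standard smooth step based on $e^{-1/t}$:

```python
import math

def bump(t: float) -> float:
    """Smooth function, zero for t <= 0, vanishing to infinite order at 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def smooth_step(t: float) -> float:
    """Smooth, nondecreasing, 0 for t <= 0 and 1 for t >= 1."""
    return bump(t) / (bump(t) + bump(1.0 - t))

def q(t: float, eps: float = 1.0) -> float:
    """q(t) = 0 for t <= eps, q(t) > 0 for t > eps, q(t) = t for t >= 2*eps."""
    return t * smooth_step((t - eps) / eps)

assert q(0.5) == 0.0 and q(1.0) == 0.0   # q == 0 on [0, eps]
assert q(2.0) == 2.0 and q(3.0) == 3.0   # q == id for t >= 2*eps
grid = [i / 100 for i in range(301)]
assert all(q(s) <= q(t) for s, t in zip(grid, grid[1:]))  # nondecreasing
```

On $(\varepsilon, 2\varepsilon)$ both factors $t$ and $\mathrm{smooth\_step}((t-\varepsilon)/\varepsilon)$ are nonnegative and nondecreasing, which gives the monotonicity used in the construction of $H_A$.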
In our picture, $\psi$ collapses the (darkgray) $T_x(\varepsilon)$ to $x$ and the (lightgray) $\pi_{E_0}^{-1}(E_0^\varepsilon)$ to $E_0$. [Figure: $U(\varepsilon)$ with $T_x(\varepsilon)$ collapsed to $x$ and $\pi_{E_0}^{-1}(E_0^\varepsilon)$ collapsed to $E_0$.] Now what is shown in [@TianXu Le. 3.1] is that the canonical group homomorphism $ \pi_1(V^0) \to \pi_1(U(\varepsilon)\setminus E), $ where $V^0:=\psi^{-1}(E_0) \setminus E=\pi_{E_0}^{-1}(E_0^\varepsilon) \setminus E$, is surjective. This is done by adding to $V^0$ closures in $U(\varepsilon)$ of the sets $V_A^0:=\psi^{-1}(A) \setminus E$ for all strata $A$, starting with those of highest dimension. Since all boundaries are collared, it is possible to invoke the Seifert-van Kampen theorem in order to show that $\pi_1(V^0) \to \pi_1(V^0 \cup \overline{V_A^0})$ is surjective, and so on. This is done by successive fiber bundle decompositions of $\overline{V_A^0}$. In order to really see what happens, we need a higher-dimensional picture with more strata.
[Figure: the divisor $E$ (a horizontal plane) with the strata $o$, $A$, and $E_0$, together with the boundaries of their tubular neighbourhoods.] Here, the horizontal plane depicts the divisor $E$, having three strata: the origin $o$, a one-dimensional stratum $A$, and the big open stratum $E_0$. We have $\{o\}<A<E_0$. Also the (boundaries of the) tubular neighbourhoods $T_N(\varepsilon)$ of these strata are depicted, and their union is the open neighbourhood $U(\varepsilon)$ of $E$. Now we have $$V:=\psi^{-1}(E_0)=\pi_{E_0}^{-1}(E_0^\varepsilon)=T_{E_0}(\varepsilon) \setminus (\overline{T_{A}(\varepsilon)} \cup \overline{T_{o}(\varepsilon)}),$$ which is depicted below, and in order to get $V^0$ we have to subtract $E$. In a first step, the closure of $V_A^0:=\psi^{-1}(A)\setminus E$ has to be added to $V^0$.
The Seifert-van Kampen theorem can be used to compute the fundamental group of the resulting space. Taking into account that all these spaces have collared boundaries, we can assume that the intersection of $V^0$ and $V_A^0$ is $\del V_A^0 \cap T_{E_0}(\varepsilon)$, which is denoted ${\mathcal{L}}_2$ in [@TianXu]. Then if $\pi_1({\mathcal{L}}_2) \to \pi_1(V_A^0)$ is surjective, so is $\pi_1(V^0)\to \pi_1(\psi^{-1}(A \cup E_0))$. But ${\mathcal{L}}_2$ is a fiber bundle over $A^\varepsilon$, with fiber $L_2$ homotopic to $\pi_A^{-1}(a) \cap \del V \setminus E$ for some $a \in A^\varepsilon$, which is depicted in the cross-section through $a$ below. [Figure: cross-section through $a$, showing the fiber $L_2 \simeq \pi_A^{-1}(a) \cap \del V \setminus E$.] On the other hand, $V_A^0$ is also a fiber bundle over $A^\varepsilon$, with fiber $L$ homotopic to $\pi_A^{-1}(a) \setminus E$ for some $a \in A^\varepsilon$. Thus if $\pi_1(L_2) \to \pi_1(L)$ is surjective, then so is $\pi_1({\mathcal{L}}_2) \to \pi_1(V_A^0)$. Now $L_2$ and $L$ have a fiber bundle structure as well. There is a morphism $\varphi_A \colon V_A:=\psi^{-1}(A) \to \DD$, where $\DD=\{z \in \CC;~|z|<\varepsilon\}$, such that $Z_{A,0}:=\varphi_A^{-1}(0)=V_A \cap E$ and $\varphi_A$ is a topological fibration over $\DD^0:=\DD \setminus \{0\}$, see [@TianXu p. 260]. Compare also the map $f$ in [@GM83 Sec. 6.1] and [@GM Part II, Sec. 6.13.1]. In our picture, we see that, approximately, the fibers $Z_{A,t}$ for $t \in \DD$ are horizontal sections of $V_A$.
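The passage from the bundles over $A^\varepsilon$ to their fibers can be made explicit by comparing the two homotopy exact sequences (a sketch, assuming the fibers are path-connected): $$\begin{array}{ccccc} \pi_1(L_2) & \longrightarrow & \pi_1({\mathcal{L}}_2) & \longrightarrow & \pi_1(A^\varepsilon) \\ \downarrow & & \downarrow & & \parallel \\ \pi_1(L) & \longrightarrow & \pi_1(V_A^0) & \longrightarrow & \pi_1(A^\varepsilon) \end{array}$$ Both rows are exact and the outer maps to $\pi_1(A^\varepsilon)$ are surjective. Given $x \in \pi_1(V_A^0)$, lift its image in $\pi_1(A^\varepsilon)$ to some $y \in \pi_1({\mathcal{L}}_2)$; then $x$ differs from the image of $y$ by an element coming from $\pi_1(L)$, which by assumption lifts to $\pi_1(L_2)$. Hence $\pi_1({\mathcal{L}}_2) \to \pi_1(V_A^0)$ is surjective.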
[Figure: cross-section through $a$, with the fibers $Z_{A,t}$, $t \in \DD$, drawn as horizontal sections of $V_A$.] So setting $Z_{a,t}:=Z_{A,t}\cap \pi_A^{-1}(a)$, we see that $L$ is a $Z_{a,t}$-bundle over $\DD^0$ and $L_2$ is a $\del Z_{a,t}$-bundle over $\DD^0$. So we have to show that $\pi_1(\del Z_{a,t}) \to \pi_1(Z_{a,t})$ is surjective. But $Z_{a,t}$ is homotopic to a collared affine analytic space of dimension $c$, where $c$ is the codimension of $A$ in $E$, see [@GM Part II, Prop. 6.13.5]. Since $c\geq 2$, it follows that $\pi_0(\del Z_{a,t}) \to \pi_0(Z_{a,t})$ is an isomorphism and $\pi_1(\del Z_{a,t}) \to \pi_1(Z_{a,t})$ is surjective. Repeating this procedure for all strata of $E$ proves Lemma 3.1 of [@TianXu].

Finiteness of the regional fundamental group {#sec:Le31mod}
============================================

In this section, we prove Theorem \[thm:globtoloc\], the global-to-local part of our induction, by modifying the proof of [@TianXu Le. 3.1] appropriately. As in Lemma 3.1 of [@TianXu], we start with an $n$-dimensional singularity $x \in X$ of a klt pair $(X,\Delta)$. We assume that the smooth locus of $(n-1)$-dimensional weakly Fano pairs has finite orbifold fundamental group. Let $f:Y \to X$ be a plt blowup extracting the Kollár component $E=f^{-1}(x)$. Consider a Whitney stratification of $Y$ such that the biggest stratum is $Y_\sm$ and, for $k\leq n-2$, the $k$-dimensional strata are the relative interiors - with respect to $Y_\sing$ - of the irreducible $k$-dimensional components of the singular locus $Y_\sing$. This induces a Whitney stratification of $E$ by intersecting each stratum with $E$.
Fix this stratification. Let $0<\varepsilon \ll 1$ and $U(\varepsilon)$ be a neighbourhood of $E$ as constructed in the previous section. Then $\pi_1^\reg(X,x)\cong \pi_1(U(\varepsilon)\setminus ( E \cup Y_\sing))$. Again as in the previous section, construct the retraction $\psi \colon U(\varepsilon)\to E$. Let $E_0:=E \setminus Y_\sing$ be the big open stratum of $E$. If we choose $\varepsilon$ small enough, it is clear that $Y_\sing \cap U(\varepsilon)$ lies in $U(\varepsilon) \setminus \psi^{-1}(E_0)$. The situation is depicted below. [Figure: three-dimensional picture of $U(\varepsilon)$, with the strata of $Y_\sing$ drawn as vertical sheets meeting the divisor $E$.] Here $Y_\sing$ has a $2$-dimensional stratum $Y_A$ that meets $E$ in the $1$-dimensional stratum $A$ and a $1$-dimensional stratum $Y_o$ that meets $E$ in the $0$-dimensional stratum $o$ ($A$ and $o$ as denoted in the last section). Now as in the proof of [@TianXu Le. 3.1], start with $V_{E_0}^0:=\psi^{-1}(E_0) \setminus E$. But instead of adding (the closures of) $V_N^0:=\psi^{-1}(N) \setminus E$ to $V_{E_0}^0$ for all strata $N$ of $E$, now we have to add $V_N^\sm:= V_N^0 \setminus Y_\sing$ in order to arrive at $U(\varepsilon)\setminus ( E \cup Y_\sing)$. Now everything works the same way as in [@TianXu Le. 3.1], *until* we arrive at the $Z_{N,t}$ for some stratum $N$; compare the explanations in the previous section. Here we now have to show that $\pi_1(\del Z_{n,t} \setminus Y_\sing) \to \pi_1(Z_{n,t} \setminus Y_\sing)$ is surjective for an element $n$ of the $\varepsilon$-interior $N^\varepsilon$ in order to finish the proof. In our picture, for $N=A$, the situation looks like this.
[Figure: cross-section for $N=A$, showing $\del Z_{a,t}$, the fiber $Z_{a,t}$, and the trace of $Y_\sing$.] Note that in general, the singular locus $Y_\sing$ can have nontrivial intersection with $\del Z_{N,t}$. This is the case, for example, for $N:=\{o\}$, the zero-dimensional stratum, where $\del Z_{o,t}$ has nontrivial intersection with the $2$-dimensional stratum $Y_A$ of $Y_\sing$. In [@TianXu Proof of Le. 3.1], it was argued that $Z_{a,t}$ is homeomorphic to an affine complex analytic space with collared boundary. This is due to [@GM Part II, Prop. 6.13.5]. Looking into the proof therein, we see that this statement is obtained by using Thom’s first isotopy lemma to show that $Z_{a,t}$ is homeomorphic to the intersection of $Z_{A,t}$ with smooth submanifolds of $M$ transversal to $A$ and a small Euclidean ball around $a$. But this is an even stronger statement. It means that via this homeomorphism, we can assume that $\{x_a\}:=Z_{a,t} \cap Y_A$ is a klt singularity in some $c$-dimensional variety $Z$, and $Z_{a,t}$ in turn is the intersection of $Z$ with a small ball around $x_a$. Note that $x_a$ does not have to be isolated, since the singular locus $(Z_{a,t})_\sing=Z_{a,t} \cap Y_\sing$ in general is bigger. Nevertheless, we know that $\del Z_{a,t} \setminus Y_\sing$ is nothing but the *regional link* (i.e. $\Link(x_a) \cap Z_\sm$) of $x_a$ and thus $\pi_1(\del Z_{a,t} \setminus Y_\sing)=\pi_1^\reg(Z,x_a)=\pi_1(Z_{a,t} \setminus Y_\sing)$. By repeating this procedure for every stratum $N$ of $E$, we arrive at the surjection $ \pi_1(V_{E_0}^0) \to \pi_1(U(\varepsilon)\setminus (E \cup Y_\sing))$ as wanted.
By Lemma 3.2 of [@TianXu] and the induction hypothesis, $\pi_1(V_{E_0}^0)$ is finite, so $\pi_1^\reg(X,x)$ is finite. By [@TianXu Le. 3.5], the regional orbifold fundamental group $\pi_1^\reg(X,\Delta,x)$ is then finite as well, and the proof is finished.
---
abstract: 'A simple quark-diquark model for the baryons is constructed as a partial solution to the well-known missing resonances problem. A complete classification of the baryonic states in the quark-diquark framework is given and the spectrum is calculated through a mass formula built to reproduce the rotational and vibrational Regge trajectories.'
author:
- Elena Santopinto
- Giuseppe Galatà
bibliography:
- 'Bibliografia.bib'
title: 'A quark-diquark baryon model'
---

\[sec:introduzione\]Introduction.
=================================

From the introduction of the quarks, the baryons have always been thought of as made up of three confined constituent quarks. The light baryons, in particular, have been ordered according to the approximate SU(3)$_{f}$ symmetry, which requires that the baryons belong to the multiplets $[1]_{A}\oplus [8]_{M}\oplus [8]_{M}\oplus [10]_{S}$. However, when we consider the spatially excited resonances, many more states are predicted than observed, and on the other hand, states with certain quantum numbers appear in the spectrum at excitation energies much lower than predicted [@Nakamura:2010zzi]. Considering only the non-strange sector up to an excitation energy of $2.41\; GeV$, on average about 45 $N$ states are predicted, but only 12 are established (four- or three-star) and 7 are tentative (two- or one-star) [@Nakamura:2010zzi]. This is the so-called missing resonances problem. One possible solution to this problem is to describe two correlated quarks inside the baryons by means of the diquark effective degree of freedom. In this case the number of predicted states is considerably smaller. There have been several studies, ranging from one-gluon-exchange models to lattice QCD calculations, that have investigated the possibility of diquark correlations and found that they are indeed attractive (see for example [@Jaffe:2004ph; @Wilczek:2004im; @Burden:1996nh; @Hecht:2002ej; @Alexandrou:2006cq]).
In this article we construct all the allowed states in the framework of the constituent quark-diquark model and we try to assign every known light baryon (with mass up to roughly $2\;GeV$) to the appropriate multiplet. Thinking of the quark-diquark system as a stringlike object analogous to the quark-antiquark mesons [@Iachello:1991re; @Iachello:1991fj], we can, moreover, write a simple mass formula, constructed with the aim of reproducing both rotational and vibrational Regge trajectories.

\[sec:modello\]A quark-diquark model for baryons.
=================================================

In this model we hypothesize that the baryons are a bound state of two elements, a constituent quark and a constituent diquark. We think of the diquark as two correlated quarks with no internal spatial excitations, or at least we hypothesize that their internal spatial excitations will be higher in energy than the mass scale of the resonances we will consider, i.e. light resonances with masses up to $2\;GeV$. Indeed, calculations in a simple, Goldstone-theorem-preserving, rainbow-ladder DSE model [@Burden:1996nh; @Hecht:2002ej] have confirmed that the first spatially excited diquark, the vector diquark, has a mass much higher than the ground states, the scalar and the axial-vector diquarks. Diquarks are made up of two identical fermions and so they have to satisfy the Pauli principle. Since we consider diquarks with no spatial excitations, their colour-spin-flavour wave functions must be antisymmetric. This limits the possible colour-spin-flavour representations to only $$\begin{aligned} & \text{colour \; in} ~ [\bar 3]~ \text{(AS),\; spin-flavour\;in} [21]_{sf}~\text{(S)} & \\ & \text{colour \;in }[6]~\text{(S)} \text{,\; spin-flavour\; in~} [15]_{sf}~ \text{(AS)}.
& \end{aligned}$$ The decomposition of these SU(6)$_{sf}$ representations in terms of SU(3)$_{f}\otimes $ SU(2)$_{s}$ is (in the notation $[\text{flavour\;repr.,\;spin}]$) $$\begin{aligned} & [21]_{sf}=[\bar 3,0]\oplus [6,1] & \\ & [15]_{sf}=[\bar 3,1]\oplus [6,0]. &\end{aligned}$$ Since the baryons must be colourless, we can allow only the diquark states in colour $[\bar 3]_{c}$: $$|[\bar 3]_{c},[\bar 3]_{f},0>, |[\bar 3]_{c},[6]_{f},1>.$$ The first of the above states is the scalar (or good) diquark, the second is the axial-vector (or bad) diquark. In the following we will represent scalar diquarks by their constituent quarks (denoted by $s$ if strange, $n$ otherwise) in square brackets, while axial-vector diquarks are in brace brackets. This choice is not accidental, because the explicit expression of the diquarks is the commutator of the constituent quarks for the scalar ones and the anticommutator for the axial-vector ones.

\[sec:Pauli\]Baryons and the Pauli principle.
=============================================

The Pauli principle implies that the baryons must be antisymmetric under exchange of each pair of quarks. First we describe the application of this principle to the baryons in the three-quark model, in order to then underline the differences with the quark-diquark model. In the three-quark model we can have the spin-flavor states $$[6]\otimes [6]\otimes [6]=[56]_{S}\oplus [70]_{M}\oplus [70]_{M}\oplus [20]_{A},$$ where the subscripts indicate the symmetry of the state. Since we have two different relative angular momenta, we can have symmetric, mixed and antisymmetric spatial parts, independently of the spatial model adopted. In order to obtain an antisymmetric baryon, we have to combine the spin-flavor-spatial part with the antisymmetric color part. Thus, we need a symmetric spin-flavor-spatial part, which can be obtained only through the combinations reported in the left side of Table \[tab:spinsaporespazio\].
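As a quick consistency check, the dimension counting behind these decompositions can be verified mechanically; the following sketch (plain Python written for this note, with the standard irrep dimensions hard-coded as inputs) confirms the bookkeeping:

```python
# Dimension bookkeeping for the SU(6)_sf decompositions quoted above.
# Irreps are labelled (flavour dimension, spin multiplicity 2S+1); all
# dimensions are the standard SU(2)/SU(3)/SU(6) values.

def dim(multiplets):
    """Total dimension of a sum of (flavour_dim, spin_multiplicity) irreps."""
    return sum(f * s for f, s in multiplets)

# Diquark spin-flavour: [21] = [3bar, S=0] + [6, S=1], [15] = [3bar, S=1] + [6, S=0]
assert dim([(3, 1), (6, 3)]) == 21
assert dim([(3, 3), (6, 1)]) == 15

# Quark-diquark: [21] x [6] = [56]_S + [70]_M
assert 21 * 6 == 56 + 70

# Three-quark: [6] x [6] x [6] = [56]_S + [70]_M + [70]_M + [20]_A
assert 6 ** 3 == 56 + 70 + 70 + 20

# [56] = [10,3/2] + [8,1/2];  [70] = [10,1/2] + [8,1/2] + [8,3/2] + [1,1/2]
assert dim([(10, 4), (8, 2)]) == 56
assert dim([(10, 2), (8, 2), (8, 4), (1, 2)]) == 70
print("all dimension checks pass")
```

In particular, the counting makes explicit that the $[20]_{A}$ multiplet of the three-quark product does not appear in $[21]\otimes[6]$, which is one origin of the reduced state count.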
------------- ------- ------------- -------
spin-flavor   space   spin-flavor   space
$[56]_{S}$    $S$     $[56]_{S}$    $S$
$[70]_{M}$    $M$     $[70]_{M}$    $M$
$[20]_{A}$    $A$
------------- ------- ------------- -------

: Allowed spin-flavor and spatial combinations in the three-quark model (left) and in the quark-diquark model (right).[]{data-label="tab:spinsaporespazio"}

In the quark-diquark model we can have only the spin-flavor states $$[21]\otimes [6]=[56]_{S}\oplus [70]_{M}.$$ Since in the quark-diquark model we freeze one spatial degree of freedom, thus fixing one of the two relative angular momenta to zero and letting the other vary, we can have only symmetric (if the relative orbital angular momentum $L$ is even) or mixed (if $L$ is odd) spatial parts. We report in the right side of Table \[tab:spinsaporespazio\] the allowed spin-flavor-space combinations. Hence the sequence of states would be $$(SU(6)_{sf},L^{P})=([56],0^{+}),([70],1^{-}),([56],2^{+}),$$ and so on.

\[sec:formulamassa\]The mass formula
====================================

We write for the baryons in the quark-diquark model a simple mass formula which reproduces the Regge trajectories, inspired by the algebraic models for mesons and baryons [@Santopinto:2006my; @Iachello:1991re; @Iachello:1991fj; @Bijker:1994yr; @Bijker:1996tr; @Bijker:2000gq]: $$\begin{aligned} \label{eq:formulamassaqdq} & M^{2}=\Lambda +b\cdot L+c\cdot S(S+1)+d\cdot J+e\cdot I(I+1)+n\cdot \nu +g\cdot C_{2}(SU(3))+h\cdot C_{2}(SU(6))+& \nonumber \\ &+(M_{0}+N_{s}\cdot \Delta M_{s}+N_{[n,s]}\cdot \Delta M_{[n,s]}+N_{\{ n,n \}}\cdot \Delta M_{\{ n,n\}}+N_{\{ n,s \}}\cdot \Delta M_{\{ n,s\}}+N_{\{ s,s \}}\cdot \Delta M_{\{ s,s\}})^{2}, &\end{aligned}$$ where $\Lambda $ is an overall scale constant taken equal to $1\;GeV^{2}$, $M_{0}$ is the sum of the masses of the non-strange scalar diquark $[n,n]$ and of the non-strange quark, $N_{s}$ and $\Delta M_{s}$ are the number of strange quarks and the mass difference between the strange quark and the
non-strange one, $N_{[n,s]}$ and $\Delta M_{[n,s]}$ are the number of strange scalar diquarks and the mass difference between the strange scalar diquark and the non-strange one, $N_{\{ n,n \}}$ and $\Delta M_{\{ n,n \}}$ are the number of non-strange axial-vector diquarks and the mass difference between the non-strange axial-vector diquark and the non-strange scalar diquark, $N_{\{ n,s \}}$ and $\Delta M_{\{ n,s \}}$ are the number of strange axial-vector diquarks and the mass difference between the strange axial-vector diquark and the non-strange scalar diquark, $N_{\{ s,s \}}$ and $\Delta M_{\{ s,s\}}$ are the number of double strange axial-vector diquarks and the mass difference between the double strange axial-vector diquark and the non-strange scalar diquark, $C_{2}(SU(3)_{f})$ and $C_{2}(SU(6)_{sf})$ are the quadratic Casimirs of flavour SU(3)$_{f}$ and spin-flavour SU(6)$_{sf}$ respectively, $L$ is the relative orbital angular momentum, $S$ the total spin, $J$ the total angular momentum and $\nu $ the vibrational quantum number.

![image](grafnonstrani.pdf){width="10cm"} ![image](grafstrani.pdf){width="8cm"}

\[sec:numeriquantici\]Quantum numbers.
======================================

In order to use the mass formula (\[eq:formulamassaqdq\]), it is necessary to assign to every baryon its quantum numbers, in particular those, like $L$ and $S$, not determined by the experiments. For this purpose we consider only well-known baryons, namely the three- and four-star baryons. We classify the light baryons following three guidelines. First of all we must obviously respect the quantum numbers that can be measured experimentally (like $J$, $P$, etc.).
Then we must respect the constraint related to the diquark spin-flavor states: $$\begin{aligned} & [21]\otimes [6]=([\bar 3,0]\oplus [6,1])\otimes [3,\frac{1}{2}]= & \nonumber \\ & =([1,\frac{1}{2}]\oplus [8,\frac{1}{2}])\oplus ([8,\frac{3}{2}]\oplus [8,\frac{1}{2}]\oplus [10,\frac{3}{2}]\oplus [10,\frac{1}{2}]).\end{aligned}$$ As we can see, only the baryons in a flavor octet with spin $\frac{1}{2}$ can be made up of either the scalar or the axial-vector diquark, while the baryons in a flavor singlet can be composed only of scalar diquarks and those in a flavor decuplet only of axial-vector diquarks. Finally we must impose that the spin-flavor-space part be symmetric. As we have seen in section \[sec:Pauli\], the consequence is that we must respect the sequence of states $([56],0^{+}),([70],1^{-}),([56],2^{+}),...$, where $$\begin{aligned} & [56]=[10,\frac{3}{2}]\oplus [8,\frac{1}{2}] & \nonumber \\ & [70]=[10,\frac{1}{2}]\oplus [8,\frac{1}{2}]\oplus [8,\frac{3}{2}]\oplus [1,\frac{1}{2}]. & \nonumber \end{aligned}$$ This means, for example, that we cannot have a flavor singlet with $L=0$.

$J$                $L^{P}$        $S_{D}$   multiplets ($[SU(3)_{f},Spin]$)
------------------ -------------- --------- --------------------------------------
$2m+\frac{1}{2}$   $(2m)^{+}$     0         $[8,\frac{1}{2}]$
                   $(2m+1)^{-}$   0         $[8,\frac{1}{2}]$,$[1,\frac{1}{2}]$
                   $(2m)^{+}$     1         $[8,\frac{1}{2}]$
                   $(2m+1)^{-}$   1         $[8,\frac{1}{2}]$,$[10,\frac{1}{2}]$
                   $(2m-1)^{-}$   1         $[8,\frac{3}{2}]$
                   $(2m)^{+}$     1         $[10,\frac{3}{2}]$
                   $(2m+1)^{-}$   1         $[8,\frac{3}{2}]$
                   $(2m+2)^{+}$   1         $[10,\frac{3}{2}]$
$2m+\frac{3}{2}$   $(2m+1)^{-}$   0         $[8,\frac{1}{2}]$,$[1,\frac{1}{2}]$
                   $(2m+2)^{+}$   0         $[8,\frac{1}{2}]$
                   $(2m+1)^{-}$   1         $[8,\frac{1}{2}]$,$[10,\frac{1}{2}]$
                   $(2m+2)^{+}$   1         $[8,\frac{1}{2}]$
                   $(2m)^{+}$     1         $[10,\frac{3}{2}]$
                   $(2m+1)^{-}$   1         $[8,\frac{3}{2}]$
                   $(2m+2)^{+}$   1         $[10,\frac{3}{2}]$
                   $(2m+3)^{-}$   1         $[8,\frac{3}{2}]$
------------------ -------------- --------- --------------------------------------

: \[Classgenerale\]General classification of the baryon multiplets in the quark-diquark model.
$m$ is an integer $\geq 0$, $S_{D}$ is the diquark spin (0 is the scalar diquark, 1 the axial-vector diquark). For $J=\frac{1}{2}$ the states $[8,\frac{3}{2}]$ with $L^{P}=(2m-1)^{-}$ and $[10,\frac{3}{2}]$ with $L^{P}=(2m)^{+}$ are not allowed. The energy splittings and the actual ordering of the various multiplets will obviously depend on the details of the particular model used. singlet ----------------------------------- --------------- ---------------- ------------------ --------------- ----------------- ---------------- --------------- ----------------- ------------------ $J^{P},L,S,S_{D}$ $\frac{1}{2}$ $1$ $0$ $\frac{1}{2}$ $\frac{3}{2}$ $1$ $\frac{1}{2}$ $0$ $0$ $\frac{1}{2}^{+},0,\frac{1}{2},0$ $N(939)$ $\Sigma(1189)$ $\Lambda (1116)$ $\Xi (1318)$ no no no no no $\frac{1}{2}^{+},0,\frac{1}{2},1$ missing missing missing missing no no no no no $\frac{1}{2}^{+},2,\frac{3}{2},1$ no no no no $\Delta (1910)$ missing missing missing no $\frac{1}{2}^{-},1,\frac{1}{2},0$ $N(1535)$ $\Sigma(1620)$ $\Lambda (1670)$ missing no no no no $\Lambda (1405)$ $\frac{1}{2}^{-},1,\frac{1}{2},1$ missing missing missing missing $\Delta (1620)$ missing missing missing no $\frac{1}{2}^{-},1,\frac{3}{2},1$ $N(1650)$ missing $\Lambda (1800)$ missing no no no no no $\frac{3}{2}^{+},2,\frac{1}{2},0$ $N(1720)$ missing $\Lambda (1890)$ missing no no no no no $\frac{3}{2}^{+},0,\frac{3}{2},1$ no no no no $\Delta (1232)$ $\Sigma(1385)$ $\Xi (1530)$ $\Omega (1672)$ no $\frac{3}{2}^{+},2,\frac{1}{2},1$ missing missing missing missing no no no no no $\frac{3}{2}^{+},2,\frac{3}{2},1$ no no no no $\Delta (1920)$ missing missing missing no $\frac{3}{2}^{-},1,\frac{1}{2},0$ $N(1520)$ $\Sigma(1670)$ $\Lambda (1690)$ $\Xi (1820)$ no no no no $\Lambda (1520)$ $\frac{3}{2}^{-},1,\frac{1}{2},1$ missing missing missing missing $\Delta (1700)$ missing missing missing no $\frac{3}{2}^{-},1,\frac{3}{2},1$ $N(1700)$ $\Sigma(1940)$ $\Lambda (1960)$ missing no no no no no $\frac{3}{2}^{-},3,\frac{3}{2},1$ 
missing missing missing missing no no no no no $\frac{5}{2}^{+},2,\frac{1}{2},0$ $N(1680)$ $\Sigma(1915)$ $\Lambda (1820)$ $\Xi (2030)$ no no no no no $\frac{5}{2}^{+},2,\frac{1}{2},1$ missing missing $\Lambda (2110)$ missing no no no no no $\frac{5}{2}^{+},2,\frac{3}{2},1$ no no no no $\Delta (1905)$ missing missing missing no $\frac{5}{2}^{+},4,\frac{3}{2},1$ no no no no missing missing missing missing no $\frac{5}{2}^{-},3,\frac{1}{2},0$ missing missing missing missing no no no no missing $\frac{5}{2}^{-},3,\frac{1}{2},1$ missing missing missing missing $\Delta (1930)$ missing missing missing no $\frac{5}{2}^{-},1,\frac{3}{2},1$ $N(1675)$ $\Sigma(1775)$ $\Lambda (1830)$ missing no no no no no $\frac{5}{2}^{-},3,\frac{3}{2},1$ missing missing missing missing no no no no no

In Table \[Classgenerale\] we report a general classification, valid for all quantum numbers, of the baryon multiplets in the quark-diquark model, while in Table \[tab:multiplettibarionicidiq\] we assign the known light baryons to each multiplet. The missing and the not-allowed states are reported in the table. These tables are in part based on the analogous tables compiled by Bijker, Iachello and Leviatan [@Bijker:2000gq], Selem and Wilczek [@Selem:2006nd] and the PDG [@Nakamura:2010zzi]. It is important to underline that we lack a sure criterion to assign the diquark content to the baryons (i.e. we cannot say if a particular baryon should be made up of a scalar, an axial-vector or even a mixture of the two diquarks). We found only two sure elements on which the choice can be based:

- Isospin and strangeness: We must remember that every baryon family has a definite isospin and strangeness: $N$ has isospin $I=\frac{1}{2}$ and strangeness $\mathcal{S}=0$, $\Delta $ has $I=\frac{3}{2}$ and $\mathcal{S}=0$, $\Lambda $ has $I=0$ and $\mathcal{S}=-1$, $\Sigma $ has $I=1$ and $\mathcal{S}=-1$, $\Xi $ has $I=\frac{1}{2}$ and $\mathcal{S}=-2$, $\Omega $ has $I=0$ and $\mathcal{S}=-3$.
Thus, we must combine the diquark and the quark to reproduce the isospin and strangeness of the baryon. But we can easily find that $[n,n]$ has $I=0$ and $\mathcal{S}=0$, $[n,s]$ has $I=\frac{1}{2}$ and $\mathcal{S}=-1$, $\{ n,n\}$ has $I=1$ and $\mathcal{S}=0$, $\{ n,s\}$ has $I=\frac{1}{2}$ and $\mathcal{S}=-1$ and $\{ s,s\}$ has $I=0$ and $\mathcal{S}=-2$. Combining the quark and the diquark, we find the possible diquark content. $N$ can be either $[n,n]n$ or $\{ n,n\}n$, $\Delta $ can be only $\{ n,n\}n$, $\Lambda $ can be $[n,n]s$ or $[n,s]n$ if they are in a flavour singlet, otherwise they can be $[n,n]s$, $[n,s]n$ or $\{ n,s\}n$ if they belong to a flavour octet, $\Sigma $ can be $[n,s]n$, $\{ n,n\} s$ or $\{ n,s\}n$, $\Xi $ can be $[n,s]s$, $\{ n,s\} s$ or $\{ s,s\}n$ and finally $\Omega $ can be only $\{ s,s\}s$.

- Diquark masses: We can say, following all the previous studies about the diquarks (as for example Refs. [@Jaffe:2004ph; @Wilczek:2004im; @Alexandrou:2006cq]), that the axial-vector diquark should be heavier than the scalar one. Thus, if we have two baryons with similar quantum numbers but different masses, we will assign the axial-vector diquark to the heavier one.

In this first attempt we choose to assign to all the baryons belonging to the same flavour multiplet an analogous diquark content (i.e. if we establish that a baryon should have, for example, a scalar diquark, then all the other baryons of the same multiplet will have a scalar diquark). In this way we think that all the mass differences inside a baryon multiplet should be attributed to the different strangeness of the various baryons.
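The isospin-strangeness bookkeeping of the first criterion can be checked mechanically. The following sketch (our own helper code, not from the paper) couples each diquark to each quark by the usual SU(2) rules and recovers the lists of possible compositions given above:

```python
from fractions import Fraction as F

# Each constituent carries (isospin, strangeness), as listed in the text;
# the coupled isospin must lie in {|I1-I2|, ..., I1+I2} in integer steps.
diquarks = {"[n,n]": (F(0), 0), "[n,s]": (F(1, 2), -1),
            "{n,n}": (F(1), 0), "{n,s}": (F(1, 2), -1), "{s,s}": (F(0), -2)}
quarks = {"n": (F(1, 2), 0), "s": (F(0), -1)}
families = {"N": (F(1, 2), 0), "Delta": (F(3, 2), 0), "Lambda": (F(0), -1),
            "Sigma": (F(1), -1), "Xi": (F(1, 2), -2), "Omega": (F(0), -3)}

def compositions(I, S):
    """Diquark-quark pairs with total strangeness S whose coupled isospin can equal I."""
    out = []
    for d, (Id, Sd) in diquarks.items():
        for q, (Iq, Sq) in quarks.items():
            in_range = abs(Id - Iq) <= I <= Id + Iq
            integer_step = (I - abs(Id - Iq)).denominator == 1
            if Sd + Sq == S and in_range and integer_step:
                out.append(d + q)
    return out

for name, (I, S) in families.items():
    print(name, compositions(I, S))
```

Running it prints, e.g., `N ['[n,n]n', '{n,n}n']` and `Omega ['{s,s}s']`, matching the lists above; the additional flavour-singlet restriction on $\Lambda $ is not visible at this level, since it comes from the SU(3)$_{f}$ coupling rather than from isospin alone.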
\[sec:risultati\]Fit and results ================================ [|c|c|c|c|c|c|c|c|c|]{} Resonances & $J$ & $S$ & composition & $SU(3)_{f}$ multiplet & $SU(6)_{sf}$ multiplet & $\nu $ & $M(exp)$ & $M(theo)$\ \ $N(939)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,n]n$ & $8$ & $56$ & 0 & $0.939\pm 0.005$ & $0.930$\ $\Sigma (1189)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,s]n$ & $8$ & $56$ & 0 & $1.189\pm 0.005$ & $1.189$\ $\Xi (1318)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,s]s$ & $8$ & $56$ & 0 & $1.315\pm 0.005$ & $1.332$\ $\Delta (1232)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $10$ & $56$ & 0 & $1.231-1.233$ & $1.231$\ $\Omega (1672)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{s,s\}s$ & $10$ & $56$ & 0 & $1.672\pm 0.005$ & $1.672$\ $N(1440)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,n]n$ & $8$ & $56$ & 1 & $1.420-1.470$ & $1.495$\ $\Sigma (1660)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,s]n$ & $8$ & $56$ & 1 & $1.630-1.690$ & $1.668$\ $N(1710)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $\{n,n\}n$ & $8$ & $56$ & 1 & $1.680-1.740$ & $1.606$\ $\Lambda (1810)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $\{n,s\}n$ & $8$ & $56$ & 1 & $1.750-1.850$ & $1.774$\ $\Delta (1600)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $10$ & $56$ & 1 & $1.550-1.700$ & $1.699$\ \ $\Lambda (1116)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,n]s\;(?)$ & $8$ & $56$ & 0 & $1.116\pm 0.005$ & $1.087$\ $\Sigma (1385)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{n,n\}s\;(?)$ & $10$ & $56$ & 0 & $1.383\pm 0.005$ & $1.359$\ $\Xi (1530)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{s,s\}n\;(?)$ & $10$ & $56$ & 0 & $1.532\pm 0.005$ & $1.537$\ $\Lambda (1600)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,n]s\; (?)$ & $8$ & $56$ & 1 & $1.560-1.700$ & $1.598$\ [|c|c|c|c|c|c|c|c|c|]{} Resonances & $J$ & $S$ & composition & $SU(3)_{f}$ multiplet & $SU(6)_{sf}$ multiplet & $\nu $ & $M(exp)$ & $M(theo)$\ \ $N(1535)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,n]n$ & $8$ & $70$ & 0 & $1.525-1.545$ & $1.529$\ $N(1520)$ & $\frac{3}{2}$ & $\frac{1}{2}$ & $[n,n]n$ & $8$ & 
$70$ & 0 & $1.515-1.525$ & $1.527$\ $\Sigma (1670)$ & $\frac{3}{2}$ & $\frac{1}{2}$ & $[n,s]n$ & $8$ & $70$ & 0 & $1.665-1.685$ & $1.697$\ $\Xi (1820)$ & $\frac{3}{2}$ & $\frac{1}{2}$ & $[n,s]s$ & $8$ & $70$ & 0 & $1.818-1.828$ & $1.800$\ $N(1650)$ & $\frac{1}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $8$ & $70$ & 0 & $1.645-1.670$ & $1.678$\ $\Lambda (1800)$ & $\frac{1}{2}$ & $\frac{3}{2}$ & $\{n,s\}n$ & $8$ & $70$ & 0 & $1.720-1.850$ & $1.840$\ $N(1700)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $8$ & $70$ & 0 & $1.650-1.750$ & $1.676$\ $N(1675)$ & $\frac{5}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $8$ & $70$ & 0 & $1.670-1.680$ & $1.675$\ $\Lambda (1830)$ & $\frac{5}{2}$ & $\frac{3}{2}$ & $\{n,s\}n$ & $8$ & $70$ & 0 & $1.810-1.830$ & $1.837$\ $\Delta (1620)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $\{n,n\}n$ & $10$ & $70$ & 0 & $1.600-1.660$ & $1.690$\ $\Delta (1700)$ & $\frac{3}{2}$ & $\frac{1}{2}$ & $\{n,n\}n$ & $10$ & $70$ & 0 & $1.670-1.750$ & $1.689$\ \ $\Lambda (1405)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,n]s\;(?)$ & $8$ & $70$ & 0 & $1.402-1.410$ & $1.593$\ $\Lambda (1520)$ & $\frac{3}{2}$ & $\frac{1}{2}$ & $[n,n]s\;(?)$ & $8$ & $70$ & 0 & $1.520\pm 0.005$ & $1.591$\ $\Lambda (1670)$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $[n,s]n\;(?)$ & $8$ & $70$ & 0 & $1.660-1.680$ & $1.687$\ $\Lambda (1690)$ & $\frac{3}{2}$ & $\frac{1}{2}$ & $[n,s]n\;(?)$ & $8$ & $70$ & 0 & $1.685-1.695$ & $1.685$\ $\Sigma (1750)$ & $\frac{1}{2}$ & $\frac{3}{2}$ & $\{n,n\}s\;(?)$ & $8$ & $70$ & 0 & $1.730-1.800$ & $1.753$\ $\Lambda (1960)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{n,s\}n$ & $8$ & $70$ & 0 & & $1.839$\ $\Sigma (1940)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{n,n\}s\;(?)$ & $8$ & $70$ & 0 & $1.900-1.950$ & $1.850$\ $\Sigma (1775)$ & $\frac{5}{2}$ & $\frac{3}{2}$ & $\{n,n\}s\;(?)$ & $8$ & $70$ & 0 & $1.770-1.780$ & $1.788$\ [|c|c|c|c|c|c|c|c|c|]{} Resonances & $J$ & $S$ & composition & $SU(3)_{f}$ multiplet & $SU(6)_{sf}$ multiplet & $\nu $ & $M(exp)$ & $M(theo)$\ \ $N(1720)$ & $\frac{3}{2}$ & 
$\frac{1}{2}$ & $[n,n]n$ & $8$ & $56$ & 0 & $1.700-1.750$ & $1.697$\ $N(1680)$ & $\frac{5}{2}$ & $\frac{1}{2}$ & $[n,n]n$ & $8$ & $56$ & 0 & $1.680-1.690$ & $1.695$\ $\Sigma (1915)$ & $\frac{5}{2}$ & $\frac{1}{2}$ & $[n,s]n$ & $8$ & $56$ & 0 & $1.900-1.935$ & $1.850$\ $\Lambda (2110)$ & $\frac{5}{2}$ & $\frac{3}{2}$ & $\{n,s\}n$ & $8$ & $56$ & 0 & $2.090-2.140$ & $1.981$\ $\Delta (1910)$ & $\frac{1}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $10$ & $56$ & 0 & $1.870-1.920$ & $1.882$\ $\Delta (1920)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $10$ & $56$ & 0 & $1.900-1.970$ & $1.881$\ $\Delta (1905)$ & $\frac{5}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $10$ & $56$ & 0 & $1.865-1.915$ & $1.879$\ $\Delta (1950)$ & $\frac{7}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $10$ & $56$ & 0 & $1.915-1.950$ & $1.878$\ \ $\Lambda (1890)$ & $\frac{3}{2}$ & $\frac{1}{2}$ & $[n,s]n\;(?)$ & $8$ & $56$ & 0 & $1.850-1.910$ & $1.841$\ $\Lambda (1820)$ & $\frac{5}{2}$ & $\frac{1}{2}$ & $[n,s]n\;(?)$ & $8$ & $56$ & 0 & $1.815-1.825$ & $1.839$\ $\Sigma (1880)$ & $\frac{1}{2}$ & $\frac{3}{2}$ & $\{n,n\}s\;(?)$ & $8$ & $56$ & 0 & $1.880$ & $1.939$\ $N(2000)$ & $\frac{5}{2}$ & $\frac{3}{2}$ & $\{n,n\}n$ & $8$ & $56$ & 0 & & $1.831$\ $\Sigma (2080)$ & $\frac{3}{2}$ & $\frac{3}{2}$ & $\{n,s\}n\;(?)$ & $10$ & $56$ & 0 & $2.080$ & $2.022$\ $\Sigma (2070)$ & $\frac{5}{2}$ & $\frac{3}{2}$ & $\{n,s\}n\;(?)$ & $10$ & $56$ & 0 & $2.070$ & $2.020$\ $\Sigma (2030)$ & $\frac{7}{2}$ & $\frac{3}{2}$ & $\{n,s\}n\;(?)$ & $10$ & $56$ & 0 & $2.025-2.040$ & $2.019$\ We now determine the parameters of the mass formula (\[eq:formulamassaqdq\]) through a fit. We excluded from the fit the states whose diquark content cannot be determined following the criteria described in section \[sec:numeriquantici\]. These states have a question mark next to their diquark content in Tables \[tab:risonanzebarionidiqL0\], \[tab:risonanzebarionidiqL1\] and \[tab:risonanzebarionidiqL2\].
The results of the fit are: $$\begin{aligned}
M_{0} & = & (1.197\pm 0.015) \; GeV\\
\Delta M_{s} & = & (0.132\pm 0.007) \; GeV\\
\Delta M_{[n,s]} & = & (0.201\pm 0.005) \; GeV\\
\Delta M_{\{ n,n\}} & = & (0.135\pm 0.024) \; GeV\\
\Delta M_{\{ n,s\}} & = & (0.339\pm 0.023) \; GeV\\
\Delta M_{\{ s,s\}} & = & (0.441\pm 0.019) \; GeV\\
b & = & (1.011\pm 0.016) \; GeV^2\\
c & = & (0.046\pm 0.022) \; GeV^2\\
d & = & (-0.006\pm 0.015) \; GeV^2\\
e & = & (0.020\pm 0.008) \; GeV^2\\
n & = & (1.37\pm 0.05) \; GeV^2\\
g & = & (0.039\pm 0.007) \; GeV^2\\
h & = & (-0.154\pm 0.004) \; GeV^2\end{aligned}$$

\[sec:discussione\]Critical discussion of the results.
======================================================

The principal feature of our quark-diquark model is the drastic cut in the number of baryonic states. In fact, while all the existing baryonic resonances still fit well in our scheme, we have far fewer missing states than a standard three-quark constituent model. Nevertheless, quite a few missing states still remain, and these should be further investigated both from a theoretical and an experimental point of view.

| $M_{[n,n]}$ | $M_{\{ n,n \} }-M_{[n,n]}$ | $M_{[n,s]}-M_{[n,n]}$ | $M_{\{ n,s \} }-M_{[n,s]}$ | $M_{\{ n,s \} }-M_{\{ n,n \} }$ | $M_{\{ s,s \} }-M_{\{ n,s \} }$ | Source |
|---|---|---|---|---|---|---|
| 0.688 | 0.202 | 0.272 | - | - | - | Maris [@Maris:2002yu; @Maris:2004ig] |
| - | 0.29 | - | 0.11 | - | - | Wilczek [@Wilczek:2004im] |
| - | 0.210 | - | 0.150 | - | - | Jaffe [@Jaffe:2004ph] |
| 0.595 | 0.205 | 0.240 | 0.140 | 0.175 | - | Lichtenberg [@Lichtenberg:1996fi] |
| 0.74 | 0.21 | 0.14 | 0.17 | 0.10 | 0.08 | Roberts [@Burden:1996nh; @Hecht:2002ej] |
| - | 0.135 | 0.201 | 0.138 | 0.204 | 0.101 | This work |

The mass formula resulting from the fit describes the spectrum reasonably well, with a $\chi ^{2}/n.d.f=8.75$.
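As a quick sanity check on the fitted values, the orbital Regge slope $\alpha = b + d$ can be recomputed directly from the parameters above (a minimal sketch; the propagated uncertainty assumes uncorrelated errors, which the fit output does not guarantee):

```python
import math

# Fitted parameters from the text (GeV^2).
b, db = 1.011, 0.016
d, dd = -0.006, 0.015

alpha = b + d                       # orbital Regge trajectory slope
dalpha = math.sqrt(db**2 + dd**2)   # naive uncorrelated-error propagation

print(f"alpha = {alpha:.3f} +/- {dalpha:.3f} GeV^2")
# prints: alpha = 1.005 +/- 0.022 GeV^2
```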
The resulting orbital and vibrational Regge trajectory slopes, $\alpha =b+d=1.005\;GeV^{2}$ and $n=1.37\;GeV^{2}$, agree quite well with the theoretical expectations of a string model [@Santopinto:2006my; @'tHooft:1974hx; @Johnson:1975sg; @Iachello:1991re; @Iachello:1991fj]. The important parameters of constituent quark models are not so much the absolute masses of the constituent quarks, which can vary greatly with the model used, as the mass differences between these constituents, which tend to be more stable and can be compared with results obtained both with constituent models and with other approaches, such as QCD-inspired and lattice ones. Our value for the mass difference between the strange and the non-strange quark, $\Delta M_{s}$, is compatible with the estimates of constituent quark models and with the PDG value for the current-quark mass difference [@Nakamura:2010zzi]. The difference $\Delta M_{\{ n,n\}}=M_{\{ n,n \} }-M_{[n,n]}$, as well as the mass differences between $[n,s]$ and $[n,n]$, between $\{ n,s\}$ and $\{ n,n\}$ and between $\{ n,s \}$ and $[n,s]$, has been compared with the predictions of the main other models for the constituent diquark (see Table \[tab:massediquarkaltri\]). Apart from $\Delta M_{\{ n,n\}}$, which is somewhat smaller than in the other models, all the mass differences lie in the same range as those of the other works. We have managed to describe the baryon spectrum in a sufficiently satisfactory way with a very simple mass formula, based essentially on only two elements: the constituent quark-diquark structure of the baryons and the Regge trajectories. We can therefore conclude that these two elements should be the basis of future, more advanced investigations.
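The quoted $\chi ^{2}/n.d.f$ can be illustrated with a short computation. The text does not spell out the uncertainty assigned to each state, so the sketch below makes a simplifying assumption of ours (midpoint of the quoted experimental range as the measurement, half-width as its error); the sample rows are taken from the tables above and the function name is ours:

```python
def chi2_per_dof(states, n_params):
    """chi^2 / n.d.f. for (exp_low, exp_high, theory) mass triples,
    taking the range midpoint as the measurement and the half-width
    as its uncertainty (an illustrative choice, not the paper's)."""
    chi2 = 0.0
    for lo, hi, theo in states:
        mid, sigma = (lo + hi) / 2, (hi - lo) / 2
        chi2 += ((theo - mid) / sigma) ** 2
    return chi2 / (len(states) - n_params)

# Two sample rows from the tables (GeV): N(1675) and Delta(1950).
sample = [(1.670, 1.680, 1.675), (1.915, 1.950, 1.878)]
print(chi2_per_dof(sample, n_params=0))
```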
--- abstract: | Industrial facilities and critical infrastructures are transforming into “smart" environments that dynamically adapt to external events. The result is an ecosystem of heterogeneous physical and cyber components integrated in cyber-physical systems which are more and more exposed to *cyber-physical attacks*, *i.e.*, security breaches in cyberspace that adversely affect the physical processes at the core of the systems. We provide a formal *compositional metric* to estimate the *impact* of cyber-physical attacks targeting sensor devices of *I[o]{}T systems* formalised in a simple extension of Hennessy and Regan’s *Timed Process Language*. Our *impact metric* relies on a discrete-time generalisation of Desharnais et al.’s *weak bisimulation metric* for concurrent systems. We show the adequacy of our definition on two different attacks on a simple surveillance system. author: - Ruggero Lanotte - Massimo Merro - Simone Tini bibliography: - 'IoT\_bib.bib' title: 'Towards a formal notion of impact metric for cyber-physical attacks (full version)[^1]' --- Introduction ============ The *Internet of Things* (IoT) is heavily affecting our daily lives in many domains, ranging from tiny wearable devices to large industrial systems with thousands of heterogeneous cyber and physical components that interact with each other. *Cyber-Physical Systems* ([CPS]{}[s]{}) are integrations of networking and distributed computing systems with physical processes, where feedback loops allow the latter to affect the computations of the former and vice versa. Historically, [CPS]{}[s]{} relied on proprietary technologies and were implemented as stand-alone networks in physically protected locations. 
However, the growing connectivity and integration of these systems have triggered a dramatic increase in the number of *cyber-physical attacks* [@CPS-Book2015], *i.e.*, security breaches in cyberspace that adversely affect the physical processes, *e.g.*, manipulating *sensor readings* and, in general, influencing physical processes to bring the system into a state desired by the attacker. Cyber-physical attacks are complex and challenging as they usually cross the boundary between cyberspace and the physical world, possibly more than once [@GGIKLW2015]. Some notorious examples are: (i) the *Stuxnet* worm, which reprogrammed PLCs of nuclear centrifuges in Iran [@stuxnet], (ii) the attack on a sewage treatment facility in Queensland, Australia, which manipulated the SCADA system to release raw sewage into local rivers [@SlMi2007], and (iii) the recent *BlackEnergy* cyber-attack on the Ukrainian power grid, again compromising the SCADA system [@ICS15]. What these systems have in common is that they are all safety critical, and failures may cause catastrophic consequences. Thus, the concern for consequences at the physical level puts *[CPS]{} security* apart from standard *IT security*. *Timing* is particularly relevant in [CPS]{} security because the physical state of a system changes continuously over time and, as the system evolves in time, some states might be more vulnerable to attacks than others [@KrCa2013]. For example, an attack launched when the target state variable reaches a local maximum (or minimum) may have a great impact on the whole system behaviour [@BestTime2014]. The *duration of the attack* is also an important parameter to take into consideration in order to achieve a successful attack. For example, it may take minutes for a chemical reactor to rupture [@chemical-reactor], hours to heat a tank of water or burn out a motor, and days to destroy centrifuges [@stuxnet].
Actually, the estimation of the *impact* of cyber-physical attacks on the target system is crucial when protecting [CPS]{}[s]{} [@GeKiHa2015]. For instance, in industrial CPSs, before taking any countermeasure against an attack, engineers first try to estimate the impact of the attack on the system functioning (e.g., performance and security) and weigh it against the cost of stopping the plant. If this cost is higher than the damage caused by the attack (as is sometimes the case), then engineers might actually decide to let the system continue its activities even under attack. Thus, once an attack is detected, *impact metrics* are necessary to quantify the perturbation introduced in the physical behaviour of the system under attack. The *goal* of this paper is to lay the theoretical foundations for formal instruments that precisely define the notion of impact of cyber-physical attacks targeting physical devices, such as *sensor devices* of IoT systems. For that we rely on a timed generalisation of *bisimulation metrics* [@DJGP02; @DGJP04; @BW05] to compare the behaviour of two systems up to a given tolerance, for time-bounded executions. *Weak bisimulation metric* [@DJGP02] allows us to compare two systems $M$ and $N$, writing $M \simeq_{p} N$, if the weak bisimilarity holds with a *distance* or *tolerance* $p \in [0,1]$, *i.e.*, if $M$ and $N$ exhibit a different behaviour with probability $p$, and the same behaviour with probability $1-p$. A useful generalisation is the *$n$-bisimulation metric* [@vB12] that takes into account bounded computations. Intuitively, the distance $p$ is ensured only for the first $n$ computational steps, for some $n \in \mathbb{N}$. However, in timed systems it is desirable to focus on the passage of time rather than the number of computational steps. This would allow us to deal with situations where it is not necessary (or it simply does not make sense) to compare two systems “ad infinitum” but only for a limited amount of time.
### Contribution. {#contribution. .unnumbered}

In this paper, we first introduce a general notion of *timed bisimulation metric* for concurrent probabilistic systems equipped with a discrete notion of time. Intuitively, this kind of metric allows us to derive a *timed weak bisimulation with tolerance*, denoted with $\approx_p^k$, for $k\in \mathbb{N}^+ \cup \{ \infty \}$ and $p \in [0,1]$, to express that the tolerance $p$ between two timed systems is ensured only for the first $k$ time instants (${\mathsf{tick}}$-actions). Then, we use our timed bisimulation metric to set up a formal *compositional* theory to study and measure the *impact* of cyber-physical attacks on IoT systems specified in a simple probabilistic timed process calculus which extends Hennessy and Regan’s *Timed Process Language* (TPL) [@HR95]. IoT systems in our calculus are modelled by specifying: a *physical environment*, containing information on the physical state variables and the sensor measurements, and a *logic* that governs both access to sensors and channel-based communications with other cyber components. We focus on *attacks on sensors* that may eavesdrop and possibly modify the sensor measurements provided to the controllers of sensors, affecting both the *integrity* and the *availability* of the system under attack. In order to make security assessments of our IoT systems, we adapt a well-known approach called *Generalized Non Deducibility on Composition* (GNDC) [@FM99] to compare the behaviour of an IoT system $M$ with the behaviour of the same system under attack, written $M \parallel A$, for some arbitrary cyber-physical attack $A$. This comparison makes use of our timed bisimulation metric to evaluate not only the *tolerance* and the *vulnerability* of a system $M$ with respect to a certain attack $A$, but also the *impact* of a successful attack in terms of the deviation introduced in the behaviour of the target system.
In particular, we say that a system $M$ *tolerates an attack* $A$ if $M \parallel A \approx^{\infty}_0 M $, *i.e.*, the presence of $A$ does not affect the behaviour of $M$; whereas $M$ is said to be *vulnerable* to $A$ in the time interval $m..n$ with impact $p$ if $m..n$ is the smallest interval such that $M \parallel A \approx^{m-1}_0 M $ and $M \parallel A \approx^{k}_p M $, for any $k \geq n$, *i.e.*, if the perturbation introduced by the attack $A$ becomes observable in the $m$-th time slot and yields the maximum *impact* $p$ in the $n$-th time slot. In the concluding discussion we will show that the *temporal vulnerability window* $m..n$ provides useful information about the corresponding attack, such as its *stealthiness*, the duration of the *physical effects* of the attack, and the consequent room for possible run-time *countermeasures*. As a case study, we use our timed bisimulation metric to measure the impact of two different attacks injecting *false positives* and *false negatives*, respectively, into a simple surveillance system expressed in our process calculus.

### Outline. {#outline. .unnumbered}

Section \[sectionMainDefinitions\] formalises our timed bisimulation metrics in a general setting. Section \[sec:impact\] provides a simple calculus of IoT systems. Section \[sec:cyber-physical-attackers\] defines cyber-physical attacks together with the notions of tolerance and vulnerability *w.r.t.* an attack. In Section \[sec:case\] we use our metrics to evaluate the impact of two attacks on a simple surveillance system. Section \[sec:conclusions\] draws conclusions and discusses related and future work. In this extended abstract proofs are omitted; full details of the proofs can be found in the Appendix.
Timed Bisimulation Metrics {#sectionMainDefinitions}
==========================

In this section, we introduce *timed bisimulation metrics* as a general instrument to derive a notion of timed and approximate weak bisimulation between probabilistic systems equipped with a discrete notion of time. In Section \[sec:PTS\], we recall the semantic model of *nondeterministic probabilistic labelled transition systems*; in Section \[sec:new\], we present our metric semantics.

Nondeterministic Probabilistic Labelled Transition Systems {#sec:PTS}
----------------------------------------------------------

Nondeterministic probabilistic labelled transition systems (pLTS) [@S95] combine classic LTSs [@K76] and discrete-time Markov chains [@HJ94; @Ste94] to model, at the same time, reactive behaviour, nondeterminism and probability. We first provide the mathematical machinery required to define a pLTS. The state space in a pLTS is given by a set ${\mathcal{T}}$, whose elements are called $\emph{processes}$, or $\emph{terms}$. We use $t,t',..$ to range over ${\mathcal{T}}$. A (discrete) *probability sub-distribution* over ${\mathcal{T}}$ is a mapping $\Delta \colon {\mathcal{T}}\to [0,1]$, with $\sum_{t \in {\mathcal{T}}}\Delta(t) \in (0 , 1]$. We denote $\sum_{t \in {\mathcal{T}}}\Delta(t)$ by ${\mid\!\!{\Delta}\!\!\mid}$, and we say that $\Delta$ is a *probability distribution* if ${\mid\!\!{\Delta}\!\!\mid}=1$. The *support* of $\Delta$ is given by $\lceil \Delta \rceil = \{ t \in {\mathcal{T}}: \Delta(t) > 0 \}$. The set of all sub-distributions (resp. distributions) over ${\mathcal{T}}$ with finite support will be denoted with ${\mathcal D}_{\mathrm{sub}}({\mathcal{T}})$ (resp. ${\mathcal D}({\mathcal{T}})$). We use $\Delta$, $\Theta$, $\Phi$ to range over ${\mathcal D}_{\mathrm{sub}}({\mathcal{T}})$ and ${\mathcal D}({\mathcal{T}})$.
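The (sub-)distribution machinery above is straightforward to make concrete. Below is a minimal Python sketch of our own, with finite-support (sub-)distributions represented as dictionaries; `mass` computes ${\mid\!\!{\Delta}\!\!\mid}$ and `support` computes $\lceil \Delta \rceil$:

```python
def mass(delta):
    """|Delta|: total probability mass of a sub-distribution."""
    return sum(delta.values())

def support(delta):
    """Support of Delta: the terms with strictly positive mass."""
    return {t for t, p in delta.items() if p > 0}

def is_distribution(delta):
    """A sub-distribution is a (full) distribution when |Delta| = 1."""
    return abs(mass(delta) - 1.0) < 1e-9

delta = {"t1": 0.5, "t2": 0.3}   # sub-distribution, |Delta| = 0.8
theta = {"t1": 0.5, "t2": 0.5}   # (full) distribution
```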
\[def:pLTS\] A *pLTS* is a triple $({\mathcal{T}},\Act,{\xrightarrow{\, {} \, }})$, where: ${\mathcal{T}}$ is a countable set of *terms*, $\Act$ is a countable set of *actions*, and ${\xrightarrow{\, {} \, }} \, \subseteq {{\mathcal{T}}\times \Act \times {{\mathcal D}({\mathcal{T}})}}$ is a *transition relation*. In Definition \[def:pLTS\], we assume the presence of a special deadlocked term ${\mathsf{Dead}}\in {\mathcal{T}}$. Furthermore, we assume that the set of actions $\Act$ contains at least two actions: $\tau$ and ${\mathsf{tick}}$. The former models internal computations that cannot be externally observed, while the latter denotes the passage of one time unit in a setting with a discrete notion of time [@HR95]. In particular, ${\mathsf{tick}}$ is the only *timed action* in $\Act$. We write $t {\xrightarrow{\, {\alpha} \, }} \Delta$ for $(t,\alpha,\Delta)\!\in \, {\xrightarrow{\, {} \, }}$, $t {\xrightarrow{\, {\alpha} \, }}$ if there is a distribution $\Delta \in {{\mathcal D}({\mathcal{T}})}$ with $t {\xrightarrow{\, {\alpha} \, }} \Delta$, and $t {\mathrel{{{\xrightarrow{\, {\alpha} \, }}}\makebox[0em][r]{$\not$\hspace{2ex}}}{\!}}$ otherwise. Let ${\mathit{der}}(t,\alpha) =\{\Delta\in{{\mathcal D}({\mathcal{T}})} \mid t {\xrightarrow{\, {\alpha} \, }}\Delta\}$ denote the set of the derivatives (i.e. distributions) reachable from term $t$ through action $\alpha$. We say that a pLTS is *image-finite* [@HPSWZ11] if ${\mathit{der}}(t,\alpha)$ is finite for all $t \in {\mathcal{T}}$ and $\alpha \in \Act$. In this paper, we will always work with image-finite pLTSs.\ *Weak transitions.* As we are interested in developing a *weak* bisimulation metric, we need a definition of weak transition which abstracts away from $\tau$-actions.
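In the non-probabilistic special case (all transitions Dirac), abstracting away $\tau$-actions amounts to a reflexive-transitive closure of the $\tau$-steps, computable by a simple saturation. A sketch of ours, on a plain LTS encoded as a dictionary:

```python
def tau_closure(lts, t):
    """States weakly reachable from t through tau-steps only, for a
    non-probabilistic LTS given as {(state, action): {successors}}.
    This is the Dirac special case of the weak tau-transition."""
    seen, frontier = {t}, [t]
    while frontier:
        s = frontier.pop()
        for s2 in lts.get((s, "tau"), set()):
            if s2 not in seen:
                seen.add(s2)
                frontier.append(s2)
    return seen

# p --tau--> q --tau--> r, and r can let time pass.
lts = {("p", "tau"): {"q"}, ("q", "tau"): {"r"}, ("r", "tick"): {"r"}}
```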
\[sec:weak\] In a probabilistic setting, the definition of weak transition is somewhat complicated by the fact that (strong) transitions take terms to distributions; consequently if we are to use weak transitions then we need to generalise transitions, so that they take (sub-)distributions to (sub-)distributions. To this end, we need some extra notation on distributions. For a term $t \in {\mathcal{T}}$, the *point (Dirac) distribution at $t$*, denoted ${\overline{t}}$, is defined by ${\overline{t}}(t) = 1$ and ${\overline{t}}(t') = 0$ for all $t' \neq t$. Then, the convex combination $\sum_{i \in I}p_i \cdot \Delta_i$ of a family $\{\Delta_i\}_{i \in I}$ of (sub-)distributions, with $I$ a finite set of indexes, $p_i \in (0,1]$ and $\sum_{i \in I}p_i \le 1$, is the (sub-)distribution defined by $(\sum_{i \in I}p_i \cdot \Delta_i)(t) \deff \sum_{i\in I}p_i \cdot \Delta_i(t)$ for all $t \in {\mathcal{T}}$. We write $\sum_{i \in I}p_i \cdot \Delta_i$ as $p_1 \cdot \Delta_1 + \ldots + p_n \cdot \Delta_n$ when $I = \{ 1, \ldots , n \}$. Along the lines of [@Dengetal2008], we write $t {\xrightarrow{\, {\hat{\tau}} \, }} \Delta$, for some term $t$ and some distribution $\Delta$, if either $t {\xrightarrow{\, {\tau} \, }} \Delta$ or $\Delta = {\overline{t}}$. Then, for $\alpha \neq \tau$, we write $t {\xrightarrow{\, {\hat{\alpha}} \, }} \Delta$ if $t {\xrightarrow{\, {\alpha} \, }} \Delta$. Relation ${\xrightarrow{\, {\hat{\alpha}} \, }}$ is extended to model transitions from sub-distributions to sub-distributions. For a sub-distribution $\Delta =\sum_{i \in I}p_i \cdot {\overline{t_i}}$, we write $\Delta {\xrightarrow{\, {\hat{\alpha}} \, }} \Theta$ if there is a non-empty set of indexes $J\subseteq I$ such that: $t_j {\xrightarrow{\, {\hat{\alpha}} \, }} \Theta_j$ for all $j \in J$, $t_i {\mathrel{{{\xrightarrow{\, {\hat{\alpha}} \, }}}\makebox[0em][r]{$\not$\hspace{2ex}}}{\!}}$, for all $i \in I \setminus J$, and $\Theta = \sum_{j \in J}p_j \cdot \Theta_j$. 
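The distribution machinery just introduced, Dirac distributions and convex combinations, can be sketched concretely (dictionary-encoded distributions again; the names are ours):

```python
def dirac(t):
    """Point (Dirac) distribution at t."""
    return {t: 1.0}

def convex(weighted):
    """Convex combination sum_i p_i * Delta_i of a finite family of
    (sub-)distributions, given as (p_i, Delta_i) pairs."""
    out = {}
    for p, delta in weighted:
        for t, q in delta.items():
            out[t] = out.get(t, 0.0) + p * q
    return out

# 1/2 * dirac(a) + 1/2 * (1/2 a + 1/2 b)  =  3/4 a + 1/4 b
mix = convex([(0.5, dirac("a")), (0.5, {"a": 0.5, "b": 0.5})])
```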
Note that if $\alpha \neq \tau$ then this definition admits that only some terms in the support of $\Delta$ make the ${\xrightarrow{\, {\hat{\alpha}} \, }}$ transition. Then, we define the *weak transition relation* ${{\ext@arrow 0359\Rightarrowfill@{}{\, {\hat{\tau}} \, }}}$ as the transitive and reflexive closure of ${\xrightarrow{\, {\hat{\tau}} \, }}$, *i.e.*, ${{\ext@arrow 0359\Rightarrowfill@{}{\, {\hat{\tau}} \, }}} \, = ({\xrightarrow{\, {\hat{\tau}} \, }})^{\ast}$, while for $\alpha \neq \tau$ we let ${{\ext@arrow 0359\Rightarrowfill@{}{\, {\hat{\alpha}} \, }}}$ denote ${{\ext@arrow 0359\Rightarrowfill@{}{\, {\hat{\tau}} \, }}} {\xrightarrow{\, {\hat{\alpha}} \, }} {{\ext@arrow 0359\Rightarrowfill@{}{\, {\hat{\tau}} \, }}}$. Timed Weak Bisimulation with Tolerance {#sec:new} -------------------------------------- In this section, we define a family of relations ${\mathrel{\approx^{k}_{p}}}$ over ${\mathcal{T}}$, with $p \in [0,1]$ and $k \in \mathbb{N}^+ \cup \{ \infty \}$, where, intuitively, $t {\mathrel{\approx^{k}_{p}}} t'$ means that *$t$ and $t'$ can weakly bisimulate each other with a tolerance $p$ accumulated in $k$ timed steps*. This is done by introducing a family of *pseudometrics* ${\ensuremath{\mathbf{m}}}^k \colon {\mathcal{T}}\times {\mathcal{T}}\to [0,1]$ and defining $t {\mathrel{\approx^{k}_{p}}} t'$ iff ${\ensuremath{\mathbf{m}}}^{k}(t,t') = p$. The pseudometrics ${\ensuremath{\mathbf{m}}}^k$ will have the following properties for any $t,t' \in {\mathcal{T}}$: ${\ensuremath{\mathbf{m}}}^{k_1}(t,t') \le {\ensuremath{\mathbf{m}}}^{k_2}(t,t')$ whenever $k_1 < k_2$ (tolerance monotonicity); ${\ensuremath{\mathbf{m}}}^{\infty}(t,t') = p$ iff $p$ is the distance between $t$ and $t'$ as given by the weak bisimilarity metric in [@DJGP02] in an untimed setting; ${\ensuremath{\mathbf{m}}}^{\infty}(t,t') = 0$ iff $t$ and $t'$ are related by the standard weak probabilistic bisimilarity [@ALS00]. Let us recall the standard definition of pseudometric. 
\[def:pseudoquasimetric\] A function $d \colon {\mathcal{T}}\times {\mathcal{T}}\to [0,1]$ is a *1-bounded pseudometric* over ${\mathcal{T}}$ if

- $d(t,t)= 0$ for all $t \in {\mathcal{T}}$,

- $d(t,t') = d(t',t)$ for all $t,t' \in {\mathcal{T}}$ (symmetry),

- $d(t,t') \le d(t,t'') + d(t'',t')$ for all $t,t',t''\in {\mathcal{T}}$ (triangle inequality).

In order to define the family of functions ${\ensuremath{\mathbf{m}}}^{k}$, we define an auxiliary family of functions ${\ensuremath{\mathbf{m}}}^{k,h} \colon {\mathcal{T}}\times {\mathcal{T}}\to [0,1]$, with $k,h \in \mathbb{N}$, quantifying the tolerance of the weak bisimulation after a sequence of computation steps such that: (i) the sequence contains exactly $k$ ${\mathsf{tick}}$-actions; (ii) the sequence terminates with a ${\mathsf{tick}}$-action; (iii) any term performs exactly $h$ untimed actions before the first ${\mathsf{tick}}$-action; (iv) between any $i$-th and $(i{+}1)$-th ${\mathsf{tick}}$-action, with $1\le i < k$, there is an arbitrary number of untimed actions. The definition of ${\ensuremath{\mathbf{m}}}^{k,h}$ relies on a *timed and quantitative* version of the classic bisimulation game: The tolerance between $t$ and $t'$ as given by ${\ensuremath{\mathbf{m}}}^{k,h}(t,t')$ can be below a threshold $\epsilon \in [0,1]$ only if each transition $t {\xrightarrow{\, {\alpha} \, }} \Delta$ is mimicked by a weak transition $t' {{\ext@arrow 0359\Rightarrowfill@{}{\, {\hat{\alpha}} \, }}} \Theta$ such that the bisimulation tolerance between $\Delta$ and $\Theta$ is, in turn, below $\epsilon$. This requires lifting pseudometrics over ${\mathcal{T}}$ to pseudometrics over (sub-)distributions in ${\mathcal D}_{\mathrm{sub}}({\mathcal{T}})$. To this end, we adopt the notions of *matching* [@Vil08] (also called coupling) and *Kantorovich lifting* [@Den09].
\[def\_matching\] A *matching* for a pair of distributions $(\Delta,\Theta) \in {\mathcal D}({\mathcal{T}}) \times {\mathcal D}({\mathcal{T}})$ is a distribution $\omega$ in the state product space ${\mathcal D}({\mathcal{T}}\times {\mathcal{T}})$ such that:

- $\sum_{t' \in {\mathcal{T}}} \omega(t,t')=\Delta(t)$, for all $t \in {\mathcal{T}}$, and

- $\sum_{t \in {\mathcal{T}}} \omega(t,t')=\Theta(t')$, for all $t' \in {\mathcal{T}}$.

We write $\Omega(\Delta ,\Theta)$ to denote the set of all matchings for $(\Delta,\Theta)$. A matching for $(\Delta,\Theta)$ may be understood as a transportation schedule for the shipment of probability mass from $\Delta$ to $\Theta$ [@Vil08]. \[def:KantorovichLifting\] \[def:Kantorovich\] Assume a pseudometric $d\colon {\mathcal{T}}\times {\mathcal{T}}\to [0,1]$. The *Kantorovich lifting* of $d$ is the function ${\mathit{\mathbf{K}}}(d) \colon {{\mathcal D}({\mathcal{T}})} \times {{\mathcal D}({\mathcal{T}})} \to [0,1]$ defined for distributions $\Delta$ and $\Theta$ as: $ {\mathit{\mathbf{K}}}(d)(\Delta,\Theta) \deff \min_{\omega \in \Omega(\Delta,\Theta)} \sum_{s,t \in {\mathcal{T}}}\omega(s,t) \cdot d(s,t). $ Note that since we are considering only distributions with finite support, the minimum over the set of matchings $\Omega(\Delta,\Theta)$ used in Definition \[def:Kantorovich\] is well defined. Pseudometrics ${\ensuremath{\mathbf{m}}}^{k,h}$ are inductively defined on $k$ and $h$ by means of suitable *functionals* over the complete lattice ${([0,1]^{{\mathcal{T}}\times {\mathcal{T}}},\sqsubseteq)}$ of functions of type ${\mathcal{T}}\times {\mathcal{T}}\to [0,1]$, ordered by $d_1 \sqsubseteq d_2$ iff $d_1(t, t') \le d_2(t,t')$ for all $t,t' \in {\mathcal{T}}$. Notice that in this lattice, for each set $D \subseteq [0,1]^{ {\mathcal{T}}\times {\mathcal{T}}}$, the supremum and infimum are defined as $\sup(D)(t,t') = \sup_{d \in D}d(t,t')$ and $\inf(D)(t,t') = \inf_{d \in D}d(t,t')$, for all $t,t' \in {\mathcal{T}}$.
The infimum of the lattice is the constant function zero, denoted by ${\mathit{{\bf 0}}}$, and the supremum is the constant function one, denoted by ${\mathit{{\bf 1}}}$. \[def:metric\_sim\_functional\] The functionals ${\mathit{\mathbf{B}}}, {\mathit{\mathbf{B}}}_{{\mathsf{tick}}} \colon [0,1]^{{\mathcal{T}}\times {\mathcal{T}}} \to [0,1]^{ {\mathcal{T}}\times {\mathcal{T}}}$ are defined for any function $d \in [0,1]^{{\mathcal{T}}\times {\mathcal{T}}}$ and terms $t,t' \in {\mathcal{T}}$ as:\
$\begin{array}{rccl}
\Bisimulation(d)(t,t') &=& \displaystyle \max\{ & d(t,t') , \\
&&& \displaystyle \sup_{\alpha \in \Act{\setminus} \{ \tick \}} \; \max_{t \transStep{\alpha}\Delta} \; \inf_{t' \TransStep{\hat{\alpha}} \Theta} \Kantorovich(d)\big(\Delta,\Theta + (1-\size{\Theta})\dirac{\dummyN} \big),\\
& & & \displaystyle \sup_{\alpha \in \Act {\setminus} \{ \tick \} } \; \max_{t' \transStep{\alpha}\Theta} \; \inf_{t \TransStep{\hat{\alpha}} \Delta} \Kantorovich(d)\big(\Delta + (1-\size{\Delta}) \dirac{\dummyN}, \Theta \big) \:\} \\[1.5 ex]
\Bisimulation_{\tick}(d)(t,t') &=& \displaystyle \max\{ & d(t,t') , \\
& && \displaystyle \max_{t \transStep{\tick}\Delta}\; \inf_{t' \TransStep{\widehat{\tick}} \Theta} \Kantorovich(d)\big(\Delta,\Theta + (1-\size{\Theta})\dirac{\dummyN} \big),\\
& & & \displaystyle \max_{t' \transStep{\tick}\Theta} \; \inf_{t \TransStep{\widehat{\tick}} \Delta} \Kantorovich(d)\big(\Delta + (1-\size{\Delta}) \dirac{\dummyN}, \Theta \big) \; \}
\end{array}$\
where $\inf \emptyset = 1$ and $\max \emptyset = 0$. Notice that all $\max$ in Definition \[def:metric\_sim\_functional\] are well defined since the pLTS is image-finite. Notice also that any strong transition from $t$ to a distribution $\Delta$ is mimicked by a weak transition from $t'$, which, in general, leads to a sub-distribution $\Theta$. Thus, process $t'$ may fail to simulate $t$ with probability $1{-}{\mid\!\!{\Theta}\!\!\mid}$.
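Over finite supports, the Kantorovich lifting defined above is a small transportation linear program: minimise the matching cost subject to the two marginal constraints of Definition \[def\_matching\]. A sketch using `scipy.optimize.linprog` (our own illustration of the definition, not the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(delta, theta, d):
    """K(d)(Delta, Theta): minimum over matchings omega of
    sum_{s,t} omega(s,t) * d(s,t), for finite-support distributions
    encoded as dictionaries and a ground pseudometric d."""
    xs, ys = sorted(delta), sorted(theta)
    cost = np.array([d(s, t) for s in xs for t in ys])
    A_eq, b_eq = [], []
    for i, s in enumerate(xs):          # row marginals give Delta
        row = np.zeros(len(xs) * len(ys))
        row[i * len(ys):(i + 1) * len(ys)] = 1.0
        A_eq.append(row); b_eq.append(delta[s])
    for j, t in enumerate(ys):          # column marginals give Theta
        col = np.zeros(len(xs) * len(ys))
        col[j::len(ys)] = 1.0
        A_eq.append(col); b_eq.append(theta[t])
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, 1), method="highs")
    return res.fun
```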
\[twbm\] The family of the *timed weak bisimilarity metrics* ${\ensuremath{\mathbf{m}}}^k \colon ({\mathcal{T}}\times {\mathcal{T}}) \to [0,1]$ is defined for all $k \in \mathbb{N}$ by $ {\ensuremath{\mathbf{m}}}^{k} = \begin{cases} {\mathit{{\bf 0}}}& \text{ if } k = 0 \\ \sup_{h \in \mathbb{N}}{\ensuremath{\mathbf{m}}}^{k,h} & \text{ if } k > 0 \end{cases} $ while the functions ${\ensuremath{\mathbf{m}}}^{k,h }\colon({\mathcal{T}}\times {\mathcal{T}}) \! \to \! [0,1]$ are defined for all $k \in \mathbb{N}^+$ and $h \in \mathbb{N}$ by\ $ {\ensuremath{\mathbf{m}}}^{k,h} = \begin{cases} \displaystyle{\mathit{\mathbf{B}}}_{\mathit{{\mathsf{tick}}}}({\ensuremath{\mathbf{m}}}^{k-1}) & \text{ if } h = 0 \\ {\mathit{\mathbf{B}}}({\ensuremath{\mathbf{m}}}^{k,h-1}) & \text{ if } h > 0. \end{cases} $ Then, we define ${\ensuremath{\mathbf{m}}}^{\infty} \colon ({\mathcal{T}}\times {\mathcal{T}}) \to [0,1]$ as ${\ensuremath{\mathbf{m}}}^{\infty} = \sup_{k \in \mathbb{N}}{\ensuremath{\mathbf{m}}}^{k}$. Note that any ${\ensuremath{\mathbf{m}}}^{k,h}$ is obtained from ${\ensuremath{\mathbf{m}}}^{k-1}$ by one application of the functional ${\mathit{\mathbf{B}}}_{{\mathsf{tick}}}$, in order to take into account the distance between terms introduced by the $k$-th ${\mathsf{tick}}$-action, and $h$ applications of the functional ${\mathit{\mathbf{B}}}$, in order to lift such a distance to terms that take $h$ untimed actions to be able to perform a ${\mathsf{tick}}$-action. By taking $\sup_{h \in \mathbb{N}} {\ensuremath{\mathbf{m}}}^{k,h}$ we consider an arbitrary number of untimed steps. The pseudometric property of ${\ensuremath{\mathbf{m}}}^k$ is necessary to conclude that the tolerance between terms as given by ${\ensuremath{\mathbf{m}}}^k$ is a reasonable notion of behavioural distance. \[q\_and\_m\_are\_metrics\] For any $k \ge 1$, ${\ensuremath{\mathbf{m}}}^{k}$ is a 1-bounded pseudometric. 
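A tiny worked instance of these definitions: take $t = {\mathsf{tick}}.ok$ and a variant $t'$ whose ${\mathsf{tick}}$ leads to $ok$ with probability $1-p$ and to ${\mathsf{Dead}}$ with probability $p$. The first ${\mathsf{tick}}$ contributes no distance, since the ground metric there is ${\ensuremath{\mathbf{m}}}^{0} = {\mathit{{\bf 0}}}$; assuming $ok$ can keep letting time pass while ${\mathsf{Dead}}$ cannot, ${\ensuremath{\mathbf{m}}}^{1}(ok,{\mathsf{Dead}}) = 1$, and the $k=2$ tolerance is the Kantorovich lifting of that 0/1 distance applied to the post-tick distributions, which is just their total variation distance, namely $p$. A sketch of ours of that last step:

```python
def total_variation(delta, theta):
    """Total variation distance between finite-support distributions;
    it coincides with the Kantorovich lifting of a 0/1 ground metric."""
    keys = set(delta) | set(theta)
    return 0.5 * sum(abs(delta.get(k, 0.0) - theta.get(k, 0.0)) for k in keys)

p = 0.3
after_tick_t = {"ok": 1.0}                   # t's post-tick distribution
after_tick_t1 = {"ok": 1.0 - p, "Dead": p}   # t''s post-tick distribution
tolerance = total_variation(after_tick_t, after_tick_t1)  # == p
```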
Finally, everything is in place to define our timed weak bisimilarity ${\mathrel{\approx^{k}_{p}}}$ with tolerance $p \in [0, 1]$ accumulated after $k$ time units, for $k \in \mathbb{N} \cup \{\infty\}$. \[def:distance-n\] Let $t, t' \in {\mathcal{T}}$, $k \in \mathbb{N}$ and $p \in [0,1]$. We say that *$t$ and $t'$ are weakly bisimilar with a tolerance $p$, which accumulates in $k$ timed actions*, written $t {\mathrel{\approx^{k}_{p}}} t'$, if and only if ${\ensuremath{\mathbf{m}}}^{k}(t,t') = p$. Then, we write $t {\mathrel{\approx^{\infty}_{p}}} t'$ if and only if ${\ensuremath{\mathbf{m}}}^{\infty}(t,t') = p$. Since the Kantorovich lifting ${\mathit{\mathbf{K}}}$ is monotone [@Pan09], it follows that both functionals ${\mathit{\mathbf{B}}}$ and ${\mathit{\mathbf{B}}}_{{\mathsf{tick}}}$ are monotone. This implies that, for any $k\geq 1$, $({\ensuremath{\mathbf{m}}}^{k,h})_{h \ge 0}$ is a non-decreasing chain and, analogously, also $({\ensuremath{\mathbf{m}}}^k)_{k \ge 0}$ is a non-decreasing chain, thus giving the following expected result saying that the distance between terms grows when we consider a higher number of ${\mathsf{tick}}$ computation steps. \[prop:tol-monotonicity\] For all terms $t,t' \in {\mathcal{T}}$ and $k_1,k_2 \in \mathbb{N}^+$ with $k_1 < k_2$, $t {\mathrel{\approx^{k_1}_{p_1}}} t'$ and $t {\mathrel{\approx^{k_2}_{p_2}}} t'$ entail $p_1 \le p_2$. We conclude this section by comparing our behavioural distance with the behavioural relations known in the literature. We recall that in [@DJGP02] a family of relations $\simeq_p$ for *untimed* process calculi are defined such that $t \simeq_p t'$ if and only if $t$ and $t'$ weakly bisimulate each other with tolerance $p$. Of course, one can apply these relations also to timed process calculi, the effect being that timed actions are treated in exactly the same manner as untimed actions. 
The following result compares the behavioural metrics proposed in the present paper with those of [@DJGP02], and with the classical notion of probabilistic weak bisimilarity [@ALS00], denoted $\approx$. \[prop\_simulazione\] Let $t,t' \in {\mathcal{T}}$ and $p \in [0,1]$. Then,

- \[prop\_simulazione\_uno\] $t {\mathrel{\approx^{\infty}_{p}}} t'$ iff $t \simeq_p t'$

- \[prop\_simulazione\_due\] $t {\mathrel{\approx^{\infty}_{0}}} t'$ iff $t \approx t'$.

A Simple Probabilistic Timed Calculus for IoT Systems {#sec:impact}
=====================================================

In this section, we propose a simple extension of Hennessy and Regan’s *timed process algebra* TPL [@HR95] to express *IoT systems* and *cyber-physical attacks*. The goal is to show that timed weak bisimilarity with tolerance is a suitable notion to estimate the impact of cyber-physical attacks on IoT systems. Let us start with some preliminary notation. We use $x, x_k$ for *state variables*, $c,c_k$ for *communication channels*, $z,z_k$ for *communication variables*, $s,s_k$ for *sensor devices*, while $o$ ranges over both channels and sensors. *Values*, ranged over by $v,v'$, belong to a *finite* set of admissible values $\mathcal V$. We use $u, u_k$ for both values and communication variables. Given a generic set of names $\cal N $, we write $\mathcal{V}^{\cal N} $ to denote the set of functions $\mathcal N \rightarrow \mathcal{V} $ assigning a value to each name in $\mathcal N$. For $m \in \mathbb{N}$ and $n \in \mathbb{N} \cup \{ \infty \}$, we write $m..n$ to denote an *integer interval*. As we will adopt a discrete notion of time, we will use integer intervals to denote *time intervals*. *State variables* are associated with physical properties like *temperature*, *pressure*, etc. *Sensor names* are metavariables for sensor devices, such as *thermometers* and *barometers*.
Notice that in cyber-physical systems state variables cannot be accessed directly: they can only be tested via one or more sensors. \[def:SmartSys\] Let $\mathcal{X}$ be a set of state variables and $\mathcal S$ be a set of sensors. Let $\mathit{range}: \mathcal X \rightarrow 2^{\mathcal{V}}$ be a total function returning the range of admissible values for any state variable $x \in \mathcal X$. An *IoT system* consists of two components:

- a *physical environment* $\xi = \langle {\xi_{\mathrm{x}}}{} , {\xi_{\mathrm{m}}}{} \rangle$ where:

  - ${\xi_{\mathrm{x}}}{} \in \mathcal{V}^{\mathcal X}$ is the *physical state* of the system that associates a value to each state variable in $\mathcal X$, such that ${\xi_{\mathrm{x}}}{}(x) \in \mathit{range}(x)$ for any $x \in \mathcal{X}$,

  - ${\xi_{\mathrm{m}}}{}: {\mathcal{V}}^{\mathcal X} \rightarrow \mathcal S \rightarrow {{\mathcal D}(\mathcal{V})}$ is the *measurement map* that given a physical state returns a function that associates to any sensor in $\mathcal S$ a discrete probability distribution over the set of possible sensed values;

- a *logical (or cyber) component* $P$ that interacts with the sensors defined in $\xi$, and can communicate, via channels, with other cyber components.

We write $\confCPS \xi P$ to denote the resulting IoT system, and use $M$ and $N$ to range over IoT systems. Let us now formalise the *cyber component* of an IoT system. Basically, we adapt Hennessy and Regan’s *timed process algebra TPL* [@HR95].
\[def:processes\] *Logical components* of IoT systems are defined by the following grammar:\ $\begin{array}{rl} P,Q \Bdf & \nil \q \big| \q \tick.P \q \big| \q P \parallel Q \q \big| \q \timeout {\mathit{pfx}.P} {Q} \q \big| \q H \langle \tilde{u} \rangle \q \big| \q %%\timeout{\mathit{phy}.P}{Q} \Bor \\[1pt] \ifelse b P Q \q \big| \q P{\setminus}c \\[3pt] \mathit{pfx} \Bdf & \snda o v \Bor \rcva o z % &\hspace*{6.35cm} \Box \end{array}$ The process ${\mathsf{tick}}.P$ sleeps for one time unit and then continues as $P$. We write $P \parallel Q$ to denote the *parallel composition* of concurrent processes $P$ and $Q$. The process ${\lfloor \mathit{pfx}.P \rfloor Q }$ denotes *prefixing with timeout*. We recall that $o$ ranges over both channel and sensor names. Thus, for instance, ${\lfloor \snda c v . P \rfloor Q }$ sends the value $v$ on channel $c$ and, after that, it continues as $P$; otherwise, if no communication partner is available within one time unit, it evolves into $Q$. The process ${\lfloor {c?(z)}.P \rfloor Q }$ is the obvious counterpart for channel reception. On the other hand, the process ${\lfloor {s?(z)}.P \rfloor Q }$ reads the sensor $s$, according to the measurement map of the system, and, after that, it continues as $P$. The process ${\lfloor \snda s v . P \rfloor Q }$ writes to the sensor $s$ and, after that, it continues as $P$; here, we wish to point out that this is a *malicious activity*, as controllers may only access sensors for reading sensed data. Thus, the construct ${\lfloor \snda s v.P \rfloor Q }$ serves to implement an *integrity attack* that attempts to synchronise with the controller of sensor $s$ to provide a fake value $v$. In the following, we say that a process is *honest* if it never writes on sensors. The definition of honesty naturally lifts to IoT systems. In processes of the form ${\mathsf{tick}}.Q$ and ${\lfloor \mathit{pfx}.P \rfloor Q }$, the occurrence of $Q$ is said to be *time-guarded*.
*Recursive processes* $H \langle \tilde{u} \rangle$ are defined via equations $H(z_1,\ldots, z_k) = P$, where (i) the tuple $z_1,\ldots, z_k$ contains all the variables that appear free in $P$, and (ii) $P$ contains *only time-guarded occurrences* of the process identifiers, such as $H$ itself (to avoid *Zeno behaviours*). The two remaining constructs are standard; they model conditionals and channel restriction, respectively.

Finally, we define how to compose IoT systems. For simplicity, we compose two systems only if they have the same physical environment.

\[def:composing-systems\] Let $M_1 = \confCPS \xi P_1$ and $M_2 = \confCPS \xi P_2$ be two IoT systems, and $Q$ be a process whose sensors are defined in the physical environment $\xi$. We write:

- $M_1 \parallel M_2$ to denote $\confCPS \xi (P_1 \parallel P_2)$;

- $M_1 \parallel Q$ to denote $\confCPS \xi { ({P_1}\parallel Q) }$;

- $M_1{\setminus}c$ as an abbreviation for $\confCPS \xi {({P_1} {\setminus}c)}$.

We conclude this section with the following abbreviations that will be used in the rest of the paper. We write $P{\setminus}\{ c_1, c_2, \ldots , c_n \}$, or $P {\setminus}\tilde{c}$, to mean $P{\setminus}{c_1}{\setminus}{c_2}\cdots{\setminus}{c_n}$. For simplicity, we sometimes abbreviate both $H(i)$ and $H \langle i \rangle$ with $H_i$. We write $\mathit{pfx}.P$ as an abbreviation for the process defined via the equation $\mathit{H} = {\lfloor \mathit{pfx}.P \rfloor \mathit{H} }$, where the process name $\mathit{H}$ does not occur in $P$. We write ${\mathsf{tick}}^{k}.P$ as a shorthand for ${\mathsf{tick}}.{\mathsf{tick}}. \ldots {\mathsf{tick}}.P$, where the prefix ${\mathsf{tick}}$ appears $k \geq 0$ consecutive times. We write ${\mathsf{Dead}}$ to denote a deadlocked IoT system that cannot perform any action.
Probabilistic labelled transition semantics {#lab_sem}
-------------------------------------------

$$\begin{array}{l@{\hspace*{5mm}}l} \Txiom{Write} {-} { { {\lfloor \snda o v .P \rfloor Q } } \trans{\snda o v} P} & \Txiom{Read} {-} { { {\lfloor {o?(z)} .P \rfloor Q } } \trans{{o?(z)}} {P} } \\[14pt] \Txiom{Sync} { P \trans{\snda o v} { P'} \Q Q \trans{{o?(z)}} { Q'} } { P \parallel Q \trans{\tau} {P'\parallel Q'{\subst v z}}} & \Txiom{Par} { P \trans{\lambda} P' \Q \lambda \neq {\mathsf{tick}}} { {P\parallel Q} \trans{\lambda} {P'\parallel Q}} \\[14pt] \Txiom{Res}{P \trans{\lambda} P' \Q \lambda \not\in \{ {\snda o v}, {{o?(z)}} \}}{P {\setminus}o \trans{\lambda} {P'}{\setminus}o} & \Txiom{Rec} { P{\subst {\tilde{v}} {\tilde{z}}} \trans{\lambda} Q \Q H(\tilde{z})=P} { H \langle \tilde{v} \rangle \trans{\lambda} Q} \\[14pt] \Txiom{Then}{\bool{b}=\true \Q P \trans{\lambda} P'} {\ifelse b P Q \trans{\lambda} P'} & \Txiom{Else}{\bool{b}=\false \Q Q \trans{\lambda} Q'} {\ifelse b P Q \trans{\lambda} Q'} \\[14pt] \Txiom{TimeNil}{-} { \nil \trans{{\mathsf{tick}}} \nil} & \Txiom{Delay} {-} { { {\mathsf{tick}}.P} \trans{{\mathsf{tick}}} P} \\ [14pt] \Txiom{Timeout} {-} { {{\lfloor \mathit{pfx}.P \rfloor Q } } \trans{{\mathsf{tick}}} Q} & \Txiom{TimePar} { P \trans{{\mathsf{tick}}} {P'} \Q Q \trans{{\mathsf{tick}}} {Q'} } { {P \parallel Q} \trans{{\mathsf{tick}}} { P' \parallel Q'} } \end{array}$$

As said before, sensors serve to observe the evolution of the physical state of an IoT system. However, sensors are usually affected by an *error/noise* that we represent in our measurement maps by means of discrete probability distributions. For this reason, we equip our calculus with a probabilistic labelled transition system. In the following, the symbol $\epsilon$ ranges over distributions on physical environments, whereas $\pi$ ranges over distributions on (logical) processes.
Thus, $\confCPS {\epsilon} {\pi}$ denotes the distribution over IoT systems defined by $(\confCPS {\epsilon} {\pi})(\confCPS{\xi}{P})= {\epsilon}(\xi) \cdot \pi(P)$. The symbol $\gamma$ ranges over distributions on IoT systems. In , we give a standard labelled transition system for logical components (timed processes), whereas in we rely on the LTS of to define a simple pLTS for IoT systems by lifting transition rules from processes to systems. In , the meta-variable $\lambda$ ranges over labels in the set $\{\tau, {\mathsf{tick}}, {\snda o v}, {{o?(z)}} \}$. Rule serves to model synchronisation and value passing on some name (for channel or sensor) $o$: if $o$ is a channel then we have standard point-to-point communication, whereas if $o$ is a sensor then this rule models an *integrity attack* on sensor $s$, as the controller is provided with a fake value $v$. The remaining rules are standard. The symmetric counterparts of rules and are omitted. According to , IoT systems may fire four possible actions ranged over by $\alpha$. These actions represent: internal activities ($\tau$), the passage of time (${\mathsf{tick}}$), channel transmission (${\out c v}$) and channel reception (${\inp c v}$). Rules and model transmission and reception on a channel $c$ with an external system, respectively. Rule models the reading of the value detected at a *sensor* $s$ according to the current physical environment $\xi = \langle {\xi_{\mathrm{x}}}{}, {\xi_{\mathrm{m}}}{} \rangle$. In particular, this rule says that if a process $P$ in a system $\confCPS \xi P$ reads a sensor $s$ defined in $\xi$ then it will get a value that may vary according to the probability distribution resulting from providing the state function ${\xi_{\mathrm{x}}}{}$ and the sensor $s$ to the measurement map ${\xi_{\mathrm{m}}}{}$. Rule lifts internal actions from processes to systems. This includes communications on channels and malicious accesses to sensors’ controllers.
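As an illustration, the two liftings above can be rendered with finite dictionaries mapping outcomes to probabilities. This is a minimal sketch in ordinary Python, not part of the calculus; all names (`lift`, `sens_read`, the example environments and processes) are ours:

```python
# Sketch of the two liftings. The first function implements the product
# distribution (eps x pi)(<xi, P>) = eps(xi) * pi(P) over IoT systems;
# the second mimics rule (SensRead), which turns a sensor read s?(z).P'
# into the distribution sum_i p_i * P'[v_i/z] induced by the sensor noise.

def lift(eps, pi):
    """Product distribution over systems <xi, P>."""
    return {(xi, p): eps[xi] * pi[p] for xi in eps for p in pi}

def sens_read(measure, continuation):
    """Distribution over continuations P'[v/z], weighted by the sensor noise."""
    return {continuation(v): w for v, w in measure.items()}

env = {"xi1": 0.5, "xi2": 0.5}                   # hypothetical environments
proc = {"P": 0.9, "Q": 0.1}                      # hypothetical processes
gamma = lift(env, proc)                          # e.g. gamma[("xi1", "P")] is 0.45

noisy = {"presence": 0.05, "absence": 0.95}      # a noisy boolean sensor
dist = sens_read(noisy, lambda v: f"P'[{v}/z]")  # weights still sum to 1
```

Both results are again probability distributions: their weights sum to one, as required of the objects ranged over by $\gamma$.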
According to Definition \[def:composing-systems\], rule also models channel communication between two parallel IoT systems sharing the same physical environment. A second lifting occurs in rule for timed actions ${\mathsf{tick}}$. Here, $\xi'$ denotes an admissible physical environment for the next time slot, nondeterministically chosen from the *finite* set $\mathit{next}(\langle {\xi_{\mathrm{x}}}{} , {\xi_{\mathrm{m}}}{} \rangle)$. This set is defined as $ \{ \langle {\xi_{\mathrm{x}}}'{}, {\xi_{\mathrm{m}}}{} \rangle : {\xi_{\mathrm{x}}}'{}(x) \in \mathit{range}(x) \textrm{ for any } x \in \mathcal X\}$.[^2] As a consequence, the rules in define an *image-finite* pLTS. For simplicity, we abstract from the *physical process* behind our IoT systems.

$$\begin{array}{c} \Txiom{Snd} {P \trans{\snda c v} P' } {\confCPS \xi P \trans{\out c v} \confCPS {{\overline{\xi}}} {{\overline{P'}} }} \Q\Q\Q\Q \Txiom{Rcv} {P \trans{{c?(z)}} P' } {\confCPS \xi P \trans{\inp c v} \confCPS {{\overline{\xi}}} {{\overline{P'{\subst v z}}} }} \\[17pt] \Txiom{SensRead}{P \trans{{s?(z)}} P' \Q \mbox{\small{${\xi_{\mathrm{m}}}{}({\xi_{\mathrm{x}}}{})(s) = \sum_{i \in I} p_i \cdot {\overline{v_i}}$}} } {\confCPS \xi P \trans{\tau} \confCPS {{\overline{\xi}}} {\sum_{i \in I}p_i \cdot {\overline{P' \subst{v_i}{z}}}}} \\[17pt] \Txiom{Tau}{P \trans{\tau} P' } { \confCPS \xi P \trans{\tau} \confCPS {{\overline{\xi}}} {{\overline{P'}}}} \Q \Txiom{Time}{ P \trans{{\mathsf{tick}}} {P'} \Q \confCPS \xi P \ntrans{\tau} \Q \xi' \in \mathit{next}(\xi) } {\confCPS \xi P \trans{{\mathsf{tick}}} \confCPS {{\overline{\xi'}}} {{\overline{P'}}}} \end{array}$$

Cyber-physical attacks on sensor devices {#sec:cyber-physical-attackers}
========================================

In this section, we consider attacks tampering with sensors by eavesdropping and possibly modifying the sensor measurements provided to the corresponding controllers.
These attacks may affect both the *integrity* and the *availability* of the system under attack. We do not represent (well-known) attacks on communication channels as our focus is on attacks on physical devices and the consequent impact on the physical state. However, our technique can be easily generalised to deal with attacks on channels as well.

A (pure) cyber-physical attack $A$ is a process derivable from the grammar of such that:

- $A$ writes on at least one sensor;

- $A$ never uses communication channels.

In order to make security assessments on our IoT systems, we adapt a well-known approach called *Generalized Non Deducibility on Composition (GNDC)* [@FM99]. Intuitively, an attack $A$ affects an honest IoT system $M$ if the execution of the composed system $M \parallel A$ differs from that of the original system $M$ in an observable manner. Basically, a cyber-physical attack can influence the system under attack in at least two different ways:

- The system $M \parallel A$ might have non-genuine execution traces containing observables that cannot be reproduced by $M$; here the attack affects the *integrity* of the system behaviour (*integrity attack*).

- The system $M$ might have execution traces containing observables that cannot be reproduced by the system under attack $M \parallel A$ (because they are prevented by the attack); this is an attack against the *availability* of the system (*DoS attack*).

Now, everything is in place to provide a formal definition of *system tolerance* and *system vulnerability* with respect to a given attack. Intuitively, a system $M$ tolerates an attack $A$ if the presence of the attack does not affect the behaviour of $M$; on the other hand, $M$ is vulnerable to $A$ in a certain time interval if the attack has an *impact* on the behaviour of $M$ in that time interval.

\[def:tolerance\] Let $M$ be an honest IoT system. We say that $M$ *tolerates an attack $A$* if $ M \parallel A {\mathrel{\approx^{\infty}_{0}}} M $.
\[def:vulnerability\] Let $M$ be an honest IoT system. We say that $M$ is *vulnerable to an attack $A$ in the time interval $m..n$ with *impact* $p \in [0,1]$*, for $m\in \mathbb{N}^+$ and $n \in \mathbb{N}^+ \cup \{ \infty \} $, if $m..n$ is the smallest time interval such that: (i) $ M \parallel A {\mathrel{\approx^{m-1}_{0}}} M$, (ii) $M \parallel A {\mathrel{\approx^{n}_{p}}} M$, (iii) $M \parallel A {\mathrel{\approx^{\infty}_{p}}} M$.[^3]

Basically, the definition above says that if a system is vulnerable to an attack in the time interval $m..n$ then the perturbation introduced by the attack starts in the $m$-th time slot and reaches the maximum impact in the $n$-th time slot.

The following result says that both notions of tolerance and vulnerability are suitable for *compositional reasoning*. More precisely, we prove that they are both preserved by parallel composition and channel restriction. Actually, channel restriction may obviously make a system less vulnerable by hiding channels.

\[thm:attack-tolerance-gen\] Let $M_1 = \confCPS \xi P_1$ and $M_2 = \confCPS \xi P_2$ be two honest IoT systems with the same physical environment $\xi$, $A$ an arbitrary attack, and $\tilde{c}$ a set of channels.

- If both $M_1$ and $M_2$ tolerate $A$ then $(M_1 \parallel M_2) {\setminus} \tilde{c}$ tolerates $A$.

- If $M_1$ is vulnerable to $A$ in the time interval $m_1..n_1$ with impact $p_1$, and $M_2$ is vulnerable to $A$ in the time interval $m_2..n_2$ with impact $p_2$, then $M_1 \parallel M_2$ is vulnerable to $A$ in the time interval $\min(m_1,m_2)..\max(n_1,n_2)$ with an impact $p' \leq (p_1+p_2 - p_1 p_2)$.

- If $M_1$ is vulnerable to $A$ in the interval $m_1..n_1$ with impact $p_1$ then $M_1 {\setminus} \tilde{c}$ is vulnerable to $A$ in a time interval $m'..n' \subseteq m_1..n_1$ with an impact $p' \le p_1$.
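The compositional bound in the second item can be computed directly. A small sketch (the function name is ours, not the paper's):

```python
# Upper bound on the impact of an attack on the composed system M1 || M2,
# given its impacts p1 and p2 on the components:
# 1 - (1 - p1)(1 - p2) = p1 + p2 - p1*p2.

def combined_impact_bound(p1, p2):
    """Bound p' on the impact of an attack on M1 || M2."""
    return p1 + p2 - p1 * p2

# E.g. two sub-systems hit with impact 0.3 and 0.5 give a bound of 0.65,
# strictly below the naive sum 0.8; a component with impact 1 drives the
# bound to 1 regardless of the other.
```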
Note that if an attack $A$ is tolerated by a system $M$ and can interact with an honest process $P$ then the compound system $M \parallel P$ may be vulnerable to $A$. However, if $A$ does not write on the sensors of $P$ then it is tolerated by $M \parallel P$ as well.

The bound $p' \leq (p_1+p_2 - p_1 p_2)$ can be explained as follows. The likelihood that the attack does not impact on $M_i$ is $(1-p_i)$, for $i \in \{ 1,2 \}$. Thus, the likelihood that the attack impacts neither on $M_1$ nor on $M_2$ is at least $(1-p_1) (1-p_2)$. Summarising, the likelihood that the attack impacts on at least one of the two systems $M_1$ and $M_2$ is at most $1- (1-p_1) (1-p_2) = p_1+p_2 - p_1 p_2$.

An easy corollary of allows us to lift the notions of tolerance and vulnerability from an honest system $M$ to the compound systems $M \parallel P$, for an honest process $P$.

\[thm:attack-tolerance\] Let $M$ be an honest system, $A$ an attack, $\tilde{c}$ a set of channels, and $P$ an honest process that reads sensors defined in $M$ but not those written by $A$.

- If $M$ tolerates $A$ then $(M\parallel P) {\setminus} \tilde{c}$ tolerates $A$.

- If $M$ is vulnerable to $A$ in the interval $m..n$ with impact $p$, then $(M\parallel P) {\setminus} \tilde{c}$ is vulnerable to $A$ in a time interval $m'..n' \subseteq m..n$, with an impact $p' \leq p$.

Attacking a smart surveillance system: A case study {#sec:case}
===================================================

Consider an alarmed environment consisting of three rooms, $r_i$ for $i \in \{ 1, 2, 3 \}$, each of which is equipped with a sensor $s_i$ to detect unauthorised accesses. The alarm goes off if at least one of the three sensors detects an intrusion.
The logic of the system can be easily specified in our language as follows:

$${\small \begin{array}{rcl} \mathit{Sys} & = & \left( \mathit{Mng} \parallel \mathit{Ctrl_1}\parallel \mathit{Ctrl_2} \parallel \mathit{Ctrl_3}\right){\setminus}\{c_1,c_2, c_3\} \\[1pt] \mathit{Mng} & = & {c_1?(z_1)}.{c_2?(z_2)}.{c_3?(z_3)} . \mathsf{if} \, (\bigvee_{i=1}^3 z_i{=} \mathsf{on}) \, \{ \snda{\mathit{alarm}}{\mathsf{on}} .{\mathsf{tick}}.\mathit{Check_{k}} \} \, \mathsf{else} \, \{ {\mathsf{tick}}.\mathit{Mng}\} \\[1pt] \mathit{Check_{0}} & = & \mathit{Mng} \\[1pt] \mathit{Check_{j}} & = & \snda{\mathit{alarm}}{\mathsf{on}} . {c_1?(z_1)}.{c_2?(z_2)}.{c_3?(z_3)} . \mathsf{if} \, (\bigvee_{i=1}^3 z_i= \mathsf{on}) \, \{ \mathit{{\mathsf{tick}}.Check_{k}} \} \: \\ && \mathsf{else} \: \{ {\mathsf{tick}}.\mathit{Check_{j{-}1}} \} \Q \textrm{for } j>0 \\[1pt] \mathit{Ctrl_i} & = & {s_i?(z_i)} . \mathsf{if} \, (z_i{=}\mathsf{presence}) \, \{ \snda{c_i}{\mathsf{on}} .{\mathsf{tick}}. \mathit{Ctrl_i} \} \, \mathsf{else} \, \{ \snda{c_i}{\mathsf{off}} .{\mathsf{tick}}. \mathit{Ctrl_i} \} \textrm{ for }i {\in} \{ 1, 2, 3 \}. \end{array} }$$

Intuitively, the process $\mathit{Sys}$ is composed of three controllers, $\mathit{Ctrl_i}$, one for each sensor $s_i$, and a manager $\mathit{Mng}$ that interacts with the controllers via private channels $c_i$. The process $\mathit{Mng}$ fires an alarm if at least one of the controllers signals an intrusion. As usual in this kind of surveillance system, the alarm will keep going off for $k$ instants of time after the last detected intrusion. As regards the physical environment, the physical state ${\xi_{\mathrm{x}}}{} : \{ r_1, r_2, r_3 \} \rightarrow \{ \mathsf{presence} , \mathsf{absence} \} $ is set to ${\xi_{\mathrm{x}}}{}(r_i)=\mathsf{absence}$, for any $i \in \{ 1, 2, 3\}$.
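The control logic just described can be rendered in a few lines of ordinary Python. This is a hedged sketch of one round of the protocol, not the calculus semantics; the function names are ours:

```python
# One round of the surveillance logic: each controller Ctrl_i signals "on"
# on its private channel when its sensor reads "presence", and the manager
# Mng fires the alarm iff at least one controller signals "on".

def ctrl(sensed):
    """Controller reaction to one sensed value."""
    return "on" if sensed == "presence" else "off"

def mng_round(readings):
    """True iff the alarm goes off for this round of sensor readings."""
    return any(ctrl(v) == "on" for v in readings)

# With no intrusion and no sensor error the alarm stays silent; a single
# "presence" reading -- genuine, a false positive, or injected by an
# integrity attack -- is enough to fire it.
```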
Furthermore, let $p_i^+$ and $p_i^-$ be the probabilities of having *false positives* (erroneously detected intrusion) and *false negatives* (erroneously missed intrusion) at sensor $s_i$[^4], respectively, for $i \in \{ 1 , 2, 3 \}$. The measurement function ${\xi_{\mathrm{m}}}{}$ is then defined as follows: $ {\xi_{\mathrm{m}}}{}({\xi_{\mathrm{x}}}{})(s_i)=(1{-}p_i^-) \, {\overline{\mathsf{presence}}} + p_i^- {\overline{\mathsf{absence}}}$, if ${\xi_{\mathrm{x}}}{}(r_i)=\mathsf{presence}$; $ {\xi_{\mathrm{m}}}{}({\xi_{\mathrm{x}}}{})(s_i)=(1{-}p_i^+)\, {\overline{\mathsf{absence}}} + p_i^+ {\overline{\mathsf{presence}}}$, otherwise. Thus, the whole IoT system has the form $\confCPS \xi {\mathit{Sys}}$, with $\xi = \langle {\xi_{\mathrm{x}}}{} , {\xi_{\mathrm{m}}}{} \rangle $.

We start our analysis by studying the impact of a simple cyber-physical attack that provides fake *false positives* to the controller of one of the sensors $s_i$. This attack affects the *integrity* of the system behaviour as the system under attack will fire alarms without any physical intrusion. In this example, we provide an attack that tries to increase the number of false positives detected by the controller of some sensor $s_i$ during a specific time interval $m..n$, with $m, n \in \mathbb{N}$, $n \geq m > 0$. Intuitively, the attack waits for $m-1$ time slots, then, during the time interval $m..n$, it provides the controller of sensor $s_i$ with a fake intrusion signal. Formally, $$\begin{array}{rcl} A_{\mathsf{fp}}(i,m,n) & = & {\mathsf{tick}}^{m-1} . B\langle i, n-m+1 \rangle \\[2pt] B(i, j) & = & \ifelse {j = 0} {\nil} {{\lfloor \snda {s_i} {\mathsf{presence}} . {\mathsf{tick}}. B \langle i , j-1 \rangle \rfloor B \langle i, j-1 \rangle }} \, . \end{array}$$

In the following proposition, we use our metric to measure the perturbation introduced by the attack to the controller of a sensor $s_i$ by varying the time of observation of the system under attack.
\[prop:case1\] Let $\xi$ be an arbitrary physical state for the systems $M_i = \confCPS \xi \mathit{Ctrl}_i$, for $i \in \{ 1, 2 , 3 \}$. Then,

- $ M_i \parallel A_{\mathsf{fp}} \langle i , m, n \rangle \, {\mathrel{\approx^{j}_{0}}} \, M_i$, for $j \in 1 .. m{-}1$;

- $ M_i \parallel A_{\mathsf{fp}} \langle i , m, n \rangle \, {\mathrel{\approx^{j}_{h}}} \, M_i$, with $h=1-(p_i^+)^{j-m+1} $, for $j \in m .. n $;

- $ M_i \parallel A_{\mathsf{fp}} \langle i , m, n \rangle \, {\mathrel{\approx^{j}_{r}}} \, M_i$, with $r=1-(p_i^+)^{n-m+1} $, for $j > n $ or $j=\infty$.

By an application of we can measure the impact of the attack $A_{\mathsf{fp}}$ to the (sub)systems $ \confCPS {\xi } { \mathit{Ctrl_i}}$.

The IoT systems $ \confCPS {\xi } { \mathit{Ctrl_i}}$ are vulnerable to the attack $ A_{\mathsf{fp}} \langle i , m, n \rangle $ in the time interval $m..n$ with impact $ 1-(p_i^+)^{ n - m +1 } $.

Note that the vulnerability window $m..n$ coincides with the activity period of the attack $A_{\mathsf{fp}}$. This means that the system under attack recovers its normal behaviour immediately after the termination of the attack. However, in general, an attack may impact the behaviour of the target system long after its termination. Note also that the attack $ A_{\mathsf{fp}}\langle i , m, n \rangle$ has an impact not only on the controller $\mathit{Ctrl}_i$ but also on the whole system $\confCPS \xi \mathit{Sys}$. This is because the process $\mathit{Mng}$ will surely fire the alarm as it will receive at least one intrusion detection from $\mathit{Ctrl}_i$. However, by an application of we can prove that the impact on the whole system will not get amplified.

The system $ \confCPS \xi \mathit{Sys} $ is vulnerable to the attack $ A_{\mathsf{fp}} \langle i , m, n \rangle $ in a time interval $m'..n' \subseteq m..n$ with impact $p' \leq 1-(p_i^+)^{ n - m +1}$.
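The impact profile stated in the proposition above is easy to tabulate. A hedged sketch (the function name is ours; `p_plus` stands for $p_i^+$):

```python
# Distance observable within the first j time slots for the attack
# A_fp(i, m, n): 0 before the attack starts, 1 - p_plus**(j - m + 1) while
# it runs, and saturated at 1 - p_plus**(n - m + 1) once it stops.

def impact_fp(p_plus, m, n, j):
    """Minimal tolerance h such that M_i || A_fp behaves as M_i up to slot j."""
    if j < m:
        return 0.0
    return 1 - p_plus ** (min(j, n) - m + 1)

# With p_plus = 0.1, m = 2, n = 4: no impact at j = 1, impact 0.9 at j = 2,
# and a residual impact of 1 - 0.1**3 = 0.999 for every j >= 4.
```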
Now, the reader may wonder what happens if we consider a complementary attack that provides fake *false negatives* to the controller of one of the sensors $s_i$. In this case, the attack affects the *availability* of the system behaviour as the system will not fire the alarm in the presence of a real intrusion. This is because a real intrusion will be somehow “hidden” by the attack. The goal of the following attack is to increase the number of false negatives during the time interval $m..n$, with $n \geq m > 0$. Formally, the attack is defined as follows: $$\begin{array}{rcl} A_{\mathsf{fn}}(i,m,n) & = & {\mathsf{tick}}^{m-1} . C \langle i, n-m+1 \rangle \\[2pt] C(i, j) & = & \ifelse {j = 0} {\nil} {{\lfloor \snda {s_i} {\mathsf{absence}} . {\mathsf{tick}}. C \langle i , j-1 \rangle \rfloor C \langle i, j-1 \rangle }} \, . \end{array}$$

In the following proposition, we use our metric to measure the deviation introduced by the attack $ A_{\mathsf{fn}}$ to the controller of a sensor $s_i$. Unsurprisingly, we obtain a result that is the symmetric version of .

\[prop:case2\] Let $\xi$ be an arbitrary physical state for the system $M_i = \confCPS \xi \mathit{Ctrl}_i$, for $i \in \{ 1, 2 , 3 \}$. Then,

- $ M_i \parallel A_{\mathsf{fn}} \langle i , m, n \rangle \, {\mathrel{\approx^{j}_{0}}} \, M_i$, for $j \in 1 .. m{-}1$;

- $ M_i \parallel A_{\mathsf{fn}} \langle i , m, n \rangle \, {\mathrel{\approx^{j}_{h}}} \, M_i$, with $h=1-(p_i^-)^{j-m+1} $, for $j \in m .. n $;

- $ M_i \parallel A_{\mathsf{fn}} \langle i , m, n \rangle \, {\mathrel{\approx^{j}_{r}}} \, M_i$, with $r=1-(p_i^-)^{n-m+1} $, for $j > n $ or $j=\infty$.

Again, by an application of we can measure the impact of the attack $A_{\mathsf{fn}}$ to the (sub)systems $ \confCPS {\xi } { \mathit{Ctrl_i}}$.

The IoT systems $ \confCPS {\xi } { \mathit{Ctrl_i}}$ are vulnerable to the attack $ A_{\mathsf{fn}} \langle i , m, n \rangle $ in the time interval $m..n$ with impact $ 1-(p_i^-)^{ n - m +1 } $.
As our timed metric is compositional, by an application of we can estimate the impact of the attack $A_{\mathsf{fn}}$ to the whole system $\confCPS \xi \mathit{Sys}$.

The system $ \confCPS \xi \mathit{Sys} $ is vulnerable to the attack $ A_{\mathsf{fn}} \langle i , m, n \rangle $ in a time interval $m'..n' \subseteq m..n$ with impact $ p' \leq 1-(p_i^-)^{ n - m +1}$.

Conclusions, related and future work {#sec:conclusions}
====================================

We have proposed a timed generalisation of the $n$-bisimulation metric [@vB12], called *timed bisimulation metric*, obtained by defining two functionals over the complete lattice of the functions assigning a distance in $[0,1]$ to each pair of systems: the former deals with the distance accumulated when executing untimed steps, the latter with the distance introduced by timed actions. We have used our timed bisimulation metrics to provide a formal and *compositional* notion of *impact metric* for *cyber-physical attacks* on *IoT systems* specified in a simple timed process calculus. In particular, we have focussed on cyber-physical attacks targeting sensor devices (attacks on sensors are by far the most studied cyber-physical attacks [@survey-CPS-security-2016]). We have used our timed weak bisimulation with tolerance to formalise the notions of *attack tolerance* and *attack vulnerability with a given impact $p$*. In particular, a system $M$ is said to be vulnerable to an attack $A$ in the time interval $m..n$ with impact $p$ if the perturbation introduced by $A$ becomes observable in the $m$-th time slot and yields the maximum impact $p$ in the $n$-th time slot. Here, we wish to stress that the *vulnerability window* $m..n$ is quite informative. In practice, this interval says when an attack will produce observable effects on the system under attack. Thus, if $n$ is finite we have an attack with *temporary effects*, otherwise we have an attack with *permanent effects*.
Furthermore, if the attack is quick enough, and terminates well before the time instant $m$, then we have a *stealthy attack* that affects the system late enough to allow *attack camouflages* [@GGIKLW2015]. On the other hand, if at time $m$ the attack is far from termination, then the IoT system under attack has good chances of undertaking countermeasures to stop the attack. As a case study, we have estimated the impact of two cyber-physical attacks on sensors that introduce *false positives* and *false negatives*, respectively, into a simple surveillance system, affecting the *integrity* and the *availability* of the IoT system. Although our attacks are quite simple, the specification language and the corresponding metric semantics presented in the paper allow us to deal with smarter attacks, such as *periodic attacks* with constant or variable period of attack. Moreover, we can easily extend our threat model to recover (well-known) attacks on communication channels. ### Related work. {#related-work. .unnumbered} We are aware of a number of works using formal methods for [CPS]{} security, although they apply methods, and most of the time have goals, that are quite different from ours. Burmester et al. [@BuMaCh2012] employed *hybrid timed automata* to give a threat model based on the traditional Byzantine fault model for crypto-security. However, as remarked in [@TeShSaJo2015], cyber-physical attacks and faults have inherently distinct characteristics. In fact, unlike faults, cyber-physical attacks may be performed over a significant number of attack points and in a coordinated way. In [@Vig2012], Vigo presented an attack scenario that addresses some of the peculiarities of a cyber-physical adversary, and discussed how this scenario relates to other attack models popular in the security protocol literature. Then, in [@Vigo2015; @VNN2013] Vigo et al. proposed an untimed calculus of broadcasting processes equipped with notions of failed and unwanted communication. 
They focus on DoS attacks without taking into consideration timing aspects or attack impact. Bodei et al. [@BDFG16; @BDFG17] proposed an untimed process calculus, IoT-LySa, supporting a control flow analysis that safely approximates the abstract behaviour of IoT systems. Essentially, they track how data spread from sensors to the logic of the network, and how physical data are manipulated. Rocchetto and Tippenhauer [@RocchettoTippenhauer2016a] introduced a taxonomy of the diverse attacker models proposed for [CPS]{} security and outlined requirements for generalised attacker models; in [@RocchettoTippenhauer2016b], they then proposed an extended Dolev-Yao attacker model suitable for [CPS]{}[s]{}. In their approach, physical layer interactions are modelled as abstract interactions between logical components to support reasoning on the physical-layer security of [CPS]{}[s]{}. This is done by introducing additional orthogonal channels. Time is not represented. Nigam et al. [@Nigam-Esorics2016] worked on the notion of Timed Dolev-Yao Intruder Models for Cyber-Physical Security Protocols by bounding the number of intruders required for the automated verification of such protocols. Following a tradition in security protocol analysis, they provide an answer to the question: How many intruders are enough for verification and where should they be placed? Their notion of time is somehow different from ours, as they focus on the time a message needs to travel from an agent to another. The paper does not mention physical devices, such as sensors and/or actuators. Finally, Lanotte et al. [@paperCSF2017] defined a hybrid process calculus to model both [CPS]{}[s]{} and cyber-physical attacks; they defined a threat model for cyber-physical attacks on physical devices and provided proof methods to assess attack tolerance/vulnerability with respect to a timed trace semantics (no tolerance allowed). ### Future work. {#future-work.
.unnumbered} Recent works [@LM11; @GLT16; @LMT17b; @LMT17; @GT18] have shown that bisimulation metrics are suitable for compositional reasoning, as the distance between two complex systems can often be derived in terms of the distance between their components. In this respect, and allow for compositional reasoning when computing the impact of attacks on a target system, in terms of the impact on its sub-systems. We believe that this result can be generalised to estimate the impact of parallel attacks of the form $A = A_1 \parallel \ldots \parallel A_k$ in terms of the impacts of each malicious module $A_i$. As future work, we also intend to adopt our impact metric in more involved languages for *cyber-physical systems and attacks*, such as the language developed in [@paperCSF2017], with an explicit representation of physical processes via differential equations or their discrete counterpart, difference equations.

#### Acknowledgements. {#acknowledgements. .unnumbered}

We thank the anonymous reviewers for valuable comments. This work has been partially supported by the project “Dipartimenti di Eccellenza 2018-2022”, funded by the Italian Ministry of Education, Universities and Research (MIUR), and by the Joint Project 2017 “Security Static Analysis for Android Things”, funded by the University of Verona and JuliaSoft Srl.

[^1]: An extended abstract will appear in the Proc. of the *14th International Conference on integrated Formal Methods* (iFM 2018), 5th-7th September 2018, Maynooth University, Ireland, and published in a volume of *Lecture Notes in Computer Science*.

[^2]: The finiteness follows from the finiteness of $\mathcal V$, and hence of $\mathit{range}(x)$, for any $x \in \mathcal X$.

[^3]: By , at all time instants greater than $n$ the impact remains $p$.

[^4]: These probabilities are usually very small; we assume them smaller than $\frac{1}{2}$.
--- abstract: 'The effects of a submarine canyon on the propagation of ocean surface waves are examined with a three-dimensional coupled-mode model for wave propagation over steep topography. Whereas the classical geometrical optics approximation predicts an abrupt transition from complete transmission at small incidence angles to no transmission at large angles, the full model predicts a more gradual transition with partial reflection/transmission that is sensitive to the canyon geometry and controlled by evanescent modes for small incidence angles and relatively short waves. Model results for large incidence angles are compared with data from directional wave buoys deployed around the rim and over Scripps Canyon, near San Diego, California, during the Nearshore Canyon Experiment (NCEX). Wave heights are observed to decay across the canyon by about a factor 5 over a distance shorter than a wavelength. Yet, a spectral refraction model predicts an even larger reduction by about a factor 10, because low frequency components cannot cross the canyon in the geometrical optics approximation. The coupled-mode model yields accurate results over and behind the canyon. These results show that although most of the wave energy is refractively trapped on the offshore rim of the canyon, a small fraction of the wave energy ‘tunnels’ across the canyon. Simplifications of the model that reduce it to the standard and modified mild slope equations also yield good results, indicating that evanescent modes and high order bottom slope effects are of minor importance for the energy transformation of waves propagating across depth contours at large oblique angles.' 
bibliography: - 'wave.bib' nocite: - '[@Berkhoff1972]' - '[@Radder1979]' - '[@Kirby1986c]' - '[@Longuet-Higgins1957]' - '[@Booij1983]' - '[@Berkhoff1972]' - '[@Peak2004]' - '[@Dobson1967]' - '[@Rey1992]' - '[@Takano1960]' - '[@Miles1967]' - '[@Longuet-Higgins1957]' - '[@Ardhuin2006a]' - '[@Peak2004]' - '[@Mei1989]' title: Evolution of surface gravity waves over a submarine canyon --- Introduction ============ Waves are strongly influenced by the bathymetry when they reach shallow water areas. *Munk and Traylor* \[1947\] conducted a first quantitative study of the effects of bottom topography on wave energy transformation over Scripps and La Jolla Canyons, near San Diego, California. Wave refraction diagrams were constructed using a manual method, and compared to visual observations. Fairly good agreement was found between predicted and observed wave heights. Other effects such as diffraction were found to be important elsewhere, for sharp bathymetric features (e.g. harbour structures or coral reefs), prompting *Berkhoff* \[1972\] to introduce an equation that represents both refraction and diffraction. Berkhoff’s equation is based on a vertical integration of Laplace’s equation and is valid in the limit of small bottom slopes. It is widely known as the mild slope equation (MSE). A parabolic approximation of this equation was proposed by *Radder* \[1979\], and further refined by *Kirby* \[1986\] and later studies. *O’Reilly and Guza* \[1991, 1993\] compared *Kirby*’s \[1986\] refraction-diffraction model to a spectral geometrical optics refraction model based on the theory of *Longuet-Higgins* \[1957\]. The two models generally agreed in simulations of realistic swell propagation in the Southern California Bight. However, both models assume a gently sloping bottom, and their limitations in regions with steep topography are not well understood. *Booij* \[1983\] showed that the MSE is valid for bottom slopes as large as $1/3$ for normal wave incidence.
To extend its application to steeper slopes, *Massel* \[1993 ; see also *Chamberlain and Porter*, 1995\] modified the MSE by including terms of second order in the bottom slope that were neglected by *Berkhoff* \[1972\]. This modified mild slope equation (MMSE) includes terms proportional to the bottom curvature and the square of the bottom slope. *Chandrasekera and Cheung* \[1997\] observed that the curvature terms significantly change the wave height behind a shoal, whereas the slope-squared terms have a weaker influence. *Lee and Yoon* \[2004\] noted that the higher order bottom slope terms change the wavelength, which in turn affects the refraction. In spite of these improvements, an important restriction of these equations is that the vertical structure of the wave field is described by the Airy solution of waves over a horizontal bottom. Hence the MMSE cannot describe the wave field accurately over steep bottom topography. Thus, *Massel* \[1993\] introduced an additional infinite series of local modes (’evanescent modes’ or ’decaying waves’), that allows a local adaptation of the wave field \[see also *Porter and Staziker*, 1995\], and converges to the exact solution of Laplace’s equation, except at the bottom interface. Indeed, the vertical velocity at the bottom remains zero, and is thus discontinuous in the limit of an infinite number of modes. Recently, *Athanassoulis and Belibassakis* \[1999\] added a ’sloping bottom mode’ to the local mode series expansion, which properly satisfies the Neumann bottom boundary condition. This approach was further explored by *Chandrasekera and Cheung* \[2001\] and *Kim and Bai* \[2004\]. Although the sloping-bottom mode yields only small corrections for the wave height, it significantly improves the accuracy of the velocity field close to the bottom. Moreover, this mode enables a faster convergence of the series of evanescent modes, by making the convergence mathematically uniform.
As these steep topography models are becoming available, one may wonder if this level of sophistication is necessary to accurately describe the transformation of ocean waves over natural continental shelf topography. It is expected that if such models are to be useful anywhere, it should be around steep submarine canyons. Surprisingly, a geometrical optics refraction model that assumes weak amplitude gradients on the scale of the wavelength, usually corresponding to gentle bottom slopes, was found to yield accurate predictions of swell transformation over Scripps Canyon \[*Peak*, 2004\]. The practical limitations of mild slope approximations for natural seafloor topography are clearly not well established. The goal of the present paper is to understand the propagation of waves over a submarine canyon, including the practical limitations of geometrical optics theory for the associated large bottom slopes. Numerical models will be used to sort out the relative importance of refraction and diffraction effects. Observations of ocean swell transformation over Scripps and La Jolla Canyons, collected during the Nearshore Canyon Experiment (NCEX), are compared with predictions of the three-dimensional (3D) coupled-mode model. This model is called NTUA5 because its present implementation will be limited to a total of 5 modes \[*Belibassakis et al.*, 2001\]. This is the first verification of an NTUA-type model with field observations, as previous model validations were done with laboratory data. This application of NTUA5 to submarine canyons is not straightforward since the model is based on the extension of the two-dimensional (2D) model of *Athanassoulis and Belibassakis* \[1999\], and requires special care in the position of the offshore boundary and the numerical damping of scattered waves along the boundary. Further details on these software developments, and a comparison with results of the SWAN model \[*Booij et al.*, 1999\] for the same NCEX case, are reported separately.
Here, model results are compared with two earlier models which assume a gently sloping bottom. These are the parabolic refraction/diffraction model REF/DIF1 (V2.5) \[*Kirby*, 1986\], applied in a spectral sense, and a spectral refraction model based on backward ray tracing \[*Dobson*, 1967 ; *O’Reilly and Guza*, 1993\]. A brief description of the coupled-mode model and the problems posed by its implementation in the NCEX area is given in section 2. Although our objective is the understanding of complex 3D bottom topography effects in the NCEX observations, this requires some prior analysis, performed in section 3, of reflection and refraction patterns over idealized 2D canyons. Results are presented for realistic transverse canyon profiles, including a comparison with the 2D analysis of infragravity wave observations reported by *Thomson et al.* \[2005\]. Comparisons of 3D models with field data are presented in section 4 for representative swell events observed during NCEX. Conclusions follow in section 5. Numerical Models ================ The fully elliptic 3D model developed by *Belibassakis et al.* \[2001\] is based on the 2D model of *Athanassoulis and Belibassakis* \[1999\]. These authors formulate the problem as a transmission problem in a finite subdomain of variable depth $h_2(x)$ (uniform in the lateral y-direction), closed by the appropriate matching conditions at the offshore and inshore boundaries. The offshore and inshore areas are considered as incidence and transmission regions respectively, with uniform but different depths ($h_1$, $h_3$), where complex wave potential amplitudes $\varphi_1$ and $\varphi_3$ are represented by complete normal-mode series containing the propagating and evanescent modes.
The wave potential $\varphi_2$ associated with $h_2$ (region $2$), is given by the following local mode series expansion: $$\begin{aligned} \varphi_2(x,z) &= & \varphi_{-1}(x)Z_{-1}(z;x)+\varphi_{0}(x)Z_{0}(z;x)\nonumber\\ & & + \sum_{n=1}^{\infty}\varphi_n(x)Z_n(z;x), \label{phi}\end{aligned}$$ where $\varphi_{0}(x)Z_{0}(z;x)$ is the propagating mode and $\varphi_n(x)Z_n(z;x)$ are the evanescent modes. The additional term $\varphi_{-1}(x)Z_{-1}(z;x)$ is the sloping-bottom mode, which permits the consistent satisfaction of the bottom boundary condition on a sloping bottom. The modes allow for the local adaptation of the wave potential. The functions $Z_n(z;x)$ which represent the vertical structure of the $n^{\mathrm{th}}$ mode are given by $$\label{Z0} Z_0(z,x)=\frac{\cosh[k_0(x)(z+h(x))]}{\cosh(k_0(x)h(x))},$$ $$\label{Zn} Z_n(z,x)=\frac{\cos[k_n(x)(z+h(x))]}{\cos(k_n(x)h(x))},~~~n=1,2,...,$$ $$\label{Z-1} Z_{-1}(z,x)=h(x)\left[\left(\frac{z}{h(x)}\right)^3 + \left(\frac{z}{h(x)}\right)^2 \right],$$ where $k_0$ and $k_n$ are the wavenumbers obtained from the dispersion relation (for propagating and evanescent modes), evaluated for the local depth $h=h(x)$: $$\label{disprelpropa} \omega^2=gk_0\tanh{k_0h}=-gk_n\tan{k_nh},$$ with $\omega$ the angular frequency. As discussed in *Athanassoulis and Belibassakis* \[1999\], alternative formulations of $Z_{-1}$ exist, and the extra sloping-bottom mode controls only the rate of convergence of the expansion (\[phi\]) to a solution that is indeed unique.
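All of the local modes depend on the wavenumbers defined by the dispersion relation (\[disprelpropa\]). As a minimal illustration (a Python sketch with our own function names, not part of the NTUA5 code), the propagating-mode root $k_0$ can be obtained by Newton iteration:

```python
import math

def solve_k0(omega, h, g=9.81, tol=1e-12):
    """Newton solve of omega^2 = g*k0*tanh(k0*h) for the propagating mode."""
    k = omega**2 / g  # deep-water initial guess
    for _ in range(100):
        t = math.tanh(k * h)
        residual = g * k * t - omega**2
        derivative = g * t + g * k * h * (1.0 - t * t)
        k_next = k - residual / derivative
        if abs(k_next - k) < tol:
            return k_next
        k = k_next
    return k

# Example: f = 0.067 Hz (the representative swell frequency used below)
# over the 24 m shelf depth quoted for Scripps Canyon.
omega = 2.0 * math.pi * 0.067
k0 = solve_k0(omega, 24.0)
wavelength = 2.0 * math.pi / k0  # about 212 m
```

The evanescent-mode wavenumbers $k_n$ would be obtained in the same way from the $-gk_n\tan{k_nh}$ branch of (\[disprelpropa\]).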
The modal amplitudes $\varphi_n$ are obtained by a variational principle, equivalent to the combination of Laplace’s equation, the bottom and surface boundary conditions, and the matching conditions at the side boundaries, leading to the coupled-mode system, $$\begin{aligned} \label{coupled-mode system} \sum_{n=-1}^{\infty}a_{mn}(x)\varphi''_n(x)&+&b_{mn}(x)\varphi'_n(x)+ c_{mn}(x)\varphi_n(x)=0,\nonumber \\ & & \mathrm{for}\quad(m=-1,0,1,...)\end{aligned}$$ where $a_{mn}$, $b_{mn}$ and $c_{mn}$ are defined in terms of the $Z_n$ functions, and the appropriate end-conditions for the mode amplitudes $\varphi_n$; for further details see *Athanassoulis and Belibassakis* \[1999\]. The sloping-bottom mode ensures absolute and uniform convergence of the modal series. The rate of decay of the modal function amplitudes is proportional to $n^{-4}$. Here, the number of evanescent modes is truncated at $n=3$, which ensures satisfactory convergence, even for bottom slopes exceeding 1. This 2D solution is further extended to realistic 3D bottom topographies by *Belibassakis et al.* \[2001\]. In 3D, the depth $h_2$ is decomposed into a background parallel-contour surface $h_i(x)$ and a scattering topography $h_d(x,y)$. The 3D solution is then obtained as the linear superposition of appropriate harmonic functions corresponding to these two topographies. There is no limitation on the shape and amplitude of the bottom represented by $h_d(x,y)$ except that $h_d > 0$, which can always be enforced by a proper choice of $h_i$; for further details see *Belibassakis et al.* \[2001\]. The wave potential solution over the 2D topography ($h_i$) is governed by the equations described previously. The wave potential associated with the scatterers ($h_d$) is obtained as the solution of a 3D scattering problem.
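The background/scatterer splitting just described, with the particular choice $h_i(x)=\min_y h(x,y)$ and the lateral taper of $h_d$ adopted below, can be sketched as follows (a Python/NumPy illustration on a hypothetical bathymetry; the array shapes and cosine taper are our own choices, not those of the NTUA5 code):

```python
import numpy as np

def decompose_bathymetry(h, taper_width):
    """Split a depth grid h[ix, iy] into a laterally uniform background
    h_i(x) = min over y of h(x, y) and a scattering part h_d = h - h_i >= 0,
    tapered to zero near the lateral (y) boundaries so that no scattering
    sources sit on the open boundary."""
    h_i = h.min(axis=1)                     # background, invariant along y
    h_d = h - h_i[:, None]                  # scattering topography, >= 0
    ny = h.shape[1]
    taper = np.ones(ny)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(taper_width) / taper_width))
    taper[:taper_width] = ramp              # smooth rise from 0 at the edge
    taper[ny - taper_width:] = ramp[::-1]   # smooth fall back to 0
    return h_i, h_d * taper[None, :]

# Hypothetical shelf with a canyon-like deepening (depths in metres)
x = np.linspace(0.0, 4000.0, 80)
y = np.linspace(0.0, 4000.0, 80)
h = 30.0 + 100.0 * np.exp(-((y[None, :] - 2000.0) / 300.0) ** 2) * (x[:, None] / 4000.0)
h_i, h_d = decompose_bathymetry(h, taper_width=10)
```

With this construction the scattered field has no sources on the lateral boundaries, which is the property exploited for the radiation conditions.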
The decomposition of the topography in $h_d$ and $h_i$ is not uniquely defined by the constraints that $h_i$ is invariant along $y$ and $h_d > 0$, and there is thus no simple physical interpretation of the scattered field, which corresponds to both reflection and refraction effects. The main benefit of this decomposition is that the scattered wave field propagates out of the model domain along the entire boundary, which greatly simplifies the specification of the horizontal boundary conditions. In practice we chose $$h_i(x)= \min \left\{ h(x,y) \quad \mathrm{for} \quad y \in \left[ y_{\mathrm{min}} , y_{\mathrm{max}} \right] \right\}.$$ Further, the bathymetry $h_i+h_d$ is modified by including a transition region for $y< y_{\mathrm{min}}$ and $y > y_{\mathrm{max}}$ in which $h_d$ goes to zero at the model boundary, so that no scattering sources are on the boundary and waves actually propagate out of the domain. This modification of the bathymetry does not change the propagation of the incoming waves, provided that the offshore boundary is in uniform water depth, as in the 2D cases described above, or in deep enough water so that a uniform water depth can be prescribed without having an effect on the waves. Solutions are obtained by solving a coupled-mode system, similar to Eq.(\[coupled-mode system\]), but extended to two horizontal dimensions $(x,y)$, and coupled with the boundary conditions ensuring outgoing radiation. The spatial grid for the scattered field is extended with a damping layer all around the boundary \[*Belibassakis et al.*, 2001\]. Both $2$D and $3$D implementations of this NTUA5 model are used here to investigate wave propagation over a submarine canyon. If we neglect the sloping-bottom mode and the evanescent modes, and retain in the local-mode series only the propagating mode $\varphi_0(x,y)$, this model (NTUA5) exactly reduces to the MMSE \[e.g.
*Chandrasekera and Cheung*, 1997\], $$\begin{aligned} \label{MMS} \nabla^2\varphi_0(x,y)&+&\frac{\nabla(CC_g)}{CC_g}{{\mathbf \cdot}}\nabla\varphi_0(x,y)\nonumber \\ &+&\left[k_0^2+f_1 \nabla^2 h+f_2 (\nabla h)^2\right] \varphi_0(x,y) =0,\end{aligned}$$ where $f_1=f_1(x,y)$ and $f_2=f_2(x,y)$ are functions associated with the bottom curvature and slope-squared terms, respectively. From Eq.(\[MMS\]), the MSE is obtained by further neglecting the curvature and slope-squared terms. In the following sections, these two formulations (MSE and MMSE) will be compared to the full 5-mode model to examine the importance of steep bottom slope effects, which are fully accounted for in this model. The MSE and MMSE solutions are obtained by exactly the same scattering method described above, with the same computer code in which the high order bottom slope terms and/or evanescent modes are turned off. For 3D calculations, our use of a regular grid sets important constraints on the model implementation due to the requirement to have the offshore boundary in deep water and sufficient resolution to resolve the wavelength of waves in the shallowest parts of the model domain. These constraints put practical limits on the domain size for a given wave period and range of water depths. Here a minimum of 7 points per wavelength in 10 m depth was enforced, in a domain that extends 4–6 km offshore. Such a large domain with a high resolution leads to memory-intensive inversion of large sparse matrices. However, the NTUA, MSE and MMSE models are linear, and thus the propagation of the different offshore wave components can be performed separately, sequentially or in parallel. Before considering the full complexity of the 3D Scripps-La Jolla Canyon system, we first examine the behavior of these models in the case of monochromatic waves propagating over $2$D idealized canyon profiles (transverse sections of the actual canyons).
We consider both the relatively wide La Jolla Canyon, where infragravity wave reflection was reported recently \[*Thomson et al.*, 2005\], and the narrow Scripps Canyon, which was the focus of the NCEX swell propagation study. Idealized 2D canyon profiles ============================ Transverse section of La Jolla Canyon ------------------------------------- We investigate monochromatic waves propagating at normal incidence over a transverse section of the La Jolla Canyon (Figures \[2sections\], \[depthsouth\]), which is relatively deep (120 m) and wide (350 m). Oblique incidence will not be considered for this canyon because the results are similar to those obtained for Scripps Canyon (discussed below). ![Bathymetry around La Jolla and Scripps canyons, and definition of transverse sections for idealized calculations.[]{data-label="2sections"}](2005jc003035-f01_orig.eps){width="20pc"} ![Water depth across the La Jolla canyon section.[]{data-label="depthsouth"}](2005jc003035-f02_orig.eps){width="20pc"} Reflection coefficients $R$ for the wave amplitude are computed using the MSE, the MMSE, and the full coupled-mode model NTUA5. $R$ is easily obtained using the natural decomposition provided by the scattering method, and is defined as the ratio between the scattered wave potential amplitude, up-wave of the topography, and the amplitude of the imposed propagating wave. In addition, a stepwise bottom approximation model developed by *Rey* \[1992\], based on the matching of integral quantities at the boundaries of adjacent steps, is used to evaluate $R$ \[see *Takano*, 1960; *Miles*, 1967; *Kirby and Dalrymple*, 1983\]. This model is known to converge to the exact value of $R$, and will be used as a benchmark for this study.
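In the long-wave limit, this kind of step matching reduces to a simple interference formula analogous to a thin optical film: each depth change reflects with coefficient $r=(\sqrt{h_1}-\sqrt{h_2})/(\sqrt{h_1}+\sqrt{h_2})$ (from continuity of surface elevation and volume flux in shallow water), and the reflections at the two trench walls interfere. The following Python sketch is our own simplification of that limit, not *Rey*'s \[1992\] model, with dimensions loosely based on the La Jolla section:

```python
import cmath
import math

def trench_reflection(f, h1, h2, w, g=9.81):
    """|R| for long (shallow-water) waves at normal incidence over a
    rectangular trench of depth h2 and width w cut into a shelf of depth h1.
    Thin-film-like interference of the reflections at the two depth steps."""
    r = (math.sqrt(h1) - math.sqrt(h2)) / (math.sqrt(h1) + math.sqrt(h2))
    k2 = 2.0 * math.pi * f / math.sqrt(g * h2)  # shallow-water wavenumber in the trench
    phase = cmath.exp(2j * k2 * w)
    return abs(r * (1.0 - phase) / (1.0 - r * r * phase))

# First transmission resonance (R = 0) when the trench holds half a wavelength:
h1, h2, w = 20.0, 120.0, 350.0
f_min = math.sqrt(9.81 * h2) / (2.0 * w)  # about 0.049 Hz for these dimensions
```

For these illustrative dimensions the first reflection minimum falls near $0.05$ Hz, of the same order as the minimum discussed below for the actual profile; outside the shallow-water limit the full models are of course needed.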
![Amplitude reflection coefficient $R$ for waves propagating at normal incidence over the La Jolla canyon section (figure \[depthsouth\]) using several numerical models, and observed infragravity reflections for near-normal incidence angles \[*Thomson et al.*, 2005\].[]{data-label="KRRprofilesouth"}](2005jc003035-f03_orig.eps){width="20pc"} ![Water depth across the Scripps canyon section.[]{data-label="depth3ral"}](2005jc003035-f04_orig.eps){width="20pc"} The canyon profile is resolved with 70 steps, which was found to be sufficient to obtain a converging result. The predicted values of $R$ as a function of wave frequency $f$ (Figure \[KRRprofilesouth\]) are characterized by maxima and minima, which are similar to the rectangular step response shown in *Mei and Black* \[1969\], *Kirby and Dalrymple* \[1983a\], and *Rey et al.* \[1992\]. The spacing between the minima or maxima is defined by the width of the step or trench, which imposes resonance conditions, leading to constructive or destructive interferences. Both the MSE and MMSE models are found to generally overestimate the reflection at high frequencies, whereas the NTUA5 model is in good agreement with the benchmark solution. The sloping-bottom mode included in NTUA5 has a negligible impact on the wave reflection in this and other cases discussed below. The only other difference between the NTUA5 and the MMSE models is the addition of the evanescent modes which, through their effect on the near field solution, significantly modify the far field, including the overall reflection and transmission over the canyon. *Thomson et al.* \[2005\] investigated the transmission of infragravity waves with frequencies in the range $0.006$–$0.05$ Hz across this same canyon. Based on pressure and velocity time series at two points located approximately at the ends of the La Jolla section, these authors estimated energy reflection coefficients as a function of frequency.
In a case of near-normal incidence they observed a minimum of wave reflection at about 0.04 Hz, generally consistent with the present results (figure \[KRRprofilesouth\]). They further found a good fit of their observations to the theoretical reflection across a rectangular trench in the limit of long waves, neglecting evanescent modes. This approximation is appropriate for the long infragravity band, for which the effects of evanescent modes are relatively weak. These observations also agree well with the various models applied here to the actual canyon profile (figure \[KRRprofilesouth\]). At higher swell frequencies ($f > 0.05$ Hz), the MSE, MMSE and NTUA model results diverge for normal incidence (figure \[KRRprofilesouth\]). However, contrary to the beach-generated infragravity waves, swell arrives from the open ocean and thus always reaches this canyon at a large oblique angle, for which the differences between these models are small (not shown). Transverse section of Scripps Canyon ------------------------------------ ### Normal incidence The north branch of the canyon system, Scripps Canyon, has a very different effect due to its larger depth ($145$ m) and smaller width ($250$ m). Scripps Canyon is also markedly asymmetric, with different depths on either side. A representative section of this canyon is chosen here (Figure \[depth3ral\]). The bottom slope locally exceeds 3, i.e. the bottom makes an angle of up to $70^{\circ}$ with the vertical. Reflection coefficient predictions for waves propagating at normal incidence over the canyon section are shown in Figure \[KRRprofile3ral\]. $R$ decreases with increasing frequency without the pronounced side lobe pattern predicted for the La Jolla Canyon section. Again, the NTUA5 results are in excellent agreement with the exact solution. The MSE dramatically underestimates $R$ at low frequencies, and overestimates $R$ at high frequencies.
However, the MMSE is in fairly good agreement with the benchmark solution in this case, suggesting that the higher order bottom slope terms are important for the steep Scripps Canyon profile reflection, while the evanescent modes play only a minor role. ### Oblique incidence {#Oblique incidence} The swell observed near Scripps Canyon generally arrives at a large oblique angle at the offshore canyon rim. To examine the influence of the incidence angle $\theta_i$, a representative swell frequency $f=0.067$ Hz was selected, and the reflection coefficient was evaluated as a function of $\theta_i$. The amplitude reflection coefficient $R$ is very weak when $\theta_i$ is small, and as $\theta_i$ increases, $R$ jumps to near-total reflection within a narrow band of directions around $35^\circ$ (Figure 6). ![Reflection coefficient for the Scripps Canyon section as a function of frequency predicted by various models. (a) normal incidence $\theta_i=0^{\circ}$, (b) $\theta_i=45^{\circ}$. All models collapse on the same curve in (b).[]{data-label="KRRprofile3ral"}](2005jc003035-f05_orig.eps){width="19pc"} ![Reflection coefficient for waves of period $T=16$ s propagating over the Scripps Canyon section as a function of the wave incidence angle $\theta_i$ ($0$ corresponds to waves travelling perpendicular to the canyon axis).[]{data-label="KRR_vs_theta_f_0067"}](2005jc003035-f06_orig.eps){width="20pc"} Indeed, for a wave train propagating through a medium with a phase speed gradient in one dimension only, geometrical optics predicts that beyond a threshold (Brewster) angle $\theta_B$, all the wave energy is trapped, and no energy goes through the canyon. This sharp transition does not depend on the magnitude of the gradient, which may even be infinite.
For a shelf depth $H_1$ and maximum canyon depth $H_{\mathrm{max}}$, this threshold angle is given by $$\theta_B = \arcsin \left(\frac{C_{1}}{C_{\mathrm{max}}}\right)$$ where $C_1$ and $C_{\mathrm{max}}$ are the phase speeds for a given frequency corresponding to the depths $H_1$ and $H_{\mathrm{max}}$. Thus $\theta_B$ increases with increasing frequency as the phase speed difference diminishes at high frequencies. For Scripps Canyon, $H_1=24$ m, and $H_{\mathrm{max}}=145$ m. At $f=0.067$ Hz this gives $\theta_B=38^\circ$. As a result, for $\theta_i<\theta_B$, no reflection is predicted by refraction theory (dashed line), and all the wave energy is transmitted through the canyon. This threshold value separates distinct reflection and refraction (trapping) phenomena, respectively occurring for $\theta_i<\theta_B$ and $\theta_i>\theta_B$. The elliptic models that account for diffraction predict a smoother transition. For $\theta_i<\theta_B$, weak reflection is predicted. For $\theta_i>\theta_B$, a fraction of the energy is still transmitted through the canyon. This transmission of wave energy across a deep region where $\sin \theta_i /C_1$ exceeds $1/C_{\mathrm{max}}$ violates the geometrical optics approximation. This transmission is similar to the tunnelling of quantum particles through a potential barrier in the case where the barrier thickness is of the order of the wavelength or less \[*Thomson et al.*, 2005\]. The wave field near the turning point of wave rays in the canyon decays exponentially in space on the scale of the wavelength \[e.g. *Chao and Pierson*, 1972\], and that decaying wave excites a propagating wave on the other side of the canyon. This coupling of both canyon sides generally decreases as the canyon width or the incidence angle increase \[*Kirby and Dalrymple*, 1983; *Thomson et al.*, 2005\]. The significant differences between MSE and MMSE at small angles $\theta_i<\theta_B$ are less pronounced for $\theta_i>\theta_B$.
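The threshold angle quoted above is easy to check numerically; the sketch below (Python, with our own helper names) solves the linear dispersion relation at both depths and recovers $\theta_B\approx 38^\circ$ for the Scripps values:

```python
import math

def wavenumber(f, h, g=9.81):
    """Newton solve of the linear dispersion relation omega^2 = g*k*tanh(k*h)."""
    omega = 2.0 * math.pi * f
    k = omega**2 / g  # deep-water initial guess
    for _ in range(100):
        t = math.tanh(k * h)
        k -= (g * k * t - omega**2) / (g * t + g * k * h * (1.0 - t * t))
    return k

def threshold_angle(f, h_shelf, h_max):
    """Refractive trapping threshold: sin(theta_B) = C_shelf / C_max."""
    omega = 2.0 * math.pi * f
    c_shelf = omega / wavenumber(f, h_shelf)
    c_max = omega / wavenumber(f, h_max)
    return math.degrees(math.asin(c_shelf / c_max))

theta_B = threshold_angle(0.067, 24.0, 145.0)  # Scripps Canyon values from the text
```

Evaluating `threshold_angle` at higher frequencies confirms that $\theta_B$ grows with frequency, as stated above.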
These two regimes are illustrated by the evolution of the wave potential amplitude over the Scripps canyon section. In figure \[phiprofile3ralT\], results of various elliptic models (MSE, MMSE and NTUA5) are compared with a parabolic approximation of the MSE (the REF/DIF1 model of *Dalrymple and Kirby* \[1988\]). It should be noted that the model grid orientation is chosen with the main axis along the incident wave propagation direction, in order to minimize large angle errors in the parabolic approximation. In that configuration, the parabolic approximation (REF/DIF1\_a) does not predict any reflection, but gives an indication of the expected shoaling of the incident waves across the canyon. For $\theta_i=30^\circ<\theta_B$, weak reflection (about 10%) is predicted by the MMSE and NTUA5 (figure \[phiprofile3ralT\].a). MSE considerably overestimates the reflection, and thus underestimates the transmitted energy down-wave of the canyon section. A partial standing wave pattern is predicted up-wave of the canyon as a result of the interference of incident and reflected waves. The largest amplitudes, about $20\%$ larger than the incident wave amplitude, occur in the first antinode near the canyon wall. For a larger wave incidence angle (e.g. 45$^\circ > \theta_B$), an almost complete standing wave pattern is predicted by the elliptic models up-wave of the canyon, with an exponential tail that extends across the canyon to a weak transmitted component (see also Figure \[KRRprofile3ral\].b for the reflection coefficient pattern). Finally, transmission is extremely weak for $\theta_i=70^\circ$ (figure \[phiprofile3ralT\].c). A good estimate of the reflection coefficient can also be obtained with the parabolic model REF/DIF1\_b by choosing the x-axis to be aligned with the canyon trench (figure 7b,c thick dashed lines). 
![Wave amplitude over the Scripps Canyon section, for $T=16$ s and different incident angles (a) $\theta_i=30^{\circ}$, (b) $\theta_i=45^{\circ}$, and (c) $\theta_i=70^{\circ}$. The canyon depth profile is indicated with a thin dashed line. The MMSE result is indistinguishable from that of NTUA5 in all panels, and all models except for REF/DIF$1$ give the same results in (b) and (c).[]{data-label="phiprofile3ralT"}](2005jc003035-f07_orig.eps){width="18pc"} West Swell Over Scripps Canyon ============================== The models used in the previous section (MSE, MMSE, NTUA5, REF/DIF$1$, refraction) are now applied to the real $3$D bottom topography of the Scripps-La Jolla Canyon system, and compared with field data from directional wave buoys deployed around the rim and over Scripps Canyon during NCEX. Models Set-up ------------- The implementations of MSE, MMSE, NTUA5, and REF/DIF$1$ use two computational domains with grids of 275 by 275 points (Figure \[Newdom2\]). The larger domain with a grid resolution of 21 m is used for wave periods longer than 15 s. The smaller domain, with a higher resolution of about 15 m, is used for 15 s and shorter waves. The $y$-axis of the grid is rotated 45$^{\circ}$ relative to North to place the offshore boundary in the deepest region of the domain. Models were run for many sets of incident wave frequency and direction ($f$, $\theta$). ![Computational domain for (a) $T>15$ s, and (b) $T\leq 15$ s. Also shown are the NTUA5 solutions for the real part of the wave potential amplitude for waves arriving from 270$^\circ$ with periods (a) $T=16$ s, and (b) $T=15$ s, superimposed on the 10, 30, 100, 200, and 300 m depth contours.[]{data-label="Newdom2"}](2005jc003035-f08_small.eps){width="19pc"} The CPU time required for one ($f,\theta$) wave component calculation with the NTUA5 model (with $3$ evanescent modes) is about $120$ s on a Linux computer with $2$ GB of memory and a 3 GHz processor.
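The grid sizes above can be checked against the resolution rule quoted in section 2 (at least 7 points per wavelength in 10 m depth). A quick Python sketch (dispersion solver with our own function names) confirms that 12 s waves are marginally resolved on the 15 m fine grid:

```python
import math

def wavenumber(f, h, g=9.81):
    """Newton solve of the linear dispersion relation omega^2 = g*k*tanh(k*h)."""
    omega = 2.0 * math.pi * f
    k = omega**2 / g  # deep-water initial guess
    for _ in range(100):
        t = math.tanh(k * h)
        k -= (g * k * t - omega**2) / (g * t + g * k * h * (1.0 - t * t))
    return k

# Shortest modelled period (12 s) in the shallowest well-resolved depth (10 m):
L = 2.0 * math.pi / wavenumber(1.0 / 12.0, 10.0)  # about 113 m
points_per_wavelength = L / 15.0                  # about 7.5 on the 15 m grid
```

Shorter periods would drop below 7 points per wavelength, consistent with the 12 s lower bound adopted below.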
The wave periods and offshore directions used in the computation range from $12$ to $22$ s and $255$ to $340$ degrees respectively, with $0.2$ s and $2^\circ$ increments. The minimum period of $12$ s corresponds to the shortest waves that can be resolved with 7 points per wavelength in 10 m depth. Shorter waves are not considered here because they may be affected by local wind generation, not represented in the models used here, and are also generally less affected by the bottom topography. ![Location of directional wave buoys at the head of the Scripps canyon, and wave rays for an offshore direction of 272$^\circ$ and a period of 15.4 s, corresponding to a frequency just below the peak of the observed swell on November 30. Contrary to the backward ray tracing model used for estimating the wave spectrum at nearshore sites, rays were integrated forward from parallel directions and equally spaced positions at 15 m intervals along the offshore boundary at $x=0$, 10 km to the West of the buoys, practically in deep water. []{data-label="wbuoysloc"}](2005jc003035-f10_orig.eps){width="20pc"} Transfer functions between the local and offshore wave amplitudes were evaluated at each of the buoy locations and used to transform the offshore spectrum. The backward ray-tracing refraction model directly evaluates energy spectral transfer functions between deep water, where the wave spectrum is assumed to be uniform, and each of the buoys located close to the canyon, based on the invariance of the wavenumber spectrum along a ray \[*Longuet-Higgins*, 1957\]. A minimum of $50$ rays was used for each frequency-direction bin (bandwidth 0.005 Hz by 5 degrees), computed over the finest available bathymetry grid, with 4 m resolution. The model is identical to the CREST model, which was previously validated on the U.S. East coast. The energy source term is set to zero here. This propagation-only version of the model is also called CRESTp, and is similar to the model used by *Peak* \[2004\].
It was further validated on the West coast of France \[*Ardhuin*, 2006\]. Model-Data Comparison --------------------- Long swell from the west was observed on $30$ November $2003$, in the absence of significant local winds. In the present analysis we use only data from Datawell Directional Waverider buoys. The Torrey Pines Outer Buoy is permanently deployed by the Coastal Data Information Program (CDIP), and located about $15$ km offshore of Scripps Canyon. That buoy provided the deep water observations necessary to drive the wave models. The directional distribution of energy for each frequency was estimated from buoy measurements of displacement cross-spectra using the Maximum Entropy Method \[*Lygre and Krogstad*, 1986\]. The NCEX observations were made at six sites around the head of Scripps Canyon (figure \[wbuoysloc\]). All spectra used in the comparison, including the offshore boundary condition, were averaged from 13:30 to 16:30 UTC, so that the almost continuous record yields about 100 degrees of freedom for each frequency band with a width of 0.005 Hz. On that day the wind speed close to the coast did not exceed 3 m s$^{-1}$, as measured by the CDIP Torrey Pines Glider port anemometer, and the National Data Buoy Center (NDBC) buoy 46086, located 70 km West of San Diego and representative of the entire modelled area. The observed narrow offshore spectrum has a single peak with a period of $14.5$ s, and a mean direction of $272$ degrees, corresponding to an incidence angle $\theta_i$ (relative to the Scripps Canyon axis) of $65^\circ$ (Figure \[Sp15h00\]). ![Directional wave spectrum at Torrey Pines Outer Buoy at 15:00 UTC on 30 November 2003. []{data-label="Sp15h00"}](2005jc003035-f09_orig.eps){width="20pc"} The model hindcasts are compared with observations in Figure \[wbuoysHs\].
While the local amplification of the wave height at the head of the canyon varies with the incident wave direction, a dramatic reduction of the wave height down-wave of the rim of this canyon is predicted for all directions. Thus the selected west swell case ($T_p=14.5$ s, $\theta=272^{\circ}$) is representative of the general wave transformation in this area for low frequency swells arriving from a large range of directions. Significant wave heights $H_s$ were computed from the measured and predicted wave spectra at each instrument location, including only the commonly modelled frequency range ($f_1=0.05$ Hz, $f_2=0.08$ Hz). The predicted $H_s$ is given by $$H_s=4\left(\int_{f_1}^{f_2}\int_{\theta_1}^{\theta_2}M(f,\theta) E(f,\theta)dfd\theta\right)^{1/2},$$ where $E(f,\theta)$ is the observed offshore frequency-directional spectrum and $M(f,\theta)$ is the model prediction of the ratio between the local and offshore wave energies for the frequency $f$ and offshore direction $\theta$, obtained by squaring the sea surface elevation transfer function. Observations show a dramatic variation in wave height across the canyon (figure \[wbuoysHs\]). ![Comparison of predicted and observed significant wave height ($12s<T<22s$) for the 30 November 2003 swell event. Instrument locations are shown in figure \[wbuoysloc\].[]{data-label="wbuoysHs"}](2005jc003035-f11_orig.eps){width="20pc"} The offshore wave height is slightly enhanced at sites 33 and 34, in water depths of 34 and 23 m respectively, along the north side of the canyon, and slightly reduced on the shelf north of the canyon at site 35, in 34 m depth. A dramatic reduction in wave heights is observed at sites 36, 37 and 32, over the canyon and on the south side, where the water depths are 111, 49 and 24 m, respectively. Between buoys 34 and 36 the wave height drops by a factor of 5 over a distance of only 150 m, less than the 216 m wavelength at the peak frequency (at the shallowest of the two sites).
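The $H_s$ reconstruction above amounts to a weighted double integral of the offshore spectrum; a minimal discrete sketch (Python/NumPy, with a hypothetical flat spectrum and our own variable names) illustrates how the modelled transfer $M$ scales the result:

```python
import numpy as np

def significant_wave_height(E, M, df, dtheta):
    """Discrete version of Hs = 4 * ( sum M(f,theta) E(f,theta) df dtheta )^(1/2),
    with M the modelled local-to-offshore energy ratio and E the offshore
    frequency-directional spectrum."""
    return 4.0 * np.sqrt(np.sum(M * E) * df * dtheta)

# Hypothetical flat offshore spectrum over 0.05-0.08 Hz and a directional sector
nf, nth = 7, 17
df, dtheta = 0.005, np.radians(5.0)
E = np.full((nf, nth), 2.0)   # m^2 / (Hz rad), illustrative value
M = np.ones((nf, nth))        # unit transfer: offshore conditions recovered
hs_offshore = significant_wave_height(E, M, df, dtheta)
# A transfer of 0.25 in energy (half the amplitude everywhere) halves Hs:
hs_sheltered = significant_wave_height(E, 0.25 * M, df, dtheta)
```

In the actual comparison, $M$ varies with $(f,\theta)$ and is taken from each model's squared surface elevation transfer function.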
Such a pattern is generally consistent with refraction theory as illustrated by forward ray-tracing in figure \[wbuoysloc\]. Whereas rays crossing the shelf north of the canyon show the expected gradual bending towards the shore, rays that reach the canyon’s northern wall are trapped on the shelf, and reach the shore in a focusing region north of the canyon (Black’s beach). From that offshore direction, and an offshore ray spacing of 15 m, no rays are predicted to cross the canyon, so that the south side of the canyon is effectively sheltered from 16 s westerly swells, in agreement with the observed extremely low wave heights (figure \[wbuoysHs\], see also [@Peak2004]). The amplitude transfer functions ($M(f,\theta)^{1/2}$) are not overly sensitive to the wave frequency and direction, as illustrated in figure \[TF3437\]a-b with NTUA5 predictions at sites $34$ at the head of the canyon, and $37$ behind the canyon. ![Amplitude transfer functions at site $34$ (a) and site $37$ (b), defined as the ratio of the local and offshore wave amplitude modulus and computed with NTUA5.[]{data-label="TF3437"}](2005jc003035-f12_small.eps){width="20pc"} Up-wave of the canyon (instruments $33$, $34$, $35$), all models are found to be in fairly good agreement with the observations. However, REF/DIF$1$ underestimates the wave height at site $34$. At this site, wave energy is strongly focused by refraction, with rays turning by more than 90$^\circ$ (figure \[wbuoysloc\]). The parabolic approximation does not allow such a large variation in wave direction. Over and down-wave of the canyon (instruments $32$, $36$, $37$), the wave heights predicted by MSE, MMSE and NTUA5 agree reasonably well with the observations, whereas REF/DIF$1$ slightly overestimates the wave height. For $f<0.06$ Hz few rays cross the canyon and the energy predicted by the refraction model is extremely low, about $5$% of the offshore energy.
This strong variation in wave energy across the canyon is reduced by diffraction, which is not taken into account in this refraction model, resulting in an under-prediction of the wave height at the sheltered sites $32$, $36$, and $37$. The sea state at that time also included an important contribution from higher frequencies (figure 13). Significant wave heights computed over a wider frequency range ($0.05<f<0.2$ Hz), by adding the refraction model results to the low-frequency results of the other models, vary little between the models, as they are now dominated by short wave energy. ![Comparison of predicted and observed frequency spectra at (a) site 35, and (b) site 37, for the 30 November 2003 swell event.[]{data-label="spectra35&37"}](2005jc003035-f13_orig.eps){width="18pc"} ![Directional wave spectrum at Torrey Pines Outer Buoy at 12:00 UTC on 12 December 2003.[]{data-label="Sp_28"}](2005jc003035-f14_orig.eps){width="20pc"} However, wave heights are still markedly different between the buoys. It thus appears that refraction plays an important role for frequencies up to 0.14 Hz (see the difference in offshore and local spectra on figure 13), while diffraction effects are significant, in that area, only up to 0.07 Hz. Further confirmation of the trapping of low frequency waves is provided by another case observed on $12$ December $2003$ (Figure \[Sp\_28\]), which we analyze with the same method. The observed spectra are averaged from 12:00 UTC to 15:00 UTC. The observed spectrum has three peaks with periods of $20$, $12.5$ and $9$ s, mean directions of $270$, $270$ and $285$ degrees respectively, and a significant wave height of $1.9$ m. ![Comparison of predicted and observed significant wave height ($12s<T<22s$) for the 12 December 2003 swell event. Instrument locations are shown in figure \[wbuoysloc\].[]{data-label="wbuoysHs_12_12"}](2005jc003035-f15_orig.eps){width="20pc"} The model hindcasts are compared with observations in Figure \[wbuoysHs\_12\_12\].
Significant wave heights $H_s$ were computed from the measured and predicted wave spectra at each instrument location, including only the commonly modelled frequency range ($f_1=0.05$ Hz, $f_2=0.08$ Hz). On that day the wind speed did not exceed 7 m s$^{-1}$, as measured by the CDIP Torrey Pines Gliderport anemometer, but reached 13.5 m s$^{-1}$, blowing from the northwest, at NDBC buoy 46086. Such a wind is capable of generating a local wave field with frequencies down to 0.095 Hz for fully-developed wave conditions. As in the previous case, a large variation in wave height was observed across the canyon (Figure \[wbuoysHs\_12\_12\]). Again, that variation remains limited to a factor of 10 difference for any wave frequency (compare Figure \[spectra35&37\_28\]a and b), whereas the geometrical optics approximation predicts much larger gradients. We note a general agreement between the wave heights predicted by the models, with an underestimation by the refraction model at sites located down-wave of the canyon. The predicted frequency spectra are represented in Figure \[spectra35&37\_28\]a,b at sites $35$ and $37$. At site $35$, located up-wave of the canyon wall, the NTUA5 and REF/DIF1 models are in good agreement with the measurements for the low frequency peak ($0.05$ Hz), but underestimate the $0.08$ Hz peak. The refraction model overestimates the low frequency peak, but is in good agreement with the $0.08$ Hz peak. At site $37$, located down-wave of the canyon, NTUA5 and REF/DIF1 predict a strongly attenuated low frequency peak, as is observed, whereas the refraction model predicts no energy transmission across the canyon. Below a cut-off frequency of about $0.065$ Hz, the canyon acts as a complete barrier in the geometrical optics approximation. The energy in the second peak at $0.08$ Hz is only reduced by a factor of 4 across the canyon, an effect well described by all models, and thus attributable to refraction.
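The geometrical-optics trapping described above follows from Snel's law applied to the phase speeds on either side of the canyon: a ray is totally reflected when its incidence angle (measured from the depth-contour normal) exceeds the critical angle $\arcsin(c_1/c_2)$, and at lower frequencies the speed contrast between shelf and canyon is stronger. A rough sketch assuming linear wave theory, with two illustrative depths (these are not the exact NCEX transect values):

```python
import math

def wavenumber(f, h, g=9.81):
    """Solve the linear dispersion relation (2*pi*f)**2 = g*k*tanh(k*h)
    for the wavenumber k with Newton's method."""
    w2 = (2.0 * math.pi * f) ** 2
    k = w2 / g  # deep-water estimate as first guess
    for _ in range(50):
        t = math.tanh(k * h)
        residual = g * k * t - w2
        slope = g * t + g * k * h * (1.0 - t * t)
        k -= residual / slope
    return k

def critical_angle_deg(f, h_shelf, h_canyon):
    """Incidence angle (from the shelf-side contour normal) beyond which
    a ray is totally reflected when crossing into the deeper canyon."""
    c_shelf = 2.0 * math.pi * f / wavenumber(f, h_shelf)
    c_canyon = 2.0 * math.pi * f / wavenumber(f, h_canyon)
    return math.degrees(math.asin(min(1.0, c_shelf / c_canyon)))
```

With representative depths of 24 m on the shelf and 111 m over the canyon, the critical angle shrinks as the frequency decreases, which is one way to read the cut-off behavior: long swell arriving obliquely falls outside the transmission cone.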
All models generally agree with the observations for $0.07<f<0.2$ Hz, within the spectrum measurement confidence interval, except for an overestimation by the refraction model of the high frequency peaks ($0.11$ and $0.14$ Hz) of the spectrum. However, due to the local wind sea generated between the offshore buoy and the locations around the canyon, these propagation models are not reliable for $f>0.095$ Hz. ![Comparison of predicted and observed frequency spectra at (a) site 35, and (b) site 37, for the 12 December 2003 swell event.[]{data-label="spectra35&37_28"}](2005jc003035-f16_orig.eps){width="20pc"} In the two events, most of the wave evolution is accounted for by refraction. However, diffraction is included in the models based on the MSE and its extensions, and this effect allows for a tunnelling of wave energy across the canyon. In these models, wave heights across the canyon are thus larger, in better agreement with the observed wave heights and wave spectra at the sheltered sites 32, 36 and 37 (figures \[wbuoysHs\], \[spectra35&37\], \[wbuoysHs\_12\_12\]). The differences between the NTUA5, MSE and MMSE model predictions are very small and thus only NTUA5 results are shown in figure \[spectra35&37\]. It may appear surprising that the wave height behind the canyon is still 20% of the offshore wave height whereas the 2D simulations with comparable incidence angles yield wave heights of much less than 5%. However, the Scripps Canyon is neither infinitely long nor uniform along its axis. The three-dimensional topography apparently reduces the blocking effect on long period swells that was found over two-dimensional canyons. Summary ======= Observations of the evolution of swell across a submarine canyon obtained in the Nearshore Canyon Experiment (NCEX) were compared with predictions of refraction and combined refraction-diffraction models, including the coupled-mode model NTUA5 valid for arbitrary bottom slope \[*Athanassoulis and Belibassakis*, 1999; *Belibassakis et al.*, 2001\].
Predictions of a spectral refraction model are in good agreement with observations \[see also *Peak*, 2004, for the entire experiment\], demonstrating that refraction is the dominant process in swell transformation across Scripps Canyon. The geometrical optics approximation, on which the refraction model is based, turned out to be very robust. Accurate spectral predictions were obtained with that model even in cases where the wave energy changes by a factor of 10 over three quarters of a wavelength. For waves longer than 12 s, even larger gradients are predicted by the refraction model, but these gradients are not observed. At those frequencies, accurate results were obtained with the NTUA5 model and elliptic mild slope equation models that include diffraction, which acts as a limiter on the wave energy gradients. Differences between the models were clarified with 2D simulations using representative transverse profiles of La Jolla and Scripps Canyons, showing the behavior of the far wave field as a function of the incidence angle. The underestimation by the refraction model may be interpreted as the result of wave tunnelling, i.e. a transmission of waves to water depths greater than allowed by Snel’s law, for obliquely incident waves \[see also *Thomson et al.*, 2005\]. This tunnelling effect cannot be represented in the geometrical optics approximation, and thus the refraction model predicts that all wave energy is trapped for large incidence angles relative to the depth contours, while a small fraction of the wave energy is in fact transmitted across the canyon. Although different from the classical diffraction effect behind a breakwater \[e.g. [*Mei*]{} 1989\], this tunnelling is a form of diffraction in the sense that it prevents a sharp spatial variation of wave amplitude, and induces a leakage of wave energy into areas forbidden by geometrical optics.
Observations were also compared with a parabolic refraction-diffraction model that is known to be inaccurate for large oblique wave directions relative to the numerical grid, and is shown here to overestimate the amplitude of waves transmitted across the canyon and underestimate the amplitude of waves focused at the head of the canyon. Finally, depending on the bottom profile and incidence angle, higher order bottom slope and curvature terms (incorporated in modified mild slope equations and NTUA5), as well as evanescent and sloping-bottom modes (included in NTUA5), can be important for an accurate representation of wave propagation over a canyon at small incidence angles. For large incidence angles, which are more common for natural canyons across the shelf break, the standard mild slope equation (MSE) gives an accurate representation of the variations in surface elevation spectra that is similar to that of the full NTUA5 model. Yet, further analysis of NCEX bottom velocity and pressure measurements may show that the MSE or other mild slope models may not accurately represent near-bottom wave properties, as also discussed elsewhere. The authors acknowledge the Office of Naval Research (Coastal Geosciences Program) and the National Science Foundation (Physical Oceanography Program) for their financial support of the Nearshore Canyon Experiment. Steve Elgar provided bathymetry data, Julie Thomas and the staff of the Scripps Institution of Oceanography deployed the wave buoys, and Paul Jessen, Scott Peak, and Mark Orzech assisted with the data processing. Analysis results of the infragravity wave reflections across La Jolla Canyon were kindly provided by Jim Thomson. The authors also acknowledge anonymous referees for their useful comments and suggestions.
--- author: - 'Yang Li, John Klingner, and Nikolaus Correll' bibliography: - 'reference.bib' title: Distributed Camouflage for Swarm Robotics and Smart Materials --- Introduction ============ We wish to design artificial camouflage systems that can quickly adapt to a large variety of environments. Inspired by the capabilities of cephalopods, which tightly integrate sensing, actuation (color change), neural computation and communication, we are interested in a distributed artificial approach that mimics this tight integration [@yu2014adaptive; @mcevoy2015materials]. While animals employ camouflage mostly to escape predators, camouflage in an engineering context is typically motivated by clandestine military operations. More broadly, everything from small robots to buildings could use these techniques to blend more seamlessly into their environment. Nature employs a large variety of techniques to achieve these goals. For example, moths mimic patterns that they would expect in their environments, sea animals use mottle patterns to soften their contours, and other animals decorate their bodies with artifacts from the environment [@stevens2009animal]. Two animals with notable camouflage abilities are the cuttlefish and the octopus, which can dramatically alter the coloration and patterning of their skin and switch between different environments in a matter of seconds [@hanlon1988adaptive]. These creatures’ camouflage behavior is not only driven by the animals’ visual system (which is color-blind [@messenger1977evidence]) or brain [@messenger2001cephalopod], but has also been shown to rely on local sensing and control [@ramirez2015eye]. There have been multiple attempts to achieve active camouflage using a combination of cameras and projection [@inami2003optical; @lin2009framework]. Although such systems provide “perfect” camouflage, they are highly dependent on the observer’s viewpoint.
Mimicking the background exactly is rarely employed in the animal kingdom, where a few simple families of patterns — mottled, striped, or simply uniform [@hanlon2007cephalopod] — dominate. Creating such patterns requires only local coordination [@meinhardt1982models], suggesting a combination of high-level selection of appropriate motor programs [@messenger2001cephalopod] and self-organization [@meinhardt1982models]. Here, we are not concerned with perfectly matching the background, but rather aim to replicate the pattern matching ability of natural systems, which are able to fool sophisticated predators. Distributing the sensing and actuation for camouflage generation makes an implementation scalable along several dimensions, such as the resolution of the camouflage pattern, the size of the area being camouflaged, and robustness against the failure of individual units. Further, a distributed camouflage system could respond to local changes in the environment, in particular when deployed on non-trivial 3D surfaces. In this paper, we present a fully distributed approach, which we implement on a swarm of Droplets [@farrow2014miniature], each equipped with the ability to sense and emit color as well as communicate with its local neighbors. Although there exist multiple attempts to design artificial chromatophores, most work focuses on component technology, i.e., the ability to change color in a soft substrate [@rossiter2012biomimetic; @morin2012camouflage]; very few works articulate the systems challenges that require not only local color changes, but also local sensing and computation [@yu2014adaptive], or investigate the ability to co-locate simple signal processing with the sensors themselves [@fekete2009distributed]. Our algorithm can be broken into three phases, each described in detail below. First, we estimate a color and gradient histogram with a consensus algorithm among the particles.
This information is then used to determine the parameters of a pattern formation algorithm. Finally, the pattern is formed using a reaction-diffusion process. The “background” into which the swarm is trying to blend is projected onto the particles from above, requiring them to have color sensors. To simplify the color-identification process, the paper focuses on two-tone patterns. Finally, we arrange the particles in a grid pattern, which allows us to implement a discrete convolution operation and simplifies debugging the pattern at the low resolution that swarms on the order of tens of particles can afford. Distributed Camouflage Algorithm {#sec:camouflage} ================================ In this section, we describe the distributed camouflage algorithm. Fig. \[fig:pipeline\] illustrates the steps of the algorithm in broad strokes. First, each robot measures the color projected on it. Then, it exchanges the measured color with neighboring robots. Once received, the neighbors’ color information is used to compute an estimated probability for the various pattern types based on local information (Section \[sec:descriptor\]). Next, the robots communicate their local pattern probabilities, using a weighted-average consensus algorithm to compute the most likely global pattern (Section \[sec:consensus\]). Once consensus has been achieved, the swarm reproduces the pattern collaboratively with a reaction-diffusion process (Section \[sec:generator\]). ![Pipeline of the Distributed Camouflage Algorithm []{data-label="fig:pipeline"}](pics/pipeline.png){width="\columnwidth"} Pattern Descriptor {#sec:descriptor} ------------------ Once each robot has measured the local environment’s color and communicated that information, it applies a filter mask (see Fig. \[fig:filter\]) to compute a discrete approximation of the second-order color derivative in both the horizontal and vertical directions.
This is quite similar in concept to kernels used in edge detection and other computer-vision tasks [@dalal2005histograms]. Indeed, if the grid of robots is viewed as an image with each robot a pixel, these two pattern descriptors are simply the values the pixel would have after each of the two convolutions. These second-order derivatives are the *Pattern Descriptors* – denoted $P_x$ and $P_y$ for the horizontal and vertical directions respectively – and are used to calculate the most probable local pattern. ![Illustration of applying the two second order derivative masks.[]{data-label="fig:filter"}](pics/filter.png){width="0.5\columnwidth"} To be specific, with $M$ denoting a robot’s local color and $T$, $R$, $B$, and $L$ denoting the colors of its top, right, bottom, and left neighbors, $P_x$ and $P_y$ are given by: $$\begin{aligned} P_x &= L + R - 2M \\ P_y &= T + B - 2M\end{aligned}$$ A pattern-probability array $p = [p_h, p_v, p_m]$ records each robot’s local pattern estimate, where $p_h$, $p_v$, and $p_m$ represent the probabilities of a horizontal-stripe, vertical-stripe, and mottled pattern, respectively. The most likely pattern type is selected and given a probability of $1$ based on the local *Pattern Descriptors*, and the other probabilities are set to 0. This is shown in the equation below, where $T$ is some threshold value and $\left|val\right|$ is used to indicate $\texttt{abs}\left(val\right)$. $$\begin{aligned} \label{eq:pattern_descriptor} p = [p_h, p_v, p_m] = \begin{cases} [1, 0, 0] & \text{if } \left|P_y\right|-\left|P_x\right|>T \\ [0, 1, 0] & \text{if } \left|P_x\right|-\left|P_y\right|>T \\ [0, 0, 1] & \text{otherwise} \end{cases}\end{aligned}$$ Note that a grid representation has only been chosen for the simplicity of performing (and explaining) the mathematical operations; one could equally well perform the described convolutions using continuous representations and local range and bearing information.
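The descriptor and winner-take-all classification above can be sketched in a few lines of Python (the default threshold value is an illustrative choice, not the one used on the Droplets):

```python
def pattern_descriptors(me, top, right, bottom, left):
    """Discrete second-order color derivatives from the four neighbors."""
    Px = left + right - 2 * me
    Py = top + bottom - 2 * me
    return Px, Py

def local_pattern_probability(Px, Py, T=10):
    """Winner-take-all pattern-probability array p = [p_h, p_v, p_m]."""
    if abs(Py) - abs(Px) > T:
        return [1, 0, 0]  # horizontal stripes (color varies vertically)
    if abs(Px) - abs(Py) > T:
        return [0, 1, 0]  # vertical stripes (color varies horizontally)
    return [0, 0, 1]      # mottled / no dominant orientation
```

For example, a robot sitting in a dark row of a horizontal-stripe image (bright rows above and below) sees $P_x = 0$ and a large $|P_y|$, so it votes for horizontal stripes.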
Distributed Average Consensus Scheme {#sec:consensus} ------------------------------------ Once each robot has computed the most likely local pattern (i.e., computed $p = [p_h, p_v, p_m]$), the robots need to achieve consensus on the global pattern. We use the distributed average consensus scheme [@xiao2005scheme] for this purpose. In each step of this scheme, a robot updates its local $p$ to be a weighted average of its own and its neighbors’ values. This step is repeated many times, allowing information to diffuse through the swarm. Since the weighted average uses only local information, each step takes the same amount of time regardless of the number of robots in the swarm. The number of steps needed was determined experimentally. The weighted-average calculation uses Metropolis weights, defined as: $$\label{eq: weights} W_{i,j} = \left\{ \begin{array}{ll} \frac{1}{1+\max\{ d_i, d_j \}}& \text{if } (i, j) \in E,\\ 1-\sum_{(i,k) \in E} {W_{i,k}}& \text{if } i = j,\\ 0& \text{ otherwise.} \end{array} \right.$$ The Metropolis weights are well-suited for distributed algorithms, since the weight calculation requires only local knowledge. Further, it is proven in [@xiao2005scheme] that Metropolis weights guarantee convergence of the average consensus provided that the infinitely occurring communication graphs are jointly connected. Once the robots have converged, the largest value in $p$ represents the most likely global pattern. For example, $p_h > p_v$ and $p_h > p_m$ indicates that the most likely global pattern is horizontal stripes. Pattern Generator {#sec:generator} ----------------- In this section, we describe the distributed pattern formation algorithm used to generate a proper pattern to match the environment. ![Illustration of local activator-inhibitor model: on the left, the activation region (orange) is defined by $A_x$ and $A_y$ while the inhibition region (gray) is defined by $I_x$ and $I_y$; on the right, $W_1$ and $W_2$ are the two field values.
$R1$ corresponds to $A_x$ and $A_y$, and $R2$ corresponds to $I_x$ and $I_y$.[]{data-label="fig:morphogen"}](pics/morphogen.png){width="0.8\columnwidth"} Now that a global pattern has been selected, the robots next need to generate an appropriate camouflage pattern. We use the pattern-formation algorithm presented by Young [@young1984local]: a local activator-inhibitor model. In this model, each cell (robot) is either ‘on’ or ‘off’, and can generate two kinds of morphogens: an activator morphogen and an inhibitor morphogen. Together, these form a “morphogenetic field”. Note that the activator region lies inside the inhibitor region (see left of Figure \[fig:morphogen\]). An ‘on’ cell (robot) pushes nearby cells within its activator region towards the ‘on’ state, and pushes cells within its inhibitor region (but outside the activator region) towards the ‘off’ state. ![Activator (orange) and Inhibitor (gray) Regions for each of the three patterns.[]{data-label="fig:droplet_morphogen"}](pics/droplet_morphogen.png){width="0.95\columnwidth"} During each step of this algorithm, each cell changes its ‘off’/‘on’ status based on the combined effect of all nearby morphogenetic fields. More specifically, a ‘strength’ is calculated, with each nearby ‘on’ robot contributing a positive value ($W_1$) if it lies in the activator region, or a negative value ($W_2$) if it lies outside the activator region but inside the inhibitor region. The robot then changes its state to ‘on’ if the strength is greater than 0, and to ‘off’ otherwise. This step is repeated until the states converge to a stable pattern. In [@young1984local] the author observes that convergence typically takes around five steps. This was consistent with our observations. In this framework, the different types of patterns are represented with differently shaped activator and inhibitor regions. The regions for each pattern are shown in Figure \[fig:droplet\_morphogen\].
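Both update rules described above — the Metropolis-weight consensus step and Young's activator-inhibitor step — can be sketched on a centralized grid model. This is a simplified sketch (rectangular regions parameterized by half-widths, default weights as in the simulations), not the Droplet firmware:

```python
def metropolis_step(p, neighbors):
    """One weighted-average consensus step with Metropolis weights.
    p[i] is robot i's pattern-probability list; neighbors[i] lists
    the ids of robot i's communication neighbors."""
    deg = {i: len(neighbors[i]) for i in p}
    new_p = {}
    for i in p:
        w_sum = 0.0
        acc = [0.0] * len(p[i])
        for j in neighbors[i]:
            w = 1.0 / (1 + max(deg[i], deg[j]))  # Metropolis weight W_ij
            acc = [a + w * x for a, x in zip(acc, p[j])]
            w_sum += w
        # self-weight W_ii = 1 - sum of neighbor weights
        new_p[i] = [(1.0 - w_sum) * x + a for x, a in zip(p[i], acc)]
    return new_p

def young_step(state, activator, inhibitor, W1=1.0, W2=-0.75):
    """One activator-inhibitor update (after Young's model). state is a
    2-D list of 0/1 cells; activator=(ax, ay) and inhibitor=(ix, iy)
    are neighborhood half-widths, activator nested inside inhibitor."""
    ny, nx = len(state), len(state[0])
    ax, ay = activator
    ix, iy = inhibitor
    new = [[0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            s = 0.0
            for dy in range(-iy, iy + 1):
                for dx in range(-ix, ix + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < ny and 0 <= xx < nx and state[yy][xx]:
                        s += W1 if abs(dx) <= ax and abs(dy) <= ay else W2
            new[y][x] = 1 if s > 0 else 0
    return new
```

Iterating `metropolis_step` drives every robot's $p$ towards the swarm average, and iterating `young_step` with anisotropic region shapes produces stripes oriented accordingly.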
Note that the region sizes mean that each robot only requires information from robots within two hops of it. Simulated Results {#sec:simulation} ================= We implemented the algorithm introduced above on a centralized system for testing. By presenting some simulated results here, we hope to demonstrate the algorithm’s functionality and add clarity to the explanation above. We run these tests with three images, one for each of the pattern types. Each image is $128 \times 128$ pixels, and grayscale. We simulate $64$ ($8 \times 8$) robots. Note that this grid of $8 \times 8$ robots is in many ways analogous to the sensor of a digital camera, albeit a camera with only $8 \times 8$ sensors and thus with very low resolution. If one were to recapture our test images with such a low-resolution camera, many different pixels in the test image would contribute to each of the camera’s outputs, resulting in a very blurry image. We therefore downsample the input image by taking the average of $16 \times 16$ pixel blocks. This blurred image is used as the color sensed by each robot for selecting the most likely pattern. For pattern generation, the initial on/off state is determined by making the blurred image binary (i.e., black and white). Figure \[fig:8simResults\] shows the entire process for each of the three input images. Once the Droplets calculate the local pattern based on their sensed color and that of their neighbors, they need to achieve consensus on the global pattern. As has been discussed in Section \[sec:consensus\], convergence of this value is guaranteed. Next, the pattern generator described above is used (Section \[sec:generator\]) with the activator and inhibitor regions seen in Figure \[fig:droplet\_morphogen\]. The activator field value of $W_1 = 1$ was used, as suggested in [@young1984local]. The inhibitor field value, $W_2$, is a parameter which gives rough control over what proportion of the robots are ‘on’ in the final pattern.
We found that $W_2 = -0.75$ gave qualitatively good results for all three of the pattern generators. Finally, we start pattern generation with each robot’s initial state set to ‘on’ or ‘off’ based on its sensed value: if the value is less than $127$ we set it to black, otherwise we set it to white. The pattern generator runs for ten iterations. Robots on the image boundary use a reflection of their neighbors. A robot on the top row, for example, would count its bottom neighbor twice, as the top row is empty. To further test the simulated algorithm, we added a simple noise model. For measurement error, instead of always assigning the appropriate color to a robot based on its position, we assign a uniformly random color with probability $\rho_{meas}$. For communication error, at each step in the algorithm where information from a neighbor is shared with a robot for a calculation (including the step where a robot’s neighbors are calculated in the first place), a robot does not share this information with probability $\rho_{comm}$. ![The y-axis is the pixel difference from the ‘correct’ pattern and the x-axis is the error probability. The red line shows the effect of measurement errors ($\rho_{meas}$). The blue line shows the effect of communication errors ($\rho_{comm}$). The green line shows the effect of both measurement and communication errors ($\rho_{comm}=\rho_{meas}$). Each data point reflects the mean result over 10 trials of the forest image.[]{data-label="fig:errorAnalysis"}](pics/error_analysis.png){width="0.6\linewidth"} For a quantitative measure of the effects of error, we calculated the total absolute difference between the final generated pattern in the presence of error, and the final generated pattern without any error (as visible in the bottom row of Figure \[fig:8simResults\]). These results are charted in Figure \[fig:errorAnalysis\]. Note that, with the $8 \times 8$ images used, a purely random image would give a difference of $32$, on average.
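The two error channels above can be injected into a simulation with two small helpers (a sketch; the two-tone palette and the function names are ours, not from the simulation code):

```python
import random

def sense_color(true_color, rho_meas, palette=(0, 255)):
    """Measurement error: with probability rho_meas, return a uniformly
    random palette color instead of the true measurement."""
    if random.random() < rho_meas:
        return random.choice(palette)
    return true_color

def maybe_deliver(message, rho_comm):
    """Communication error: with probability rho_comm the message is
    dropped (returned as None), as if the neighbor never shared it."""
    if random.random() < rho_comm:
        return None
    return message
```

Wrapping every sensor read in `sense_color` and every neighbor exchange in `maybe_deliver` reproduces the sweep over $\rho_{meas}$ and $\rho_{comm}$ used for the error analysis.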
The algorithm appears quite robust to error probabilities of up to $0.15$--$0.2$. Above these thresholds, the error increases sharply. (Results shown here are for the forest image, with other images yielding similar results.) Qualitatively, we observe that even as errors started to appear, many of the resulting patterns still looked ‘good’, i.e., still had prominent vertical stripes. The main determining factor as the probability of error increased seemed to be the global pattern detection: if the correct pattern (vertical stripes in this case) is selected, the resulting pattern fits well even with large errors, but correct pattern selection grows increasingly infrequent. Hardware Implementation {#sec:hardware_imp} ======================= To validate the proposed algorithm and to understand the sorts of errors that real hardware introduces, we implemented the algorithm described above on a swarm of “Droplets” [@farrow2014miniature; @klingner2014]. The Droplets are an open-source platform, with source code and manufacturing information available online[^1]. Each Droplet is roughly cylindrical with a radius of $2.2\,\mathrm{cm}$ and a height of $2.4\,\mathrm{cm}$. The Droplets use an Atmel xMega128A3U micro-controller, and receive power via their legs through a floor with alternating strips of $+5V$ and $GND$. Each Droplet has six infrared emitters, receivers, and sensors, which are used for communication and for the range and bearing system [@farrow2014miniature]. The top of each board has sensors to detect the color and brightness of ambient light, and an RGB LED. Each Droplet has a 16-bit unique ID. In our implementation, each Droplet maintains an array of neighbors’ IDs. Messages are labeled with phase flags and tagged with the sending Droplet’s ID. The Droplets are synchronized using a firefly synchronization algorithm [@mirollo1990synchronization; @werner2005firefly]. A simple TDMA protocol is used with $37$ slots, each $350$ ms long. Each frame is thus $12.95$ s long.
Each robot is assigned a slot based on its unique ID modulo 37. The number of slots (37) was chosen to be large enough that the probability of two adjacent robots sharing a slot is low, but small enough that the algorithm runs quickly. ![Neighbor array. The orange neighbors (0-3) are used for pattern recognition; the green neighbors (4-7) are used in addition to the orange ones for pattern consensus. All pictured neighbors (orange, green and gray) are used for pattern formation.[]{data-label="fig:droplet_neighbors"}](pics/droplet_neighbors.png){width="0.4\columnwidth"} In Phase 0 (neighbor identification), we initialize and configure the neighbor ID arrays which store neighboring Droplets’ IDs. Range and bearing information is used to calculate positions for each Droplet’s immediate neighbors, and neighbors-of-neighbors are learned by listening to the messages sent by Droplets each slot, which contain that Droplet’s neighbors. The positions of Droplets and their indices in the array are illustrated in Fig. \[fig:droplet\_neighbors\]. We allot $20$ frames for Phase 0, since the neighbor information is critical to the three subsequent phases. In Phase 1 (color sensing and recognition), each Droplet communicates the color it senses, and stores the colors its neighbors sense, as learned through communications. Once this is complete, each (non-boundary) Droplet should know the ID and position of 12 neighbors, as well as those neighbors’ sensed colors. With this information, the Droplets calculate a pattern probability array $p$ as described in Section \[sec:descriptor\]. This phase is allotted $10$ frames. In Phase 2 (pattern consensus), each Droplet communicates its pattern probability array $p$ and receives pattern probability arrays from its neighbors. At the end of each frame, each Droplet updates its pattern probability array according to the weighted-average consensus algorithm, as described in Section \[sec:consensus\]. Each ‘step’ of the consensus algorithm spans one frame.
This phase is allotted $35$ frames. In Phase 3 (pattern formation), each Droplet exchanges its intended color for the generated pattern with neighboring Droplets. At the end of each frame, each Droplet updates its pattern color according to the pattern generation algorithm described in Section \[sec:generator\]. This phase is allotted $20$ frames. Hardware Results {#sec:hardware_results} ================ ![Initial condition (left), final pattern with projected pattern (middle) and final pattern (right) for camouflaging the tiger stripe pattern[]{data-label="fig:t_7_8_11"}](pics/t_7_8_11_init.png "fig:"){width="0.3\columnwidth"} ![Initial condition (left), final pattern with projected pattern (middle) and final pattern (right) for camouflaging the tiger stripe pattern[]{data-label="fig:t_7_8_11"}](pics/t_7_8_11_final_patternOn.png "fig:"){width="0.3\columnwidth"} ![Initial condition (left), final pattern with projected pattern (middle) and final pattern (right) for camouflaging the tiger stripe pattern[]{data-label="fig:t_7_8_11"}](pics/t_7_8_11_final.png "fig:"){width="0.3\columnwidth"} ![Convergence of pattern probabilities of randomly chosen Droplets camouflaging the tiger stripe pattern[]{data-label="fig:t_7_8_11_pp"}](pics/t_7_8_11_pp.png){width="0.6\columnwidth"} A hardware implementation of the swarm camouflage algorithm is shown in Figure \[fig:t\_7\_8\_11\]. For this test, the projected image for the Droplets to sense is a tiger stripe pattern. The results of this test are interesting because a striped pattern is maintained despite the failure of two units, in addition to the more difficult-to-count failures in communication and color sensing.
Figure \[fig:t\_7\_8\_11\_pp\] shows the pattern probability convergence for a random sampling of Droplets, when run with a simple horizontal stripe pattern projected on them. The swarm reaches consensus on a horizontal pattern, converging to $p_h=0.61$. Conclusion ========== We present a distributed camouflage system in which a robot swarm can sense the environment color, recognize the local pattern, achieve consensus on the global pattern, and generate a camouflage pattern consistent with the environment the robots are in. In our design, pattern descriptors are proposed for recognizing local patterns. A weighted-average consensus scheme is then utilized, allowing the swarm to converge to a common global pattern. Finally, a pattern formation model is applied to each robot which generates a pattern appropriate for the background. This is accomplished using local communication and simple mathematical operations. We simulated the proposed algorithm on several patterns from nature: a desert, a forest, and leopard skin. After going through all the phases in the algorithm, and successfully agreeing on a global pattern, the simulation results show that robots with incorrect color readings can correct themselves to match the global pattern. This is especially obvious for the horizontal and vertical patterns. We also tested the distributed algorithm on the Droplet swarm robotics platform. The results from the Droplets are promising, since the robots can agree on the global pattern and show a proper matching pattern even if individual Droplets stop working. As communication on the Droplets is not perfectly reliable, the resultant patterns exhibit some random variations; they do not perfectly match simulation. Even with these variations, however, the generated pattern roughly follows the desired background pattern, seeming to bend or twist around the erroneous robots.
In the future, we wish to test the algorithm on a more purpose-built hardware platform which would allow for higher-resolution patterns, and extend the algorithm to include consensus on the dominant colors and patterns consisting of more than two colors. This research has been supported by NSF grant \#1150223. [^1]: <http://github.com/correlllab/cu-droplet>
--- abstract: 'A suggestion is made for mending multicore hardware, which has been diagnosed as broken.' author: - | János Végh\ \ \ \ title: 'Can Broken Multicore Hardware be Mended?' --- The multicore era is a consequence of the stalling of the single-thread performance {#sec:introduction} =================================================================================== The multi- and many-core (MC) era we have reached was triggered after the beginning of the century by the stalling of single-processor performance. Technology allowed more transistors to be placed on a die, but they could not reasonably be utilized to increase single-processor performance. Predictions about the number of cores have only partly been fulfilled: today’s processors have dozens rather than the predicted hundreds of cores (although the Chinese supercomputer [@FuSunwaySystem2016] announced in the middle of 2016 comprises 260 cores on a die). Despite this, the big players are optimistic. They expect that Moore’s law will persist, though based on presently unknown technologies. The effect of the stalled clock frequency is mitigated, and it is even predicted [@Intel10GHz:2014] that “*Now that there are multicore processors, there is no reason why computers shouldn’t begin to work faster, whether due to higher frequency or because of parallel task execution. And with parallel task execution it provides even greater functionality and flexibility!.*” In many forums [@ComputingPerformance:2011], parallelism is considered to be the future, often as the only hope rather than as a panacea. People dealing with parallelism are less optimistic. In general, technical development tends to reduce human effort, but “*parallel programs ... are notoriously difficult to write, test, analyze, debug, and verify, much more so than the sequential versions*” [@ReliableParallel2014].
The problems have led researchers to the *Viewpoint* [@Vishkin:BrokenManycoreCACM] that *multicore hardware for general-purpose parallel processing is broken*. Manycore architectures could be fresh meat on the market of processors, but they are not {#sec:freshmeat} ======================================================================================== The essence of the present Viewpoint is that multicore hardware can perhaps be mended. Although one can profoundly agree with the arguments [@Vishkin:BrokenManycoreCACM] that using manycore chips cannot contribute much to using parallelism in general, and especially not in executing irregular programs, one has to realize also that this is not the optimal battlefield for the manycore chips, at least not in their present architecture. Present manycore systems comprise many segregated processors, which make no distinction between two processing units that are neighbours within the same chip or are located in the next rack. The close physical proximity of the processing units offers additional possibilities, and provides a chance to implement Amdahl’s dream [@AmdahlSingleProcessor67] of cooperating processors. Paradigms used presently, however, assume a private processor and a private address space for a running process, and no external world. In many-core systems, it is relatively simple to introduce signals, storage, communication, etc., and deploy them in reasonable times. They cannot, however, be utilized in a reasonable way if one cannot provide compatibility facades that preserve the illusion of a private world. Cooperation must be implemented in a way which provides complete (upward) compatibility with the presently exclusively used Single-Processor Approach (SPA) [@AmdahlSingleProcessor67]. This means, on the one hand, that new functionality must be formulated using the terms of conventional computing, while on the other, it must provide considerably enhanced computing throughput and other advantages.
It is well known that general-purpose processors have a huge handicap in performance when compared to special-purpose chips, and that the presently used computing stack is the source of further serious inefficiencies. Proper utilization of available manycore processors can eliminate a lot of these performance losses, and in this way (keeping the same electronic and programming technology) can considerably enhance the (apparent) performance of the processor. Of course, there is no free lunch. Making these changes requires a *simultaneous* change in nearly all elements of the present computing stack. Before making these changes, one should scrutinize the promised gain, and whether the required efforts will pay off. *(Fig. \[fig:flexibleproc\]: the sample calculation $A=(C*D)+(E*F)$, $B=(C*D)-(E*F)$, executed over cycles 1-3. Left: the dependency graph of the loads $L_1,\dots,L_4$, the multiplications $X_1, X_2$ and the final $+$/$-$ on a conventional architecture. Right: the same calculation carried out by cooperating cores.)* Below, some easy-to-follow case studies are presented, all of which lead to the same conclusion: we need a cooperative and flexible architecture rather than a rigid one comprising segregated MCs, and the 70-year-old von Neumann computing paradigms should be extended. At the end, the feasibility of implementing such an architecture is discussed. The recently introduced Explicitly Many-Processor Approach [@VeghDynamicParallelism:2016] seems to be quite promising: it not only provides higher computing throughput, but also offers advantageous changes in the behavior of computing systems. Is implementing mathematical parallelism just a dream? {#sec:manymany} ====================================================== Today’s computing utilizes many forms of parallelism [@HwangParallelism:2016], both hardware (HW) and software (SW) facilities.
The software is systematically discussed in [@Vishkin:BrokenManycoreCACM] and hardware methods are scrutinized in [@HwangParallelism:2016]. A remarkable difference between the two approaches is that, while the SW methods tend to handle the parallel execution explicitly, the HW methods tend to create the illusion that only one processing unit copes with the task, although some (from outside invisible) helper units are utilized in addition to the visible processing unit. Interestingly enough, both approaches arise from the von Neumann paradigms: the abstractions of the *process* and the *processor* demand it. The inefficiency of using several processing units is nicely illustrated with a simple example in [@HwangParallelism:2016] (see also Fig. \[fig:flexibleproc\], left side). A simple calculation comprising 4 operand loadings and 4 arithmetic operations, i.e. altogether 8 machine instructions, could theoretically be carried out in 3 clock cycles, provided that only dependencies restrict the execution of the instructions and an unlimited number of processing units (or at least 4 such units in the example) are available. It is shown that a single-issue processor needs 8 clock cycles to carry out the calculation example. Provided that memory access and instruction latency time cannot be further reduced, the only possibility to shorten execution time is to use more than one processing unit during the calculation. Obviously, a fixed architecture can only provide a fixed number of processing units. In the example [@HwangParallelism:2016] two such ideas are scrutinized: a dual-issue single processor, and a two-core single-issue processor. The HW investment in both cases increases by a factor of two (not considering the shared memory here), while the performance increases only moderately: 7 clock cycles for the dual-issue processor and 6 clock cycles for the dual-core processor, versus the 8 clock cycles of the single-issue single-core processor.
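The 3-cycle bound for that example follows directly from the dependency depth; the few lines below verify it. The node names follow the figure ($L_i$ loads, $X_i$ multiplications), and the code is our illustration, not part of any cited system.

```python
# The 8-instruction example: A = (C*D)+(E*F), B = (C*D)-(E*F).
# With unlimited processing units only the dependency depth matters
# (3 levels: loads, multiplies, add/subtract), while a single-issue
# core spends one cycle per instruction, 8 in total.
deps = {
    "L1": [], "L2": [], "L3": [], "L4": [],   # load C, D, E, F
    "X1": ["L1", "L2"], "X2": ["L3", "L4"],   # the two multiplications
    "A":  ["X1", "X2"], "B":  ["X1", "X2"],   # final add and subtract
}

def critical_path(graph):
    """Longest chain of dependent instructions = minimum cycles needed."""
    depth = {}
    def d(node):
        if node not in depth:
            depth[node] = 1 + max((d(p) for p in graph[node]), default=0)
        return depth[node]
    return max(d(n) for n in graph)

print(critical_path(deps), "cycles vs", len(deps), "instructions")  # 3 vs 8
```

The gap between the 3-cycle critical path and the 6-8 cycles of the fixed architectures is exactly the headroom the flexible, core-renting scheme of the next section tries to reclaim.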
The *obvious reasons here are the rigid architecture and the lack of communication possibilities*, respectively. Consider now a processor with a flexible architecture, where the processor can outsource part of its job: it can rent processing units from a chip-level pool just for the time it takes to execute a few instructions. The cores are smart: they can communicate with each other, they even know the task to be solved, and they are able to organize their own work while outsourcing part of it to the rented cores. The sample calculation, borrowed from [@HwangParallelism:2016] as shown in Fig. \[fig:flexibleproc\], left side, can then be solved as shown on the right side of the figure. The core $O_1$ originally receives the complete task to make the calculation, as it would be calculated by a conventional single-issue, single-core system, in 8 clock cycles. However, $O_1$ is more intelligent. Using the hints hidden in the object code, it notices that the task can be outsourced to other cores. For this purpose it rents, one by one, cores $H_1$ and $H_2$ to execute the two multiplications. The rented $H_i$ cores are also intelligent, so they in turn outsource loading the operands to cores $L_{i1}$ and $L_{i2}$. These execute the outsourced job: they load the operands and return them to the requesting cores $H_i$, which can then execute the multiplications (denoted by $X_i$) and return the result to the requesting core, which can then rent another two cores, $+$ and $-$, for the final operations. Two results are thus produced. This unusual kind of architecture must respond to some unusual requirements. First of all, the architecture must be able to organize itself as the received task requires, and build the corresponding “processing graph”; see Fig. \[fig:DynPar\] (for the legend see [@VeghDynamicParallelism:2016]). Furthermore, it must provide a mechanism for mapping the virtually infinite number of processing nodes to the finite number of cores. The $L_{ij}$ cores must receive the address of the operand, i.e.
at least some information must be passed to the rented core. Similarly, the loaded operand must be returned to the renting core in a synchronized way. In the first case synchronization is not a problem: the rented core begins its independent life when it receives its operands. In the second case the rented core finishes its assigned operation and sends the result asynchronously, independently of the needs of the renting core. This means that the architecture must provide a mechanism for transferring some (limited amount of) data between cores, a signalization mechanism for renting and returning cores, as well as latched intermediate data storage for passing data in a synchronized way. The empty circles are the theoretically needed operations, and the shaded ones are additional operations of the “smart” cores. The number of cores being used changes continuously as they are rented and returned. Although *physically* they may be the same core, *logically* they are brand new. Note that the “smart” operations, which comprise simple bit manipulations and multiplexing, are much shorter than the conventional ones that comprise complex machine instructions, and since the rented cores work in parallel (or at least mostly overlap), the calculation is carried out in 3 clock periods. The cycle period is somewhat longer, but the attainable parallelism approaches the theoretically possible one, and is more than twice as high as the one attainable using either dual-issue or dual-core processors. Although the average need of cores is about 3, these cores can be the simplest processors, i.e. the decreasing complexity of the cores (over)compensates for the increasing complexity of the processor.
In addition, since hidden parallelization (like out-of-order execution and speculation) can be replaced by the functionality of the flexible architecture, the control part of the processor and the computational complexity can be decreased, and as a result the clock speed can be increased. A processor with such an internal architecture appears to the external world as a “superprocessor”, having several times greater performance than could be extracted from a single-threaded processor. This processor can adapt itself to the task: unlike in the dual-issue processor, all (rented) units are permanently in use. *Many-core systems with a flexible architecture comprising cooperating cores can approach the theoretically possible maximum parallelism.* In addition, the number of cores can be kept at a strict minimum, allowing a reduction of the power consumption. How long can the parallelism of the many-many processor supercomputers still be enhanced, at a reasonable cost? {#sec:manymany} =============================================================================================================== In many-many processor (supercomputer) systems the processing units are assembled using the SPA [@AmdahlSingleProcessor67], and so their maximum performance is bounded by Amdahl’s law. Although Amdahl’s original model [@AmdahlSingleProcessor67] is rather outdated, its simple and clean interpretation allows us to derive meaningful results even for today’s computing systems. Amdahl assumed that in some $\alpha$ part of the total time the computing system engages in parallelized activity, while in the remaining ($1-\alpha$) part it performs some (from the point of view of parallelization) non-payload activity, like sequential processing, networking delay, control or organizational operations, etc. The essential point here is that all these latter activities behave *as if they were sequential processing*.
Under such conditions, the efficiency $E$ is calculated as the ratio of the total speedup $S$ and the number of processors $k$: $$E = \frac{S}{k}=\frac{1}{k(1-\alpha)+\alpha}\label{eq:soverk}$$ Although in the case of supercomputers ($1-\alpha$) comprises contributions of a technically different nature (it can be considered as the “imperfectness” of the implementation of the supercomputer), it also behaves as if it were sequentially processed code. Fig. \[SupercomputerTimeline\] shows how this “imperfectness” decreased during the development of supercomputers, calculated from the actual data of the first three supercomputers in the year in question, over a quarter of a century. As the figure shows, this parameter behaves similarly to the Moore observation, but is independent of it (because the parameter is calculated from $\frac{R_{peak}}{R_{max}}$, any technology dependence is removed). At first glance, it seems surprising to look for any dependence as a function of “imperfectness”. The key is Equ. (\[eq:soverk\]). As $\alpha$ approaches unity, the term $k(1-\alpha)$ determines the overall efficiency of the computing system. To *increase* $k$ by an order of magnitude alone is useless if not accompanied by an order of magnitude *decrease* in the value of ($1-\alpha$). However, while increasing $k$ is simply a linear function, decreasing ($1-\alpha$), like any kind of increasing perfectness, is exponentially more difficult. Fig. \[SupercomputerTimeline\] proves that today’s supercomputers are built in SPA, and makes it questionable whether a further significant decrease of ($1-\alpha$) could be reached at reasonable cost. This means that it is hopeless *to build exa-scale computers using the principles drawn from the SPA*. Looking carefully at $k(1-\alpha)$, one can notice that the two terms describe two important behavioral features of the computing system.
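Plugging numbers into Equ. (\[eq:soverk\]) makes the argument concrete; the particular values of $k$ and $\alpha$ below are illustrative choices, not measured supercomputer data.

```python
# Efficiency from Equ. (eq:soverk): E = 1 / (k*(1-alpha) + alpha).
def efficiency(k, alpha):
    return 1.0 / (k * (1.0 - alpha) + alpha)

# With the "imperfectness" (1-alpha) = 1e-4 held fixed, a tenfold
# increase in the number of processors k collapses the efficiency:
print(round(efficiency(10_000, 0.9999), 3))   # 0.5
print(round(efficiency(100_000, 0.9999), 3))  # 0.091
```

This is the point made in the text: once $k(1-\alpha)$ dominates, growing $k$ alone buys almost nothing unless $(1-\alpha)$ shrinks by the same factor.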
As already discussed, $(1-\alpha)$ describes how well the work of the many-processor system is *coordinated*. The factor $k$, on the other hand, relates to how much the processing units *cooperate*. In the case of the SPA, the processing units are segregated entities, i.e. they do not cooperate at all. If we could make a system where the processing units behave differently in the presence of other processors, we could write $f(k)$ in Equ. (\[eq:soverk\]). Depending on how the cores behave in the presence of other cores when solving a computing task, $f(k)$, i.e. the cooperation of the processing units, can drastically increase the efficiency of many-processor systems. In other words, to increase the performance of many-many-processor computers, *the cores must cooperate* with (at least some) other cores. *Using cooperating cores is inevitable for building supercomputers at a reasonable cost.* Can we eliminate non-payload calculations by replacing them with architectural changes? {#sec:manymany} ======================================================================================== A computer computes everything, because it cannot do any other type of operation. Computational density has reached its upper bound, so no further performance increase in that direction is possible. In addition to introducing different forms of HW and SW parallelism, it is possible to omit some non-payload, do-not-care calculations by providing and utilizing special HW signals instead. Such signals can be provided to the participating cores and used to replace typical computational instruction sequences. The compilation is simple: where the compiler would generate non-payload loop-organization commands, it should instead give a hint about renting a core for executing the non-payload instructions and providing external synchronization signals.
A simple example: when summing up the elements of a vector, the only payload instruction is the respective `add`. One has, however, to address the operand (which includes handling the index, calculating the offset and adding it to the base address), to advance the loop counter, to compare it to the loop bound, and to jump back conditionally. All those non-payload operations can be replaced by handling HW signals, if the cores can cooperate, resulting in a speed gain of about 3, using only one extra core. Moreover, since the intermediate sum is also a do-not-care value until the summing is finished, a different sum-up method can be used, which may utilize dozens of cores and result in a speed gain of dozens. When organizing a loop, the partial sum is one of the operands, so it must be read before adding a new summand, and must be written back to its temporary storage, wasting instructions and memory cycles; in addition, this excludes the possibility of parallelizing the sum-up operation. For details and examples see [@VeghDynamicParallelism:2016]. This latter example also demonstrates that *the machine instruction is too rigid an atomic unit of processing*. *Utilizing HW signals from cooperating cores, rather than providing some conditions through (otherwise do-not-care) calculations, allows us to eliminate obsolete computational instructions, and thus apparently accelerate the computation by a factor of about ten.* Do we really need to pay with indeterministic operation for multiprocessing? {#sec:multiproc} =============================================================================== The need for multi-processing (among other factors) forced the use of exceptional instruction execution: a running process is *interrupt*ed, its HW and SW state is saved and restored, because the hard and soft parts of the *only* processor must be lent to another process. The code of the interrupting process is effectively inserted into the flow of execution of the interrupted code.
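The vector-sum bookkeeping above can be counted explicitly. The per-element instruction mix below is an assumption for illustration (real ISAs differ, and addressing may take more than one instruction), not a measured trace; the quoted gain of about 3 corresponds to removing most, but not all, of the bookkeeping.

```python
# Per element of the vector sum: one payload `add` versus the non-payload
# work listed in the text (index/offset handling, loop-counter advance,
# bound compare, conditional jump). The counts are illustrative assumptions.
PAYLOAD = 1        # the add itself
NON_PAYLOAD = 4    # bookkeeping potentially replaceable by HW signals

def gain_if_removed(removed):
    """Speedup if `removed` non-payload instructions per element are
    replaced by HW signals on a cooperating core."""
    total = PAYLOAD + NON_PAYLOAD
    return total / (total - removed)

print(gain_if_removed(3))  # bookkeeping mostly removed: 2.5x
print(gain_if_removed(4))  # all bookkeeping removed: 5.0x
```

The point is the ratio, not the exact factor: when the payload is a single instruction per element, almost the entire loop body is overhead that a cooperating core could absorb.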
This maneuver causes indeterministic behavior of the processor: the time at which two consecutive machine instructions in a code flow are executed becomes indeterminate. The above is due to the fact that during development, some of the really successful accelerators, like the internal registers and the highest-level cache, became part of the architecture: the soft part of the processor. In order to change to a new thread, the current soft part must be saved in (and later restored from) the memory. Utilizing asynchronous interrupts as well as operating system services implies a transition to a new operating mode, which is a complex and very time-consuming process. All these extensions were first developed when computer systems had only one processor, and the only way to provide the illusion of running several processes, each having its own processor, was to detach the soft part from the hard one. Because of the lack of proper hardware support, this illusion depended on using SW services and on architectures constructed with the SPA in mind, conditions that exact a high price in execution time: in modern systems a context change may require several thousand clock cycles. As hyper-threading has proved, detaching the soft and hard parts of the processor results in considerable performance enhancement. With more than one processor and the Explicitly Many-Processor Approach [@VeghDynamicParallelism:2016], the context change can be greatly simplified. For such new tasks as providing operating system services and servicing external interrupts, a dedicated core can be reserved. The dedicated core can be prepared and held in supervisor mode. When the execution of the instruction flow follows, it is enough to clone the relevant portion of the soft part: for interrupt servicing nothing is needed; for using OS services, only the relevant registers and maybe the cache.
(The idea is somewhat similar to utilizing shadow registers for servicing an asynchronous interrupt.) If the processors can communicate with each other using HW signals rather than OS actions, and some communication mechanism different from using (shared) memory is employed, the apparent performance of the computing system becomes much higher. *For cooperating cores, no machine instructions (which waste real time, machine and memory cycles) are needed for a context change, allowing a several-hundredfold more rapid execution in these spots.* The application can even run in parallel with the system code, allowing further (apparent) speedup. Using the many-processor approach creates many advantageous changes in the real-time behavior of computing systems. Since the processing units do not need to save or restore anything, servicing can start immediately and is restricted to the actual payload instructions. The dedicated processing units cannot be addressed by non-legal processing units, so issues like priority inversion are excluded at the HW level. And so on. The common part: implement supervised cooperating cores, handling extra signals and storages {#sec:manymany} ============================================================================================ From all points of view (the just-a-few and many-many processor cases, as well as utilizing kernel-mode or real-time services) we arrive at the same conclusion: segregated processors in many-processor systems do not allow a further increase in the performance of our computing systems, while cooperating processors can increase the attainable single-threaded performance.
Amdahl already contended half a century ago: “*the organization of a single computer has reached its limits and that truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit cooperative solution.*” [@AmdahlSingleProcessor67] At this point the many-core architectures have the advantage that the cores are in close physical proximity to one another: there is no essential difference between a core reaching its own register (or signal) and reaching that of another core. The obstacle is actually the SPA: for a core and a process, there exists no other core. In the suggested new approach, which can be called the *Explicitly Many-Processor Approach* (EMPA), the cores (through their supervisor) can know about their neighbours. Today, radical departures from conventional approaches (including rethinking the complete computing stack) are being advocated [@Esmaeilzadeh:2015:AAP:2830689.2830693], but at the same time a smooth transition must be provided to that radically new technology. *To preserve compatibility with conventional computing, the EMPA approach [@VeghDynamicParallelism:2016] is phrased using the terms of conventional computing* (i.e. it contains SPA as a subset). How do algorithms benefit from the EMPA architecture? ===================================================== Some of the above-mentioned boosting principles are already implemented in the system. From the statistics one can see that in some spots, performance gains in the range of 3-30 can be reached. Different algorithms need different new accelerator building blocks within the EMPA framework. For example, the gain of 3 in executing a loop, when applied to an image-processing task in which a 2-dimensional matrix is used for edge detection, means nearly an order of magnitude performance gain in calculating a new point, using the same computational architecture; and to consider all points of the picture, another double loop is used.
This means that a 4-core EMPA processor can produce nearly 100 times more rapid processing (not considering that several points can be processed in parallel on processors with more cores). This is achieved not by increasing computing density, but by replacing certain non-payload calculations with HW signals, and so executing 100 times fewer machine instructions. How can Amdahl’s dream be implemented? {#sec:dream} ====================================== The MC architecture comprising segregated cores is indeed broken. It can, however, be mended, if the manycore chips are manufactured with cooperating cores. As the first step toward implementing such a system, for simulating its sophisticated internal operation and providing tools for understanding and validating it, an EMPA development system [@VeghDynamicParallelism:2016] has been prepared. An extended assembler prepares EMPA-aware object code, while the simulator allows us to watch the internal operation of the EMPA processor. To illustrate the execution of programs using the EMPA method, a processing diagram is automatically prepared by the system, and different statistics are assembled. Fig. \[fig:DynPar\] shows the equivalent of Fig. \[fig:flexibleproc\], running on an 8-core and a 4-core processor, respectively (for the legend see [@VeghDynamicParallelism:2016]). The left-hand figure depicts the case when an “unlimited” number of processing units is available; the right-hand one shows the case when the processor has a limited number of computing resources to implement the maximum possible parallelism. The code assembled by the compiler is the same in both cases. The supervisor logic detects if not enough cores are available (see the right side), and delays the execution (outsourcing of more code) of the program fragments until some cores are free again.
The execution time gets longer if the processor cannot rent enough cores for the processing, but the same code will run in both cases, without deadlock or violation of dependencies. For an electronic implementation, some ideas may be borrowed from the technology of reconfigurable systems. There, in order to minimize the need for transferring data, some local storage (block-RAM) is located between the logical blocks, and a LOT of wires are available for connecting them. In analogy also with FPGAs, the cores can be implemented as mostly fixed-functionality processing units, having multiplexed connecting wires to their supervisor with fixed routing. Some latch registers and non-stored-program functionality gates can be placed near those blocks, which can be accessed by both the cores and the supervisor. The inter-core latch data can be reached from the cores using pseudo-registers (i.e. they have a register address, but are not part of the register file), and the functionality of the cores also depends on the inter-core signals. In the prefetch stage the cores can inform the supervisor about the presence of metainstructions in their object code, and in this way the mixed-code instructions can be directed to the right destination. In order to be able to organize execution graphs, the cores (after renting) are in a parent-child relation to unlimited depth. As was very correctly stated [@Vishkin:BrokenManycoreCACM], “due to its high level of risk, prototype development fits best within the research community.” The principles and practice of EMPA differ radically from those of SPA. To compare the performance of the two, EMPA needs a range of development. Many of the present components (accelerators, compilers, etc.), designed with SPA in mind, do not fit EMPA. The research community can accept (or reject) the idea, but it definitely warrants some cooperative work. [1]{} G. M. Amdahl. . In [*AFIPS Conference Proceedings*]{}, volume 30, pages 483–485, 1967. H. Esmaeilzadeh.
Approximate acceleration: A path through the era of dark silicon and big data. In [*Proceedings of the 2015 International Conference on Compilers, Architecture and Synthesis for Embedded Systems*]{}, CASES ’15, pages 31–32, Piscataway, NJ, USA, 2015. IEEE Press. H. Fu, J. Liao, J. Yang, L. Wang, Z. Song, X. Huang, C. Yang, W. Xue, F. Liu, F. Qiao, W. Zhao, X. Yin, C. Hou, C. Zhang, W. Ge, J. Zhang, Y. Wang, C. Zhou, and G. Yang. . , 59(7):1–16, 2016. S. H. Fuller and L. I. Millett, editors. The National Academies Press, 2011. K. Hwang and N. Jotwani. . Mc Graw Hill, 3 edition, 2016. Intel. <https://software.intel.com/en-us/blogs/2014/02/19/why-has-cpu-frequency-ceased-to-grow>, 2014. J. [V[é]{}gh]{}. . , Aug. 2016. U. Vishkin. . , 57(4):35–39, April 2014. J. Yang, H. Cui, J. Wu, Y. Tang, and G. Hu. . , 57(3):58–69, 2014.
--- abstract: 'A [*superposition rule*]{} is a particular type of map that enables one to express the general solution of certain systems of first-order ordinary differential equations, the so-called [*Lie systems*]{}, out of generic families of particular solutions and a set of constants. The first aim of this work is to propose several generalisations of this notion to second-order differential equations. Next, several results on the existence of such generalisations are given and relations with the theories of Lie systems and quasi-Lie schemes are found. Finally, our methods are used to study second-order Riccati equations and other second-order differential equations of mathematical and physical interest.' --- **Superposition rules and second-order Riccati equations** <span style="font-variant:small-caps;">J.F. Cariñena$^*$ and J. de Lucas$^{**}$</span> $^*$Departamento de Física Teórica and IUMA, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12, 50.009, Zaragoza, Spain. $^{**}$Institute of Mathematics, Polish Academy of Sciences, Śniadeckich 8, P.O. Box 21, 00-956, Warszawa, Poland. [[**Primary:**]{} [*34A26; Secondary: 34A34,53Z05.*]{}]{} [[**Keywords:**]{} [*Quasi-Lie scheme, Lie system, second-order system, second-order Riccati equation, superposition rule.*]{}]{} Introduction ============ The origin of the study of superposition rules can be traced back to 1893, when Lie, Guldberg, and Vessiot published a series of papers [@Gu93; @LS; @Ve93; @Ve94] characterising and analysing systems of first-order ordinary differential equations admitting a superposition rule. This theory, broadly analysed during its beginning, was rarely treated for the next eighty years. 
Nevertheless, the interest in this topic has revived in the last few decades, and many works have recently been devoted to the analysis of its properties and to the study of its applications and generalisations [@BecGagHusWin90; @CGL08; @CGM07; @CLR08; @CLR08c; @CarRam03; @GGL08; @Ib99; @LW96; @LP09; @RSW97; @Ve95; @PW]. Among these works, we can emphasise the results described in [@CGM07; @Ib99; @PW]. In spite of its interesting properties and applications in Mathematics, Control Theory, and Physics (see [@CGM07; @CL08Non; @CLR08; @CLR08c; @CarRam03; @GGL08; @OlmRodWin8687; @RSW97]), the theory of Lie systems possesses a certain lack of applicability in many other problems described by differential equations that are not Lie systems. This motivates the generalisation of the methods for analysing Lie systems carried out here, and in previous works, so as to investigate non-Lie systems. The theories of Lie systems [@CGM07; @LS; @PW] and quasi-Lie schemes [@CGL08; @CLL08Emd] mainly concern the investigation of systems of first-order differential equations. Nevertheless, various attempts have been carried out to apply their methods to study second-order differential equations (SODEs) [@CL08Non; @CLR08; @CLR08c; @GGL08; @RSW97; @Ve95]. These papers were rather more focused on applying the methods of the aforementioned theories to SODEs than on describing new theoretical results. This explains why, although these applications suggested the existence of a new type of (nonlinear) expressions describing general solutions of SODEs, no further discussion about such expressions and their properties was carried out. In view of the above comments, the first aim of this work is to define some new notions describing, as particular cases, those expressions for the study of SODEs appearing in the very recent literature, namely, time-dependent and time-independent superposition rules for systems of SODEs. 
Furthermore, some theoretical results concerning the existence of such superposition rules are demonstrated, and several relations with the theories of Lie systems and quasi-Lie schemes are shown. In this way, our achievements allow us to clarify diverse procedures and techniques used in previous works to investigate SODEs. After developing the theoretical part of our work, our results are applied to study families of SODEs appearing broadly in the Physics and Mathematics literature [@Ar97; @CGR09; @CRS05; @CLS05II; @Da62; @GGL08; @In86; @KL09; @BLPS09]. More specifically, time-independent and time-dependent superposition rules are derived for such families using the theory of Lie systems and quasi-Lie schemes. This reduces the determination of the general solution of any instance of those families to the determination of four of its particular solutions. On the one hand, this provides a modern fully geometric demonstration of a result sketched in a work of Vessiot [@Ve95] that was derived in an [*ad hoc*]{} way. On the other hand, our work supplies a method to analyse the solutions of specific forms of time-dependent Liénard equations [@CLS05II; @GGL08; @BLPS09], various interesting equations studied by Ince and Davis [@Da62; @In86], some modified Emden equations [@BLPS09], certain second-order Riccati equations [@Ar97; @CGR09; @CRS05], etc. Moreover, particular forms of the analysed SODEs arise in the analysis of fusion pellets [@AAE84]. Among the aforementioned applications, particular attention is paid to the study of second-order Riccati equations. The interest is mainly due to two reasons. On the one hand, these equations are widely studied in order to analyse their mathematical properties [@Ar97; @CRS05; @CGK08; @Er77; @FL66I; @GL99; @Da05]. On the other hand, second-order Riccati equations and some of their particular cases, e.g.
certain modified Emden equations [@CLS05II; @CLS05; @BLPS09] or various Painlevé-Ince equations [@BFL91; @CRS05; @AAE84; @Go50; @KL09], appear in many physical problems, like the problem of waves on shallow water. Another application of second-order Riccati equations deserves special attention: these equations are elements of the so-called [*Riccati chain*]{} describing the Bäcklund transformations for various partial differential equations [@CGR09; @GL99]. More specifically, for each one of such PDEs, a particular solution defines a one-parameter family of $n$-th order Riccati equations. The solutions of the members of such a family allow one to build up a new family of solutions of the initial PDE [@GL99]. Obviously, the time-dependent superposition rule derived in this work for second-order Riccati equations simplifies the study of the solutions of such a family and, in consequence, the determination of new solutions for those PDEs whose Bäcklund transformations are determined by these equations. The plan of the paper is as follows. Section 2 mainly recalls the notions of the theory of Lie and quasi-Lie systems necessary to follow the results of our paper. Additionally, the usefulness of quasi-Lie schemes to explicitly determine time-dependent superposition rules for systems of first-order differential equations is discussed. Section 3 is concerned with one of the novelties of our paper: the definition and analysis of the time-independent and the time-dependent superposition rule notions for systems of SODEs. In Section 4 our previous theoretical results are employed to derive a superposition rule for all the elements of a family of SODEs with many applications to Physics and Mathematics. A quasi-Lie scheme is used in Section 5 to derive a time-dependent superposition rule for second-order Riccati equations. Finally, some calculations to prove various results of the paper are detailed in the Appendix.
Lie systems and quasi-Lie schemes {#LSLS}
=================================

Here we mainly review some features of Lie systems, quasi-Lie schemes and time-dependent superposition rules for systems of first-order differential equations [@CGL08; @CGL10; @CGM07; @CLL08Emd]. In addition, the use of quasi-Lie schemes for deriving time-dependent superposition rules is discussed. For simplicity, we restrict ourselves to analysing systems of differential equations on linear spaces and we skip various technical details that are not relevant to understanding our procedures. Lie initiated in [@LS] what is nowadays called the theory of Lie systems when he started investigating the conditions that ensure that a system of first-order ordinary differential equations of the form $$\label{FORD} \frac{dx^i}{dt}=X^i(t,x),\qquad i=1,\ldots,n,$$ admits a [*superposition rule*]{}, i.e. a $t$-independent function $\Phi:\mathbb{R}^{nm}\times\mathbb{R}^n\rightarrow {\mathbb{R}}^n$, $x=\Phi(x_{1}, \ldots,x_{m};\lambda_1,\ldots,\lambda_n)$, such that the general solution of the system can be written as $$\label{FirstSup} x(t)=\Phi(x_{1}(t), \ldots,x_{m}(t);\lambda_1,\ldots,\lambda_n),$$ where $\{x_{a}(t)\mid a=1,\ldots,m\}$ is any generic family of particular solutions and $\lambda_1,\ldots,\lambda_n$ is a set of $n$ constants. Note that the general solution of a linear homogeneous system of first-order differential equations in $\mathbb{R}^n$ cannot be cast into the form (\[FirstSup\]) for every set of $n$ particular solutions: they must be linearly independent. In a similar way, for all known Lie systems (cf. [@LW96; @PW]), their expressions (\[FirstSup\]) only hold for certain families of particular solutions.
More specifically, it is said that expression (\[FirstSup\]) is valid for any ‘generic’ family of $m$ particular solutions if there exists an open dense subset $U\subset\mathbb{R}^{nm}$ such that expression (\[FirstSup\]) is satisfied for every set of particular solutions $x_{1}(t),\ldots,x_{m}(t)$ such that $(x_{1}(0),\ldots,x_{m}(0))$ lies in $U$. Obviously, ‘almost every’ set of $m$ particular solutions satisfies the above property and this is, indeed, the approximate meaning of the term ‘generic’ in the above definition. Lie found a characterisation of the systems of first-order differential equations admitting a superposition rule [@LS], and such a characterisation was recently reformulated in the modern language of Differential Geometry [@CGM07]. In these terms, each system of the form (\[FORD\]) is described by means of a time-dependent vector field $X(t,x)=\sum_{i=1}^nX^i(t,x)\partial/\partial x^i$ on $\mathbb{R}^n$. This description is used to establish that a first-order system (\[FORD\]) admits a superposition rule if and only if its associated time-dependent vector field $X(t,x)$ can be written as a linear combination $$X(t,x)=\sum_{\alpha =1}^r b_\alpha(t)\, X_{\alpha}(x), \label{Lievf}$$ where the vector fields $$X_{\alpha}(x)=\sum_{i=1}^nX^i_{\alpha}(x)\frac{\partial}{\partial x^i},\qquad\qquad \alpha=1,\ldots,r,$$ form a basis of an $r$-dimensional real Lie algebra $V$ of vector fields, the so-called associated [*Vessiot–Guldberg Lie algebra*]{}. In other words, a time-dependent vector field $X(t,x)$ describes a Lie system if and only if the family of vector fields $\{X_t\}_{t\in\mathbb{R}}$, with $X_t:x\in\mathbb{R}^n\rightarrow X_t(x)\equiv X(t,x)\in {\rm T}\mathbb{R}^n$, satisfies $\{X_t\}_{t\in\mathbb{R}}\subset V$ for a certain finite-dimensional Lie algebra of vector fields $V$. Various procedures are available to derive a superposition rule.
Let us sketch here one of these methods, to be used within this work (for a full description, see [@CGM07]). The key element of this procedure is the so-called [*diagonal prolongation*]{} of a vector field. Given a vector field $X(x)=\sum_{i=1}^nX^i(x)\partial/\partial x^i$ on $\mathbb{R}^n$, its diagonal prolongation to $(\mathbb{R}^n)^{m+1}$ is the vector field on this space $$\widehat X(x_{0}, \ldots,x_{m})=\sum_{a=0}^m\sum_{i=1}^nX^i(x_{a})\,\frac{\partial}{\partial x^i_{a}}.$$ Now, our approach to derive a superposition rule for system (\[FORD\]) starts by determining a decomposition (\[Lievf\]) for its associated time-dependent vector field. Next, one must determine the natural number $m$ for which the diagonal prolongations to $(\mathbb{R}^n)^m$ of the vector fields $X_1,\ldots,X_r$ become linearly independent at each point of an open dense subset of this space. Note that the diagonal prolongations $\widehat X_1,\ldots,\widehat X_r$ to $\mathbb{R}^{n(m+1)}$ of the vector fields $X_1,\ldots,X_r$ satisfy the same commutation relations as $X_1,\ldots,X_r$ and they are again linearly independent over an open dense set $U\subset \mathbb{R}^{n(m+1)}$. Consequently, they span an involutive distribution $$\mathcal{D}_{\bar x}=\langle \widehat X_1(\bar x),\ldots,\widehat X_r(\bar x)\rangle,\qquad \bar x\in U,$$ of rank $r$ over $U$. The vector fields of this distribution admit $n(m+1)-r$ common local first-integrals. Among them, one can choose $n$ first-integrals giving rise to an $n$-codimensional local foliation $\mathcal{F}$ horizontal with respect to the projection $\pi:(x_0,\ldots,x_m)\in(\mathbb{R}^{n})^{m+1}\mapsto (x_1,\ldots,x_m)\in(\mathbb{R}^{n})^m$. Roughly speaking, the former first-integrals enable us to express the coordinates $x_0^i$, with $i=1,\ldots,n,$ in terms of the other variables and the $n$ first-integrals, giving rise to the superposition rule for system (\[FORD\]).
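As a concrete classical illustration of a superposition rule (recalled here for the reader; it is not one of the new results of this work), consider the Riccati equation $\dot x=b_1(t)+b_2(t)x+b_3(t)x^2$, whose Vessiot–Guldberg Lie algebra is isomorphic to $\mathfrak{sl}(2,\mathbb{R})$. Its superposition rule expresses the general solution through any three generic particular solutions and one constant, via the well-known fact that the cross-ratio of any four solutions is constant in $t$. The following pure-Python sketch checks this invariance numerically; the coefficient choice $b_1=1$, $b_2=t$, $b_3=-1$, the initial values, and the hand-rolled RK4 integrator are our own illustrative assumptions.

```python
# Pure-Python check that four solutions of a Riccati equation keep a constant
# cross-ratio (the classical superposition rule).  The coefficients b1 = 1,
# b2 = t, b3 = -1 are an arbitrary illustrative choice.

def f(t, x):
    return 1.0 + t * x - x * x

def rk4(x0, t0, t1, n=2000):
    """Integrate dx/dt = f(t, x) from t0 to t1 with n classical RK4 steps."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

def cross_ratio(x, x1, x2, x3):
    return (x - x1) * (x2 - x3) / ((x - x3) * (x2 - x1))

ics = [2.0, 0.0, 0.5, 1.0]                 # four distinct initial values
lam0 = cross_ratio(*ics)                   # cross-ratio at t = 0 (= -2.0 here)
sols = [rk4(c, 0.0, 1.0) for c in ics]     # the same four solutions at t = 1
lam1 = cross_ratio(*sols)
assert abs(lam0 - lam1) < 1e-8             # invariance, up to RK4 error
```

Solving the cross-ratio condition $\lambda=\frac{(x-x_1)(x_2-x_3)}{(x-x_3)(x_2-x_1)}$ for $x$ yields the explicit superposition rule $x=\Phi(x_1,x_2,x_3;\lambda)$ in the sense of (\[FirstSup\]), with $m=3$ and one constant.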
Since it would be interesting to find a way to apply the methods for analysing Lie systems to a broader set of differential equations, the theory of quasi-Lie schemes, whose basic features are described below, was developed [@CGL08]. Although such a theory applies to systems of first-order differential equations associated with complete and non-complete time-dependent vector fields, the forthcoming presentation will focus on studying systems associated with complete vector fields in order to simplify the presentation and underline the main results of the theory. Nevertheless, a careful analysis shows that our main claims remain valid for systems associated with non-complete vector fields, although with some minor technical modifications. Let us now turn to defining the key notion of the theory of quasi-Lie schemes. A [*quasi-Lie scheme*]{} $S(W,V)$ is made up of two finite-dimensional vector spaces of vector fields $W,V$ satisfying the following conditions:

- $W$ is a linear subspace of $V$.

- $W$ is a Lie algebra of vector fields, that is, $[W,W]\subset W$.

- $W$ normalises $V$, i.e. $[W,V]\subset V$.

Roughly speaking, each quasi-Lie scheme $S(W,V)$ determines two families of time-dependent vector fields, $V(\mathbb{R})$ and $W(\mathbb{R})$, which are made of time-dependent vector fields taking values in $V$ and $W$, respectively. The first family, $V(\mathbb{R})$, is intended to describe the first-order systems which can be analysed by means of $S(W,V)$. The second one, $W(\mathbb{R})$, is used to define a group of time-dependent changes of variables: the [*group of the scheme*]{} $\mathcal{G}(W)$. Now, each system related to a time-dependent vector field of $V(\mathbb{R})$ transforms, under any element of $\mathcal{G}(W)$, into another system related to another element of $V(\mathbb{R})$. This is the most important property of quasi-Lie schemes [@CGL08 Proposition 1].
It has been widely used in recent works [@CGL08; @CLL08Emd; @CLR10Abel] to transform diverse types of differential equations, e.g. Abel equations [@CLR10Abel], into other differential equations of the same type and, more specifically, into Lie systems. In this way, the theory of Lie systems applies to investigate this last system and, by undoing the performed change of variables, multiple properties of the original system can be stated. Let us explain the above claims more carefully. Every time-dependent vector field $X$ on $\mathbb{R}^n$ gives rise to a [*generalised flow*]{} $g^X$, i.e. a map $g^X:(t,x)\in\mathbb{R}\times \mathbb{R}^n \mapsto g^X_t(x)\equiv g^X(t,x)\in \mathbb{R}^n$, with $g^X_0={\rm Id}_{\mathbb{R}^n}$, such that $\gamma^X_{x_0}(t)=g^X_t(x_0)$ is the integral curve of the time-dependent vector field $X$ passing through the point $x_0\in\mathbb{R}^n$ at $t=0$. Now, since $W$ is a Lie algebra of vector fields, it can be proved that the generalised flows corresponding to time-dependent vector fields with values in $W$ form a group, the so-called group of the scheme $\mathcal{G}(W)$, with composition law $(g\star h)_t=g_t\circ h_t$ for $g,h\in\mathcal{G}(W)$. The neutral element is $e:(t,x)\in\mathbb{R}\times\mathbb{R}^n\mapsto x\in\mathbb{R}^n$ and every generalised flow $g$ admits an inverse $g^{-1}:(t,x)\in\mathbb{R}\times\mathbb{R}^n\mapsto (g_t)^{-1}(x)\in\mathbb{R}^n$. As every generalised flow can be considered as a time-dependent change of variables, the group $\mathcal{G}(W)$ can be regarded as a group of time-dependent changes of variables. Generalised flows can also act on time-dependent vector fields. Given a time-dependent vector field $Y$ and a generalised flow $h$, the action of $h$ on $Y$, denoted by $h_\bigstar Y$, is the time-dependent vector field whose generalised flow is $h\star g^Y$.
In this terminology, the main result of the theory of quasi-Lie schemes [@CGL08 Proposition 1] establishes that for every time-dependent vector field $Y$ of $V(\mathbb{R})$ and $g\in\mathcal{G}(W)$, the time-dependent vector field $g_\bigstar Y$ belongs to $V(\mathbb{R})$. In other words, the family of systems associated with the elements of $V(\mathbb{R})$ is stable under the time-dependent changes of variables of $\mathcal{G}(W)$. Among such systems, those that can be transformed into Lie systems, the so-called [*quasi-Lie systems*]{}, are of special relevance. The precise definition of this notion is detailed below. A system of differential equations describing the integral curves of a time-dependent vector field $X$ is a [*quasi-Lie system*]{} with respect to a quasi-Lie scheme $S(W,V)$, if there exist a $g\in\mathcal{G}(W)$ and a Lie algebra $V_0\subset V$ such that $g_\bigstar X$ is a time-dependent vector field taking values in $V_0$. Quasi-Lie systems enjoy certain properties as a consequence of their relation to Lie systems. For instance, the general solution of each quasi-Lie system can be described in terms of any generic family of particular solutions, a set of constants, and the time, i.e. it admits a [*time-dependent superposition rule*]{} [@CGL08]. Moreover, the theory of quasi-Lie systems provides powerful methods to explicitly determine such superpositions [@CGL08; @CLL08Emd; @CLR10Abel]. In order to understand the relevance of these facts, it is necessary to pay attention to the following remarks. Every system of first-order differential equations admits time-dependent superposition rules [@CGL08; @CGL10]. For instance, every first-order system describes the integral curves of a time-dependent vector field admitting a generalised flow, which is indeed a particular type of time-dependent superposition rule for the system.
In spite of this, determining such time-dependent superposition rules can be, as in the case of the aforementioned example, as difficult as solving the initial system [@CGL08; @CGL10]. Thus, what really matters about time-dependent superposition rules is the description of procedures, such as quasi-Lie schemes, to determine them explicitly. Apart from the above remark, there is another reason to use quasi-Lie schemes to determine time-dependent superposition rules: every quasi-Lie scheme provides, as a bonus, a family of first-order systems admitting the same time-dependent superposition rule [@CGL10 Proposition 14]. Let us briefly analyse this fact. Given a quasi-Lie system with respect to a quasi-Lie scheme $S(W,V)$, there exist an element $g\in \mathcal{G}(W)$ and a Lie algebra $V_0\subset V$ such that the time-dependent vector field $X$ associated with the quasi-Lie system satisfies that $g_\bigstar X$ takes values in $V_0$. Therefore, the set $S_g(W,V;V_0)$ of quasi-Lie systems with respect to $S(W,V)$ satisfying that $g_\bigstar X$ takes values in $V_0$ is not empty. Moreover, it is made of time-dependent vector fields of the form $X'=(g^{-1})_\bigstar Y$, with $Y$ being any time-dependent vector field taking values in $V_0$. As all the elements of $S_g(W,V;V_0)$ admit a common time-dependent superposition rule (cf. [@CGL10 Proposition 14]), we have determined a family of systems admitting the same time-dependent superposition rule as $X$.

Superposition rules for systems of SODEs {#SRSO}
========================================

Previous studies of second-order differential equations carried out by means of the theory of Lie systems and quasi-Lie schemes lacked a theoretical explanation of the methods and notions employed [@CGL08; @CL08Non; @CLR08; @RSW97; @Ve95].
The main aim of this section is to provide a definition for the new notions appearing, with no further explanation, in previous works, along with a theoretical explanation of the methods performed there. In this way, a basic background for the subsequent theoretical treatment of the subject is laid down. Recall that the theory of Lie systems was initiated as a result of the study of systems of first-order differential equations admitting their general solutions to be expressed in terms of each generic family of particular solutions and a set of constants. Nevertheless, such systems are not the only ones whose general solutions can be described in this way. For instance, every second-order differential equation of the form $\ddot x=a(t)x$, with $a(t)$ any time-dependent function, satisfies that its general solution, $x(t),$ can be put into the form $$\label{LinearSup} x(t)=\lambda_1x_{1}(t)+\lambda_{2}x_{2}(t),$$ with $\lambda_1,\lambda_{2}$ being two real constants and $x_{1}(t),x_{2}(t)$ being any family of two particular solutions such that $(x_{1}(t),\dot x_{1}(t))$ and $(x_{2}(t),\dot x_{2}(t))$ are, for every $t\in\mathbb{R}$, two linearly independent elements of ${\rm T}\mathbb{R}\approx \mathbb{R}^2$. In a similar way, other expressions determining the general solution of certain SODEs have recently been described in the literature [@CL08Non; @CLR08c]. This suggests proposing the following definition, which covers, as particular cases, all the previous expressions occurring in the literature.
\[SupRulSec\] We say that a system of second-order differential equations $$\label{SODE} \ddot x^i=F^i(t,x,\dot x), \qquad \,\, i=1,\ldots,n,$$ on $\mathbb{R}^n$ admits a [*superposition rule*]{} if there exists a map $\Psi:{\rm T}\mathbb{R}^{mn}\times\mathbb{R}^{2n}\rightarrow \mathbb{R}^n$ such that its general solution, $x(t)$, can be written as $$\label{super} x(t)=\Psi(x_{1}(t),\ldots,x_{m}(t),\dot x_{1}(t),\ldots,\dot x_{m}(t);\lambda_1,\ldots,\lambda_{2n}),$$ in terms of each generic family, $x_{1}(t),\ldots,x_{m}(t),$ of particular solutions, their derivatives, and a set of $2n$ constants. In order to grasp the previous definition, it is necessary to precisely establish the meaning of ‘generic’ in the above statement. Formally, it is said that expression (\[super\]) is valid for a generic family of particular solutions when it holds for every family of particular solutions, $x_{1}(t),\ldots,x_{m}(t),$ satisfying that $(x_{1}(0),\dot x_{1}(0),\ldots,x_{m}(0),\dot x_{m}(0))\in U$, with $U$ being an open dense subset of ${\rm T}\mathbb{R}^{nm}$. In this way, as in the case of superposition rules for Lie systems, the term ‘generic’ amounts to ‘almost every’. In order to note that all those aforementioned expressions studying the general solutions for certain SODEs are superposition rules in the above sense, it is necessary to explain an important detail. Some of such expressions, like the one for studying Milne–Pinney equations, depend on a generic set of $m$ particular solutions, a set of constants and a set of time-independent constants of the motion [@CL08Non]. These expressions seem to differ from the above definition. Nevertheless, if we take into account that such constants of the motion depend on the $m$ particular solutions, we can notice that, indeed, they are superposition rules in the above sense. 
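The linear superposition rule (\[LinearSup\]) for $\ddot x=a(t)x$ is easy to verify numerically. The pure-Python sketch below (the coefficient $a(t)=-(1+t)$, the time interval and the RK4 integrator are our own illustrative assumptions) integrates two particular solutions with linearly independent initial data $(x,\dot x)(0)=(1,0)$ and $(0,1)$, and checks that the solution with initial data $\lambda_1(1,0)+\lambda_2(0,1)$ coincides with $\lambda_1x_1(t)+\lambda_2x_2(t)$.

```python
# Checks the linear superposition rule x(t) = lam1*x1(t) + lam2*x2(t) for the
# SODE  x'' = a(t) x, written as the first-order system (x', v') = (v, a(t) x).
# a(t) = -(1 + t) is an arbitrary illustrative choice.

def a(t):
    return -(1.0 + t)

def rhs(t, x, v):
    return v, a(t) * x            # (dx/dt, dv/dt)

def rk4_sode(x0, v0, t0, t1, n=2000):
    h = (t1 - t0) / n
    t, x, v = t0, x0, v0
    for _ in range(n):
        k1x, k1v = rhs(t, x, v)
        k2x, k2v = rhs(t + h / 2, x + h * k1x / 2, v + h * k1v / 2)
        k3x, k3v = rhs(t + h / 2, x + h * k2x / 2, v + h * k2v / 2)
        k4x, k4v = rhs(t + h, x + h * k3x, v + h * k3v)
        x += h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        t += h
    return x, v

x1, _ = rk4_sode(1.0, 0.0, 0.0, 1.0)      # particular solution 1 at t = 1
x2, _ = rk4_sode(0.0, 1.0, 0.0, 1.0)      # particular solution 2 at t = 1
lam1, lam2 = 0.3, -0.7
x, _ = rk4_sode(lam1, lam2, 0.0, 1.0)     # initial data lam1*(1,0)+lam2*(0,1)
err = abs(x - (lam1 * x1 + lam2 * x2))
assert err < 1e-10                        # equality up to round-off
```

Note that the constants $\lambda_1,\lambda_2$ are fixed by the initial data because $(x_1,\dot x_1)(0)=(1,0)$ and $(x_2,\dot x_2)(0)=(0,1)$ form a basis of ${\rm T}\mathbb{R}\approx\mathbb{R}^2$; since the equation is linear, the identity holds exactly and the numerical check only measures round-off.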
Although there exists no characterisation of the systems of SODEs of the form (\[SODE\]) admitting a superposition rule, there exists a special class of such systems, the so-called [*SODE Lie systems*]{}, admitting such a property. Even though this fact has been employed implicitly in the literature, it has never been proved explicitly. In view of these facts, we next furnish the definition of the SODE Lie system notion along with a proof showing that every SODE Lie system admits a superposition rule. In addition, some remarks about the interest of this notion and its main properties are discussed. \[DefSODE\] We say that the system of SODEs (\[SODE\]) is a [*SODE Lie system*]{} if the system of first-order differential equations $$\label{FOrder} \left\{ \begin{aligned} \dot x^i&=v^i,\\ \dot v^i&=F^i(t,x,v), \end{aligned}\right.\qquad i=1,\ldots,n,$$ obtained by adding the new variables $v^i=\dot x^i$, with $i=1,\ldots,n$, to system (\[SODE\]), is a Lie system. \[SR\] Every SODE Lie system (\[SODE\]) admits a superposition rule $\Psi:{\rm T}\mathbb{R}^{nm}\times\mathbb{R}^{2n}\rightarrow\mathbb{R}^n$ of the form $\Psi=\pi\circ\Phi$, where $\Phi:{\rm T}\mathbb{R}^{nm}\times\mathbb{R}^{2n}\rightarrow {\rm T}\mathbb{R}^n$ is a superposition rule for the system (\[FOrder\]) and $\pi:{\rm T}\mathbb{R}^n\rightarrow\mathbb{R}^n$ is the projection associated with the tangent bundle ${\rm T}\mathbb{R}^n$. Each SODE Lie system of the form (\[SODE\]) is associated with a first-order system of differential equations (\[FOrder\]) admitting a superposition rule $\Phi:{\rm T}\mathbb{R}^{nm}\times \mathbb{R}^{2n}\rightarrow {\rm T}\mathbb{R}^n$. This allows us to describe the general solution $(x(t),v(t))$ of system (\[FOrder\]) in terms of each generic set $(x_{a}(t),v_{a}(t))$, with $a=1,\ldots,m$, of its particular solutions and a set of $2n$ constants, i.e.
$$\label{SupRel2} (x(t), v(t))=\Phi\left(x_{1}(t),\ldots, x_{m}(t),v_{1}(t),\ldots, v_{m}(t);\lambda_1,\ldots,\lambda_{2n}\right).$$ Obviously, each solution, $x_p(t)$, of the second-order system (\[SODE\]) corresponds to one and only one solution $(x_p(t),v_p(t))$ of (\[FOrder\]) and vice versa. Since $(x_p(t),v_p(t))=(x_p(t),\dot x_p(t))$, it follows that the general solution $x(t)$ of (\[SODE\]) can be put in the form $$\label{SupRel4} x(t)=\pi\circ\Phi\left(x_{1}(t),\ldots, x_{m}(t),\dot x_{1}(t),\ldots, \dot x_{m}(t);\lambda_1,\ldots,\lambda_{2n}\right),$$ where $x_{a}(t)$, with $a=1,\ldots,m$, is a generic family of particular solutions of (\[SODE\]). In other words, the map $\Psi=\pi\circ\Phi$ is a superposition rule for the system of second-order differential equations (\[SODE\]). Since every autonomous system is related to a one-dimensional Vessiot–Guldberg Lie algebra [@CGL08], the next corollary follows straightforwardly from the above proposition. Every autonomous system of second-order differential equations of the form $\ddot x^i=F^i(x,\dot x)$, with $i=1,\ldots,n$, admits a superposition rule. Despite its theoretical interest, the above result cannot, most of the time, be straightforwardly used to derive superposition rules. Actually, the superposition rule guaranteed by Proposition \[SR\] relies on obtaining a superposition rule for an autonomous first-order system of differential equations. Considering the method sketched in Section \[LSLS\], we find that determining this superposition rule demands the determination of all the integral curves of a vector field on $({\rm T}\mathbb{R}^n)^2$. Although the existence of the solution of this problem is known, its explicit description can be as difficult as solving the initial system (indeed, this is usually the case).
Consequently, deriving explicitly a superposition rule for the above autonomous system frequently relies on searching for an alternative superposition rule for the associated first-order system. Many superposition rules for systems of second-order differential equations do not show an explicit dependence on the derivatives of the particular solutions. Consider, for instance, either the linear superposition rule (\[LinearSup\]) for the equation $\ddot x=a(t)x$, or the affine one, $$x(t)=\lambda_1(x_{1}(t)-x_{2}(t))+\lambda_2(x_{2}(t)-x_{3}(t))+x_{3}(t),$$ for $\ddot x=a(t)x+b(t)$. Such superposition rules are called [*velocity-free superposition rules*]{} or even [*free superposition rules*]{}. Determining the conditions ensuring the existence of such superposition rules is an interesting open problem. Every system of SODEs (\[SODE\]) admitting a free superposition rule is a SODE Lie system. Suppose that system (\[SODE\]) admits a superposition rule of the special form $$\label{FreeSuperRule} x^i=\Phi_x^i(x_{1},\ldots, x_{m};\lambda_1,\ldots,\lambda_{2n}),\qquad i=1,\ldots,n.$$ In such a case, the general solution, $x(t)$, of the system can be expressed as $$\label{freeSup1} x^i(t)=\Phi^i_x(x_{1}(t),\ldots,x_{m}(t);\lambda_1,\ldots,\lambda_{2n}), \qquad i=1,\ldots,n.$$ If we set $p(t)=(x_{1}(t),\ldots,x_{m}(t),\dot x_{1}(t),\ldots,\dot x_{m}(t))$, $v^i=\dot x^i$, and $v_a^i=\dot x^i_a$ with $a=1,\ldots,m$, the derivative of the above expression with respect to $t$ reads $$\label{freeSup2} v^i(t)=\dot x^i(t)=\sum_{a=1}^m\sum_{j=1}^n\left(v_{a}^j(t)\frac{\partial\Phi_x^i}{\partial x^j_{a}}(p(t))\right),\qquad i=1,\ldots,n.$$ Consequently, there exist functions $$\Phi^i_v(x_1,\ldots,x_m,v_1,\ldots,v_m)=\sum_{a=1}^m\sum_{j=1}^n\left(v_{a}^j\frac{\partial\Phi_x^i}{\partial x^j_{a}}\right),\qquad i=1,\ldots,n,$$ such that $$\left\{ \begin{aligned} x^i(t)&=\Phi_x^i(x_{1}(t),\ldots, x_{m}(t);\lambda_1,\ldots,\lambda_{2n}),\\
v^i(t)&=\Phi_v^{i}(x_{1}(t),\ldots, x_{m}(t),v_{1}(t),\ldots, v_{m}(t);\lambda_1,\ldots,\lambda_{2n}), \end{aligned}\right.\qquad i=1,\ldots,n.$$ Therefore, system (\[FOrder\]) admits a superposition rule and (\[SODE\]) becomes a SODE Lie system. Apart from the SODE Lie system notion, there exists another method to study certain systems of second-order differential equations which admit a regular Lagrangian, like Caldirola–Kanai oscillators or Milne–Pinney equations [@CLR08; @Ru10]. Although this method cannot be used to study general second-order systems, when it applies it provides us with additional information that cannot be derived by means of SODE Lie systems, e.g. about the time-dependent constants of the motion of the system [@Ru10]. A possible generalisation of the concept of superposition rule appearing, for instance, in the theory of quasi-Lie schemes is the so-called [*time-dependent superposition rule*]{} [@CGL08]. This concept can be extended to the framework of systems of second-order differential equations as follows. We say that the map $\Psi:\mathbb{R}\times {\rm T}\mathbb{R}^{mn}\times\mathbb{R}^{2n}\rightarrow \mathbb{R}^n$ is a [*time-dependent superposition rule*]{} for the system of SODEs (\[SODE\]), if its general solution $x(t)$ can be written in terms of each generic family $x_{1}(t),\ldots,x_{m}(t)$ of particular solutions, their derivatives, a set of $2n$ constants, and the time as $$x(t)=\Psi(t,x_{1}(t),\ldots,x_{m}(t),\dot x_{1}(t),\ldots,\dot x_{m}(t);\lambda_1,\ldots,\lambda_{2n}).$$ It is essential to analyse the existence of time-dependent superposition rules in order to understand the relevance of the practical results obtained throughout this work. As will be shown soon, many of the properties of these superpositions are a consequence of the features of time-dependent superposition rules for first-order systems.
Let us now prove the following result concerning the existence of time-dependent superposition rules for systems of SODEs. \[GP\] Every system of SODEs (\[SODE\]) admits a time-dependent superposition rule of the form $\Psi:\mathbb{R}\times\mathbb{R}^{2n}\rightarrow\mathbb{R}^n$. Every system (\[SODE\]) is related to a first-order system (\[FORD\]) admitting a flow $g:(t;\lambda)\in\mathbb{R}\times\mathbb{R}^{2n}\mapsto g_t(\lambda)\in{\rm T}\mathbb{R}^n$ which allows us to cast its general solution, $\xi(t)$, into the form $\xi(t)=g_t(k)$. Consequently, the general solution, $x(t)$, of system (\[SODE\]) can be written as $x(t)=\pi\circ g_t(k)$. In other words, system (\[SODE\]) admits a time-dependent superposition rule depending just on $2n$ constants. Despite the remarkable theoretical interest of the above result, it does not provide any additional method for the explicit derivation of solutions of systems of SODEs. Indeed, note that the derivation of the above time-dependent superposition rule amounts to working out the generalised flow for the first-order system (\[FORD\]). This involves solving the system for each initial condition. If this can be done explicitly, determining the above superposition rule becomes unnecessary; otherwise, the superposition rule, although interesting, cannot be provided. Apart from the time-dependent superposition rule guaranteed by Proposition \[GP\], other instances can be ensured to exist. Nevertheless, their explicit determination is usually as difficult as solving the initial system. Let us illustrate this statement more carefully. Consider a system of SODEs (\[SODE\]) related to the system of first-order differential equations (\[FOrder\]), with general solution $p_x(t)=(x(t),v(t))$. Let $X$ be the time-dependent vector field associated with this system, and $Y$ any other time-dependent vector field on ${\rm T}\mathbb{R}^n\simeq\mathbb{R}^{2n}$.
Their corresponding flows, $g^X,g^Y:\mathbb{R}\times{\mathbb{R}^{2n}}\rightarrow{\rm T}\mathbb{R}^n$, satisfy $p_x(t)=g^X_t\circ (g^Y_t)^{-1}(p_y(t))$, where $p_y(t)$ is the general solution of the system describing the integral curves of $Y$. Therefore, $(g^X\circ (g^Y)^{-1})_\bigstar Y=X$ (cf. [@CGL08]). In particular, if $Y=0$, then $p_y(t)=\phi(\lambda_1,\ldots,\lambda_{2n})$, where $\phi:\mathbb{R}^{2n}\rightarrow{\rm T}\mathbb{R}^n$ is any diffeomorphism. Therefore, $$p_x(t)=g_t^X(\lambda_1,\ldots,\lambda_{2n})\Longrightarrow x(t)=\pi\circ g_t^X(\lambda_1,\ldots,\lambda_{2n}).$$ In other words, system (\[SODE\]) admits a time-dependent superposition rule depending just on a set of $2n$ constants. In a similar way, if we assume $Y$ to be any autonomous, non-vanishing, vector field, the system describing its integral curves is a Lie system ($[Y,Y]=0$) and the straightforward application of the method developed in [@CGL08] shows that it admits a superposition rule depending on one particular solution $p_{y_1}(t)=(y_1(t),\dot y_1(t))$, i.e. $p_y(t)=\Phi(p_{y_{1}}(t);\lambda_1,\ldots,\lambda_{2n})$. Consequently, the general solution, $x(t)$, of the system (\[SODE\]) associated with $X$ now reads $$x(t)=\pi\circ g^X_t\circ(g^Y_t)^{-1}\circ \Phi(g^Y_t\circ(g^{X}_t)^{-1}(p_{x_{1}}(t));\lambda_1,\ldots,\lambda_{2n}),$$ where $p_{x_1}(t)=(x_1(t),\dot x_1(t))$ is a particular integral curve of $X$. That is, system (\[SODE\]) admits a time-dependent superposition rule depending on one particular solution. Note that the determination of the above superposition rules relies, among other things, on the determination of the flow of the initial time-dependent vector field. Therefore, obtaining the superposition rule for (\[SODE\]) is as difficult as solving the system (\[SODE\]). Other similar constructions can be found. Nevertheless, most of them share the same drawbacks.
The above remarks make it evident that a key point in the study of time-dependent superposition rules for systems of SODEs is the development of procedures to determine them explicitly. As in the explicit determination of time-dependent superposition rules for systems of first-order differential equations, quasi-Lie schemes play an important role in the description of superpositions for systems of SODEs. Indeed, the remarks made at the end of Section \[LSLS\] also apply here. On one hand, given a system of SODEs (\[SODE\]), quasi-Lie schemes provide a powerful tool of broad applicability to derive time-dependent superposition rules for its associated first-order system (\[FOrder\]). From this, a time-dependent superposition rule for (\[SODE\]) follows immediately. On the other hand, quasi-Lie schemes naturally provide a family of first-order systems, including (\[FOrder\]), whose elements admit the time-dependent superposition rule so obtained. It is easy to prove that all those systems of SODEs whose associated first-order systems are members of this family share the same time-dependent superposition rule. In Section \[QLSORE\] this fact will be clarified and illustrated through the study of second-order Riccati equations. A new superposition rule for a family of SODE Lie systems ========================================================= In this Section we derive a superposition rule for a family of second-order differential equations including, as particular instances, some Painlevé–Ince equations [@EEU07]. In the process of searching for such a superposition rule, we find a family of Lie systems which will be used subsequently to describe time-dependent and time-independent superposition rules for other second-order differential equations studied in Physics and Mathematics. Consider the family of differential equations $$\label{MDPIeq} \ddot x+3x\dot x+x^3=f(t),$$ with $f(t)$ being any time-dependent function.
The interest in these equations is motivated by their frequent appearance in the Physics and Mathematics literature [@CRS05; @CLS05II; @KL09]. The properties of these equations have been thoroughly analysed since they were first studied by Vessiot and Wallenberg [@Ve94; @Wall03] as a particular case of second-order Riccati equations. For instance, these equations appear in [@GL99] in the study of the Riccati chain. In that work it is stated that the equations of the Riccati chain can be used to derive solutions for certain PDEs. In addition, equation (\[MDPIeq\]) also appears in the book by Davis [@Da62], and the particular case with $f(t)=0$ has recently been treated through geometric methods in [@CGR09; @CRS05]. The results described in previous sections can be used to study differential equations (\[MDPIeq\]). Let us first show that the above differential equations are SODE Lie systems which, in view of Proposition \[SR\], admit a superposition rule; we then derive it. According to definition \[SODE\], equation (\[MDPIeq\]) is a SODE Lie system if and only if the system $$\label{FO} \left\{\begin{aligned} \dot x&=v,\\ \dot v&=-3xv-x^3+f(t), \end{aligned}\right.$$ which determines the integral curves of the time-dependent vector field $$\label{Dec} X_{PI}(t,x,v)=X_1(x,v)+f(t)X_2(x,v),$$ with $$X_1=v\frac{\partial}{\partial x}-(3xv+x^3)\frac{\partial}{\partial v},\qquad X_2=\frac{\partial}{\partial v},$$ is a Lie system. In view of the decomposition (\[Dec\]), all equations (\[MDPIeq\]) are SODE Lie systems if the vector fields $X_1$ and $X_2$ are included in a finite-dimensional real Lie algebra of vector fields $V$. This happens if and only if $X_1$, $X_2$ and all their successive Lie brackets, i.e. the vector fields of the form $$\label{Envelope} [X_1,X_2], [X_1,[X_1,X_2]], [X_2,[X_1,X_2]], [X_1,[X_1,[X_1,X_2]]], \ldots$$ span a finite-dimensional Lie algebra.
Consider the family of vector fields on ${\rm T}\mathbb{R}$ given by [$$\label{VF} \begin{aligned} X_1&=v\frac{\partial}{\partial x}-(3xv+x^3)\frac{\partial}{\partial v},\,\, &X_2&=\frac{\partial}{\partial v},\\ X_3&=-\frac{\partial}{\partial x}+3x\frac{\partial}{\partial v},\,\, &X_4&=x\frac{\partial}{\partial x}-2x^2\frac{\partial}{\partial v},\\ X_5&=(v+2x^2)\frac{\partial}{\partial x}-x(v+3x^2)\frac{\partial}{\partial v},\,\, &X_6&=2x(v+x^2)\frac{\partial}{\partial x}+2(v^2-x^4)\frac{\partial}{\partial v},\\ X_7&=\frac{\partial}{\partial x}-x\frac{\partial}{\partial v},\,\, &X_8&=2x\frac{\partial}{\partial x}+4v\frac{\partial}{\partial v}, \end{aligned}$$]{}where $X_3=[X_1,X_2]$, $[X_1,X_3]=-3X_4$, $X_5=[X_1,X_4]$, $X_6=[X_1,X_5]$, $X_7=[X_2,X_5]$, $X_8=[X_2,X_6]$. Then, the vector fields $X_1,\ldots, X_8$ are linearly independent over $\mathbb{R}$. Additionally, in view of the previous commutation relations and [$$\label{Rel} \begin{array}{lllll} \left[X_1,X_6\right]=0, &[X_1,X_7]=\frac 12 X_8,&\left[X_1,X_8\right]=-2X_1,&[X_2,X_3]=0,\\ \left[X_2,X_4\right]=0, &[X_2,X_7]=0, &[X_2,X_8]=4X_2, &[X_3,X_4]=-X_7,\\ \left[X_3,X_5\right]=-\frac 12X_8,&[X_3,X_6]=-2X_1,&[X_3,X_7]=-2X_2,&[X_3,X_8]=2X_3,\\ \left[X_4,X_5\right]=-X_1,&[X_4,X_6]=0, &[X_4,X_7]=X_3, &[X_4,X_8]=0,\\ \left[X_5,X_6\right]=0, &[X_5,X_7]=-3X_4,&[X_5,X_8]=-2X_5,&[X_6,X_7]=-2X_5,\\ &\left[X_6,X_8\right]=-4X_6,&[X_7,X_8]=2X_7,&\\ \end{array}$$]{}it follows that the vector fields $X_1,\ldots,X_8$ span an eight-dimensional Lie algebra of vector fields $V$ containing $X_1$ and $X_2$. Therefore, equation (\[MDPIeq\]) is a SODE Lie system. Moreover, the elements of the following family of traceless real $3\times 3$ matrices $$\begin{gathered} M_1=-\left( \begin{array}{ccc} 0&1&0\\ 0&0&1\\ 0&0&0 \end{array}\right), M_2=-\left( \begin{array}{ccc} 0&0&0\\ 0&0&0\\ 1&0&0 \end{array}\right), M_3=\left( \begin{array}{ccc} 0&0&0\\ 1&0&0\\ 0&-1&0
\end{array}\right),\\ M_4=\frac{1}{3}\left( \begin{array}{ccc} 1&0&0\\ 0&-2&0\\ 0&0&1 \end{array}\right), M_5=\left( \begin{array}{ccc} 0&1&0\\ 0&0&-1\\ 0&0&0 \end{array}\right), M_6=\left( \begin{array}{ccc} 0&0&2\\ 0&0&0\\ 0&0&0 \end{array}\right),\\ M_7=-\left( \begin{array}{ccc} 0&0&0\\ 1&0&0\\ 0&1&0 \end{array}\right), M_8=\left( \begin{array}{ccc} 2&0&0\\ 0&0&0\\ 0&0&-2 \end{array}\right),\end{gathered}$$ satisfy the same commutation relations as the vector fields $X_1,\ldots,X_8$, i.e. the linear map $\rho:\mathfrak{sl}(3,\mathbb{R})\rightarrow V$, such that $\rho(M_\alpha)=X_\alpha$, with $\alpha=1,\ldots,8$, is a Lie algebra isomorphism. Consequently, the finite-dimensional Lie algebra of vector fields $V$ is isomorphic to $\mathfrak{sl}(3,\mathbb{R})$ and the systems of differential equations describing the integral curves of the time-dependent vector fields $$\label{family} X(t,x,v)=\sum_{\alpha=1}^8b_\alpha(t)X_\alpha(x,v),$$ are Lie systems related to a Vessiot–Guldberg Lie algebra isomorphic to $\mathfrak{sl}(3,\mathbb{R})$. Recall that, in view of Proposition \[SR\], a superposition rule for (\[MDPIeq\]) can be obtained by means of a superposition rule for the Lie system (\[FO\]). As we already stated in Section \[LSLS\], a superposition rule for a certain Lie system describing the integral curves of a time-dependent vector field $X$ admitting a decomposition of the form (\[family\]) can be obtained through the first-integrals of the distribution $\mathcal{D}$ spanned by the diagonal prolongations $\widehat X_1,\ldots,\widehat X_8$ on a certain ${\rm T}\mathbb{R}^{n(m+1)}$ in such a way that their projections $\pi_*(\widehat X_\alpha)$, with $\alpha=1,\ldots,8$, are linearly independent on a dense open subset of ${\rm T}\mathbb{R}^{nm}$.
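The commutation relations above are mechanical to verify. As a sanity check (ours, not part of the original derivation), the following Python sketch encodes a polynomial vector field on ${\rm T}\mathbb{R}$ as a pair of exponent dictionaries and computes Lie brackets with exact integer arithmetic:

```python
# A polynomial in (x, v) is stored as {(i, j): c}, meaning c * x**i * v**j;
# a vector field is a pair (X^x, X^v) of such polynomials.
def pd(p, var):
    """Partial derivative; var 0 -> d/dx, var 1 -> d/dv."""
    out = {}
    for (i, j), c in p.items():
        e = (i, j)[var]
        if e:
            k = (i - 1, j) if var == 0 else (i, j - 1)
            out[k] = out.get(k, 0) + c * e
    return out

def pmul(p, q):
    out = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + a * b
    return {k: c for k, c in out.items() if c}

def padd(p, q, s=1):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + s * c
    return {k: c for k, c in out.items() if c}

def act(X, p):
    """Vector field X = (X^x, X^v) acting on the polynomial p."""
    return padd(pmul(X[0], pd(p, 0)), pmul(X[1], pd(p, 1)))

def bracket(X, Y):
    """Lie bracket [X, Y]^i = X(Y^i) - Y(X^i), computed exactly."""
    return tuple(padd(act(X, Y[i]), act(Y, X[i]), -1) for i in (0, 1))

def smul(s, X):
    return tuple({k: s * c for k, c in comp.items() if s * c} for comp in X)

X1 = ({(0, 1): 1}, {(1, 1): -3, (3, 0): -1})            # v d/dx - (3xv + x^3) d/dv
X2 = ({}, {(0, 0): 1})                                   # d/dv
X3 = ({(0, 0): -1}, {(1, 0): 3})                         # -d/dx + 3x d/dv
X4 = ({(1, 0): 1}, {(2, 0): -2})                         # x d/dx - 2x^2 d/dv
X5 = ({(0, 1): 1, (2, 0): 2}, {(1, 1): -1, (3, 0): -3})
X6 = ({(1, 1): 2, (3, 0): 2}, {(0, 2): 2, (4, 0): -2})
X7 = ({(0, 0): 1}, {(1, 0): -1})                         # d/dx - x d/dv
X8 = ({(1, 0): 2}, {(0, 1): 4})                          # 2x d/dx + 4v d/dv
```

For instance, `bracket(X1, X2) == X3` and `bracket(X1, X3) == smul(-3, X4)` reproduce the generating relations stated above.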
Consider the functions $F_{abc}$, with $a,b,c=0,\ldots,4$, given by $$\label{Fabc} F_{abc}=v_a(x_c-x_b)+v_b(x_a-x_c)+v_c(x_b-x_a)+(x_a-x_b)(x_b-x_c)(x_c-x_a).$$ Such functions satisfy $F_{abc}=F_{bca}=F_{cab}=-F_{bac}=-F_{cba}=-F_{acb}$, for all $a,b,c=0,\ldots,4$, and they are useful to determine the superposition rule we are looking for. If we take now the diagonal prolongations to ${\rm T}\mathbb{R}^{5}$ of the family of vector fields (\[VF\]), we can check, e.g. by means of any symbolic manipulation program, that the vector fields $\pi_*(\widehat X_\alpha)$, with $\alpha=1,\ldots,8$, on ${\rm T}\mathbb{R}^4$ are linearly independent at those points $p\equiv(x_1,\ldots,x_4,v_1,\ldots,v_4)\in{\rm T}\mathbb{R}^4$ satisfying $F_{123}(p)F_{124}(p)F_{134}(p)F_{234}(p)\neq 0$. Such a set is dense in ${\rm T}\mathbb{R}^4$ and hence the involutive distribution $\mathcal{D}$ spanned by $\widehat X_1,\ldots,\widehat X_8$ on ${\rm T}\mathbb{R}^5$ is eight-dimensional almost everywhere. Additionally, there exist, at least locally, two first-integrals for the vector fields of the distribution $\mathcal{D}$ which can be used to derive a superposition rule. As the vector fields $\widehat X_1$, $\widehat X_2$ and their successive Lie brackets, i.e. the vector fields (\[Envelope\]), span the distribution $\mathcal{D}$, it can be proved that giving a first-integral $F:{\rm T}\mathbb{R}^{5}\rightarrow\mathbb{R}$ for the vector fields $\widehat X_1$ and $\widehat X_2$, i.e. $\widehat X_1F=\widehat X_2F=0$, is equivalent to giving a first-integral for the distribution $\mathcal{D}$. These integrals can be obtained by applying the method of characteristics to the vector fields $\widehat X_1$ and $\widehat X_2$. The calculation of these first-integrals is long and is detailed in the Appendix.
As a result, we get that two first-integrals for $\widehat X_1$ and $\widehat X_2$ and hence for all the vector fields of the distribution $\mathcal{D}$ are $$\Lambda_1(p)=\frac{F_{431}(p)F_{210}(p)}{F_{421}(p)F_{310}(p)}\qquad {\rm and}\qquad \Lambda_2(p)=\frac{F_{431}(p)F_{420}(p)}{F_{421}(p)F_{430}(p)},$$ where now $p\equiv(x_0,x_1,\ldots,x_4,v_0,v_1,\ldots,v_4)\in{\rm T}\mathbb{R}^5$. We can obtain $v_0$ from the expression of $\Lambda_1$ as $$\begin{gathered} \label{vExp} v_0=\frac{(v_1(x_2-x_0)+v_2(x_0-x_1)+(x_1-x_0)(x_0-x_2)(x_2-x_1))F_{431}}{(x_2-x_1)F_{431}+(x_1-x_3)F_{421}\Lambda_1}+\\ \frac{(v_3(x_1-x_0)+v_1(x_0-x_3)+(x_0-x_1)(x_1-x_3)(x_3-x_0))F_{421}\Lambda_1}{(x_2-x_1)F_{431}+(x_1-x_3)F_{421}\Lambda_1},\end{gathered}$$ and if we substitute this value in the expression for $\Lambda_2$, we obtain the expression [$$\label{Sup} x_0=\frac{x_2F_{431}-G_{3124}\Lambda_2-G_{2134}\Lambda_1+x_3F_{421}\Lambda_1\Lambda_2}{F_{431}+(F_{124}-F_{324})\Lambda_1+(F_{412}-F_{312})\Lambda_2+\Lambda_1\Lambda_2F_{421}},$$ ]{}where $$\begin{gathered} G_{abcd}=x_a((v_d-v_c)x_b+(v_b-v_d)x_c+(x_b-x_c)x_bx_c+(x_c-x_b)x_ax_d)+\\x_d((v_c-v_a)x_b+(v_a-v_b)x_c+(x_c-x_b)x_bx_c+(x_b-x_c)x_ax_d).\end{gathered}$$ Note that the first-integrals $\Lambda_1$ and $\Lambda_2$ satisfy $$\Lambda_j(x_{0}(t),\ldots,x_{4}(t),v_{0}(t),\ldots,v_{4}(t))={\rm const.}, \qquad j=1,2,$$ where $(x_{a}(t),v_{a}(t))$, with $a=0,\ldots,4$, are any family of five particular solutions of system (\[FO\]).
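This constancy admits a quick numerical check (ours; the five particular solutions below are an illustrative choice for the case $f(t)=0$, written as $x=\dot u/u$ with $u$ a quadratic polynomial in $t$, which indeed solves (\[MDPIeq\]) with $f(t)=0$):

```python
# Five particular solutions of x'' + 3 x x' + x^3 = 0, taken of the form
# x = u'/u with u(t) = c0 + c1 t + c2 t^2 (so that v = u''/u - (u'/u)**2).
# The choice of the five quadratics is our own.
U = [(3, 1, 0),   # u_0 = 3 + t       -> plays the role of the solution x(t)
     (1, 1, 0),   # u_1 = 1 + t
     (0, 0, 1),   # u_2 = t^2
     (2, 0, 1),   # u_3 = 2 + t^2
     (0, 1, 1)]   # u_4 = t + t^2

def sol(a, t):
    c0, c1, c2 = U[a]
    u = c0 + c1 * t + c2 * t * t
    du, ddu = c1 + 2 * c2 * t, 2 * c2
    return du / u, ddu / u - (du / u) ** 2      # (x_a(t), v_a(t))

def F(a, b, c, t):
    # F_{abc} evaluated along the chosen solutions at time t.
    (xa, va), (xb, vb), (xc, vc) = sol(a, t), sol(b, t), sol(c, t)
    return (va * (xc - xb) + vb * (xa - xc) + vc * (xb - xa)
            + (xa - xb) * (xb - xc) * (xc - xa))

def Lambdas(t):
    L1 = F(4, 3, 1, t) * F(2, 1, 0, t) / (F(4, 2, 1, t) * F(3, 1, 0, t))
    L2 = F(4, 3, 1, t) * F(4, 2, 0, t) / (F(4, 2, 1, t) * F(4, 3, 0, t))
    return L1, L2
```

For this particular choice of solutions both quotients come out independent of $t$, as the theory predicts.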
Consequently, if we fix $x_{0}(t)=x(t)$, $v_{0}(t)=\dot x_{0}(t)$ and take into account expression (\[Sup\]), we see that the general solution of system (\[FO\]), namely, $x(t)$, can be written in terms of any generic family of four particular solutions $x_{a}(t)$, with $a=1,\ldots,4$, their derivatives with respect to $t$, and two real constants $\lambda_1$, $\lambda_2$ as follows [$$\label{FinSup} x(t)=\frac{x_2(t)F_{431}(t)-G_{3124}(t)\lambda_2-G_{2134}(t)\lambda_1+x_3(t)F_{421}(t)\lambda_1\lambda_2}{F_{431}(t)+(F_{124}(t)-F_{324}(t))\lambda_1+(F_{412}(t)-F_{312}(t))\lambda_2+\lambda_1\lambda_2F_{421}(t)},$$]{}where the time-dependent functions $F_{abc}(t)$ and $G_{abcd}(t)$ are obtained evaluating the functions $F_{abc}$ and $G_{abcd}$ on the curves $(x_{1}(t),\ldots,x_{4}(t),\dot x_{1}(t),\ldots,\dot x_{4}(t))$. In order to illustrate the previous result by means of a simple example, let us consider equation (\[MDPIeq\]) with $f(t)=0$. By direct inspection, we find the set of particular solutions $$\label{PS} x_{1}(t)=0,\qquad x_{2}(t)=\frac 2t,\qquad x_3(t)=\frac{2t}{2+t^2},\qquad x_4(t)=\frac{1+2t}{t+t^2}.$$ These particular solutions can be used to determine the functions appearing in the expression (\[Sup\]), i.e. $G_{2134}$, $F_{431}$, etc.
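That the curves (\[PS\]) really solve $\ddot x+3x\dot x+x^3=0$ is easy to confirm numerically; the sketch below (our own check) evaluates a finite-difference residual along each of them:

```python
# Finite-difference residual of x'' + 3 x x' + x^3 along a curve t -> x(t).
def residual(x, t, h=1e-4):
    xm, x0, xp = x(t - h), x(t), x(t + h)
    return ((xp - 2.0 * x0 + xm) / h ** 2
            + 3.0 * x0 * (xp - xm) / (2.0 * h)
            + x0 ** 3)

# The four particular solutions (PS), valid for t > 0.
solutions = [
    lambda t: 0.0,                             # x_1
    lambda t: 2.0 / t,                         # x_2
    lambda t: 2.0 * t / (2.0 + t ** 2),        # x_3
    lambda t: (1.0 + 2.0 * t) / (t + t ** 2),  # x_4
]
```

Each residual vanishes up to discretisation error at generic times $t>0$.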
More specifically, we get, in view of the particular solutions (\[PS\]), that $$G_{3124}(t)=\frac{2t^{-2}}{(t^2+1)(t+1)},\,\, F_{431}(t)=\frac{2t^{-1}}{(t^2+1)(t+1)},\,\, G_{2134}(t)=\frac{-4t^{-1}}{(t^2+1)(t+1)}$$ and $$F_{124}(t)=\frac{2}{t^2+t^3},\quad F_{324}(t)=\frac{2}{t^2+t^3+t^4+t^5},\quad F_{312}(t)=\frac{2}{2t+t^3}.$$ Finally, making use of expression (\[Sup\]) and the above functions, we get that the general solution for equation (\[MDPIeq\]) with $f(t)=0$ is $$x(t)=\frac{(1+2t\lambda_1)(-1+\lambda_2)}{t(-1+\lambda_2)+t^2\lambda_1(-1+\lambda_2)+(-1+\lambda_1)\lambda_2}.$$ In view of Proposition \[SR\], expression (\[FinSup\]) is not only a superposition rule for the equation (\[MDPIeq\]), but also for any SODE Lie system (\[SODE\]) whose corresponding system (\[FOrder\]) is determined by a time-dependent vector field that can be put into the form (\[family\]), i.e. one taking values in $V$. Many instances of the family of Lie systems (\[family\]) are associated with SODE Lie systems having applications in Physics or related to interesting mathematical problems. In all these cases, the theory of Lie systems can be applied to investigate these second-order differential equations, recover some of their known properties, and, possibly, provide new results. Let us illustrate this assertion by means of a few examples. Another equation appearing in the Physics literature [@CLS05II; @CLS05; @TT07] which can be analysed by means of our methods is $$\label{Exam2} \ddot x+3x\dot x +x^3+\lambda_1x=0,$$ which is a special kind of Liénard equation $\ddot x+f(x)\dot x+g(x)=0$, with $f(x)=3x$ and $g(x)=x^3+\lambda_1x$. The above equation can also be related to a generalised form of an Emden equation occurring in the thermodynamical study of equilibrium configurations of spherical clouds of gas acting under the mutual attraction of their molecules [@DT90].
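Returning to the case $f(t)=0$, the closed-form general solution just obtained can also be checked numerically for sample values of $\lambda_1,\lambda_2$ (chosen here, as an assumption, so that the denominator never vanishes):

```python
# General solution of x'' + 3 x x' + x^3 = 0 for sample constants; with
# lam1 = 2, lam2 = 3 the denominator equals 4 t^2 + 2 t + 3 > 0 for all t.
lam1, lam2 = 2.0, 3.0

def x_gen(t):
    num = (1.0 + 2.0 * t * lam1) * (-1.0 + lam2)
    den = (t * (-1.0 + lam2) + t ** 2 * lam1 * (-1.0 + lam2)
           + (-1.0 + lam1) * lam2)
    return num / den

def residual(t, h=1e-4):
    # Finite-difference residual of x'' + 3 x x' + x^3 along x_gen.
    xm, x0, xp = x_gen(t - h), x_gen(t), x_gen(t + h)
    return ((xp - 2.0 * x0 + xm) / h ** 2
            + 3.0 * x0 * (xp - xm) / (2.0 * h)
            + x0 ** 3)
```

The residual vanishes up to discretisation error at every sampled time.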
As in the study of equations (\[MDPIeq\]), by considering the new variable $v=\dot x$, equation (\[Exam2\]) becomes the system $$\left\{\begin{aligned} \dot x&=v,\\ \dot v&=-3xv-x^3-\lambda_1x, \end{aligned}\right.$$ describing the integral curves of the vector field $X=X_1-\frac{\lambda_1}{2}(X_7+X_3)$, which belongs to the family (\[family\]). Consequently, the expression (\[Sup\]) can be used to derive the general solution of equation (\[Exam2\]) in terms of a set of four particular solutions. We can also treat by our methods the equation $$\label{general} \ddot x+3x\dot x +x^3+f(t)(\dot x+x^2)+g(t)x+h(t)=0,$$ containing, as particular cases, all the previous examples [@KL09]. The system of first-order differential equations associated with this equation reads $$\label{FirstGeneral} \left\{\begin{aligned} \dot x&=v,\\ \dot v&=-3xv-x^3-f(t)(v+x^2)-g(t)x-h(t). \end{aligned}\right.$$ Hence, this system describes the integral curves of the time-dependent vector field $$X_t=X_1-h(t)X_2-\frac 14 f(t)\,(X_8-2X_4)-\frac 12 g(t)\,(X_7+X_3).$$ Therefore, equation (\[general\]) is a SODE Lie system and the theory of Lie systems can be used to analyse its properties. In particular, the expression (\[Sup\]) provides us with the general solutions for these equations out of any generic set of four particular solutions. Some particular cases of system (\[general\]) were pointed out in [@CLS05; @KL09]. Additionally, the case with $f(t)=0$, $g(t)=\omega^2(t)$ and $h(t)=0$ was studied in [@CLS05II] and it is related to harmonic oscillators. The case with $g(t)=0$ and $h(t)=0$ appears in the catalogue of equations possessing the Painlevé property [@In86]. Finally, our result generalises Vessiot’s result [@Ve95] describing the existence of an expression determining the general solution of systems like (\[general\]) (but with constant coefficients) in terms of four of their particular solutions, their derivatives and two constants.
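The decomposition of the time-dependent vector field of (\[FirstGeneral\]) in terms of $X_1,\ldots,X_8$ can be confirmed by a short exact computation; in the sketch below (ours), the coefficients $f$, $g$, $h$ are frozen to arbitrarily chosen rational constants:

```python
from fractions import Fraction

# Polynomials in (x, v) as {(i, j): coeff}; vector fields as component pairs.
def padd(p, q, s=1):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + s * c
    return {k: c for k, c in out.items() if c}

def smul(s, X):
    return tuple({k: s * c for k, c in comp.items() if s * c} for comp in X)

def vadd(X, Y):
    return tuple(padd(X[i], Y[i]) for i in (0, 1))

X1 = ({(0, 1): 1}, {(1, 1): -3, (3, 0): -1})
X2 = ({}, {(0, 0): 1})
X3 = ({(0, 0): -1}, {(1, 0): 3})
X4 = ({(1, 0): 1}, {(2, 0): -2})
X7 = ({(0, 0): 1}, {(1, 0): -1})
X8 = ({(1, 0): 2}, {(0, 1): 4})

f, g, h = Fraction(1, 2), Fraction(2), Fraction(3)   # sample constant coefficients

# X = X1 - h X2 - (f/4)(X8 - 2 X4) - (g/2)(X7 + X3)
X = vadd(vadd(X1, smul(-h, X2)),
         vadd(smul(-f / 4, vadd(X8, smul(-2, X4))),
              smul(-g / 2, vadd(X7, X3))))

# Vector field of (FirstGeneral): (v, -3xv - x^3 - f(v + x^2) - g x - h).
expected = ({(0, 1): 1},
            {(1, 1): -3, (3, 0): -1, (0, 1): -f,
             (2, 0): -f, (1, 0): -g, (0, 0): -h})
```

The assembled combination `X` coincides with `expected` term by term.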
Quasi-Lie schemes and second-order Riccati equation {#QLSORE} =================================================== In this Section we derive a time-dependent superposition rule for the second-order Riccati equation [@CRS05] $$\label{NLe} \ddot x+\left(b_0(t)+b_1(t)x\right)\dot x+a_0(t)+a_1(t)x+a_2(t)x^2+a_3(t)x^3=0,$$ with $a_3(t)>0$, $a_3(0)=1$, $b_0(t)=\frac{a_2(t)}{\sqrt{a_3(t)}}-\frac{\dot a_3(t)}{2a_3(t)}$ and $b_1(t)=3\sqrt{a_3(t)}$, by means of the theory of quasi-Lie schemes. We first introduce a new variable $v=\dot x$ and transform equation (\[NLe\]) into the system of first-order differential equations $$\label{FOSOR} \left\{ \begin{aligned} \dot x&=v,\\ \dot v&=-\left(b_0(t)+b_1(t)x\right)v-a_0(t)-a_1(t)x-a_2(t)x^2-a_3(t)x^3. \end{aligned}\right.$$ Consider the following set of vector fields $$\begin{aligned} Y_1=v\frac{\partial}{\partial x},\,\,\,\, Y_2=v\frac{\partial}{\partial v},\,\,\,\, Y_3=xv\frac{\partial}{\partial v},\,\,\,\, Y_4= \frac{\partial}{\partial v},\,\,\,\,\\ Y_5=x\frac{\partial}{\partial v},\,\,\,\, Y_6=x^2\frac{\partial}{\partial v},\,\, \,\, Y_7=x^3\frac{\partial}{\partial v},\,\,\,\, Y_8=x\frac{\partial}{\partial x},\end{aligned}$$ spanning a linear space of vector fields $V=\langle Y_1,\ldots,Y_8\rangle$. The system (\[FOSOR\]) describes the integral curves of the time-dependent vector field $$Y_t=Y_1-b_0(t)Y_2-b_1(t)Y_3-a_0(t)Y_4-a_1(t)Y_5-a_2(t)Y_6-a_3(t)Y_7,$$ and, therefore, as $Y_t\in V$, for every $t\in\mathbb{R}$, we get that $Y\in V(\mathbb{R})$. Let us denote ${\rm ad}_Z(X)=[Z,X]$, for any pair of vector fields $X,Z$. Hence, as $${\rm ad}_{Y_3}^n(Y_6)=\overbrace{{\rm ad}_{Y_3}\circ\ldots\circ{\rm ad}_{Y_3}}^{n\,{\rm times}}(Y_6)=(-x)^{n+2}\frac{\partial}{\partial v},$$ any Lie algebra of vector fields $V'$ containing the vector fields $Y_3$ and $Y_6$ must include the above infinite family of vector fields, which are linearly independent over $\mathbb{R}$.
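The identity ${\rm ad}_{Y_3}^n(Y_6)=(-x)^{n+2}\partial/\partial v$, as well as the brackets of $Y_2$ and $Y_8$ with the remaining $Y_\alpha$ used below to build the quasi-Lie scheme, can be checked exactly with the same dictionary encoding of polynomial vector fields (again a sketch of ours):

```python
# A polynomial in (x, v) is {(i, j): c} for c * x**i * v**j; a vector field
# is a pair (Y^x, Y^v) of such polynomials.
def pd(p, var):
    out = {}
    for (i, j), c in p.items():
        e = (i, j)[var]
        if e:
            k = (i - 1, j) if var == 0 else (i, j - 1)
            out[k] = out.get(k, 0) + c * e
    return out

def pmul(p, q):
    out = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + a * b
    return {k: c for k, c in out.items() if c}

def padd(p, q, s=1):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + s * c
    return {k: c for k, c in out.items() if c}

def act(Y, p):
    return padd(pmul(Y[0], pd(p, 0)), pmul(Y[1], pd(p, 1)))

def bracket(Y, Z):
    return tuple(padd(act(Y, Z[i]), act(Z, Y[i]), -1) for i in (0, 1))

def smul(s, Y):
    return tuple({k: s * c for k, c in comp.items() if s * c} for comp in Y)

Y1 = ({(0, 1): 1}, {})   # v d/dx
Y2 = ({}, {(0, 1): 1})   # v d/dv
Y3 = ({}, {(1, 1): 1})   # xv d/dv
Y4 = ({}, {(0, 0): 1})   # d/dv
Y5 = ({}, {(1, 0): 1})   # x d/dv
Y6 = ({}, {(2, 0): 1})   # x^2 d/dv
Y7 = ({}, {(3, 0): 1})   # x^3 d/dv
Y8 = ({(1, 0): 1}, {})   # x d/dx

def ad_power(n):
    """n-fold iterated bracket ad_{Y3}^n (Y6)."""
    Z = Y6
    for _ in range(n):
        Z = bracket(Y3, Z)
    return Z
```

For every $n$, `ad_power(n)` returns the encoding of $(-1)^n x^{n+2}\partial/\partial v$, so the family never closes into a finite-dimensional space.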
Consequently, as $Y_3$ and $Y_6$ are contained in $V$, there exists no finite-dimensional Lie algebra of vector fields $V'$ including $V$ and, therefore, system (\[FOSOR\]) is not a Lie system. However, we can deal with such a system as a quasi-Lie system with respect to a quasi-Lie scheme [@CGL08 Theorem 4], and this can be used to construct a time-dependent superposition rule for equation (\[NLe\]). In fact, define the linear space $W=\langle Y_2,Y_8\rangle$, which is a two-dimensional Abelian Lie algebra of vector fields and, in view of the commutation relations $$\begin{aligned} \left[Y_2,Y_1\right]&=Y_1,\,\, &[Y_2,Y_3]&=0,\,\, &[Y_2,Y_4]&=-Y_4,&\left[Y_2,Y_5\right]&=-Y_5,\\ \left[Y_2,Y_6\right]&=-Y_6,\,\, &[Y_2,Y_7]&=-Y_7,\,\,&[Y_8,Y_1]&=-Y_1,\,\, &[Y_8,Y_3]&=Y_3,\\ \left[Y_8,Y_4\right]&=0,&\left[Y_8,Y_5\right]&=Y_5,\,\, &[Y_8,Y_6]&=2Y_6,\,\, &[Y_8,Y_7]&=3Y_7,\\ \end{aligned}$$ we get that $[W,V]\subset V$. Hence, the pair $(W,V)$ becomes a quasi-Lie scheme $S(W,V)$. The theory of quasi-Lie schemes shows that the time-dependent vector field $Y$ can be transformed through any element $g$ of the group of transformations of the scheme $\mathcal{G}(W)$ into a new time-dependent vector field $g_\bigstar Y \in V(\mathbb{R})$, see [@CGL08 Proposition 1]. The time-dependent transformations associated with elements of $\mathcal{G}(W)$ are $$\label{change} \left\{\begin{aligned} x(t)&=\gamma(t)x'(t),\\ v(t)&=\beta(t)v'(t), \end{aligned}\right.$$ with $\gamma(t)>0,\beta(t)>0$ and $\gamma(0)=\beta(0)=1$. The above family of time-dependent changes of variables transforms system (\[FOSOR\]) into $$\left\{ \begin{aligned} \frac{dx'}{dt}&=\frac{\beta(t)}{\gamma(t)}v'-\frac{\dot\gamma(t)}{\gamma(t)}x',\\ \frac{dv'}{dt}&=-\frac{a_0(t)}{\beta(t)}-\gamma(t)\left(b_1(t)v'+\frac{a_1(t)}{\beta(t)}\right)x'-\frac{a_2(t)\gamma^2(t)}{\beta(t)}x'^2\\&\qquad\qquad\qquad\qquad\qquad\qquad-\frac{a_3(t)\gamma^3(t)}{\beta(t)}x'^3-\frac{b_0(t)\beta(t)+\dot \beta(t)}{\beta(t)}v'.
\end{aligned}\right.$$ In order to relate this system of first-order differential equations to a SODE, we have to choose $\gamma(t)=K$ for a certain real constant $K$. As $\gamma(0)=1$, we take $\gamma(t)=1$ and the previous system becomes $$\label{Sys2} \left\{ \begin{aligned} \frac{dx'}{dt}&=\beta(t)v',\\ \frac{dv'}{dt}&=-\frac{a_0(t)}{\beta(t)}-\left(b_1(t)v'+\frac{a_1(t)}{\beta(t)}\right)x'-\frac{a_2(t)}{\beta(t)}x'^2\\ &\qquad\qquad\qquad\qquad\qquad\qquad -\frac{a_3(t)}{\beta(t)}x'^3-\frac{b_0(t)\beta(t)+\dot \beta(t)}{\beta(t)}v'. \end{aligned}\right.$$ Let us try to relate this system to one of the Lie systems of the family (\[family\]), e.g. the one describing the integral curves of a time-dependent vector field $X_t=f_1(t)X_1+f_2(t)X_2+(f_3(t)/2)(X_3+X_7)+(f_4(t)/4)(X_8-2X_4)$. If we fix $\beta(t)=\sqrt{a_3(t)}$ in (\[Sys2\]), we obtain $$\label{trasys2} \left\{ \begin{aligned} \frac{dx'}{dt}&=\sqrt{a_3(t)}v',\\ \frac{dv'}{dt}&=-\frac{a_0(t)}{\sqrt{a_3(t)}}-\sqrt{a_3(t)}(3v'x'+x'^3)-\frac{a_1(t)}{\sqrt{a_3(t)}}x'-\frac{a_2(t)}{\sqrt{a_3(t)}}(v'+x'^2), \end{aligned}\right.$$ and, consequently, the above system is a Lie system associated with the time-dependent vector field $$X_t=\sqrt{a_3(t)}X_1-\frac{a_0(t)}{\sqrt{a_3(t)}}X_2-\frac{a_1(t)}{2\sqrt{a_3(t)}}(X_3+X_7)-\frac{a_2(t)}{4\sqrt{a_3(t)}}(X_8-2X_4).$$ As the above time-dependent vector field belongs to the family (\[family\]), the first component of the general solution $(x'(t),v'(t))$ of system (\[trasys2\]) can be described by means of the expression (\[FinSup\]) in terms of four generic particular solutions $(x'_{a}(t),v'_{a}(t))$. Moreover, as $\gamma(t)=1$ and $\beta(t)=\sqrt{a_3(t)}$, we have $x(t)=x'(t)$, $x_{a}(t)=x'_{a}(t)$ and $v'_{a}(t)=a_3(t)^{-1/2}dx_{a}/dt(t)$. Using these relations in expression (\[FinSup\]), we get a time-dependent superposition rule for second-order Riccati equations.
More specifically, the general solution $x=x(t)$ for any instance of second-order Riccati equation of the form (\[NLe\]) can be written in terms of a set of four particular solutions $x_1=x_1(t)$, $x_2=x_2(t)$, $x_3=x_3(t)$ and $x_4=x_4(t)$ and their derivatives $\dot x_1=\dot x_1(t)$, $\dot x_2=\dot x_2(t)$, $\dot x_3=\dot x_3(t)$ and $\dot x_4=\dot x_4(t)$ as [$$\label{SupRicc} x=\frac{x_2\widetilde F_{431}-\widetilde G_{2134}\lambda_1-\widetilde G_{3124}\lambda_2-x_3\widetilde F_{412}\lambda_1\lambda_2}{\widetilde F_{431}+(\widetilde F_{124}-\widetilde F_{324})\lambda_1+(\widetilde F_{412}-\widetilde F_{312})\lambda_2+\lambda_1\lambda_2\widetilde F_{421}},$$ ]{}where $$\begin{gathered} \widetilde G_{abcd}=x_a(a^{-1/2}_3(t)(\dot x_d-\dot x_c)x_b+a^{-1/2}_3(t)(\dot x_b-\dot x_d)x_c+\\(x_b-x_c)x_bx_c+(x_c-x_b)x_ax_d)+x_d(a^{-1/2}_3(t)(\dot x_c-\dot x_a)x_b+\\a^{-1/2}_3(t)(\dot x_a-\dot x_b)x_c+(x_c-x_b)x_bx_c+(x_b-x_c)x_ax_d),\end{gathered}$$ and $$\begin{gathered} \widetilde F_{abc}=a^{-1/2}_3(t)\dot x_a(x_c-x_b)+a^{-1/2}_3(t)\dot x_b(x_a-x_c)+\\a^{-1/2}_3(t)\dot x_c(x_b-x_a)+(x_a-x_b)(x_b-x_c)(x_c-x_a),\end{gathered}$$ with $a,b,c,d=1,\ldots,4$. Finally, it is worth noting that, as pointed out in Section \[SRSO\], time-dependent superposition rules for a system of SODEs appear naturally related to families of systems admitting the same superposition rule. Indeed, note that had we started this section by analysing system (\[NLe\]) with $a_3(t)=1$, we could have proceeded exactly in the same way as before. Nevertheless, when reaching the system (\[trasys2\]) with $a_3(t)=1$, we could have noticed that all systems with a more general $a_3(t)$ are quasi-Lie systems admitting the same superposition rule as our particular instance. Consequently, this would have shown the existence of a more general system of SODEs, namely, (\[NLe\]), admitting the same superposition rule.
Appendix ======== We have relegated to this appendix various calculations that, although necessary to obtain certain previously stated results, i.e. the common first-integrals for vector fields of $\mathcal{D}$, need not be detailed in the main body of the article, as they provide no additional insight. Recall that a function $F:{\rm T}\mathbb{R}^5\rightarrow \mathbb{R}$ is a common first-integral for every vector field in $\mathcal{D}$ if and only if $\widehat X_1 F=\widehat X_2 F=0$, where $\widehat X_1,\widehat X_2$ are the diagonal prolongations to ${\rm T}\mathbb{R}^5$ of the vector fields $X_1,X_2$. Therefore, such a function $F$ must be a solution of the equation $$\widehat X_2F=\sum_{a=0}^4\frac{\partial F}{\partial v_a}=0,$$ written using the coordinate system $\{x_0,\ldots, x_4,v_0,\ldots,v_4\}$. This equation can be solved using the method of characteristics. This method states that the solutions of the above equation are constant along the so-called [*characteristic curves*]{}, i.e. the solutions of the system $$dx_0=\ldots=dx_4=0,\qquad dv_0=\ldots=dv_4.$$ So, the characteristics of equation $\widehat X_2F=0$ are curves $(x_0,\ldots,x_4,v_0(s),\ldots,v_4(s))$ such that $\xi_0=v_0(s)-v_4(s)$, $\xi_1=v_1(s)-v_4(s)$, $\xi_2=v_2(s)-v_4(s)$ and $\xi_3=v_3(s)-v_4(s)$ for certain real constants $\xi_0,\ldots,\xi_3$. Thus, there exists a function $F_2:\mathbb{R}^9\rightarrow \mathbb{R}$ such that $F(x_0,\ldots,x_4,v_0,\ldots,v_4)=F_2(x_0,\ldots,x_4,\xi_0,\ldots,\xi_3)$. In other words, the function $F$ depends only on the variables $x_0,\ldots,x_4,\xi_0,\ldots,\xi_3$. Consider the coordinate system $\{x_0,\ldots,x_4,\xi_0,\ldots,\xi_3,v_4\}$ on ${\rm T}\mathbb{R}^5$.
In terms of the new coordinate system, the vector field $\widehat X_1$ reads $$\begin{gathered} \widehat X_1=\sum_{a=0}^3\left(\xi_a\frac{\partial}{\partial x_a}-\left(3x_a \xi_a-x_4^3+x_a^3\right)\frac{\partial}{\partial \xi_a}\right)-(x_4^3+3x_4v_4)\frac{\partial}{\partial v_4}+\\v_4\left(\sum_{a=0}^4\frac{\partial}{\partial x_a}-3\sum_{a=0}^3(x_a-x_4)\frac{\partial}{\partial \xi_a}\right).\end{gathered}$$ As we assumed that $F$ is a first-integral of the distribution $\mathcal{D}$, it is a first-integral for the vector fields $\widehat X_1$ and $\widehat X_2$. Taking into account that $F$ is a solution of the equation $\widehat X_2F=0$ and, therefore, it depends only on the variables $x_0,\ldots,x_4,\xi_0,\ldots,\xi_3$, the equation $\widehat X_1F=0$ yields $$\sum_{a=0}^3\left(\xi_a\frac{\partial F_2}{\partial x_a}-\left(3x_a \xi_a-x_4^3+x_a^3\right)\frac{\partial F_2}{\partial \xi_a}\right)+v_4\left(\sum_{a=0}^4\frac{\partial F_2}{\partial x_a}-3\sum_{a=0}^3(x_a-x_4)\frac{\partial F_2}{\partial \xi_a}\right)=0,$$ and in view of the dependence of the function $F_2$, we get that there exist two vector fields $$Z_1=\sum_{a=0}^3\left(\xi_a\frac{\partial}{\partial x_a}-\left(3x_a \xi_a-x_4^3+x_a^3\right)\frac{\partial }{\partial \xi_a}\right),\, Z_2=\sum_{a=0}^4\frac{\partial}{\partial x_a}-3\sum_{a=0}^3(x_a-x_4)\frac{\partial}{\partial \xi_a},$$ such that $\widehat X_1F=Z_1F_2+v_4Z_2F_2=0$. As $F_2$ does not depend on $v_4$, both terms must vanish separately, that is, $Z_1F_2=0$ and $Z_2F_2=0$. Consequently, the function $F$ is a first-integral of the distribution $\mathcal{D}$ if and only if it is a first-integral for the vector fields $Z_1$ and $Z_2$ depending on the variables $x_0,\ldots,x_4,\xi_0,\ldots,\xi_3$.
The first-integrals of the vector field $Z_2$ depending just on the above variables can be determined by means of the characteristic curves given by the following system $$d(x_a-x_4)=0, \qquad dx_4=-\frac{d\xi_a}{3(x_a-x_4)},\qquad a=0,\ldots,3.$$ Hence, such first-integrals depend on the functions $$\left\{ \begin{aligned} \eta_a&=x_a-x_4,\\ \phi_a&=3\eta_ax_4+\xi_a, \end{aligned}\right.\qquad a=0,\ldots,3.$$ Consequently, given a first-integral $F$ of the distribution $\mathcal{D}$, there exists a function $F_3:\mathbb{R}^8\rightarrow\mathbb{R}$ such that $F(x_0,\ldots,x_4,v_0,\ldots,v_4)=F_3(\eta_0,\ldots,\eta_3,\phi_0,\ldots,\phi_3)$, i.e. the function $F$ actually only depends on the variables $\eta_0,\ldots,\eta_3,\phi_0,\ldots,\phi_3$. Taking into account the dependence of the function $F$ on the above variables, the equation $Z_1F=0$ reads, in the coordinate system $\{\eta_0,\ldots,\eta_3,\phi_0,\ldots,\phi_3,x_4,v_4\}$, $$\sum_{a=0}^3\left[(\phi_a-3\eta_a x_4)\frac{\partial F}{\partial \eta_a}-\left(3\eta_a\phi_a-6\eta_a^2x_4+3\eta_a x_4^2+\eta_a^3\right)\frac{\partial F}{\partial \phi_a}\right]=0.$$ Collecting terms with different powers of $x_4$, we obtain that $Z_1F=\Omega_0F_3+x_4\Omega_1F_3-3x_4^2\Omega_2F_3=0$, with $$\begin{gathered} \Omega_0=\sum_{a=0}^3\left(\phi_a\frac{\partial }{\partial \eta_a}-\left(3\eta_a\phi_a+\eta_a^3\right)\frac{\partial}{\partial \phi_a}\right),\qquad \Omega_1=\sum_{a=0}^3\left(-3\eta_a\frac{\partial}{\partial \eta_a}+6\eta_a^2\frac{\partial}{\partial \phi_a}\right),\\ \Omega_2=\sum_{a=0}^3\eta_a\frac{\partial}{\partial \phi_a}. \end{gathered}$$ As $F_3$ does not depend on $x_4$ in the set of coordinates we have chosen, we get $\Omega_aF_3=\Omega_aF=0$, for $a=0,1,2$.
The method of characteristics for the equation $\Omega_1F_3=0$ implies that there exists a function $F_4:\mathbb{R}^7\rightarrow\mathbb{R}$ such that $F(x_0,\ldots,x_4,v_0,\ldots,v_4)=F_4(\delta_0,\ldots,\delta_3,L_1,L_2,L_3)$ with $$\left\{ \begin{aligned} \delta_a&=\phi_a+\eta_a^2,\qquad &a&=0,\ldots,3,\\ L_a&=\frac{\eta_a}{\eta_0},\qquad &a&=1,2,3.\\ \end{aligned}\right.$$ Now, using the coordinate system $\{\delta_0,\ldots,\delta_3,L_1,L_2,L_3,\eta_0,x_4,v_4\}$, we get $$\Omega_2F=\Omega_2F_4=\eta_0\left(\frac{\partial F_4}{\partial \delta_0}+\sum_{a=1}^3 L_a\frac{\partial F_4}{\partial \delta_a}\right)=0,$$ and, repeating the previous procedure, we see that there exists a function $F_5:\mathbb{R}^6\rightarrow\mathbb{R}$ such that $F(x_0,\ldots,x_4,v_0,\ldots,v_4)=F_5(L_1,L_2,L_3,\Delta_1,\Delta_2,\Delta_3)$, where $\Delta_a=L_a\delta_0-\delta_a$ and $a=1,2,3$. As we have shown that finding a first-integral $F$ for the vector fields of the distribution $\mathcal{D}$ reduces to looking for a first-integral $F_5$ of the vector field $\Omega_0$ depending on the variables $L_1,L_2,L_3, \Delta_1,\Delta_2,\Delta_3$, we still have to analyse the condition $\Omega_0F_5=0$ to determine completely the form of the first-integrals for the distribution $\mathcal{D}$. 
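As a consistency check on this step (again an illustrative `sympy` sketch, not part of the original derivation), one can verify that $\delta_a$ and $L_a$ are annihilated by $\Omega_1$, and that $\Delta_a=L_a\delta_0-\delta_a$ is annihilated by the reduced operator $\partial/\partial\delta_0+\sum_{a=1}^3 L_a\,\partial/\partial\delta_a$ arising from $\Omega_2$:

```python
import sympy as sp

eta = sp.symbols('eta0:4')
phi = sp.symbols('phi0:4')

def Omega1(f):
    # Omega_1 = sum_a (-3 eta_a d/deta_a + 6 eta_a^2 d/dphi_a)
    return sum(-3*eta[a]*sp.diff(f, eta[a]) + 6*eta[a]**2*sp.diff(f, phi[a])
               for a in range(4))

for a in range(4):
    assert sp.simplify(Omega1(phi[a] + eta[a]**2)) == 0     # delta_a
for a in range(1, 4):
    assert sp.simplify(Omega1(eta[a]/eta[0])) == 0          # L_a

# reduced Omega_2 operator in the (delta, L) variables
delta = sp.symbols('delta0:4')
L = sp.symbols('L1:4')

def W(f):
    # Omega_2 / eta_0 expressed in the new coordinates
    return sp.diff(f, delta[0]) + sum(L[a-1]*sp.diff(f, delta[a])
                                      for a in range(1, 4))

for a in range(1, 4):
    assert sp.simplify(W(L[a-1]*delta[0] - delta[a])) == 0  # Delta_a
```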
By choosing the coordinate system $\{L_1,L_2,L_3,\Delta_1,\Delta_2,\Delta_3,\delta_0,\eta_0,x_4,v_4\}$, the equation $\Omega_0F_5=0$ reads $$\eta_0^2\sum_{a=1}^3\left((L_a-L_a^2)\frac{\partial F_5}{\partial L_a}-L_a\Delta_a\frac{\partial F_5}{\partial \Delta_a}\right)-\left(\sum_{a=1}^3\Delta_a\frac{\partial F_5}{\partial L_a}+\delta_0\Delta_a\frac{\partial F_5}{\partial \Delta_a}\right)=0,$$ and considering the vector fields $$\Xi_1=\sum_{a=1}^3(L_a-L_a^2)\frac{\partial }{\partial L_a}-L_a\Delta_a\frac{\partial}{\partial \Delta_a},\quad \Xi_2=\sum_{a=1}^3\Delta_a\frac{\partial }{\partial L_a},\quad \Xi_3=\sum_{a=1}^3\Delta_a\frac{\partial }{\partial \Delta_a},$$ and the form of the function $F_5$, the equation $\Omega_0F_5=0$ implies that $\Xi_1F_5=\Xi_2F_5=\Xi_3F_5=0$. If we apply the method of characteristics to the equation $\Xi_3F_5=0$, it yields that there exists a function $F_6:\mathbb{R}^5\rightarrow\mathbb{R}$ such that $F(x_0,\ldots,x_4,v_0,\ldots,v_4)=F_6(L_1,L_2,L_3,\pi_2,\pi_3)$, with $\pi_2=\Delta_2\Delta_1^{-1}$ and $\pi_3=\Delta_3\Delta_1^{-1}$. Moreover, as $F_5$ also satisfies the equation $\Xi_2F_5=\Xi_2F_6=0$, we obtain that $F_6$ only depends on the variables $\pi_2$, $\pi_3$, $\Gamma_2=\pi_2 L_1-L_2$ and $\Gamma_3=\pi_3L_1-L_3$, i.e. there exists a function $F_7:\mathbb{R}^4\rightarrow \mathbb{R}$ such that $F(x_0,\ldots,x_4,v_0,\ldots,v_4)=F_6(L_1,L_2,L_3,\pi_2,\pi_3)=F_7(\pi_2,\pi_3,\Gamma_2,\Gamma_3)$. 
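The claim that $F_6$ depends only on $\pi_2,\pi_3,\Gamma_2,\Gamma_3$ can likewise be verified symbolically: the sketch below (a `sympy` check under our own naming) confirms that these four functions are annihilated by both $\Xi_2$ and $\Xi_3$.

```python
import sympy as sp

L1, L2, L3 = sp.symbols('L1:4')
D1, D2, D3 = sp.symbols('Delta1:4')
Lv, Dv = (L1, L2, L3), (D1, D2, D3)

Xi2 = lambda f: sum(Dv[a]*sp.diff(f, Lv[a]) for a in range(3))
Xi3 = lambda f: sum(Dv[a]*sp.diff(f, Dv[a]) for a in range(3))

pi2, pi3 = D2/D1, D3/D1               # pi_a = Delta_a / Delta_1
G2, G3 = pi2*L1 - L2, pi3*L1 - L3     # Gamma_a = pi_a L_1 - L_a

for f in (pi2, pi3, G2, G3):
    assert sp.simplify(Xi2(f)) == 0
    assert sp.simplify(Xi3(f)) == 0
```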
Finally, the conditions $\Xi_1F=\Xi_1F_7=0$ imply that $\Xi_1F_7=\Upsilon_2F_7+L_1\Upsilon_1F_7=0$, where $$\Upsilon_1F_7=(\pi_2-\pi_2^2)\frac{\partial F_7}{\partial \pi_2}+(\pi_3-\pi_3^2)\frac{\partial F_7}{\partial \pi_3}-\pi_2\Gamma_2\frac{\partial F_7}{\partial \Gamma_2}-\pi_3\Gamma_3\frac{\partial F_7}{\partial \Gamma_3}=0$$ and $$\Upsilon_2F_7=(\Gamma_2+\Gamma_2^2)\frac{\partial F_7}{\partial \Gamma_2}+(\Gamma_3+\Gamma_3^2)\frac{\partial F_7}{\partial \Gamma_3}+\Gamma_2\pi_2\frac{\partial F_7}{\partial \pi_2}+\Gamma_3\pi_3\frac{\partial F_7}{\partial \pi_3}=0.$$ The first equation implies that there exists a function $F_8:\mathbb{R}^3\rightarrow \mathbb{R}$ such that $F(x_0,\ldots,x_4,v_0,\ldots,v_4)=F_7(\pi_2,\pi_3,\Gamma_2,\Gamma_3)=F_8(\Psi_0,\Psi_1,\Psi_2),$ where $\Psi_0=(\pi_2-1){\Gamma_2}^{-1}$, $\Psi_1=(\pi_3-1){\Gamma_3}^{-1}$ and $\Psi_2={\pi_2}{\pi_3}^{-1}(1-\pi_3)(1-\pi_2)^{-1}$. Hence, the equation $\Upsilon_2F_7=\Upsilon_2F_8=0$ can be cast into the form $$(1-\Psi_0)\frac{\partial F_8}{\partial \Psi_0}+(1-\Psi_1)\frac{\partial F_8}{\partial \Psi_1}-\Psi_2\frac{\Psi_1-\Psi_0}{\Psi_0\Psi_1}\frac{\partial F_8}{\partial \Psi_2}=0,$$ and we obtain that $F$ is an arbitrary function of the functions $\Lambda_1=\frac{1-\Psi_0}{1-\Psi_1}$ and $\Lambda_2=\frac{\Psi_0\Psi_2}{\Psi_1}$. In particular, undoing all the changes of variables performed throughout this section, we get that $$\Lambda_1(p)=\frac{F_{431}(p)F_{210}(p)}{F_{421}(p)F_{310}(p)}\qquad {\rm and}\qquad \Lambda_2(p)=\frac{F_{431}(p)F_{420}(p)}{F_{421}(p)F_{430}(p)}$$ are two independent first-integrals for the vector fields of the distribution $\mathcal{D}$. Conclusions and outlook ======================= We have defined and analysed the concepts of superposition rule, time-dependent superposition rule, and free superposition rule for systems of SODEs. Several results concerning the existence of such superposition rules have been proved. 
Subsequently, our theoretical results have been illustrated by means of the study of a number of SODEs appearing in the physics and mathematics literature. Several new SODE Lie systems have been described and a common superposition rule for all of them has been derived. This superposition rule has then been used, with the aid of the theory of quasi-Lie systems, to obtain a time-dependent superposition rule for second-order Riccati equations. In the future, we expect to continue the analysis of the properties of superposition rules for systems of SODEs, as well as to investigate the generalization of the techniques developed throughout this work to systems of higher-order differential equations. As an application, we hope to apply the results obtained to the study of new systems of second- and higher-order differential equations. In particular, it is especially interesting to analyse the higher members of the Riccati hierarchy in order to develop new methods for determining solutions of those PDEs whose Bäcklund transformations are described by members of such a hierarchy. Acknowledgments {#acknowledgments .unnumbered} =============== Partial financial support by research projects E24/1 (DGA), MTM2009-08166-E and MTM2009-11154 is acknowledged.
--- abstract: 'Astrophysical accretion is arguably the most prevalent physical process in the Universe; it occurs during the birth and death of individual stars and plays a pivotal role in the evolution of entire galaxies. Accretion onto a black hole, in particular, is also the most efficient mechanism known in nature, converting up to 40% of accreting rest mass energy into spectacular forms such as high-energy (X-ray and gamma-ray) emission and relativistic jets. Whilst magnetic fields are thought to be ultimately responsible for these phenomena, our understanding of the microphysics of MHD turbulence in accretion flows as well as large-scale MHD outflows remains far from complete. We present a new theoretical model for astrophysical disk accretion which considers enhanced vertical transport of momentum and energy by MHD winds and jets, as well as transport resulting from MHD turbulence. We also describe new global, 3D simulations that we are currently developing to investigate the extent to which non-ideal MHD effects may explain how small-scale, turbulent fields (generated by the magnetorotational instability – MRI) might evolve into large-scale, ordered fields that produce a magnetized corona and/or jets where the highest energy phenomena necessarily originate.' author: - 'Peter B. DOBBIE, Zdenka KUNCIC, Geoffrey V. BICKNELL and Raquel SALMERON' date:    title: 'Enhanced MHD Transport in Astrophysical Accretion Flows: Turbulence, Winds and Jets' --- \[sec:1\]Introduction ===================== It is widely accepted that high-energy astrophysical sources such as active galactic nuclei (AGN), gamma-ray bursts and some X-ray binaries are powered by accretion of matter onto a central black hole. 
Since the standard theory of astrophysical disk accretion was formulated over 30 years ago [@sha73; @novthorn73], arguably the most important advance in our understanding of the process by which matter in the disk can shed its angular momentum and release its gravitational binding energy has come from computational modelling. Numerical simulations demonstrate unequivocally that the magnetorotational instability (MRI, [@vel59; @cha60; @bal91; @bal98]) can produce magnetohydrodynamic (MHD) turbulence and enhanced angular momentum transport (see [@bal03] for a review). The presence of even a very weak magnetic field is the key ingredient: it completely changes the dynamics from a keplerian flow which is hydrodynamically stable even at high Reynolds numbers (as recently verified experimentally [@ji06]) to one which is unstable to the rapid growth of MHD modes leading to turbulence in the nonlinear regime. It is over 20 years since the first MHD simulations of astrophysical accretion flows were carried out [@uch85; @shi86]. Notwithstanding the important advances made to date [@bal91; @haw01; @ste01; @sto01; @haw02; @stepap02n; @katmin04n; @kig05; @mck06n; @macmat08n], numerical simulations have so far been unable to resolve two key outstanding issues: 1. How are the high rates of mass accretion inferred in the most powerful sources achieved? 2. How are the outflows and jets observed across the mass spectrum of accreting sources produced? In what follows, we briefly address each of these open questions and suggest how they may be connected and mutually resolved by a generalized model for MHD disk accretion. The most powerful accreting sources (i.e. quasars and other active galaxies) are fuelled by accretion onto a supermassive ($10^{6-9}\,M_\odot$) black hole. They produce radiative luminosities that can exceed those of normal galaxies by several orders of magnitude (e.g. 
up to $10^{48} \rm erg \, s^{-1}$), indicating mass accretion rates which can exceed 100 solar masses per year ($1 \, M_\odot \rm yr^{-1} \approx 6\times10^{25}\,\text{g}\,\text{s}^{-1}$). This is strictly a lower limit because accretion can also drive mechanical outflows, in some cases with inferred kinetic powers that are considerably greater than the observed radiative luminosity (e.g. the famous M87 jet – see [@jolkun07] and references therein). Indeed, the fact that collimated jets are observed across a wide range of accreting sources (see [@livio99] for a review), including those that are non-relativistic (e.g. protoplanetary systems, young stellar objects, see Fig. \[fig:jets\], and neutron stars, see especially [@fen04]), suggests that accretion provides an effective, generic mechanism for powering jets and other energetic plasma outflow phenomena. ![Hubble Space Telescope optical images of jets in young stellar objects. Photo credits: C. Burrows, J. Morse (STScI), J. Hester (AZ State U.), NASA.[]{data-label="fig:jets"}](jets_astroph.eps){width="8cm"} Magnetic fields are required to collimate jet plasma and to account for the observed radio synchrotron jet emission. MHD simulations have revealed that a net poloidal component of the magnetic field, $B_z$, is required to produce winds and jets from an accretion disk (see [@pud06] for a review). Interestingly, high mass accretion rates can only be achieved in the simulations if a large-scale MHD outflow is present (e.g. [@ste01; @kig05]). It is particularly noteworthy that comparably high accretion rates cannot be achieved through MRI-driven MHD turbulence alone, despite the fact that the effective turbulent viscosity is “enhanced” relative to kinematic fluid shear viscosity (e.g. [@sto01; @haw01; @haw02]). Therefore, magnetized jets (and by implication, MHD outflows in general) must be primarily responsible for truly enhanced transport in astrophysical accretion flows. 
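The quoted conversion factor is a straightforward back-of-the-envelope check (the constants below are standard values, not taken from the paper):

```python
# 1 M_sun / yr expressed in g/s: M_sun ~ 1.989e33 g, 1 yr ~ 3.156e7 s
M_sun_g = 1.989e33
yr_s = 3.156e7
rate = M_sun_g / yr_s
print(f"1 M_sun/yr ~ {rate:.1e} g/s")  # consistent with ~6e25 g/s
```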
This conclusion is consistent with early analytical models [@bla82; @pud83] proposing that accretion is facilitated by the vertical transport of angular momentum resulting from MHD torques on the disk surface. It is also consistent with non-ideal MHD simulations indicating that radial transport of angular momentum by turbulent stresses may be restricted to interior regions near the disk midplane only [@sal07]. Although MHD simulations indicate that a poloidal field component is essential for launching accretion disk jets [@beck08], it is not yet clear how such a field component arises in the case of accretion onto black holes, which have no intrinsic magnetic moment. The magnetic flux in the disk must originate from the random field in the interstellar medium (or from the companion star in the case of a black hole X-ray binary). This field is then amplified into a dominant toroidal configuration in the disk by the MRI. Simulations have yet to demonstrate the feasibility of creating large-scale fields via an inverse cascade process involving either stochastic reconnection of turbulent fields [@chris01] or reconnection of buoyant flux loops emerging from the disk surface [@touprin96]. Models which require [*a priori*]{} large-scale flux loops [@lovrom95n; @hayshi96n; @romust98n], a poloidal B field (e.g. [@katmin04n]) or a spinning black hole ([@mck06n; @mckbla09n]) to produce jets are too restrictive to explain the observed ubiquity of jets and outflows across the wide range of accreting systems. One of the most interesting numerical results to date is that of Machida and Matsumoto [@macmat08n]. Their global 3D simulations show the evolution of a large-scale poloidal field from an initially weak toroidal field. However, no simulation to date has shown an initial random field in a generic accreting system evolve into a configuration favourable for jet production. 
Similarly, while Machida [*et al.*]{} [@macnak06n] have studied the effect of radiative cooling on optically *thin* black hole accretion flows, no simulations to date have shown the effect of optically thick radiative cooling on the global, 3D evolution of the magnetic fields. We are developing new global 3D MHD simulations to directly confront this challenging problem and obtain new insights into the evolution of magnetic field topology in black hole accretion flows. We aim to test our hypothesis that turbulent reconnection and resistive dissipation, as well as radiative losses by the plasma, play pivotal roles in the evolution and steady-state properties of MHD accretion flows. We suggest that the microphysics of MHD accretion can govern large-scale, macrophysical phenomena that ultimately determine the observational appearance and hence, classification, of accreting black hole sources. Numerical approaches to date have been limited by one or more of the following drawbacks: the use of the shearing box approximation (see, e.g., [@reg08; @bod08] for some limitations of this approach); use of a non-conservative numerical scheme (e.g. [@hay06]) resulting in unphysical levels of numerical dissipation which prevent a quantitative analysis of energy transport; neglect of finite resistivity making it difficult to realistically model magnetic reconnection; and neglect of radiation, resulting in the unphysical situation where heat generated in the disk at the end of a turbulent cascade cannot be radiated away, so the disk puffs up. This also makes it impossible to compare model results against observations of luminosity and spectrum. To implement our model, we are using and extending FLASH[^1] [@fry00], the public MHD code developed at the University of Chicago. 
FLASH provides the following features which make it well suited to our model: it implements adaptive mesh refinement (AMR); it solves the equations of MHD in conservation form thus explicitly conserving energy; it uses a modified piecewise-parabolic method (PPM [@woo84; @col84]) which is significantly more accurate than some other widely used codes (e.g. [@hay06]); it uses the constrained transport method [@eva88] to enforce divergence free magnetic fields; and it is modular and extensible, allowing us to develop a radiation MHD module. As such it represents the next generation of MHD codes. ![image](bhaccretion_astroph.eps){width="90.00000%"} The organization of this paper is as follows. In §2, we briefly review an analytic model for turbulent MHD black hole disk accretion that forms the theoretical basis for our numerical investigations. In §3, we describe the simulations we are currently developing, including non-ideal MHD effects and radiation. We present some concluding remarks in §4. \[sec:2\]MHD Disk Accretion Theory ================================== The analytical foundation for our approach is based on the model of Kuncic and Bicknell [@kun04]. This model employs a mass-weighted statistical averaging of the MHD equations to obtain a mean-field description of turbulent MHD disk accretion that is steady-state and axisymmetric in the mean. Angular momentum and energy are transported radially outwards by turbulent Maxwell stresses and vertically outwards by a large-scale MHD wind and/or jet. The inner region of the black hole accretion disk is also surrounded by and magnetically coupled to a hot, diffuse corona, analogous to the solar corona. Two important observational predictions of the model are: 1. The disk emission spectrum is degraded by the electromagnetic extraction of gravitational binding energy from the accreting matter (see [@kunbick07a; @kunbick07b]); and 2. 
The presence of a magnetized jet substantially enhances the rate of mass accretion in the disk and hence, the rate of black hole growth, resulting in a correlation between black hole mass and radio emission (see [@jolkun08]). The model is schematically illustrated in Fig. \[fig:bhaccretion\]. The main results pertinent to our simulations are summarised below – more details can be found in the original paper [@kun04]. Statistical Averaging --------------------- In the mass-weighted statistical averaging approach, all variables are decomposed into mean and fluctuating parts, with intensive variables such as velocity ${\mathbf v}$ mass averaged according to $$v_i = \tilde{v}_i + v^{\prime}_i, \enspace \langle\rho v^{\prime}_i\rangle = 0 \qquad,$$ while extensive variables such as density $\rho$, pressure $p$ and magnetic field ${\mathbf B}$, are averaged the following way: $$\begin{aligned} \rho = \bar{\rho} + \rho^{\prime}, \enspace &\langle \rho^{\prime}\rangle = 0 \qquad ,\\ p = \bar{p} + p^{\prime}, \enspace &\langle p^{\prime}\rangle = 0 \qquad ,\\ B_i = \bar{B}_i + B^{\prime}_i, \enspace &\langle B^{\prime}_i\rangle = 0 \qquad.\end{aligned}$$ For clarity, in the following equations the tilde and bar have been omitted from the averaged intensive and extensive variables, respectively: averaged quantities are implicitly assumed. We consider below only the simplest case where $\bar B_i =0$, although it is straightforward to generalize to the case with a nonzero net mean magnetic field. We have also omitted negligible correlation terms.[^2] Mass Transfer ------------- Integration of the mean-field continuity equation gives $${\dot M_{\rm a}}(r) + {\dot M_{\rm w}}(r) = \text{constant} = \dot{M} \qquad ,$$ where ${\dot M_{\rm a}}(r)$ is the mass accretion rate and ${\dot M_{\rm w}}(r)$ is the mass outflow rate associated with a mean vertical velocity at the disk surface, i.e. at the base of a disk wind. 
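The defining property of the mass-weighted (Favre) average, $\langle\rho v'_i\rangle=0$, holds by construction, as the following minimal numerical sketch with synthetic data illustrates (all names and values here are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 1.0 + 0.5 * rng.random(10_000)   # synthetic positive density samples
v = rng.normal(size=10_000)            # synthetic velocity samples

v_tilde = np.average(v, weights=rho)   # tilde v = <rho v> / <rho>
v_prime = v - v_tilde                  # fluctuation about the Favre mean
# defining property <rho v'> = 0, satisfied to machine precision
assert abs(np.mean(rho * v_prime)) < 1e-8
```

Note that the same residual computed with the ordinary (unweighted) mean of $v$ is generally nonzero whenever $\rho$ and $v$ are correlated, which is why the mass-weighted decomposition is the natural one for compressible turbulence.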
Under steady-state conditions, the radial mass inflow decreases towards small $r$ at the same rate as the vertical mass outflow increases in order to maintain a constant net mass flux, $\dot{M}$, which is the net accretion rate at $r = \infty$. Momentum Transfer ----------------- Statistical averaging of the momentum equation yields $$\begin{aligned} \notag\frac{\partial(\rho v_i)}{\partial t} &+ \frac{\partial(\rho v_i v_j)}{\partial x_j} \\ &= -\rho\frac{\partial {\phi_{\scriptscriptstyle G}}}{\partial x_i} - \frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left( {t^{\scriptscriptstyle \rm R}_{ij}}+ \langle {t^{\scriptscriptstyle \rm B}_{ij}}\rangle \right) \; ,\end{aligned}$$ where ${\phi_{\scriptscriptstyle G}}= - GM ( r^2 + z^2)^{-1/2}$ is the gravitational potential of the central mass $M$, ${t^{\scriptscriptstyle \rm R}_{ij}}= - \langle \rho v^{\prime}_i v^{\prime}_j \rangle$ is the Reynolds stress, and the turbulent Maxwell stress is $$\langle {t^{\scriptscriptstyle \rm B}_{ij}}\rangle = \langle \frac{B_i B_j}{4\pi} \rangle - \delta_{ij}\langle \frac{B^2}{8\pi} \rangle \; .$$ Integration of the azimuthal component of the momentum equation yields $$\begin{aligned} \notag{\dot M_{\rm a}}v_\phi r &- {\dot M_{\rm a}}({r_{\rm i}}) v_\phi({r_{\rm i}}) {r_{\rm i}}\\ \notag= &-2 \pi r^2 T_{r\phi} + 2 \pi {r_{\rm i}}^2 T_{r\phi}({r_{\rm i}}) \\ &+ \int^r_{{r_{\rm i}}}\left[v_\phi r \frac{\text{d}{\dot M_{\rm a}}}{\text{d}r} - 4 \pi r^2 \langle t_{\phi z}\rangle^+\right] {{\rm d}r}\; , \label{eq:1}\end{aligned}$$ where quantities calculated at the disk surface are denoted by a ‘$+$’ superscript, ${r_{\rm i}}$ denotes the radius at the innermost stable circular orbit and $$T_{r\phi} = \int^{+h}_{-h} \langle t_{r\phi} \rangle \text{d}z$$ is the $r\phi$ component of the Maxwell stress integrated over the vertical scaleheight $h$. The left hand side of eqn. 
(\[eq:1\]) describes the change in angular momentum flux associated with inflow from an outer radius $r$ to ${r_{\rm i}}$. The first two terms on the right hand side describe the rate of radial transport of angular momentum due to MHD stresses in the disk. The terms in the integrand on the right hand side describe the [*vertical*]{} transport of angular momentum resulting from mass loss in a wind and an MHD torque on the disk surface, respectively. These effects are not modelled in standard accretion disk theory. In summary, the model we consider includes contributions from both radial and vertical transport of angular momentum to the overall mass accretion rate: angular momentum is transported radially outwards by internal MHD stresses and vertically outwards by both mass outflows and MHD stresses acting over the disk surface. Previous simulations [@ste01; @kig05; @sal07] as well as semi-analytic models (e.g., [@cam03]) show that in the presence of a large-scale, open, mean magnetic field, angular momentum is transported at small disk radii more efficiently in the vertical direction by large-scale magnetic torques (Poynting flux) than radially by MHD turbulence. Our simulations will compare the contribution from these processes as well as from the vertical transport of angular momentum by mass outflows. Energy Transfer --------------- Accretion extracts gravitational binding energy from the accreting matter and converts it into mechanical (e.g. kinetic, Poynting flux) and non-mechanical (e.g. radiative) forms. The rate at which this occurs is determined by the keplerian shear in the bulk flow, $s_{r\phi} = \frac{1}{2}r \partial\Omega/\partial r$, with $\partial\Omega/\partial r = -\frac{3}{2}\Omega / r$. 
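For a Keplerian flow, $\Omega=(GM/r^3)^{1/2}$, and both the shear relation and $\partial\Omega/\partial r=-\frac{3}{2}\Omega/r$ follow directly; a short `sympy` check confirms them:

```python
import sympy as sp

G, M, r = sp.symbols('G M r', positive=True)
Omega = sp.sqrt(G*M/r**3)   # Keplerian angular velocity
# dOmega/dr = -(3/2) Omega / r
assert sp.simplify(sp.diff(Omega, r) + sp.Rational(3, 2)*Omega/r) == 0
# shear: s_rphi = (1/2) r dOmega/dr = -(3/4) Omega
assert sp.simplify(sp.Rational(1, 2)*r*sp.diff(Omega, r)
                   + sp.Rational(3, 4)*Omega) == 0
```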
The rate per unit disk surface area at which energy is emitted in the form of electromagnetic radiation is determined by the internal energy equation: $$\begin{aligned} \notag\frac{\partial u}{\partial t} + &\frac{\partial}{\partial x_i}\left(uv_i + \langle uv^{\prime}_i \rangle\right) \\ \notag\approx &-pv_{i, i} - \langle pv^{\prime}_{i, i} \rangle - \langle F_{i, i} \rangle \\ &+ \langle\frac{J^2}{\sigma}\rangle + \langle {t^{\rm v}}_{ij}v^{\prime}_{i, j} \rangle \; , \label{eq:3}\end{aligned}$$ where $u$ is the gas plus radiation energy density, $\mathbf F$ is the radiative flux, $\mathbf J$ is the current density, and ${t^{\rm v}}_{ij}$ is the viscous stress tensor. The terms on the left hand side describe the total rate of change of gas plus radiation energy density and the terms on the right hand side describe work done by compression in the flow against the gas and radiation pressure, radiative losses, mean field ohmic heating, and viscous dissipation (heating). The last term requires some comment since the molecular viscosity in the [*mean flow*]{} is generally considered negligible in accretion disks. However, at the high-wavenumber end of a turbulent cascade, it can become important in converting the turbulent energy into heat. The viscous stress tensor is $${t^{\rm v}}_{ij} = 2\nu\rho s_{ij} \qquad ,$$ where $\nu$ is the coefficient of kinematic shear viscosity and $s_{ij}$ is the shear tensor: $$s_{ij} = \frac{1}{2}\left( v_{i, j} + v_{j, i} - \frac{2}{3}\delta_{ij}v_{k, k} \right) \; .$$ The source terms determine the rate at which energy is converted into random particle energy (some of which is then converted into radiation) and into bulk kinetic energy. If there are negligible changes in the internal energy of the gas and the turbulent energy is dissipated at the end of a turbulent cascade at a rate equivalent to its production, then eqn. 
(\[eq:3\]) implies that the disk radiative flux emerging from the disk surface is $$F^+_{\rm d} \approx \frac{1}{2}T_{r\phi}r\frac{\partial \Omega}{\partial r} = -\frac{3}{4}T_{r\phi}\Omega \; . \label{eq:2}$$ The level of these turbulent MHD stresses available to dissipate the internal energy in turn depends on how efficiently the gravitational binding energy extracted by accretion is converted into other forms (both mechanical and non-mechanical). That is, $T_{r\phi}$ is determined by the angular momentum conservation equation (c.f. \[eq:1\]): $$\begin{aligned} \notag-T_{r\phi}(r) &= \frac{{\dot M_{\rm a}}v_\phi r}{2\pi r^2} \left[ 1 - \frac{{\dot M_{\rm a}}({r_{\rm i}})}{{\dot M_{\rm a}}(r)} \left(\frac{{r_{\rm i}}}{r}\right)^{1/2} \right] \\ \notag&- \left(\frac{{r_{\rm i}}}{r}\right)^2 T_{r\phi}({r_{\rm i}}) \\ &- \frac{1}{2\pi r^2} \int^r_{{r_{\rm i}}}\left[v_\phi r \frac{\text{d}{\dot M_{\rm a}}}{\text{d}r} - 4 \pi r^2 \langle t_{\phi z}\rangle^+\right] \text{d}r.\end{aligned}$$ Substituting this into (\[eq:2\]) yields the following more general solution for the disk radiative flux: $$\begin{aligned} \notag F^+_d(r) &\approx \frac{3GM\dot{M}_a(r)}{8\pi r^3} \left[ 1 - \frac{\dot{M}_a(r_i)}{\dot{M}_a(r)} \left(\frac{r_i}{r}\right)^{1/2} \right] \\ \notag&- \frac{3}{4}\left(\frac{r_i}{r}\right)^2 T_{r\phi}(r_i)\Omega \\ &- \frac{3\Omega}{8\pi r^2} \int^r_{r_i}\left[v_\phi r \frac{\text{d}\dot{M}_a}{\text{d}r} - 4 \pi r^2 \langle t_{\phi z}\rangle^+\right] \text{d}r.\end{aligned}$$ This is the generalized solution for the radiative flux of a turbulent MHD accretion disk. 
It can be expressed as $$F^+_{\rm d} (r) \approx \frac{3GM{\dot M_{\rm a}}(r)}{8\pi r^3} [f_{\rm a}(r) - f_{\rm w}(r)] \; , \label{eq:4}$$ where $$f_{\rm a} (r) = \left[ 1 - \frac{{\dot M_{\rm a}}({r_{\rm i}})}{{\dot M_{\rm a}}(r)} \left(\frac{{r_{\rm i}}}{r}\right)^{1/2} \right] - \frac{2\pi {r_{\rm i}}^2 T_{r\phi}({r_{\rm i}})}{{\dot M_{\rm a}}(r) r^2\Omega} \label{eq:5}$$ is a dimensionless factor that parameterizes the available accretion energy flux, the last term describing the rate at which MHD stresses at the innermost stable circular orbit locally dissipate turbulent energy, and $$\begin{aligned} \notag f_{\rm w}(r) = &\frac{1}{\dot{M}_{\rm a}(r)r^2\Omega} \cdot \\ &\int^r_{r_i}\left[v_\phi r \frac{\text{d}\dot{M}_a}{\text{d}r} - 4 \pi r^2 \langle t_{\phi z}\rangle^+\right] \text{d}r\end{aligned}$$ is the fractional rate of vertical energy transport from the disk (the ‘w’ subscript denoting a wind). This is a correction factor which takes into account partitioning of accretion power into non-radiative forms. Note the difference between this model and standard disk accretion theory [@sha73]. In the latter, all the gravitational binding energy is locally dissipated and assumed to be converted to radiation: $\text{d}{\dot M_{\rm a}}/{{\rm d}r}= 0$ and $\langle t_{\phi z} \rangle^{+} = 0$ so that $f_{\rm w} = 0$. This difference will be manifested by a disk spectrum which differs from that predicted by the standard model since the local disk temperature $T(r)$ is reduced if energy is channelled away by outflows from the disk surface. This will affect the emission spectrum arising from the innermost regions of the disk, where the temperature is highest and where jets and outflows originate. 
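As a quick numerical consistency check (our own sketch, not part of the analytic development; the black hole mass, accretion rate and the Schwarzschild value $r_{\rm i} = 6GM/c^2$ are illustrative assumptions), one can evaluate Eq. (\[eq:4\]) in the standard no-wind, zero inner-torque limit ($f_{\rm w}=0$, constant ${\dot M_{\rm a}}$, $T_{r\phi}({r_{\rm i}})=0$) and recover the familiar result that the flux peaks at $r = (49/36)\,{r_{\rm i}}$:

```python
import numpy as np

G = 6.674e-8  # gravitational constant, CGS units

def disk_flux(r, M, Mdot, r_i):
    """Eq. (4) in the no-wind, zero inner-torque limit:
    f_w = 0, constant Mdot_a, T_rphi(r_i) = 0."""
    fa = 1.0 - np.sqrt(r_i / r)
    return 3.0 * G * M * Mdot / (8.0 * np.pi * r**3) * fa

# illustrative (assumed) parameters: 1e8 Msun black hole, 1 Msun/yr
M = 1e8 * 1.989e33            # g
Mdot = 1.989e33 / 3.156e7     # g/s
c = 2.998e10                  # cm/s
r_i = 6.0 * G * M / c**2      # innermost stable orbit (Schwarzschild)

r = np.linspace(1.01, 50.0, 20000) * r_i
F = disk_flux(r, M, Mdot, r_i)
r_peak = r[np.argmax(F)]      # the flux peaks at r = (49/36) r_i
```

In this limit the flux vanishes at $r_{\rm i}$ and the emission is dominated by the innermost few $r_{\rm i}$, which is why the wind correction $f_{\rm w}$ matters most exactly where jets and outflows originate.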
Assuming local blackbody emission, the disk luminosity spectrum can be calculated by summing up the contributions from each annulus: $$L_{d, \nu} = 2\int^\infty_{r_i} \pi B_{\nu}[T(r)] \, 2\pi r \, \text{d}r \; ,$$ where $B_{\nu}$ is the Planck function, $T(r) = [F^+_{\rm d} (r)/\sigma]^{1/4}$ is the effective disk temperature of each annulus and $\sigma$ is the Stefan-Boltzmann constant. We expect the disk radiative efficiency to be lower than the canonical $\simeq 10\%$ predicted by the standard model when vertical transport of angular momentum is important. The radiative efficiency is given by the ratio of disk luminosity to accretion power: $L_{\rm d} / P_{\rm a}$. The total accretion power is calculated from $$P_{\rm a} = 2 \int_{{r_{\rm i}}}^\infty \frac{3GM{\dot M_{\rm a}}(r)}{8\pi r^3} f_{\rm a}(r) \, 2\pi r \, {{\rm d}r}\; .$$ If there is no wind mass loss from the disk, so that ${\dot M_{\rm a}}$ is constant, this reduces to the familiar result $P_{\rm a} = \frac{1}{2} GM {\dot M_{\rm a}}/ {r_{\rm i}}\approx \frac{1}{12}{\dot M_{\rm a}}c^2$, in the Newtonian approximation for a nonrotating black hole. MHD Accretion Simulations ========================= As described earlier, there is a real need for a new program of MHD simulations to advance our knowledge of accreting black hole systems. Our numerical work is motivated by the following: We need to explain the macrophysics of observed phenomena in AGN and other accreting systems, viz., high mass accretion rates and jets/winds, and test the hypothesis that they are related by large-scale MHD processes. We need to improve our understanding of the microphysics in order to explain how small-scale, local MHD processes can evolve into large-scale, global phenomena. We need to explicitly calculate the radiation emitted by a black hole accretion disk in order to directly compare against the observational data. FLASH solves the time-dependent equations of compressible non-ideal MHD. 
In non-dimensional conservation form these are: $$\begin{aligned} \frac{\partial \rho}{\partial t} &+ {\mathbf \nabla}\cdot (\rho{\bf v}) = 0 \\ \notag\frac{\partial (\rho{\bf v})}{\partial t} &+ {\mathbf \nabla}\cdot (\rho{\bf vv} - {\bf BB}) + {\mathbf \nabla}p + {\mathbf \nabla}\left(\frac{B^2}{2}\right)\\ &= \rho{\bf g} + {\mathbf \nabla}\cdot \bar\bar t^{\rm v} \\ \notag\frac{\partial (\rho E)}{\partial t} &+ {\mathbf \nabla}\cdot \left[{\bf v}\left( \rho E + p + \frac{B^2}{2} \right) - {\bf B}({\bf v} \cdot {\bf B})\right] \\ \notag&= \rho{\bf g} \cdot {\bf v} + {\mathbf \nabla}\cdot ({\bf v} \cdot {\bar\bar{t}}^{\rm v} + \kappa {\mathbf \nabla}T)\\ &+ \nabla \cdot [{\bf B} \times (\eta\nabla \times {\bf B})]\\ \notag\frac{\partial {\bf B}}{\partial t} &+ \nabla \cdot ({\bf vB} - {\bf Bv}) \\ &= -\nabla \times (\eta\nabla \times {\bf B})\end{aligned}$$ where $$\begin{aligned} E &= \frac{1}{2}v^2 + \epsilon + \frac{1}{2}\frac{B^2}{\rho}\end{aligned}$$ is the specific total energy, $\epsilon$ is the specific internal energy, $\bar\bar{t^{\rm v}}$ is the viscous stress tensor, ${\bf g}$ is the gravitational force per unit mass, $\kappa$ is the heat conductivity, and $\eta$ is the resistivity. FLASH implements a Direct Eulerian PPM solver [@woo84; @col84]. The constrained transport method [@eva88] is used to enforce divergence-free magnetic fields. Global 3D Simulations --------------------- MHD simulations of the MRI in accretion disks are often conducted in a shearing box approximation due to the high resolution required to model MHD turbulence. However, this approach can introduce complications and pitfalls including a limited spatial scale for the simulations, side-effects of the shearing box symmetry and artifacts from the application of periodic boundary conditions [@reg08] as well as an aspect ratio dependence for MRI channel solutions [@bod08]. 
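Returning to the equation set above, a minimal illustration of the energy bookkeeping (our own sketch, not FLASH code; the magnetic term $B^2/2\rho$ per unit mass assumes the same non-dimensional units in which the magnetic pressure is $B^2/2$):

```python
import numpy as np

def specific_total_energy(v, eps, B, rho):
    """E = v^2/2 + eps + B^2/(2*rho), as in the non-dimensional
    MHD equations above: kinetic + internal + magnetic energy
    per unit mass."""
    v = np.asarray(v, dtype=float)
    B = np.asarray(B, dtype=float)
    return 0.5 * v @ v + eps + 0.5 * (B @ B) / rho

# e.g. a fluid element with |v| = 1, eps = 2, |B| = 2, rho = 2:
E = specific_total_energy([1.0, 0.0, 0.0], 2.0, [0.0, 2.0, 0.0], 2.0)
# E = 0.5 + 2.0 + 1.0 = 3.5
```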
We are now at a stage where high resolution global 3D simulations are possible and this is the approach we will take.

Non-ideal MHD
-------------

In the disk model of Kuncic and Bicknell [@kun04], jets and/or winds are primarily responsible for transporting the angular momentum necessary for accretion to proceed at a rate consistent with observations of the most powerful astrophysical sources. We will test the hypothesis that the large-scale poloidal magnetic fields necessary for these outflows may be self-consistently generated in the accretion flow. Recent simulations [@kig05] show that in the presence of an externally applied large-scale magnetic field, angular momentum transport by the vertical ($\phi z$) Maxwell stress is comparable to its radial ($r\phi$) component. Magnetic reconnection can have a significant influence on magnetic field topology [@vas75; @bis93]. Notwithstanding the high degree of ionisation of the plasmas in the accretion disks of AGN and X-ray binaries, we suggest that reconnection and non-ideal MHD effects in general cannot be neglected. Even in numerical models that do not explicitly include non-ideal effects, they can appear in the form of numerical resistivity, which is difficult to control and quantify. By explicitly modelling a finite resistivity, we will explore its effect on the evolution of the magnetic field topology, particularly the emergence of a significant $z$-component, which is necessary to produce high mass accretion rates.

Radiation
---------

The inclusion of radiation in our simulations is imperative for direct comparison with the observational data, which are almost exclusively in the form of photons detected in various wavebands. Most of the emission that characterises quasars and other AGN is attributed to the putative accretion disk and peaks at optical–ultraviolet spectral energies.
Radiative transfer will also be required to transport the internal energy dissipated in the disk plasma at the end of a turbulent cascade, i.e. to cool the disk. To date, no simulations have investigated the effect of optically thick radiative cooling on the MRI in full global 3D. Including radiation will also allow us to test whether the blackbody emission from the disk is modified by outflows. In addition, it may be that regions of the disk where radiation dominates may be thermally unstable [@sha76], thus affecting the dynamics. Implementing radiation is computationally very demanding; numerically solving the full radiative transfer problem in 3D is currently not feasible. Instead, a common approach is to average over frequency and solve the equations in the flux-limited diffusion (FLD) approximation [@alm73; @lev81], a technique which still allows one to approximate the emergent spectrum. A “flux limiter” is used to interpolate between the optically thin and optically thick cases, giving a reasonable measure of the energy carried away by radiation [@cas04]. FLD has previously been implemented in a shearing box in a reference frame co-moving with the fluid (e.g. [@tur01; @hay06]). Simulations show a stratified disk in contrast to the standard model [@tur04], and that radiative diffusion dominates Poynting flux throughout the disk and the upper layers are magnetically supported and inhomogeneous, likely affecting the emergent thermal spectrum [@hir06; @kro07; @bla07]. The implementation of MHD with FLD described in [@kru07] achieves energy conservation by using a mixed-frame numerical scheme, evaluating radiation quantities in the lab frame and fluid opacities in the co-moving frame, thus enabling us to address radiation in a quantitative way. The algorithm also provides improved speed and accuracy compared to [@tur01; @hay06]. 
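To make the interpolation role of the flux limiter concrete, here is a small sketch (our own illustration, using the rational limiter form commonly associated with [@lev81]; we write the opacity as $\chi$ to avoid clashing with the conductivity $\kappa$ in the equations above):

```python
import numpy as np

def flux_limiter(R):
    """Rational flux limiter lambda(R) used in FLD schemes;
    R = |grad E|/(chi*rho*E) measures the steepness of the radiation
    energy gradient in units of the local photon mean free path."""
    return (2.0 + R) / (6.0 + 3.0 * R + R**2)

def fld_flux(gradE, E, chi, rho, c=2.998e10):
    """FLD radiative flux F = -c*lambda(R)/(chi*rho) * grad E."""
    R = abs(gradE) / (chi * rho * E)
    return -c * flux_limiter(R) / (chi * rho) * gradE

# optically thick limit: lambda -> 1/3, recovering classical diffusion
lam_thick = flux_limiter(1e-8)
# optically thin limit: lambda -> 1/R, so |F| -> c*E (free streaming,
# i.e. the radiation never carries energy faster than c*E)
lam_thin_times_R = flux_limiter(1e8) * 1e8
```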
Our goal is to calculate the steady-state emission spectrum of a turbulent MHD disk around a supermassive black hole in the nucleus of a galaxy and to compare the predicted spectrum with the observed optical spectra of quasars (see, e.g., [@kunbick07a]). Concluding Remarks ================== The publication of these proceedings coincides with the 50th anniversary of the discovery of the MRI [@vel59], which has had such a profound impact on our understanding not only of accretion disks – arguably nature’s most powerful energy source – but plasmas in general, ranging from laboratory scales to galaxy scales. Further landmarks in accretion disk theory came with the laying down of the standard theory [@sha73] in what remains the most cited paper in all of astrophysics; the discovery that large scale magnetic fields can vertically transport matter, energy and momentum [@bla82]; the first numerical MHD simulations [@uch85; @shi86]; as well as the rediscovery of the MRI in accretion disks and the demonstration by numerical simulations that this is indeed the MHD process anticipated by the analytic standard theory necessary to produce the turbulent radial transport of energy and momentum [@bal91; @bal98]. Despite these major steps forward in assembling the components of a complete accretion disk theory, we have not to date seen numerical simulations which can *self-consistently* produce all the salient features of quasars and other high energy astrophysical sources: high mass accretion rates, outflows, winds and jets, the formation of a magnetised corona, and the observed thermal spectrum. We expect it will again be magnetic fields which will hold the key to resolving these outstanding issues. 
To this end, we are developing new 3D global MHD simulations to test the hypothesis that small-scale, stochastic fields can self-consistently generate the large-scale poloidal magnetic fields necessary for the transport of energy and momentum from accreting matter necessary to produce each of the above observed features. The analytical basis for this generalized model was laid down by the companion work [@kun04] to our current numerical modelling, which the rapid advances in computing power accompanied by new codes and more efficient algorithms has now made possible. It is an exciting time for accretion disk theory and for our understanding of the microphysics that drives these and other plasma systems in nature. [9]{} Shakura, N. I. and Sunyaev, R. A., Astron. Astrophys. **24**, 337 (1973). Novikov, I. D. and Thorne, K. S. in Black Holes, C. DeWitt & B. DeWitt (New York: Gordon and Breach), 343 (1973). Velikhov, E. P., J. Exp. Theor. Phys. (USSR) **36**, 1398 (1959). Chandrasekhar, S. Proc. Natl. Acad. Sci. USA **46**, 253 (1960). Balbus, S. A. and Hawley, J. F., Astrophys. J. **376**, 214 (1991). Balbus, S. A. and Hawley, J. F., Rev. Mod. Phys. **70**, 1 (1998). Balbus, S. A., ARAA **41**, 555 (2003). Ji, H., Burin, M., Schartman, E. and Goodman, J., Nature **444**, 343, (2006). Uchida, Y. and Shibata, K., PASJ **37**, 515, (1985). Shibata, K. and Uchida, Y., PASJ **38**, 631, (1986). Hawley, J. F., Balbus, S. A. and Stone, J. M., Astrophys. J. **554**, L49 (2001). Steinacker, A. and Henning, T., Astrophys. J. **554**, 514 (2001). Stone, J. M. and Pringle, J. E., Mon. Not. R. Astron. Soc. **322**, 461 (2001). Hawley, J. F. and Balbus, S. A., Astrophys. J. **573**, 738 (2002). Steinacker, A. and Papaloizou, Astrophys. J. **571**, 413 (2002). Kato, Y., Mineshige, S. and Shibata, K., Astrophys. J. **605**, 307 (2004). Kigure, H. and Shibata, K., Astrophys. J. **634**, 879 (2005). McKinney, J. C., Mon. Not. R. Astron. Soc. **368**, 1561 (2006). Machida, M. 
and Matsumoto, R., PASJ **60**, 613 (2008). Jolley, E. J. D. and Kuncic, Z., Astrop. & Sp. Sci. **311**, 257, (2007). Livio, M., Phys. Rep. **311**, 255, (1999). Fender, R., Wu, K., Johnston, H., Tzioumis, T., Jonker, P., Spencer, R. and van der Klis, M., Nature **427**, 222 (2004). Pudritz, R. E., Ouyed, R., Fendt, C. and Brandenburg, A., in Protostars and Planets V, ed. B. Reipurth, D. Jewitt & K. Keil (Tucson: University Arizona Press), 277 (2006). Blandford, R. D. and Payne, D. G., Mon. Not. R. Astron. Soc. **199**, 883 (1982). Pudritz, R. E. and Norman, C. A., Astrophys. J. **274**, 677 (1983). Salmeron, R., Konigl, A. and Wardle, M., Mon. Not. R. Astron. Soc. **375**, 177 (2007). Beckwith, K., Hawley, J. F. and Krolik, J. H., Astrophys. J. **678**, 1180 (2008). Christensson, M., Hindmarsh, M. and Brandenburg, A., Phys. Rev. E **64**, 056405-1 (2001). Tout, C. A. and Pringle, J. E., Mon. Not. R. Astron. Soc. **281**, 219 (1996). Lovelace, R. V. E., Romanova, M. M. and Bisnovatyi-Kogan, G. S., Mon. Not. R. Astron. Soc. **275**, 244 (1995). Hayashi, M. R., Shibata, K. and Matsumoto, Astrophys. J. **468**, L37 (1996). Romanova, M. M., Ustyugova, G. V., Koldoba, A. V., Chechetkin, V. M. and Lovelace, R. V. E., Astrophys. J. **500**, 703 (1998). McKinney, J. C. and Blandford, R. D., to be published in Mon. Not. R. Astron. Soc. \[arXiv:0812.1060\] (2009). Machida, M., Nakamura, K. E. and Matsumoto, R., PASJ **58**, 193 (2006). Regev, O. and Umurhan, O. M., Astron. Astrophys. **481**, 21 (2008). Bodo, G., Mignone, A., Cattaneo, F., Rossi, P. and Ferrari, A., Astron. Astrophys. **487**, 1 (2008). Hayes, J. C., Norman, M. L., Fiedler, R. A., Bordner, J. O., Li, P. S., Clark, S. E., ud-Doula, A. and Mac Low, M. M., Astrophys. J.S **165**, 188 (2006). Fryxell, B., Olson, K., Ricker, P., Timmes, F. X., Zingale, M., Lamb, D. Q., MacNeice, P., Rosner, R., Truran, J. W. and Tufo, H., Astrophys. J. Supp. **131**, 273 (2000). Woodward, P. and Colella, P., J. Comp. 
Phys **54**, 115 (1984). Colella, P. and Woodward, P., J. Comp. Phys **54**, 174 (1984). Evans, C. R. and Hawley, J. F., Astrophys. J. **332**, 659 (1988). Kuncic, Z. and Bicknell, G. V., Astrophys. J. **616**, 669 (2004). Kuncic, Z. and Bicknell, G. V., Astrop. Sp. Sci. **311**, 127 (2007a). Kuncic, Z. and Bicknell, G. V., Mod. Phys. Lett. A **22**, 1685 (2007b). Jolley, E. J. D. and Kuncic, Z., Mon. Not. R. Astron. Soc. **386**, 989, (2008). Campbell, C. G., Mon. Not. R. Astron. Soc. **345**, 123 (2003). Vasyliunas, V. M., Rev. Geophys. Space Phys. **13**, 303 (1975). Biskamp, D., Phys. Rep. **237**, 179 (1993). Shakura, N. I. and Sunyaev, R. A., Mon. Not. R. Astron. Soc. **175**, 613 (1976). Alme, M. L. and Wilson, J. R., Astrophys. J **186**, 1015 (1973). Levermore, C. D. and Pomraning, G. C., Astrophys. J. **248**, 321 (1981). Castor, J. I., in Radiation Hydrodynamics (Cambridge: Cambridge Univ. Press) (2004). Turner, N. J. and Stone, J. M., Astrophys. J. Supp. **135**, 95 (2001). Turner, N. J., Astrophys. J. **606**, L45 (2004). Hirose, S. and Krolik, J. H, Astrophys. J. **640**, 901 (2006). Krolik, J. H., Hirose, S. and Blaes, O., Astrophys. J. **664**, 1045 (2007). Blaes, O., Hirose, S. and Krolik., J. H., Astrophys. J. **664**, 1057 (2007). Krumholz, M. R., Klein, R. I., McKee, C. F. and Bolstad, J., Astrophys. J. **667**, 626 (2007). [^1]: FLASH is freely available at http://flash.uchicago.edu. [^2]: In particular, triple correlation terms of the form $\langle t_{ij}v^{\prime}_j \rangle$ are assumed negligible compared to analogous correlations with the mean fluid velocity $\langle t_{ij} \rangle\tilde{v}^{\prime}_j$.
--- abstract: 'We present a theoretical analysis of spin relaxation, for a polarized gas of spin 1/2 particles undergoing restricted adiabatic diffusive motion within a container of arbitrary shape, due to magnetic field inhomogeneities of arbitrary form.' author: - 'M. Guigue' - 'R. Golub' - 'G. Pignol' - 'A. K. Petukhov' bibliography: - 'article.bib' title: 'Universality of spin-relaxation for spin 1/2 particles diffusing over magnetic field inhomogeneities in the adiabatic regime' --- Introduction ============ Hyperpolarized gas such as $^{3}$He is used in a wide variety of scientific and medical situations. In physics, $^{3}$He is commonly used as a spin-filter for neutrons [@Andersen2006], as a precision magnetometer [@Gemmel2010] and as a probe for new fundamental spin-dependent interactions [@Petukhov2010; @Fu2011; @Petukhov2011; @Bulatowicz2013; @Tullney2013]. In medicine, it is used for magnetic resonance imaging [@VanBeek2003a; @Thien2008; @Zheng2011b]. In certain conditions, the polarization lifetime of a $^{3}$He cell can be as long as a few weeks, which is a desirable feature in most of those applications. To reach and improve on low depolarization rates, all possible depolarization phenomena should be carefully controlled. One important depolarization channel is the relaxation induced by the motion of the polarized particles in an inhomogeneous magnetic field. This depolarization has been of ongoing interest since the pioneering work in the field [@Bloembergen1948; @Bouchiat1960; @Gamblin1965; @Schearer1965]. The polarization evolution is characterized by three quantities: the longitudinal and transversal depolarization rate $\Gamma_{1}$ and $\Gamma_{2}$ and the associated frequency shift $\delta\omega$. In many experimental setups, the Helium 3 polarized gas is contained in cells with typical sizes of $10\,\mathrm{{cm}}$ at pressure of the order of $1\,\mathrm{{bar}}$. 
In those cases, the gas is in the diffusive regime, where interparticle collisions are more frequent than wall collisions. While there are various theoretical approaches [@Grebenkov2007], we will treat the depolarization rate induced by magnetic field inhomogeneities using the results of the standard perturbation theory [@Redfield1965; @McGregor1990; @Goldman2001; @Lamoreaux2005b; @Petukhov2010]. This has been applied in various ways in the literature; the Redfield approach [@Redfield1965; @Slichter1963; @McGregor1990] starting with the equation of motion of the density matrix has been shown to be equivalent to the perturbation theory applied to the Torrey equation for the density matrix by expansion in eigenfunctions [@Cates1988; @Golub2010c], as well as equivalent to a perturbative solution of the same equation based on the Green’s function and a direct perturbative solution of the Schroedinger equation [@Golub2014]. Since the approach is perturbative, it is valid only for a limited range of appropriate parameters. The early theoretical treatments of the problem were limited to simple cell geometries (1D, cylindrical or spherical in [@McGregor1990]) and the magnetic field gradients were assumed uniform over the cell volume. However, everyday practice often has to deal with magnetic field inhomogeneities with a second or even higher order variation with position. The situation is even more complex if one uses cells with very special shapes (such as wide-angle “banana” cells [@Stewart2006]; see Fig. \[fig:cells\] for examples of cells used nowadays for polarized helium 3).
![Examples of cells used as polarized helium 3 containers.[]{data-label="fig:cells"}](wideanglecell.pdf "fig:") ![](weirdcell.pdf "fig:") ![](rectangularcell2.pdf "fig:") ![](cylindricalcell.pdf "fig:")

Very recently, a more general solution valid for an arbitrary magnetic field has been proposed in terms of a Fourier series expansion [@Petukhov2010] for a 1D diffusive motion. A 3D generalization for the case of an arbitrary magnetic field and a rectangular cell, valid for all values of the interparticle collision rate, is given in [@Clayton2011; @Swank2012]. In this paper we present simple analytical expressions for $\Gamma_{1}$ and $\delta\omega$ valid for spin 1/2 particles undergoing adiabatic restricted diffusive motion (the term “adiabatic” means that the holding magnetic field is high, $\omega_{0}\tau_{\rm{corr}}\gg 1$, where $\omega_{0}=\gamma B_{0}$ is the Larmor frequency and $\tau_{\rm{corr}}$ is the correlation time for the field fluctuations seen by the particles, and the spins approximately follow its local direction; this will be discussed in detail in Section IV) within a cell of arbitrary form and influenced by a magnetic field of arbitrary shape, using the Redfield theory involving correlation functions. The conditions necessary to apply the Redfield (perturbation) theory and our result will be clearly explained. Also an exact solution for power law ($b(z)\approx gz^{k}$, $k=1,2,3,4$) inhomogeneity is presented and compared with our approximate result.

General concepts of polarized gas relaxation
============================================

We will consider the case of an assembly of spin one-half particles with a gyromagnetic ratio $\gamma$ evolving in a slightly inhomogeneous magnetic field.
For a polarized gas at “normal conditions” contained in a typical 10 cm cell exposed to a weak (a few tenths of a Gauss) magnetic field with a tiny inhomogeneity across the cell, the field correlation time $\tau_{\rm{corr}}$ is approximately the time constant of the lowest diffusion mode, $\tau_{\rm{corr}} \approx \tau_D \approx R^2/(\pi^2 D) \approx 1\,\rm{s}$. The total magnetic field in which the gas is immersed is defined as $\vec{B}=B_{0}\vec{e}_{z}+\vec{b}$ where $\vec{b}=b_{x}\vec{e}_{x}+b_{y}\vec{e}_{y}+b_{z}\vec{e}_{z}$ only depends on the position. The magnetic field inhomogeneity corresponds to $\vec{b}$. We define $\omega_{0}=\gamma B_{0}$ and $\langle b_{i}\rangle=0$. To apply the Redfield theory for spin-relaxation in slightly inhomogeneous magnetic fields, the resulting relaxation time $T$ must be longer than the correlation time $\tau_{\rm{corr}}$ [@Redfield1965; @Goldman2001]. If one is looking for the longitudinal relaxation rate $\Gamma _1$, then this time $T$ is $T_1 =1/\Gamma _{1}$; for the frequency shift, it is equal to $1/\delta\omega$. This condition implies that the relaxation is slow enough to let the spins diffuse across the cell many times before they are strongly relaxed. Typical values for magnetic inhomogeneity gradients are $\frac{|\vec{\nabla}b|}{B_{0}}\approx10^{-3}$ to $10^{-4}\,\mathrm{{cm}^{-1}}$, leading to longitudinal spin-relaxation time constants from a few tens to a few thousand hours, so in practice the condition of applicability of Redfield (perturbation) theory is well fulfilled. The attractive feature of Redfield theory is the result that expresses the observables of interest $\Gamma _1$, $\Gamma _2$ and $\delta \omega$ in terms of the Fourier spectra of the corresponding field correlation functions.
$$\label{eq:long-relaxation-rate} \Gamma _1=\frac{1}{T_1}=\gamma ^2 \left(\mathcal{R}e \left[S_{xx}(\omega _0) + S_{yy}(\omega _0)\right] +\mathcal{I}m\left[ S_{yx}(\omega _0) -S_{xy}(\omega _0)\right]\right),$$ $$\label{eq:trans-relaxation-rate} \Gamma _2=\frac{1}{T_2} =\frac{1}{2} \Gamma _1 +\gamma ^2 S_{zz}(0),$$ $$\label{eq:frequency-shift} \delta \omega =\frac{\gamma ^2}{2}\left(\mathcal{R}e \left[ S_{xy}(\omega _0) - S_{yx}(\omega _0)\right] + \mathcal{I}m \left[ S_{xx}(\omega _0) +S_{yy} (\omega _0)\right]\right),$$ where $S_{ij}(\omega)$ is the Fourier transform or spectrum of the magnetic field correlation function defined as: $$S_{ij}(\omega)=\int_{0}^{\infty}\langle b_{i}(0)b_{j}(\tau)\rangle\exp (i\omega\tau)\mathrm{{d}\tau}.$$ The ensemble average of the variable $X$ is denoted by $\langle X\rangle$. The correlation function of $b_{i}$ and $b_{j}$ can be expressed as: $$\langle b_{i}(0)b_{j}(\tau)\rangle=\frac{1}{V}\int_{V}\mathrm{{d}}\vec{r}_{0}\int_{V}\mathrm{{d}}\vec{r}b_{i}(\vec{r}_{0})b_{j}(\vec{r})p(\vec{r},\tau\mid\vec{r}_{0})\label{defn:corr-functions},$$ where $V$ is the volume of the cell and $p(\vec{r},\tau\mid\vec{r}_{0})$ is the conditional probability (or propagator) for a particle which is at $\vec{r}_{0}$ at $\tau=0$ to be at $\vec{r}$ at time $\tau$. Moreover, $p(\vec{r},\tau\mid\vec{r}_{0})$ satisfies the initial condition: $$p(\vec{r},\tau=0\mid\vec{r}_{0})=\delta(\vec{r}-\vec{r}_{0} )\label{eq:initial-condition}.$$ As a consequence, we have: $$\langle b_{i}(0)b_{j}(0)\rangle=\frac{1}{V}\int_{V}\mathrm{{d}}{\vec{r}}b_{i}(\vec{r})b_{j}(\vec{r})=\overline{b_{i}b_{j}}. \label{def:volume-average}$$ $\overline{X}$ corresponds to the volume average of the quantity $X$.
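To make Eqs. (\[eq:long-relaxation-rate\])–(\[eq:frequency-shift\]) concrete, here is a numerical sketch (our own illustration, with assumed values) using the textbook model of an exponentially decaying correlation function, whose spectrum is a Lorentzian; in the adiabatic limit $\omega_0\tau_{\rm{corr}}\gg1$ it reproduces the scaling $\Gamma_1 \approx \langle b_\perp^2\rangle/(B_0^2\tau_{\rm{corr}})$ discussed later in the text:

```python
import numpy as np

gamma_he3 = 2.04e4   # rad s^-1 G^-1, 3He gyromagnetic ratio magnitude

def S_re(b2, tau_c, omega):
    """Re S(omega) for an assumed exponential correlation function
    <b(0)b(tau)> = b2*exp(-tau/tau_c): a Lorentzian spectrum."""
    return b2 * tau_c / (1.0 + (omega * tau_c)**2)

def Gamma1(b2x, b2y, tau_c, omega0, gamma=gamma_he3):
    # Eq. (long-relaxation-rate) with vanishing cross-correlations
    return gamma**2 * (S_re(b2x, tau_c, omega0) + S_re(b2y, tau_c, omega0))

B0, tau_c = 1.0, 1.0          # G, s (assumed, cf. the estimates above)
b = 1e-4 * B0                 # transverse inhomogeneity amplitude
omega0 = gamma_he3 * B0
G1 = Gamma1(b**2, b**2, tau_c, omega0)
T1_hours = 1.0 / G1 / 3600.0  # thousands of hours, as quoted above
```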
When the gas is in the diffusion regime (the mean free path between interparticle collisions is much shorter than the mean free path between wall collisions), the propagator is governed by the diffusion equation: $$\frac{\partial p(\vec{r},\tau\mid\vec{r}_{0})}{\partial\tau}=D\bigtriangleup p(\vec{r},\tau\mid\vec{r}_{0})\label{eq:diffusion},$$ where $D$ is the diffusion coefficient, which is inversely proportional to the pressure of the gas. This equation gives correct solutions for the time behavior of the propagator for times longer than $\tau_{\rm{coll}}$, the mean time between particle collisions. We only consider the relaxation due to restricted diffusive spin motion in a slightly inhomogeneous magnetic field. Any additional depolarization due to possible interactions of the spin with magnetic impurities in/on the cell walls is neglected. This leads to the boundary condition on the container walls [@Cates1988]: $$\vec{\nabla}p(\vec{r},\tau|\vec{r}_{0})\cdot \vec{n}=0\label{eq:boundaries},$$ where $\vec{n}$ is the vector normal to the surface. The shape of the magnetic field inhomogeneity $b$ must be taken into account in order to calculate the correlation functions (\[defn:corr-functions\]). For the simplest case of a uniform gradient and specific cell geometry (rectangular, spherical, cylindrical [@Cates1988; @McGregor1990; @Clayton2011]), it is commonly known that at high magnetic field and pressure, the longitudinal relaxation rate can be expressed as: $$\label{eq:commonlyknowresult} \Gamma_{1}=D\frac{|\vec{\nabla}b_{x}|^{2}+|\vec{\nabla}b_{y}|^{2}}{B_{0}^{2}}.$$ For higher orders of power law inhomogeneities and when the gas is confined in a rectangular cell, the solution has been obtained in the form of a Fourier series [@Petukhov2010; @Swank2012] (for details, see Appendix C). Similar behavior is observed for all power law inhomogeneities at high magnetic field and pressure (see Fig. 2).
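Plugging the typical numbers quoted earlier into Eq. (\[eq:commonlyknowresult\]) reproduces the quoted range of relaxation times (a back-of-envelope sketch; the diffusion coefficient is an assumed value for $^3$He at about 1 bar):

```python
D = 1.8     # cm^2/s, assumed 3He diffusion coefficient near 1 bar
B0 = 1.0    # G, holding field

def T1_hours(rel_grad):
    """T1 from Gamma1 = D*(|grad b_x|^2 + |grad b_y|^2)/B0^2,
    taking both transverse gradients equal to rel_grad*B0."""
    grad_b = rel_grad * B0
    Gamma1 = D * 2.0 * grad_b**2 / B0**2
    return 1.0 / Gamma1 / 3600.0

# |grad b|/B0 = 1e-3 cm^-1 gives T1 of order tens of hours,
# 1e-4 cm^-1 gives T1 of order thousands of hours
```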
But for more complicated cell geometries, the solution of the problem is quickly limited by one’s capability of finding an appropriate basis in which the propagator can be expanded.

![Behavior of normalized relaxation rates $\tilde{\Gamma}_{1}^{k}$ defined with Eq. (\[eq:gamma\_tild\]) for different power law magnetic field inhomogeneities depending on $\phi_{L}=\gamma B_{0}\frac{L^{2}}{\pi^{2}D}$.[]{data-label="fig:comb_plot"}](CombinedPlot-20140514.pdf)

Longitudinal relaxation rate in the diffusive adiabatic regime
==============================================================

From Eq. (\[eq:long-relaxation-rate\]), we see that the longitudinal relaxation rate $\Gamma_{1}$ is defined by the spectrum of the magnetic field correlation functions. Below, we show that for high frequencies the spectrum, and hence the relaxation rate, can be obtained in a closed form for arbitrary magnetic inhomogeneities and container shapes. Further on, we will be interested in time scales much longer than the typical time between particle collisions, $\tau_{\rm{coll}}$. For such times $\left\{ t_k \right\}$, the field $\vec{b}(t_k)$ experienced by the spin of a randomly moving particle forms a stationary stochastic process $\left\{ b_j (k)\right\}$. This assumption leads us to the time translational invariance of the field correlation function: $$\label{eq:timeinv_corrfunction} \langle b_j (t) b_j (t+\tau)\rangle = \langle b_j (0) b_j (\tau)\rangle .$$ In addition, for fields $\left\{b _i\right\}$ depending explicitly only on the position, the field correlation function is a real valued even function of time: $$\label{eq:timerev_corrfunction} \langle b_i (0) b_j(\tau)\rangle = \langle b_i (0) b_j (-\tau)\rangle,$$ (when one of the components $b_i$ depends linearly on the particle velocity, as is the case in the nEDM experiment [@Lamoreaux2005b], Eq. (\[eq:timerev\_corrfunction\]) is no longer valid). From Eq.
(\[eq:timeinv\_corrfunction\]) and (\[eq:timerev\_corrfunction\]) it immediately follows that the cross-correlation terms are equal: $$\langle b_i (0)b_j(\tau)\rangle = \langle b_j(0) b_i (-\tau)\rangle = \langle b_j (0)b_i (\tau)\rangle.$$ This allows us to drop the cross-correlation terms in Eq. (\[eq:long-relaxation-rate\]) and (\[eq:frequency-shift\]). By integrating Eq. (\[eq:long-relaxation-rate\]) twice by parts and using the fact that $\langle b_{i}(0)b_{i}(\tau)\rangle\rightarrow0$ (with $i=x,y,z$) for $\tau\rightarrow+\infty$, we can write: $$\label{eq:intermediate-equation} \Gamma _1=\frac{\gamma ^2 }{\omega _0 ^2}\left(-\frac{{\rm{d}}}{{\rm{d}}\tau}\langle b_x(0)b_x(\tau)+b_y(0)b_y(\tau)\rangle\mid _{\tau =0} - \int _0 ^{\infty} {\rm{d}}\tau {\cos \omega _0 \tau} \frac{d^2}{d\tau ^2}\langle b_x (0)b_x(\tau ) + b_y (0)b_y (\tau )\rangle \right).$$ We analytically treat the first term in Eq. (\[eq:intermediate-equation\]) (for details, see Appendix A) using the definition of the ensemble average, replacing the time derivative by the right side of the diffusion equation (\[eq:diffusion\]), using the Green-Ostrogradski theorem (also called the divergence theorem) [@Appel] and applying the initial and boundary conditions. The result is: $$\frac{\mathrm{{d}}}{\mathrm{{d}}\tau}\langle b_{i}(0)b_{i}(\tau)\rangle|_{\tau=0}=-D\overline{(\vec{\nabla}b_{i})^{2}}.$$ To estimate the second term of Eq. (\[eq:intermediate-equation\]), we applied the Riemann-Lebesgue lemma [@Appel] (see Appendix B), which says that for high frequency (adiabatic regime), it goes to zero faster than the first term. Finally we can write: $$\Gamma_{1}\approx D\frac{\overline{\left\vert \vec{\nabla}b_{x}\right\vert ^{2}}+\overline{\left\vert \vec{\nabla}b_{y}\right\vert ^{2}}}{B_{0}^{2}} \; . \label{eq:gamma1-Adiab-Diff}$$ Notice that we obtain a volume average and not an ensemble average. One can see that, for uniform gradients, our main result (\[eq:gamma1-Adiab-Diff\]) is equal to Eq.
(\[eq:commonlyknowresult\]). However, Eq. (\[eq:gamma1-Adiab-Diff\]) can also be applied to arbitrary shapes of cells and magnetic field inhomogeneities. One can carry out a similar procedure to obtain a volume average formula for the frequency shift. This time, integrating the second term in Eq. (\[eq:frequency-shift\]) only once by parts, we obtain: $$\label{eq:dw-intermadiate-equation} \delta\omega=\frac{\gamma^{2}}{2\omega_{0}}\left( \langle b_{x}(0)b_{x} (\tau)+b_{y}(0)b_{y}(\tau)\rangle|_{\tau=0}-\int_{0}^{\infty }\mathrm{{d}}\tau\cos{\omega_{0}\tau}\frac{\mathrm{{d}} }{\mathrm{{d}}\tau}\langle b_{x}(0)b_{x}(\tau)+b_{y}(0)b_{y}(\tau )\rangle\right) .$$ Again, according to the Riemann-Lebesgue lemma and the definition of the volume average (\[def:volume-average\]), the second term in Eq. (\[eq:dw-intermadiate-equation\]) goes to zero as $\omega _0$ goes to infinity, leading to the following result for the high frequency asymptote of the frequency shift: $$\delta\omega\approx\frac{\gamma^{2}}{2\omega_{0}}\left( \overline{b_{x}^{2} }+\overline{b_{y}^{2}}\right). \label{eq:FreqShift-Adiab}$$ In the derivation of Eq. (\[eq:FreqShift-Adiab\]), we did not use any propagator, meaning that it is valid not only for the diffusive regime but also for quasi-ballistic motion of the particles. The only hypothesis is that the correlation function $\langle b_{i}(0)b_{i}(\tau)\rangle$ goes to zero when $\tau$ tends to infinity. The remarkable feature of our results (\[eq:gamma1-Adiab-Diff\], \[eq:FreqShift-Adiab\]) is that they were obtained without any specific assumptions on the cell geometry and on the shape of the magnetic field. This universality of adiabatic diffusive motion explains why previous results for the longitudinal relaxation $\Gamma_{1}$ due to constant gradient magnetic fields are exactly the same for 1D models [@Petukhov2010], for a 3D rectangular cell [@Clayton2011], and for spheres [@Cates1988; @McGregor1990].
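The practical appeal of Eq. (\[eq:gamma1-Adiab-Diff\]) is that it only requires a volume average, which is trivial to evaluate numerically for any cell shape. As an illustration (our own sketch, with an assumed second-order inhomogeneity $b_x = a\,xz$, $b_y = 0$ and a spherical cell, for which the average is also known in closed form):

```python
import numpy as np

rng = np.random.default_rng(0)

a, R, D, B0 = 1e-4, 5.0, 1.8, 1.0   # assumed values (CGS-like units)

# |grad b_x|^2 = a^2*(x^2 + z^2); Monte Carlo volume average over a sphere
pts = rng.uniform(-R, R, size=(400000, 3))
pts = pts[np.sum(pts**2, axis=1) <= R**2]     # keep points inside the cell
grad2_mean = (a**2 * (pts[:, 0]**2 + pts[:, 2]**2)).mean()
Gamma1_mc = D * grad2_mean / B0**2

# closed form for the sphere: mean of x^2 + z^2 over the ball is 2R^2/5
Gamma1_exact = D * a**2 * (2.0 * R**2 / 5.0) / B0**2
```

The same rejection-sampling loop works unchanged for a “banana” cell or any other shape; only the inside-the-cell test changes.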
Validity domain of the result and discussion ============================================ Let us discuss the [domain of validity]{} of the main result (\[eq:gamma1-Adiab-Diff\]). First, the evolution equation (\[eq:diffusion\]) of the propagator is only valid at times much longer [than]{} the collision time between two particles $\tau_{{coll}}$. This means that for the corresponding spectrum, the frequency $\omega_{0}$ should satisfy $\omega _{0}\ll1/\tau_{coll}$. A more general theory of the correlation functions for arbitrary fields and any time scale (from quasi-ballistic to diffusive motion) may be found in [@Swank2012]. It is important to notice that the Riemann-Lebesgue lemma tells us neither how fast the second term in Eq. (\[eq:intermediate-equation\]) goes to zero with $\omega_{0}$, [nor]{} [the]{} critical frequency above which we can apply our result (\[eq:gamma1-Adiab-Diff\]). To address this last point, let us first look [at]{} the results shown in Fig. \[fig:comb\_plot\] (see also Appendix C). For a magnetic field perturbation $b(r)$ weakly dependent on the position within the cell (a low power of the power-law perturbing field, Appendix C), the exact result (\[eq:Gamma1k\]) tends to the asymptotic behavior (\[eq:gamma1-Adiab-Diff\]) when the [Larmor ]{}phase accumulated during the time required by the gas particles to diffuse across the cell, $\phi_{L}=\omega_{0}\tau_{D}$ ($\tau _{D}=\frac{L^{2}}{\pi^{2}D}$), is large: $\phi_{L}\gg1$. In the case of weakly position-dependent inhomogeneities, $L$ is equal to the typical cell size. This means that the spin diffusive motion is adiabatic everywhere within the cell.
A very similar behavior has been observed for a short-range exponential spin-dependent force $b(r)\approx b_{0}\exp (-r/\lambda)$ [@Petukhov2010; @Swank2012] ($\tau_{\lambda}=\frac{\lambda ^{2}}{\pi^{2}D}$) and for the magnetic field of a small hard-core dipole of radius $\rho$ placed inside a spherical cell of radius $R\gg\rho$ [@Petukhov] ($\tau_{\rho}=\frac{\rho^{2}}{\pi^{2}D}$). In those cases where the inhomogeneity is localized within a region of size $\lambda$ which is much less than the effective size of the cell, the typical spatial scale is not the cell size $L$ but the magnetic inhomogeneity size $\lambda$. We define the size of the inhomogeneity as the size of the region, $\lambda$, where the magnetic forces acting on the spins are significant. Notice that in all the above examples $\tau _{\rm{corr}} = \tau_{\rm{\lambda}}$. These observations allow us to formulate a general condition for our result to be valid: $$\frac{1}{\tau_{\lambda}}\ll\omega_{0}\ll\frac{1}{\tau_{\rm{coll}}}.\label{eq:intermediate-frequency-condition}$$ Finally, the requirement $\tau_{\rm{corr}}\ll T$ for applying the Redfield theory can be translated into requirements on the strength of the magnetic inhomogeneities. This step depends on the experimental conditions and the observable of interest [@Redfield1965; @Goldman2001]. The adiabatic regime $\omega_{0} \tau_{\rm{corr}}\gg1$, for which $1/T_{1}$ is approximately equal to $\frac{\langle b^2 \rangle }{B_0^2 \tau _{\rm{corr}}}$ [@Goldman2001], leads us to the condition $\sqrt{\langle b^2 \rangle}\ll \vert B_{0}\vert$. For a local perturbation, like the field from a small magnetic impurity, we expect a stronger limit on the strength of the perturbation: $|b| \ll \vert B_0\vert $. In the opposite limit, there will always be regions within the cell volume where the total field $\vec{B}=\vec{B_{0}}+\vec{b}$ passes through zero.
Within such regions the spin motion is no longer adiabatic and our main result is not applicable (an example of such a situation may be found in [@Schmiedeskamp2006]). Finally, the validity domain of our result may be formulated as follows: $$\label{eq:frequency-condition} \frac{1}{\tau_{\rm{\lambda}}} \ll \omega _0 \ll \frac{1}{\tau _{\rm{coll}}}\, \rm{and}\, |b| \ll \vert B_0\vert.$$ This combined condition guarantees the validity of the Redfield theory approach and the validity of the adiabatic limit everywhere within the cell. Conclusion ========== We considered the spin relaxation of gaseous particles due to their restricted diffusive motion in a cell of arbitrary geometry, exposed to a magnetic field with an arbitrary dependence on position. Applying the (perturbative) Redfield theory and the diffusion equation, we derived the asymptotic behavior (\[eq:gamma1-Adiab-Diff\]) of the longitudinal relaxation rate in the high frequency limit (\[eq:frequency-condition\]). This asymptotic behavior is found to be in excellent agreement with all previously known results [@Cates1988; @McGregor1990; @Petukhov2010; @Swank2012; @Petukhov]. We also derived a general high frequency asymptote for the frequency shift (\[eq:FreqShift-Adiab\]), regardless of the [type of ]{}motion (diffusive or ballistic) in the magnetic inhomogeneity. Due to the generality of our results, we expect that they will find a very broad spectrum of applications. Treatment of the leading term in Eq. (\[eq:intermediate-equation\]) =================================================================== We have to calculate the first term in Eq. (\[eq:intermediate-equation\]): $$\label{eq:A}A = \frac{\mathrm{{d}}}{\mathrm{{d}}\tau} \langle b_{i} (0)b_{i}(\tau)\rangle\vert_{\tau= 0}.$$ Using the correlation function definition given by Eq.
(\[defn:corr-functions\]) and the diffusion equation (\[eq:diffusion\]), one can write: $$\begin{aligned} A& = \frac{1}{V}\displaystyle\int_{V} \displaystyle\int_{V} \mathrm{{d}}{\vec{r}} \mathrm{{d}}\vec{r}_{0} b_{i}({\vec{r}})b_{i}({\vec{r}}_{0}) \frac{\mathrm{{d}}}{\mathrm{{d}}\tau}p({\vec{r}},\tau\vert{\vec{r}}_{0})\\ & = \frac{D}{V} \displaystyle\int_{V} \mathrm{{d}}{\vec{r}}_{0} b_{i}({\vec{r}}_{0})\int_{V} \mathrm{{d}} {\vec{r}} b_{i}({\vec{r}}) \Delta p(\vec{r},\tau\vert\vec{r}_{0}).\end{aligned}$$ The Green-Ostrogradski theorem gives the following relations:$$\begin{aligned} \label{eq:GO1}\int_{V} \left( \vec{F}\cdot{\vec{\nabla}}g + g(\vec{\nabla}\cdot\vec{F})\right) \mathrm{{d}}\vec{r} & = \displaystyle\oint _{S} g{\vec{F}}\cdot\mathrm{{d}}{\vec{S}},\\ \label{eq:GO2} \int_{V} \left( f\vec{\nabla}^{2} g + \vec{\nabla}f\cdot\vec{\nabla}g\right) \mathrm{{d}} \vec{r} & = \displaystyle\oint _{S} f\vec{\nabla}g\cdot\mathrm{{d}}\vec{S}.\end{aligned}$$ Denoting the inner integral over $\vec{r}$ by $B(\tau)$ and applying Eq. (\[eq:GO1\]), we have:$$\begin{aligned} B (\tau ) & =& \displaystyle\int_{V} \mathrm{{d}} \vec{r} b_{i}(\vec{r}) \vec{\nabla}\cdot\left( \vec{\nabla}p(\vec{r},\tau\vert\vec{r}_{0})\right) \\ & =& \displaystyle\oint _{S} b_{i}(\vec{r}) \vec{\nabla}p(\vec{r},\tau\vert\vec{r} _{0})\cdot\mathrm{{d}}\vec{S} - \displaystyle\int_{V} \mathrm{{d}}\vec{r}\vec{\nabla}b_{i}(\vec{r})\cdot\vec{\nabla}p(\vec{r},\tau\vert\vec{r}_{0}).\end{aligned}$$ Using the boundary conditions given by Eq. (\[eq:boundaries\]) and then using Eq.
(\[eq:GO2\]), we obtain:$$\begin{aligned} B (\tau ) & =& -\displaystyle\int_{V} \mathrm{{d}} \vec{r} \vec{\nabla}b_{i}(\vec{r})\cdot \vec{\nabla}p(\vec{r},\tau\vert\vec{r}_{0})\\ & =& - \displaystyle\oint _{S} p(\vec{r},\tau\vert\vec{r}_{0}) \vec{\nabla}b_{i}(\vec {r})\cdot\mathrm{{d}}\vec{S}\\ & +& \displaystyle\int_{V} \mathrm{{d}} \vec{r} p(\vec{r},\tau\vert\vec{r}_{0})\Delta b_{i}(\vec{r}).\end{aligned}$$ For $\tau= 0$, the initial condition (\[eq:initial-condition\]) gives:$$\begin{aligned} B (\tau =0) & =& -\displaystyle\oint _{S} \delta(\vec{r}-\vec{r}_{0})\vec{\nabla}b_{i}(\vec {r})\cdot\mathrm{{d}}\vec{S}\\ & +& \displaystyle\int_{V} \mathrm{{d}}\vec{r} \delta(\vec{r}-\vec{r}_{0})\Delta b_{i}(\vec{r})\\ & =& -\displaystyle\oint _{S} \delta(\vec{r}-\vec{r}_{0})\vec{\nabla}b_{i}(\vec{r})\cdot\mathrm{{d}}\vec{S} + \Delta b_{i}(\vec{r}_{0}),\end{aligned}$$ leading to:$$\begin{aligned} A & = \frac{D}{V}\displaystyle\int_{V} \mathrm{{d}} \vec{r}_{0} b_{i}(\vec{r}_{0})\left( -\displaystyle\oint _{S} \delta(\vec{r}-\vec{r}_{0})\vec{\nabla}b_{i}(\vec {r})\cdot\mathrm{{d}}\vec{S}\right) \\ & + \frac{D}{V}\displaystyle\int_{V} \mathrm{{d}} \vec{r}_{0} b_{i}(\vec{r}_{0}) \Delta b_{i}(\vec{r}_{0})\end{aligned}$$ $$\label{eq:one}= -\frac{D}{V} \displaystyle\oint _{S} b_{i}(\vec{r})\vec{\nabla}b_{i}(\vec{r})\cdot\mathrm{{d}}\vec{S} + \frac{D}{V}\int_{V} \mathrm{{d}} \vec{r}_{0} b_{i}(\vec{r}_{0}) \Delta b_{i}(\vec{r}_{0}).$$ Another use of Eq. (\[eq:GO2\]) on the second term on the right-hand side of Eq.
(\[eq:one\]) leads to:$$\begin{aligned} A & = -\frac{D}{V} \displaystyle\oint _{S} b_{i}(\vec{r})\vec{\nabla}b_{i}(\vec {r})\cdot\mathrm{{d}}\vec{S} + \frac{D}{V}\displaystyle\oint _{S} b_{i}(\vec{r})\vec{\nabla}b_{i}(\vec{r})\cdot\mathrm{{d}}\vec{S}\\ & -\frac{D}{V}\displaystyle\int_{V} \mathrm{{d}}\vec{r}_{0} \vec{\nabla}b_{i}(\vec{r}_{0})\cdot\vec{\nabla}b_{i}(\vec{r}_{0}).\end{aligned}$$ Finally, we can write: $$\frac{\mathrm{{d}}}{\mathrm{{d}}\tau}\langle b_{i}(0)b_{i}(\tau)\rangle|_{\tau=0}=-D\overline{\left\vert \vec{\nabla}b_{i}\right\vert ^{2}}\label{eq:premier-terme-resultat}.$$ Treatment of the second term in Eq. (\[eq:intermediate-equation\]) ================================================================== Consider the second term in Eq. (\[eq:intermediate-equation\]). This term corresponds to the cosine transform of the second-order time derivative of the field correlation function, divided by $\omega _{0}^{2}$. To estimate the high-frequency behavior of this term, we apply the Riemann-Lebesgue lemma, which states that the cosine transform of an integrable function goes to zero as $\omega_{0}$ goes to $\infty$. Therefore, we have to estimate the following integral: $$I=\int_{0}^{\infty}\mathrm{{d}}t|\frac{\mathrm{{d}}^{2}}{\mathrm{{d}}t^{2} }\langle b_{i}(0)b_{i}(t)\rangle|.$$ Now, performing the integration formally, we can write: $$I=\frac{\mathrm{{d}}}{\mathrm{{d}}t}\langle b_{i}(0)b_{i}(t)\rangle |_{t=0,\infty}.$$ $\langle b_{i}(0)b_{i}(t)\rangle$ is an auto-correlation function which describes a diffusive motion and, hence, it goes to zero together with its time derivative as $t\rightarrow\infty$. Finally, we can write: $$I=-\frac{\mathrm{{d}}}{\mathrm{{d}}t}\langle b_{i}(0)b_{i}(t)\rangle |_{t=0}\label{eq:second-term}.$$ We see that Eq. (\[eq:second-term\]) is the same expression as the one calculated in Eq. (\[eq:premier-terme-resultat\]).
Therefore, our function $\frac{\mathrm{{d}}^{2}}{\mathrm{{d}}t^{2}}\langle b_{i}(0)b_{i}(t)\rangle$ is integrable and, according to the Riemann-Lebesgue lemma, its cosine transform goes to zero when $\omega_{0}$ goes to $\infty$. Hence, the second term in Eq. (\[eq:intermediate-equation\]) decays faster than the first one as $\omega_{0}$ goes to $\infty$. Exact solution of spin-relaxation problem due to 1D diffusive motion in inhomogeneous field of power law shape ============================================================================================================== In this section, we will consider spin relaxation of spin 1/2 particles due to restricted one-dimensional (1D) diffusive motion, $z\in\left[ -L/2,L/2\right] $, in a magnetic field $\vec{B}(z)=B_{0}\vec{e}_{z}+\vec{b}(z)$ where $B_{0}$ is the homogeneous part and $\vec{b}(z)=b_{x}(z)\vec{e}_{x} +b_{y}(z)\vec{e}_{y}+b_{z}(z)\vec{e}_{z}$ with $\left\Vert \vec{b}(z)\right\Vert \ll \vert B_{0}\vert$. Instead of considering 3D geometries, we consider a 1D problem with only one component $b(z)$. Then to obtain the final result, we will need to make a proper combination of the components. To be more specific, we will consider the perturbation field $b(z)$ which has the form of a power law: $$b(z) = b_{0} \left( z^{k} - \frac{1}{L} \int_{-L/2} ^{L/2} z^{k} \mathrm{{d}}z\right),$$ with $k=1,2,3,4$. Notice that with this definition, the volume average of the perturbation is zero. This type of perturbation is of great practical importance since many ideal magnetic systems designed to create homogeneous fields have the dominant inhomogeneity component of this shape, with $k = 2$. Moreover, for the popular Helmholtz pair of DC coils or an end-compensated solenoid one can expect $k=4$. Imperfections in the wiring will give terms with $k=1,3,\ldots$. The solution of the problem for $k=1$ is well known and will be given here for completeness.
A general expression for longitudinal relaxation $\Gamma_{1}$ due to 1D diffusive motion reads [@Petukhov2010; @Clayton2011; @Swank2012]: $$\label{eq:gamma1_1D} \Gamma _1 = 2\gamma ^2 \left( \sum _{n=0} ^{\infty} \frac{\tau _{2n+1}}{1+(\omega_0 \tau_{2n+1})^2}\vert b_{2n+1} \vert ^2 + \sum _{n=1} ^{\infty} \frac{\tau _{2n}}{1+(\omega_0 \tau_{2n})^2}\vert b_{2n} \vert ^2 \right),$$ where $\tau_{n} =\frac{\tau_{0}}{n^{2}}$ is the time constant for the $n^{\mathrm{{th}}}$ order diffusive mode, $\tau_{0} = \frac{L^{2}}{\pi^{2} D}$ the time constant for the lowest mode and $D$ the diffusion coefficient. The $b_{2 n+1}$ and $b_{2 n}$ represent the odd and even Fourier components of the field $b(z)$: $$\begin{aligned} b_{2n} & =\frac{1}{L}\int_{-L/2}^{L/2}b(z)\cos\frac{2n\pi z}{L}\mathrm{{d}}z,\\ b_{2n+1} & =\frac{1}{L}\int_{-L/2}^{L/2}b(z)\sin\frac{(2n+1)\pi z}{L}\mathrm{{d}}z.\end{aligned}$$ Depending on the symmetry (odd, even) of $b(z)$, only one of the sums in Eq. (\[eq:gamma1\_1D\]) contributes to the relaxation.
With [these]{} remarks, the calculation of the relaxation rate is straightforward: $$\label{eq:Gamma1-k} \Gamma_{1}^{k}=2\gamma^{2}\tau_{0}\langle g_{k}^{2}\rangle L^{2}S_{k},$$ where $\langle g_{k}^{2}\rangle=\frac{1}{L}\int_{-L/2}^{L/2}\left( \frac{\mathrm{{d}}b(z)}{\mathrm{{d}}z}\right) ^{2}\mathrm{{d}}z$ and $$\begin{aligned} \label{eq:S1} S_1 &=& \frac{1}{2\pi ^2 \phi_L ^2}\left( 1-\frac{1}{\pi \sqrt{\phi _L /2}}\frac{\sinh \pi\sqrt{\phi _L/2}+\sin \pi\sqrt{\phi _L/2}}{\cosh \pi\sqrt{\phi _L/2} + \cos \pi\sqrt{\phi _L/2}}\right),\\ \label{eq:S2} S_2 &=& \frac{1}{2\pi ^2 \phi_L ^2}\left( 1-\frac{3}{\pi \sqrt{\phi _L /2}}\frac{\sinh \pi\sqrt{\phi _L/2}-\sin \pi\sqrt{\phi _L/2}}{\cosh \pi\sqrt{\phi _L/2} - \cos \pi\sqrt{\phi _L/2}}\right),\\ \label{eq:S3}S_3 &=& \frac{1}{2\pi ^2 \phi _L ^2}\left( 1- \frac{320}{\pi ^4 \phi _L^2}-\frac{5\sqrt{2}}{\pi ^5 \phi _L ^{5/2}}\frac{ (\pi ^4\phi _L ^2-16\pi ^2\phi _L-64)\sinh \pi\sqrt{\phi _L/2} + (\pi ^4\phi _L^2 +16\pi ^2\phi _L-64)\sin \pi\sqrt{\phi _L/2}}{\cosh \pi\sqrt{\phi _L/2} + \cos \pi\sqrt{\phi _L/2}}\right),\\ \label{eq:S4}S_4 &=& \frac{1}{2\pi ^2 \phi _L ^2}\left( 1- \frac{2688}{\pi ^4 \phi _L^2}-\frac{7\sqrt{2}}{\pi ^5 \phi _L ^{5/2}}\frac{ (\pi ^4\phi _L ^2-48\pi ^2\phi _L-576)\sinh \pi\sqrt{\phi _L/2} - (\pi ^4\phi _L^2 +48\pi ^2\phi _L-576)\sin \pi\sqrt{\phi _L/2}}{\cosh \pi\sqrt{\phi _L/2} + \cos \pi\sqrt{\phi _L/2}}\right).\end{aligned}$$ Here $\phi_{L} = \gamma B_{0} \frac{L^{2}}{\pi^{2} D}$ is the Larmor phase accumulated by the particle spin during the time required to diffuse over the distance $L$.
In the adiabatic regime, when $\phi_{L} \gg1$, the hyperbolic functions dominate over the trigonometric ones, which allows us to write:$$\label{eq:Gamma_1_k-result}S_{k} \approx\frac{1}{2\pi^{2} \phi_{L}^{2}}\left( 1+ O\left( 1/\sqrt{\phi_{L}}\right) \right),$$ and for $k=1,2,3,4$, the longitudinal relaxation rate may be written as: $$\label{eq:Gamma1k}\Gamma_{1} ^{k} = D\frac{\langle g_{k} ^{2}\rangle}{B_{0}^{2}}\left( 1 + O\left( 1/\sqrt{\phi_{L}}\right) \right).$$ The normalized frequency dependence $$\tilde{\Gamma}_{1}^{k}=\frac{\Gamma_{1}^{k}}{\gamma^{2}\tau_{0}\langle g_{k}^{2}\rangle L^{2}}\label{eq:gamma_tild}$$ of the longitudinal relaxation rate for different kinds of inhomogeneities ($k=1,2,3,4$) is shown in Fig. \[fig:comb\_plot\]. Since the transverse relaxation rate (\[eq:trans-relaxation-rate\]) is given by: $$\label{eq:Gamma2_phiL} \Gamma_{2}(\phi_{L})=\frac{1}{2}\Gamma_{1}(\phi_{L})+\gamma ^2 S_{zz}(0),$$ and $S_{ii}(\phi _L) \ll S_{ii} (0)$, we obtain $\Gamma _2 \approx \gamma ^2 S_{zz} (0)$ in the adiabatic limit. Eqs. (\[eq:Gamma1-k\]) and (\[eq:Gamma2\_phiL\]) and Fig. \[fig:comb\_plot\] imply that the spectrum of the field correlation functions, and hence $T_2$, is independent of the magnetic field strength $B_0$ but depends strongly on the shape of the inhomogeneities, the cell size and geometry.
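The convergence of the mode sum (\[eq:gamma1\_1D\]) to the adiabatic asymptote can be checked numerically. The sketch below is illustrative and not from the paper: it assumes units $\gamma = D = L = 1$ and a linear perturbation $b(z) = Gz$ (the $k=1$ case up to normalization), whose odd Fourier components are $b_{2n+1} = 2GL(-1)^n/((2n+1)\pi)^2$, and compares the truncated sum with $D\langle g_1^2\rangle/B_0^2 = DG^2/B_0^2$ deep in the adiabatic regime.

```python
import math

# Sketch (not from the paper): 1D mode sum for Gamma_1, Eq. (gamma1_1D),
# with a linear perturbation b(z) = G*z; units gamma = D = L = 1 assumed.
# b(z) is odd, so only odd modes m = 2n+1 contribute, with
# |b_m|^2 = (2*G*L / (m*pi)^2)^2.
gamma, D, L, G = 1.0, 1.0, 1.0, 1.0
tau0 = L**2 / (math.pi**2 * D)           # time constant of the lowest mode

def gamma1_mode_sum(omega0, nmax=20000):
    total = 0.0
    for n in range(nmax):
        m = 2 * n + 1
        tau_m = tau0 / m**2
        bm2 = (2.0 * G * L / (m * math.pi) ** 2) ** 2
        total += tau_m / (1.0 + (omega0 * tau_m) ** 2) * bm2
    return 2.0 * gamma**2 * total

phi_L = 1.0e4                             # deep adiabatic regime, phi_L >> 1
omega0 = phi_L / tau0                     # omega0 = gamma*B0, so B0 = omega0 here
asymptote = D * G**2 / omega0**2          # Eq. (gamma1-Adiab-Diff) for this field
print(gamma1_mode_sum(omega0) / asymptote)  # close to 1; correction ~ 1/sqrt(phi_L)
```

The ratio approaches 1 from below as $\phi_L$ grows, consistent with the $O(1/\sqrt{\phi_L})$ correction in Eq. (\[eq:Gamma1k\]).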
--- bibliography: - 'thesis.bib' title: '**Search for relic neutralinos with Milagro**' ---
--- abstract: | We prove a result about the reducibility behaviour of Thue polynomials over the rationals that was conjectured in [@M]. Special cases have been proved e.g. by Müller in [@M], Theorem 4.9, and Langmann ([@L], Satz 3.5).\ The proof uses ramification theory to reduce the assertion to a statement about permutation groups containing an $n$-cycle. This statement is finally proven with the help of the classification of primitive permutation groups containing an $n$-cycle (a result which rests on the classification of finite simple groups). author: - 'J. König[^1]' title: On the reducibility behaviour of Thue polynomials --- \[thm\][Lemma]{} [***Keywords:*** Polynomials; Hilbert’s irreducibility theorem; Siegel functions; Permutation groups]{} Introduction and statement of the main theorem ============================================== The famous Hilbert irreducibility theorem states that if $K$ is a number field and $f(t,X) \in K(t)[X]$ is an irreducible polynomial, then there are infinitely many specializations $t \mapsto t_0 \in K$ such that $f(t_0,X)$ remains irreducible (and one can even demand that the $t_0$ be integers of $K$). A related question is whether there are also infinitely many integer specializations such that $f(t_0,X)$ becomes reducible. This question is linked to Siegel’s theorem about integral points on algebraic curves.\ In many cases one can obtain finiteness results for the set of reducible specializations. This was done e.g. in [@M]. We refer to this paper for some background on the role of Siegel’s theorem, as well as for the basics of ramification theory that will be used here. The goal of this article is to prove the following theorem, which was conjectured in [@M Conjecture 4.10]: \[T1\] Let $H(t,X) \in \mathbb{Q}[t,X]$ be a homogeneous polynomial of degree $n$, not divisible by $t$, and not a proper power over $\overline{\mathbb{Q}}$.
Then one of the following holds: - $H(t_0,X) - 1$ becomes reducible for only finitely many $t_0 \in \mathbb{Z}$ - $n \in \{2,4\}$. [**Remark:**]{}\ Note that the second case cannot be excluded, cf. [@M]:\ There the examples $H(t,X) = X^2-dt^2$ and $H(t,X) = -4dX^2(dX^2-t^2)$, $d>1$ a square-free integer, are given. The Galois groups of the polynomials $H(t,X)-1$ over $\overline{\mathbb{Q}}(t)$ are $C_2$ and $D_4$ respectively; and we will recognize this observation again in the course of the proof. Some results about permutation groups {#permgroups} ===================================== To prove Theorem \[T1\], we will need to know about the primitive groups containing a cyclic transitive subgroup. These groups have been classified (using the classification of finite simple groups) by Feit ([@F], 4.1) and Jones ([@J]), with the following result: \[feit\] Let $G\leq S_n$ be a primitive group containing a cyclic transitive subgroup. Then one of the following holds: - $n=p \in \mathbb{P}$, and $C_p \leq G \leq AGL_1(p)$ (where $AGL_1(p)$ is the normalizer of $C_p$ in the symmetric group, of order $p(p-1)$). - $G = A_n$ (for $n$ odd) or $S_n$. - $n = \frac{q^d-1}{q-1}$ with $d \geq 2$ and $q$ a prime power, $PSL_d(q) \leq G \leq P\Gamma L_d(q)$. - $n = 11$, $G = PSL_2(11)$ in its action on 11 points. - $G = M_{11}$ or $M_{23}$ in the natural action. For our situation, we need the following lemma, which follows easily from the above classification result: \[cyclic\] Let $G\leq S_n$ be a primitive group containing a cyclic transitive subgroup, and $N \trianglelefteq G$. - If $G/N$ is cyclic of order $k\geq n$, then $k=n$ is prime and $G = C_n$. - If $G/N \cong C_{n/2}$, then $G = C_2$ or $G = S_4$. The only cases worth some consideration come from the (not always cyclic) factor $P\Gamma L_d(p^r)/PSL_d(p^r)$. This factor has order $r\cdot(d,p^r-1)$, and $\frac{n}{2} = \frac{p^{rd}-1}{2(p^r-1)} > \frac{1}{2} p^{r(d-1)}$.
But for this to be smaller than $r\cdot (d,p^r-1)$, we need $d=2$, $r=1$ and $p\leq3$, which leaves only the groups $PGL_2(3) = S_4$ and $PGL_2(2) = S_3$. We now deduce a result that will help prove Theorem \[T1\], but may also be of interest in itself. We therefore state it in more generality than will later be needed in the proof of Theorem \[T1\]. \[group\] Let $G \le S_n$ be a finite permutation group generated by a cyclic transitive subgroup $\langle\tau\rangle$ and a normal transitive subgroup $N$. Then the following hold: - $|N\cap \langle\tau \rangle|\ge 2$. - If $|N\cap \langle\tau \rangle|=2$, then $G$ is of the form $((..(C_{p_1}^{k_1}.C_{p_2}^{k_2})...).C_{p_m}^{k_m}).\tilde{G}$,[^2] where $\tilde{G} \in \{C_2, S_4\}$, $\prod{p_i} = \frac{n}{2}$ or $\frac{n}{4}$ respectively, $k_i \in \mathbb{N}$ (and the $p_i$ are primes). In particular $G$ does not contain any element of order larger than $n$. i). Assume $N\cap \langle \tau \rangle = \{1\}$. As $G/N$ is abelian (even cyclic of order $n$), we have $g\tau g^{-1}\tau^{-1}\in N$ for all $g\in G$. In particular, $\tau^G \subset N\tau$. But also $C_G(\tau) = \langle \tau\rangle$ (an $n$-cycle is self-centralizing even in all of $S_n$), so $\tau^G$ must be of cardinality $|N|$, i.e. $\tau^G = N\tau$. Denote by $G_1$ a point stabilizer in $G$. By the transitivity of $N$, the stabilizer $N_1:=G_1\cap N$ has index $n$ in $G_1$. Let $g_1,...,g_n$ be a set of coset representatives of $N_1$ in $G_1$. As $g_iN=g_jN$ already implies $g_i^{-1}g_j\in N\cap G_1 = N_1$, the elements $g_1,...,g_n$ are also a set of coset representatives for $N$ in $G$. In particular, $G_1$ must intersect every coset of $N$, which is impossible, as the coset of $\tau$ consists entirely of fixed-point-free elements, namely $n$-cycles. This is the desired contradiction. ii).
Assume now that $|N\cap \langle \tau \rangle| = 2$.\ From the previous lemma it follows that if $G$ is primitive, then $G = C_2$ or $G=S_4$, and these groups certainly contain no element of order larger than the degree of $G$. Now let $G$ be imprimitive, $P$ be a minimal partition of $\{1,...,n\}$ into $G$-blocks (i.e. the action of a block stabilizer on a single block is primitive) and let $K\trianglelefteq G$ be the kernel of the action of $G$ on the blocks. Denote by $\tilde{n}$ the number of blocks in $P$ and by $n'$ the length of a block. So $G/K$ acts transitively on $\tilde{n}$ points, and $n=n'\tilde{n}$.\ Now $NK/K$ is a transitive normal subgroup of $G/K$, with cyclic factor group $\langle \tau\rangle/(\langle \tau\rangle \cap NK)$, which has order a divisor of $\tilde{n}$.\ Now if $|K/(N \cap K)| < n'$, then $NK/N (\cong K/(N\cap K))$ would be cyclic of order less than $n'$, so $G/NK$ would be cyclic of order larger than $\frac{\tilde{n}}{2}$. But this is impossible because $G/NK = (G/K)/(NK/K)$, where $G/K$ is generated by the transitive normal subgroup $NK/K$ and the $\tilde{n}$-cycle $\langle \tau\rangle K/K$, and we have already seen in i) that these two subgroups must intersect at least in a subgroup of order 2.\ \ We are going to prove, however, that $|K/(N \cap K)| > n'$ is impossible, and $|K/(N \cap K)| = n'$ only if $K$ is an elementary abelian group (of exponent $n'$).\ \ So consider the cyclic group $K/(N\cap K)$. By minimality of the partition $P$, the image of a block stabilizer in the action on a single block is primitive; furthermore the image of $K$ in this action is a transitive normal subgroup (as it contains a cyclic transitive subgroup), and therefore this image, let us call it $H$, is also primitive, as the list in Theorem \[feit\] shows. So $K$ embeds into a direct product of copies of a primitive group $H$ of degree $n'$, containing an $n'$-cycle.
(Also, by transitivity the image $H$ is independent of the chosen block, so $K$ is in fact a subdirect product.) We will discuss the different possibilities for $H$. [*Case 1: $H$ non-solvable.*]{}\ In this case we obtain (using Theorem \[feit\] to get the isomorphism types for $H$) that $K$ is the extension of a (solvable) group of exponent at most $\frac{|H|}{|soc(H)|}$ by a direct product of non-abelian simple groups. It is clear that this group cannot have a cyclic factor larger than $\frac{|H|}{|soc(H)|}$, and by Lemma \[cyclic\] this factor is always smaller than $\deg(H)$.\ That leaves $H=S_4$ or $H \leq AGL_1(p)$. [*Case 2: $C_p \leq H \leq AGL_1(p)$.*]{}\ Here, $K$ is the (split) extension of an abelian group $A$ of exponent at most $p-1$ by an elementary-abelian $p$-group $P$. If this group $K$ had a normal subgroup with cyclic factor group of order at least $p$, then in particular it would have one of index $p$ (as factoring out only from $A$ can never yield cyclic factors larger than $p-1$). But then $K'$ is a proper subgroup of $P$.\ Now assume furthermore that $H \ne C_p$. Then $Z(K)=\{1\}$ (as even the image of the projection to a single component has trivial center). Also, as a special case of a classical theorem by Gaschütz ([@Ga]), $K$ splits over the normal $p$-subgroup $K'$. Let $U$ be a complement. Then $U$ is abelian and contains an element $x$ of order $p$ (i.e. $x\in P$). But then $x\in Z(K)$, a contradiction. So $K$ can only have cyclic factors of order smaller than $p$, unless $H=C_p$ (and in this case there are certainly no cyclic factors larger than $p$). [*Case 3: $H=S_4$.*]{}\ Here, $K$ is the extension of a subgroup of $S_3^{\tilde{n}}$ (not contained in $C_3^{\tilde{n}}$!) by an elementary abelian $2$-group $P$.
First we look for cyclic factors of the subgroup $K/P$ of $S_3^{\tilde{n}}=AGL_1(3)^{\tilde{n}}$; but this subgroup has a structure just like the groups considered in the previous case, so we already know that there are no cyclic factors of order larger than 2. So if $K$ had any normal subgroup $N$ with cyclic factor group of order larger than 2, then $N$ could not contain $P$, so $U := K \cap A_4^{\tilde{n}}$ would have a normal subgroup $N\cap U$ (obviously not containing $P$ any more!) with cyclic quotient, i.e. $U' < P$. But one proves $U'=P$ just as in the $AGL_1(p)$-case. So we have proven, under the assumptions of ii), that $H=C_p$ (so $K$ is elementary-abelian), and the assertion now follows by induction, because if the factor group $G/K$ contains no element of order larger than its degree, then the analogous statement holds in $G$. Proof of Theorem \[T1\] ======================= We are now ready for the proof of the main theorem.  \ [*First step: Reduction to group theory via ramification theory*]{}\ We first follow [@M] (Proof of Theorems 4.9 and 4.6) in reducing the problem to a group-theoretic one:\ $H(t,X)$ is not a proper power over $\overline{\mathbb{Q}}$, therefore by elementary transformations one sees that $H(t,X) - 1$ is absolutely irreducible. Assume there are infinitely many $t_0 \in \mathbb{Z}$ such that $H(t_0,X) - 1$ becomes reducible. Then according to [@M], Prop. 2.1, there is a rational function $g(Z)\in \mathbb{Q}(Z)$ such that the following hold: - $g$ is a $\mathbb{Z}$-Siegel function, that is, the set $g(\mathbb{Q}) \cap \mathbb{Z}$ is infinite. - $H(t,X)-1$ is reducible over $\mathbb{Q}(z)$, where $z$ is a root of $g(Z)-t$. Write $H(t,X) - 1 = t^n H_2(\frac{X}{t}) - 1$. Substituting $\frac{X}{t}$ by $X$ and denoting by $z$ a zero of $g(Z)-t$, we get that the polynomial $f(X):= t^n H_2(X) - 1$ is irreducible over $\overline{\mathbb{Q}}(t)$, whereas it becomes reducible over $\overline{\mathbb{Q}}(z)$.
Let $x$ be a root of $f$ over $\overline{\mathbb{Q}}(t^n)$, let $\overline{\mathbb{Q}}(y)$ be a minimal intermediate field of $\overline{\mathbb{Q}}(z)|\overline{\mathbb{Q}}(t^n)$ over which $f$ is reducible, and assume without loss that $y$ is contained in the splitting field of $f$ over $\overline{\mathbb{Q}}(t^n)$. Set $G:=Gal(f|\overline{{{\mathbb{Q}}}}(t^n))$. $G$ acts on the roots of $f$ as a transitive subgroup of $S_n$. Let $\tau \in G$ be the generator of an inertia subgroup of a place extending $t^n\mapsto 0$. Similarly, let $\sigma\in G$ be the generator of an inertia subgroup of a place extending $t^n\mapsto \infty$.\ As $\frac{1}{H_2(x)} = t^n$, the place $t^n \mapsto 0$ is fully ramified (of ramification index $deg(H_2) = n$) in $\overline{\mathbb{Q}}(x)$, i.e. the corresponding inertia subgroup generator $\tau$ is an $n$-cycle.\ Furthermore, $H_2$ is not a proper power, so if we denote by $n_1,...,n_r$ the multiplicities of the zeros of $H_2$, we get $\gcd(\{n_1,...,n_r\}) = 1$. The $n_i$ are of course also the cycle lengths of $\sigma$ (see e.g. Lemma 3.1 in [@M]).\ \ Now consider the ramification in $\overline{\mathbb{Q}}(y)|\overline{\mathbb{Q}}(t^n)$. As $g(z)^n = t^n$ and a $\mathbb{Z}$-Siegel function has either one pole or two algebraically conjugate poles (see [@Lg], 8.5.1), the inertia subgroup corresponding to a place of $\overline{\mathbb{Q}}(z)$ lying over $t^n \mapsto \infty$ is generated by an element consisting of at most two cycles of equal length. The same therefore holds for the places of $\overline{\mathbb{Q}}(y)$ lying over $t^n \mapsto \infty$. Let $m$ be the cycle length in the latter field, i.e. the inertia group generator here has cycle structure $(m)$ or $(m,m)$, or in other words, $\sigma$ has cycle structure $(m)$ or $(m,m)$ in the action on $G/G_y$.\ \ Let $u$ be an orbit length of the stabilizer $G_x$ in its action on $G/G_y$.
As a conjugate of $\sigma^{n_i}$ lies in $G_x$, and $\sigma^{n_i}$ has orbits of length $\frac{m}{(m,n_i)}$, we get that $\frac{m}{(m,n_i)}$ divides $u$, for all $i$. I.e., $m$ divides all $u \cdot (m,n_i)$, and therefore also the greatest common divisor of these terms, which is just $u$, as $\gcd(\{n_1,...,n_r\}) = 1$.\ So $u$ is a multiple of $m$, and as $G_x$ has to act intransitively on the conjugates of $y$ (because so does $G_y$ on the conjugates of $x$), we get that $u=m$ and $\sigma$ must have cycle structure $(m,m)$ on $G/G_y$. That means, the two places of $\overline{\mathbb{Q}}(y)$ over $t^n \mapsto \infty$ are fully ramified in $\overline{\mathbb{Q}}(z)$, and as that is a rational function field, the Riemann-Hurwitz genus formula (cf. e.g. [@St Theorem 3.4.13]) yields that those are the only places ramified in $\overline{\mathbb{Q}}(z)|\overline{\mathbb{Q}}(y)$. In particular the places over $t^n \mapsto 0$ are unramified in $\overline{\mathbb{Q}}(z)|\overline{\mathbb{Q}}(y)$, i.e. (as the extension $\overline{\mathbb{Q}}(t)|\overline{\mathbb{Q}}(t^n)$ is contained in $\overline{\mathbb{Q}}(z)$) they have ramification index $n$ over $t^n \mapsto 0$. So $2m$ is a multiple of $n$:\ $2m = k\cdot n$, with some $k \in \mathbb{N}$. By transferring the places of $\overline{\mathbb{Q}}(y)$ extending the place $t^n\mapsto\infty$ to $0$ and $\infty$, we can assume $\frac{h(y)^n}{y^m} = t^n$, with a separable polynomial $h$ of degree $k$.
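As an aside, the group-theoretic input of the next step, Theorem \[group\], can be checked computationally in the smallest case. The following Python sketch is purely illustrative and not part of the proof: for $n=4$ it builds $G = D_4 \le S_4$, generated by the $4$-cycle $\tau = (1\,2\,3\,4)$ and the transitive normal Klein four subgroup $N$, and verifies that $|N \cap \langle\tau\rangle| = 2$ and that no element of $G$ has order larger than $n$, in accordance with part ii).

```python
from itertools import product

# Tiny permutation-group calculator (0-indexed tuples; p[i] is the image of i).
def compose(p, q):                      # first apply q, then p
    return tuple(p[q[i]] for i in range(len(p)))

def closure(gens):                      # subgroup generated by gens
    n = len(gens[0])
    group = {tuple(range(n))}
    frontier = set(gens)
    while frontier:
        known = frontier | group
        new = {compose(g, h) for g, h in product(known, repeat=2)} - known
        group = known
        frontier = new
    return group

def order(p):
    e, q, k = tuple(range(len(p))), p, 1
    while q != e:
        q, k = compose(q, p), k + 1
    return k

tau = (1, 2, 3, 0)                          # the 4-cycle (1 2 3 4)
N = closure([(2, 3, 0, 1), (1, 0, 3, 2)])   # Klein four group: transitive, normal in S4
G = closure([tau] + list(N))                # this is D4, of order 8
cyc = closure([tau])                        # <tau>, of order 4
print(len(G), len(N & cyc), max(order(g) for g in G))
```

Here $\tau^2 = (1\,3)(2\,4)$ already lies in $N$, which is exactly the intersection of order 2 predicted by the theorem.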
[*Second step: Application of the results of Section \[permgroups\]*]{}\ We will now show that $m$ cannot be a multiple of $n$.\ Assume $m$ were a multiple of $n$; then we get $\frac{h(y)}{y^{m/n}} = t$, with $\deg(h) \ge 2$, so $\overline{\mathbb{Q}}(t)$ is a proper subfield of $\overline{\mathbb{Q}}(y)$.\ In particular, $\overline{{{\mathbb{Q}}}}(t)$ is then contained in the splitting field of $f$ over $\overline{{{\mathbb{Q}}}}(t^n)$, and of course the extension $\overline{{{\mathbb{Q}}}}(t)|\overline{{{\mathbb{Q}}}}(t^n)$ is normal with cyclic Galois group of order $n$.\ Setting $N := Gal(f|\overline{\mathbb{Q}}(t))$ we therefore get $N\trianglelefteq G$ and $[G:N]=n$. Furthermore the place $t^n\mapsto 0$ is already fully ramified of ramification index $n$ in $\overline{{{\mathbb{Q}}}}(t)$, which means there is no further ramification above $\overline{{{\mathbb{Q}}}}(t)$; in other words $\langle \tau \rangle\cap N = \{1\}$. So we have shown $G = N \rtimes \langle\tau\rangle$, where $N$, in the action on $G/G_x$, is a transitive normal subgroup, and $\tau$ is an $n$-cycle in $S_n$. This is however impossible by Theorem \[group\]i). So $2m$ must be an [*odd*]{} multiple of $n$ (and in particular $n$ is even, which was already shown in [@M], Theorem 4.9). Then we have $\frac{h(y)^2}{y^k} = t^2$. Just as in the above case, we get $G = N \langle\tau\rangle$, with a transitive normal subgroup $N$ ($:=Gal(f | \overline{\mathbb{Q}}(t^2))$); this time with $|N \cap \langle\tau\rangle| = 2$.\ \ Theorem \[group\]ii) shows that $G$ does not contain an element of order $>n$. But $G$ contains the element $\sigma$, which modulo $K_y := core_G(G_y)$ [^3] has order $m = k \cdot \frac{n}{2}$, $k$ odd; so $k=1$. But then the inertia group generators of $\overline{\mathbb{Q}}(y)|\overline{\mathbb{Q}}(t^n)$ over $0$ and $\infty$ have cycle structure $(n)$ and $(\frac{n}{2},\frac{n}{2})$ respectively.
By the Riemann-Hurwitz genus formula, there can only be one more ramified place in this extension, and it has to be simply ramified, i.e. the inertia group generator is a transposition in $S_n$. [*Final step: Showing $G=C_2$ or $G=D_4$*]{}\ So $G/K_y$ is generated by an $n$-cycle, an $(\frac{n}{2},\frac{n}{2})$-cycle and a transposition, and the product of these three elements is the identity. This readily implies that $G/K_y = C_2 \wr C_{n/2}$, which can be seen as follows:\ By appropriate numbering, the $(\frac{n}{2},\frac{n}{2})$-cycle is $(1,3,5,...,n-1)(2,4,6,...,n)$, and the transposition is $(1,2)$. The group generated by these two elements acts imprimitively, with the block system $\{\{1,2\}, \{3,4\},...,\{n-1,n\}\}$. Also, its image in the action on the $\frac{n}{2}$ blocks is a cyclic group of order $\frac{n}{2}$, as the transposition acts trivially on the blocks. Therefore $G/K_y \le C_2 \wr C_{n/2}$, and the existence of a transposition in this group enforces equality.\ \ Furthermore $N K_y/K_y = C_2^{n/2}$, the block kernel of the above wreath product. $G$ also acts transitively on the cosets of $G_xK_y$, with kernel at least $K_y$. But $G_xK_y$ is still intransitive on $G/G_y$, so $G_y$ is intransitive on $G/(G_xK_y)$. In particular, $G_y\cdot core_G(G_xK_y)$ is intransitive on $G/G_x$, which by minimality of $\overline{\mathbb{Q}}(y)$ enforces $K_y \ge core_G(G_xK_y)$, and therefore equality.\ Then however $G/K_y$ has a faithful transitive action on $\tilde{n}:=[G:G_xK_y]$ points (note that $\tilde{n}|n$), with $NK_y/K_y = C_2^{n/2}$ acting as a transitive normal subgroup. But as a transitive abelian group, this subgroup acts regularly, so $2^{n/2}=\tilde{n}\le n$. This only leaves $n \in \{2,4\}$ and $\tilde{n} = n$, i.e. $G/K_y = C_2$ or $D_4$, and $G_xK_y = G_x$, which yields $K_y = \{1\}$, as the action on $G/G_x$ is of course faithful.\ Therefore we are left with $G=C_2$ or $G= D_4$.
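The generation claim in the final step can be checked by machine for small even $n$: the $(\frac{n}{2},\frac{n}{2})$-cycle together with the transposition $(1,2)$ already generate a group of order $2^{n/2}\cdot\frac{n}{2} = |C_2 \wr C_{n/2}|$. The following quick sketch (a computational check, not part of the proof) verifies this by a naive closure computation:

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]]; a permutation of {0,...,n-1} is stored as its tuple of images
    return tuple(p[q[i]] for i in range(len(p)))

def closure(gens):
    # naive breadth-first closure: multiply until no new elements appear
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(g, h) for g in frontier for h in group}
        new |= {compose(g, h) for g in group for h in frontier}
        frontier = new - group
        group |= frontier
    return group

for n in (4, 6, 8):
    # (1,3,...,n-1)(2,4,...,n) in 0-indexed form: i -> i+2 (mod n)
    double_cycle = tuple((i + 2) % n for i in range(n))
    transposition = tuple([1, 0] + list(range(2, n)))  # the transposition (1,2)
    order = len(closure([double_cycle, transposition]))
    assert order == 2 ** (n // 2) * (n // 2)  # |C_2 wr C_{n/2}| = 2^(n/2) * (n/2)
    print(n, order)
```

For $n=4$ this reproduces the dihedral group $D_4$ of order $8$ appearing in the theorem.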
These examples do indeed occur, as mentioned after the statement of Theorem \[T1\].\ This completes the proof. [**Remark:**]{}\ It is easy to write down all rational polynomials with monodromy group $C_2$ or $D_4$ (there is only one possible ramification structure in each case). The above proof then shows that the examples given in the remark after Theorem \[T1\] are in fact the only counter-examples (up to linear transformations in the variables). [9]{} W. Feit, *Some Consequences of the Classification of Finite Simple Groups*. Proc. Symp. Pure Math., 37, Amer. Math. Soc. (1980), 175–181. W. Gaschütz, *Zur Erweiterungstheorie der endlichen Gruppen*. J. reine angew. Math. 190 (1952), 93–107. G.A. Jones, *Cyclic Regular Subgroups of Primitive Permutation Groups*. J. Group Theory 5 (2002), 403–407. S. Lang, *Fundamentals of Diophantine Geometry*. Springer Verlag, New York (1983). K. Langmann, *Werteverhalten holomorpher Funktionen auf Überlagerungen und zahlentheoretische Analogien II*. Math. Nachr. 211 (2000), 79–108. P. Müller, *Finiteness Results for Hilbert’s Irreducibility Theorem*. Ann. Inst. Fourier 52 (2002), 983–1015. H. Stichtenoth, *Algebraic Function Fields and Codes*. Springer Verlag, GTM 254 (2008). [^1]: Universität Würzburg, Emil-Fischer-Str. 30, 97074 Würzburg, Germany. email: joachim.koenig@mathematik.uni-wuerzburg.de [^2]: We use “Atlas notation” here to denote by $N.H$ a group with normal subgroup $N$ and corresponding quotient group $H$. [^3]: By $core_G(U)$ we denote the kernel of the action of $G$ on $G/U$, or equivalently the largest normal subgroup of $G$ contained in $U$.
--- abstract: 'We observe an unconventional superconducting minigap induced into the ferromagnet SrRuO$_3$ from the spin-triplet superconductor Sr$_2$RuO$_4$ using a Au/SrTiO$_3$/SrRuO$_3$/Sr$_2$RuO$_4$ tunnel junction. The voltage-biased differential conductance of the tunnel junctions exhibits V-shaped gap features around zero bias, corresponding to a decrease in the density of states with the opening of a superconducting minigap in SrRuO$_3$. Observation of a minigap at the surface of a 15-nm-thick SrRuO$_3$ layer confirms the spin-triplet nature of the induced superconductivity. The shape and temperature dependence of the gap features in the differential conductance indicate that even-frequency $p$-wave correlations dominate over odd-frequency $s$-wave correlations. Theoretical calculations support this $p$-wave scenario. Our work provides density-of-states evidence for $p$-wave Cooper pair penetration into a ferromagnet and significantly advances our understanding of the $p$-wave spin-triplet proximity effect between spin-triplet superconductors and ferromagnets.' author: - 'M. S. Anwar' - 'M. Kunieda' - 'R. Ishiguro' - 'S. R. Lee' - 'L. A. B. Olde Olthof' - 'J. W. A. Robinson' - 'S. Yonezawa' - 'T. W. Noh' - 'Y. Maeno' title: 'Observation of superconducting gap spectra of long-range proximity effect in Au/SrTiO$_3$/SrRuO$_3$/Sr$_2$RuO$_4$ tunnel junctions' --- Introduction ============ Spin-triplet superconductivity is rich in physics due to its spin and orbital degrees of freedom compared to its spin-singlet counterpart. It not only exists in bulk materials like Sr$_2$RuO$_4$ (SRO214), UPt$_3$, [*etc.*]{} [@Maeno2012] but also emerges in superconductor-ferromagnet (SSC/F) heterostructures that exhibit particular broken symmetries [@Bergeret2001; @Keizer2006; @Eschrig2008; @Khaire2010; @Robinson2010; @Anwar2010; @Anwar2012; @Bernardo2015].
Devices based on spin-triplet superconductivity are the fundamental building blocks for generating the dissipationless spin-polarized supercurrents required to establish superconducting spintronics [@Bergeret2005; @Linder2015]. In the last two decades, extensive theoretical and experimental knowledge has been developed to understand the generation of spin-triplet correlations in SSC/F junctions. In such junctions, the spin degree of freedom may not be fully preserved, since an SSC has zero spin polarization. This issue can be solved by replacing the SSC with a spin-triplet superconductor (TSC). However, crucial concerns in using TSCs are, firstly, the scarcity of known bulk TSCs, and secondly, their ability to form an electronically transparent interface with other materials. Recently, some of the present authors developed an epitaxial TSC/F heterostructure by growing ferromagnetic SrRuO$_3$ (SRO113) thin films on superconducting SRO214 substrates [@Anwar2015]. Furthermore, a long-range spin-triplet proximity effect induced into SRO113 was observed [@Anwar2016]. It is now essential to study the symmetry of the induced correlations. Superconductivity occurs in SRO214 with a superconducting critical temperature ($T_{\rm c}$) of 1.5 K. Extensive experimental and theoretical studies [@Maeno2012] indicate that SRO214 exhibits a chiral $p$-wave spin-triplet state with spontaneous breaking of time-reversal symmetry [@Maeno1994; @Luke1998; @Ishida1998; @Nelson2004; @Xia2006; @Kindwingira2006; @Anwar2013; @Anwar2017; @Anwar2016], although there are unresolved issues [@Hicks2010; @Yonezawa2013]. Furthermore, SRO214 has recently attracted interest for exploring topological superconducting phenomena originating from its orbital phase winding [@Maeno2012]. Spin-triplet superconductivity at SSC/F interfaces exhibits various subgap features depending on the symmetry of the induced correlations.
These subgap features can be observed in the electronic density of states (DoS) [@Eschrig2003; @Wang2013]. Recently, zero-bias conductance peaks (ZBCPs) corresponding to odd-frequency $s$-wave spin-triplet correlations were observed in various experiments using metallic [@Bernardo2015; @Pal2017] and oxide-based SSC/F systems [@Kalcheim2012; @Kalcheim2014; @Kalcheim2015; @Bernardo2017]. Di Bernardo [*et al.*]{} [@Bernardo2017] reported the observation of $p$-wave correlations in graphene connected with the $d$-wave high-temperature superconductor Pr$_{2-x}$Ce$_x$CuO$_4$ by scanning tunnelling microscopy (STM). They observed a variety of subgap structures, such as V-shaped gaps, ZBCPs, and split ZBCPs, depending on the position of the STM tip. Recently, some of the present authors developed superconducting junctions based on SRO214 in combination with the itinerant ferromagnet SRO113, in which direct penetration of superconducting correlations across a 15-nm-thick SRO113 layer was observed through multiple Andreev reflection features in Au/SRO113/SRO214 proximity junctions [@Anwar2016]. To confirm this long-range penetration and to investigate the symmetry of the induced superconductivity, DoS measurements are required. Moreover, in that system, it was argued that $p$-wave correlations may dominate, since the superconducting source was $p$-wave and the SRO113 layer was thinner than the electron mean free path ($l_e$). This is a unique and interesting possibility, but experimental verification has been absent. Here, to address these questions, we developed tunnel junctions by depositing a 2-nm-thick insulating SrTiO$_3$ (STO) layer between the F-layer SRO113 and the Au electrode. We performed differential conductance measurements of the Au/STO/SRO113/SRO214 heterostructures. A V-shaped gap feature in the conductance spectra, corresponding to the superconducting minigap induced in a 15-nm-thick SRO113 layer, is observed.
Previous studies suggest that a $p$-wave spin-triplet proximity effect leads to such a V-shaped gap [@Wang2013; @Bernardo2017]. Furthermore, our theoretical calculations confirm the $p$-wave symmetry of the induced superconductivity in the SRO113 F-layer. Experimentation =============== Single crystals of SRO214 with minimal eutectic segregation of Sr$_3$Ru$_2$O$_7$, SRO113, and Ru are carefully selected, at the cost of a slightly reduced $T_{\rm c}$, and utilized to fabricate Au/STO/SRO113/SRO214 junctions. Ferromagnetic SRO113 thin films are grown epitaxially by pulsed laser deposition on cleaved $ab$-surfaces of SRO214 substrates with a thickness of 0.5 mm and a surface area of 3$\times$3 mm$^2$ (further details in Ref. [@Anwar2015]). Immediately after the deposition of SRO113, a 2-nm-thick insulating STO layer, followed by a Au(20 nm)/Ti(5 nm) capping layer, is deposited [*ex-situ*]{} by DC sputtering. ![Au/STO/SRO113/SRO214 tunnel junctions. (a) Optical micrograph of junctions fabricated on a SRO214 substrate. (b) Magnified micrograph of the device area indicated with a white dashed rectangle in (a). (c) Schematic three-dimensional view of the junction. (d) Neck structure of SRO214 below SRO113. (e) Series resistor circuit model of the junction.[]{data-label="device"}](Fig1){width="8cm"} Au/STO/SRO113/SRO214 tunnel junctions with areas of 20$\times$20 $\mu$m$^2$ and 5$\times$5 $\mu$m$^2$ are fabricated on 25 $\times$ 25 $\mu$m$^2$ and 10 $\times$ 10 $\mu$m$^2$ SRO113 pads by laser UV maskless photolithography. The junction area, i.e. the size of the top Au electrode, is smaller than the SRO113 pads to avoid contact between the top Au electrode and the bottom SRO214 substrate (Fig. \[device\]). Electrical transport measurements are performed using a four-point technique with two contacts on the top Au electrode and two directly on SRO214 (Fig. \[device\](c)). Resistivity and differential conductance are measured down to 300 mK using a ${}^3$He cryostat with a superconducting magnet.
The bulk critical temperature $T_{\rm c-bulk}$ of the SRO214 single crystal was found to be 1.25 K (inset of Fig. \[RT\](a)). Results and discussion ====================== Figure \[RT\](a) shows the temperature-dependent resistance [$R$]{}($T$) in the normal state (300 K to 4 K) of a 5$\times$5 $\mu$m$^2$ junction (red curve). The resistance slowly increases with decreasing temperature [*T*]{} down to 170 K, suggesting dominance of the $c$-axis bulk resistivity of SRO214 ($\rho_c = 1$ m$\Omega$cm at 4 K) [@Hussey1998]. This behavior indicates that the current flows along the normal to the junction, and that direct electrical contact between Au and SRO214 is absent. With a further decrease of temperature below 100 K, $R$ does not decrease substantially, indicating dominance of the resistive contribution of the Au/STO/SRO113 tunneling junction. Consequently, the residual resistance ratio (RRR) is low (1.25). For comparison, $R$($T$) of a metallic junction (without the STO layer) exhibits an RRR of 9, as shown with the black curve. These observations demonstrate that the STO layer works as a tunnel barrier. At temperatures below 6 K, $R$ increases with decreasing $T$ due to the STO tunnel barrier. A sharp decrease of $R$ at 1.2 K is observed (Fig. \[RT\](b)), corresponding to the superconducting transition of SRO214. With a further decrease in $T$, $R$ increases due to the superconducting gap opening in the superconducting state, since the bias voltage $V$ is much smaller than the expected superconducting gap $\Delta$ of SRO214. Such $R$($T$) behavior was not observed in metallic junctions [@Anwar2015; @Anwar2016], again indicating the tunnelling behavior of the present junction with the STO layer. ![(a) Temperature dependent resistance $R$($T$) measured at higher temperatures for junctions with (red) and without (black) the STO barrier. These sets of data show that, in this temperature range, $R_{c}$ of the SRO214 substrate dominates in both junctions.
The inset shows that the AC magnetic susceptibility of the SRO214 substrate exhibits a sharp transition at $T_{\rm c-bulk}$ = 1.25 K. (b) $R$($T$) below 3 K measured with a 100 $\mu$A current for the junction with an STO barrier. A sharp superconducting transition was observed below $T_{\rm c-bulk}$. Note that the resistance continuously increases with decreasing temperature below $T_{\rm c-bulk}$ because of the STO tunnel barrier. (c) Differential conductance $dI$/$dV$ at 0.3 K, showing a V-shaped gap around zero bias and two strong dips at higher voltages. (d) Differential resistance $dV$/$dI$ obtained at 0.3 K (blue curve) and 1.3 K (red curve) plotted as a function of applied current. A central gap opens up, and two sharp peaks appear at 1.8 mA corresponding to the critical current of the SRO113/SRO214 part of the junction. To analyse the effects of the tunnel barrier only, $\Delta R_{2}$ $\approx$5.5 m$\Omega$ (resistance contributions of the neck and the SRO113/SRO214 interface) is subtracted from the curve measured at 1.3 K (shifted magenta curve).[]{data-label="RT"}](Fig2){width="8cm"} We now discuss the differential conductance $dI/dV$ behavior of the tunnel junction. In Fig. \[RT\](c), we have plotted $dI$/$dV$ at 0.3 K. Two main features are observed. The first feature, sharp dips at $\pm$0.3 mV, appears immediately below $T_{\rm c}$. Such dips mainly appear due to the current-driven destruction of superconductivity in superconducting junctions [@Sheet2004; @Yang2012]. Most probably, the dips in the $dI$/$dV$ data correspond to the critical-current transition at the SRO113/SRO214 interface, as we discuss later. The second and most important feature is the suppression of conductance around zero bias within $\pm$150 $\mu$V, indicating a superconducting gap opening. This gap opening is due to the minigap of the induced superconductivity in the SRO113 layer. We observed this behaviour in various junctions (see Supplemental Material [@SM]).
To extract the conductance spectra of the Au/STO/SRO113 junction more quantitatively, we estimated and subtracted the resistance contributions from the SRO214 neck ($R_{\rm neck}$) and the SRO113/SRO214 interface ($R_2$) as follows. Our junction consists of a series of components, as depicted in the model shown in Fig. \[device\](e). In such a series circuit, the differential resistance $dV/dI$ as a function of $I$ is appropriate for extracting each contribution, since $I$ is common to all components and the resistances add. Thus, we plotted $dV/dI$ vs $I$ at 0.3 K and 1.3 K in Fig. \[RT\](d). In the superconducting state, mainly the Au/STO/SRO113 tunnel junction contributes to the zero-bias resistance. However, the resistance of the SRO113/SRO214 interface can make a small but non-negligible contribution. In the normal state, the SRO214 neck under the SRO113 pad (see Fig. \[device\](d)) adds an additional resistance due to the higher resistivity of SRO214 along the $c$-axis [@Anwar2015; @Anwar2016]. We subtracted the contributions of the neck and the SRO113/SRO214 interface by shifting the $dV$/$dI$ curve at 1.3 K by $\Delta R_2 = R_2 + R_{\rm neck}=$5.5 m$\Omega$. The shifted curve is used to normalise the obtained conductance data at various temperatures and applied fields. More details are given in the Supplemental Material [@SM]. To understand the features in detail, we measured $dI$/$dV$ at various temperatures and magnetic fields applied along the $c$-axis (out-of-plane) (Fig. \[dIdV\]). Both $V_{\rm Gap}$ (the characteristic voltage of the central gap opening) and $V_{\rm Dip}$ (the characteristic voltage corresponding to the sharp dips) are suppressed with increasing temperature or applied field. Furthermore, these features are observed only below $T_{\rm c}$, indicating that they originate from superconductivity in the junction. As shown in Fig. \[Analysis\](a), $V_{\rm Gap}$ disappears above 1 K, well below $T_{\rm c}$.
However, $V_{\rm Dip}$ survives up to the $T_c$ of SRO214. This different temperature dependence can be explained as follows: the induced correlations are suppressed with increasing temperature and disappear first at the Au/STO/SRO113 interface and then at the SRO113/SRO214 interface. These observations suggest that $V_{\rm Gap}$ and $V_{\rm Dip}$ emerge from different interfaces of the junction, Au/STO/SRO113 and SRO113/SRO214, respectively. Figure \[Analysis\](d) shows $V_{\rm Gap}$ and $V_{\rm Dip}$ as a function of applied field ($H$). ![(a) Normalized differential conductance as a function of bias voltage obtained at different temperatures in zero field, and (b) at various applied out-of-plane magnetic fields in mT at 0.3 K.[]{data-label="dIdV"}](Fig3){width="8cm"} ![(a) Minigap (filled circles) and dip (filled squares) as well as normalized zero-bias conductance (open circles) vs $T$. (b) Normalized value of the minigap vs $T$ with different theoretical fits. (c) Critical current as a function of $T$ of the SRO113/SRO214 junction. The AB theory shows a good fit (solid red line). (d) Field dependence of the minigap and dip measured at 0.3 K.[]{data-label="Analysis"}](Fig4){width="8cm"} The suppression of the differential conductance around zero bias is due to the decrease in the DoS caused by the opening of the superconducting minigap. This observation demonstrates a proximity effect over 15 nm into SRO113 and confirms the long-range spin-triplet correlations in SRO113, since the spin-singlet superconducting coherence length for SRO113 is only $\xi_{\rm F} = 1$ nm [@Anwar2016]. The value of the minigap, with $V_{\rm Gap}=150~\mu$V at 0.3 K, decreases monotonically with increasing temperature.
We here assume that the minigap is proportional to the bulk superconducting gap and consider the theoretical temperature dependence of the bulk gap, $\Delta(T)=\Delta_0{\rm tanh}(A\sqrt{\frac{T_{\rm c}}{T}-1})$, where $\Delta_0$ is the superconducting energy gap at $T=0$ and $A$ is a constant, equal to 1.74 for an $s$-wave BCS gap. Our data follow this relation with $A=1.56$, in agreement with the calculations of Nomura and Yamada [@Nomura2002] for $p$-wave superconductivity (see Fig. \[Analysis\](b)). Furthermore, $2\Delta_0/k_{\rm B}T_{\rm c}=3.2$ (here we took $T_{\rm c}=1.1$ K, since the gap feature disappears around 1.1 K), which is lower than the expected value of 4.3 for $s$-wave superconductivity. These parameter values also suggest the unconventionality of the induced minigap in the SRO113 layer. The dominant orbital symmetry of spin-triplet correlations at an SSC/F interface can be odd-frequency $s$-wave or even-frequency $p$-wave, depending on the thickness of the F-layer ($t_{\rm F}$) or the length of the junction [@Eschrig2008]. For a diffusive junction ($l_{\rm e}<t_{\rm F}$), odd-frequency isotropic $s$-wave spin-triplet correlations dominate, since the $p$-wave component is sensitive to potential scattering. In contrast, in a clean/ballistic junction ($l_{\rm e}>t_{\rm F}$), even-frequency anisotropic $p$-wave correlations may take over. In our junctions, the nature of the induced correlations can be $p$-wave spin-triplet because $t_{\rm F}=15$ nm is shorter than $l_e=20$ nm. A junction with a barrier between the top Au electrode and the SRO113 layer probes the minigap of the induced superconductivity in SRO113. The shape of the observed minigap is V-shaped (see Fig. \[RT\](c)), which supports the even-frequency anisotropic $p$-wave scenario. A SRO113 layer with residual resistivity $\rho_0 =10~{\rm \mu}\Omega$cm exhibits an electron mean free path of $l_{\rm e} \approx 20$ nm, which is larger than the thickness of the SRO113 layer used in our junctions.
We can therefore consider that our junctions are in the clean limit. The bias voltage corresponding to the minigap should be of the same order as the Thouless energy for a clean system (the energy corresponding to the inverse of the escape time), $E_{\rm Th}=\hbar v_{\rm F}/t_{\rm F}$, where $v_{\rm F} \approx 1 \times 10^{6}$ cm/s is the Fermi velocity [@Alexander2005] and $t_{\rm F}=15$ nm is the thickness of the SRO113 layer. This leads to $E_{\rm Th} \approx 440~{\rm \mu}$eV, which is of the same order as the measured minigap value $V_{\rm Gap} \approx 150~{\rm \mu}$V at 0.3 K. For comparison, we can estimate $E_{\rm Th}$ in the diffusive limit by assuming a shorter $l_{\rm e} \approx 10$ nm and using the formula $E_{\rm Th} = \hbar D/t_{\rm F}^2$, where $D$ is the diffusion coefficient. The diffusion coefficient can be calculated using the free-electron relation $e^2\rho_0DN=1$, where $e$ is the electron charge and $N$ is the density of states of SRO113 at its Fermi level. Using $\rho_0\approx 30~\mu\Omega$cm and $N\approx 1 \times 10^{47}$ states/Jm$^3$ gives $D \approx 13$ cm$^2$/s and thus $E_{\rm Th} \approx $ 4 meV, which is an order of magnitude higher than that of the clean limit and than the obtained value of the minigap. These comparisons support that our junctions are in the clean limit. In such a system, an even-frequency $p$-wave superconducting order parameter may dominate over the odd-frequency $s$-wave spin-triplet component. ![(a) Calculated differential conductance vs bias voltage for chiral $p$-wave and non-chiral $p$-wave superconducting correlations with $Z_1=2$ and $Z_2=0$. (b) Normalized differential conductance measured at 0.3 K.[]{data-label="Calculations"}](Fig5){width="8cm"} To understand the nature of the induced minigap, we calculate the differential conductance of a Normal-metal(N)/Insulator(I)/F/TSC junction based on a recent theoretical model [@Linde2018].
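The numerical estimates in the preceding paragraphs can be reproduced from the quoted values alone; the following consistency sketch (ours, not part of the original analysis) checks the gap scale and both Thouless-energy limits:

```python
# All input values are those quoted in the text; constants in SI / eV units.
hbar = 1.0546e-34   # J s
e    = 1.602e-19    # C
kB   = 8.617e-5     # eV/K

# Induced gap from 2*Delta0/(kB*Tc) = 3.2 with Tc = 1.1 K
Delta0 = 3.2 * kB * 1.1 / 2          # eV
assert abs(Delta0 - 150e-6) < 5e-6   # ~150 microeV, matching V_Gap at 0.3 K

# Clean-limit Thouless energy E_Th = hbar * v_F / t_F
v_F, t_F = 1e4, 15e-9                # m/s (10^6 cm/s), m
E_clean = hbar * v_F / t_F / e       # eV
assert 400e-6 < E_clean < 480e-6     # ~440 microeV

# Diffusive limit: D from the free-electron relation e^2 * rho0 * D * N = 1
rho0, N = 30e-8, 1e47                # Ohm m (30 microOhm cm), states/(J m^3)
D = 1.0 / (e**2 * rho0 * N)          # m^2/s
E_diff = hbar * D / t_F**2 / e       # eV
assert 3e-3 < E_diff < 5e-3          # ~4 meV, an order of magnitude above E_clean
print(round(D * 1e4, 1), "cm^2/s")   # -> 13.0 cm^2/s
```

The order-of-magnitude gap between the two limits is what singles out the clean-limit interpretation in the text.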
This model was developed particularly for Au/SRO113/SRO214 junctions, but can be applied to our system by considering that the barrier heights $Z_1$ and $Z_2$ correspond to the Au/STO/SRO113 and SRO113/SRO214 interfaces, respectively. We assume the $p$-wave order parameter for the induced superconductivity in the SRO113 layer, $\Delta=\Delta_0 (k_x+i\chi k_y)\hat{\sigma}_x$, where $\Delta_0$ is the superconducting energy gap at $T=0$, $\sigma_x$, $\sigma_y$, and $\sigma_z$ are Pauli matrices, and $\chi=\pm 1$ is the chirality. For normalization purposes, the superconducting gap is set to $\Delta_0=1$. We assume that the chemical potential is constant across the junction and equal to $\mu = 1000\Delta_{0}$. The effective masses of the F and TSC layers are normalized with respect to the mass of the N layer as $m_{\rm N}=1$, $m_{\rm F} = 7m_{\rm N}$ (SRO113), $m_{\rm S}^\parallel=1.3m_{\rm N}$ (SRO214 in-plane), and $m_{\rm S}^\perp=16m_{\rm N}$ (SRO214 out-of-plane). To incorporate the properties of its layered structure, the SRO214 Fermi surface is approximated by an ellipsoid with a cut-off angle of $\pi/10$. The magnetization $M$ and thickness $t_{\rm F}$ of the ferromagnet are set to, respectively, $X=M/H_{\rm ex}=0.6$ and $t_{\rm F}=k_{\rm F}L=11$. We assume that the magnetization direction is parallel to the $d$-vector of the superconductor, as both the magnetization of SRO113 and the $d$-vector of bulk SRO214 are along the $c$-axis [@Anwar2015; @Maeno2012]. Figure \[Calculations\](a) shows the normalized conductance as a function of the normalized bias voltage for chiral $p$-wave and non-chiral $p$-wave ($\chi = 0$) symmetry of the induced minigap in the F-layer. The barriers were taken as $Z_1=2$ and $Z_2 = 0$ (definitions are given in the caption of Fig. \[Calculations\]). The V-shaped induced minigap in the experimental data is reproduced well by the calculated non-chiral $p$-wave spectrum.
This suggests that the induced superconductivity in the SRO113 is non-chiral $p$-wave. Calculated conductance spectra for different barrier heights, in the presence of a magnetic field and for non-zero temperatures, are discussed in the Supplemental Material [@SM]. To obtain a better understanding of the proximity-induced unconventional superconductivity in ferromagnets, self-consistent and/or multiband models would be needed. To learn more about the parity of the induced superconductivity, a Green's function approach can be considered. Now, we briefly discuss the $V_{\rm Dip}$ feature, which corresponds to the critical current of a superconducting junction [@Sheet2004; @Yang2012]. The temperature dependence of $dI$/$dV$ (Fig. \[dIdV\]) shows that $V_{\rm Gap}$ from the Au/STO/SRO113 interface disappears around 1.1 K, as discussed above, but $V_{\rm Dip}$ persists up to the bulk $T_{\rm c}$. This indicates that $V_{\rm Dip}$ is attributed to the critical current of the superconductivity induced by the proximity effect at the transparent SRO113/SRO214 interface [@Sheet2004; @Yang2012]. For further confirmation, we applied the Ambegaokar–Baratoff (AB) theory [@AB_Theory] using the relation $$\frac{I_c(T)}{I_{c0}}=\frac{\Delta(T)}{\Delta_0}{\rm tanh}\left( \frac{\Delta(T)}{2k_BT}\right),$$ which fits the experimental $I_{\rm c}$($T$) data reasonably well (Fig. \[Analysis\](c)). Broken inversion symmetry at the SRO113/SRO214 interface can also induce odd-frequency $s$-wave spin-triplet correlations, which are the only dominant pair amplitude in the diffusive regime ($l_{\rm e}<t_{\rm F}$). However, in the clean limit ($l_{\rm e}>t_{\rm F}$), an odd-frequency component may coexist with the dominant even-frequency component. Such a situation can be probed with detailed differential conductance measurements at low bias, where a ZBCP may emerge within the V-shaped gap.
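The AB relation used in the fit of Fig. \[Analysis\](c) combines the tanh gap interpolation quoted earlier with the normalized critical-current formula; a minimal numerical sketch (parameter defaults $T_{\rm c}=1.1$ K, $\Delta_0=150~\mu$eV and $A=1.74$ for the $s$-wave case are our illustrative choices, not fit results):

```python
import math

def gap(T, Tc=1.1, Delta0=150e-6, A=1.74):
    # tanh interpolation for the gap: Delta(T) = Delta0 * tanh(A * sqrt(Tc/T - 1))
    if T >= Tc:
        return 0.0
    return Delta0 * math.tanh(A * math.sqrt(Tc / T - 1.0))

def ic_ratio(T, Tc=1.1, Delta0=150e-6, kB=8.617e-5):
    # Ambegaokar-Baratoff: I_c(T)/I_c0 = (Delta(T)/Delta0) * tanh(Delta(T)/(2 kB T))
    d = gap(T, Tc, Delta0)
    return (d / Delta0) * math.tanh(d / (2.0 * kB * T))

# Sanity checks: saturation at low T, monotonic decrease, vanishing at Tc
assert ic_ratio(0.05) > 0.99
assert ic_ratio(0.9) < ic_ratio(0.5)
assert ic_ratio(1.1) == 0.0
```

Fitting would then reduce to scaling this curve by the measured $I_{c0}$ and adjusting $T_{\rm c}$.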
To study the proximity effect relative to the magnetization rotation of the ferromagnetic layer, an F-layer with a lower coercive field is desirable. Potential materials are La$_{0.7}$Sr$_{0.3}$MnO$_3$ or La$_{0.7}$Ca$_{0.3}$MnO$_3$, if a good electronic contact (small value of $Z_2$) can be achieved with SRO214. Conclusion ========== We systematically observed a superconducting minigap induced in the SRO113 ferromagnet using Au/STO/SRO113/SRO214 tunnel junctions. The minigap width roughly matches the Thouless energy of a 15-nm-thick SRO113 layer in the clean limit. The V-shaped differential conductance around zero bias indicates the $p$-wave nature of the induced spin-triplet correlations. This is also supported by calculations of the differential conductance for a non-chiral $p$-wave order parameter. Our work will pave the way towards the study of the $p$-wave spin-triplet proximity effect and play a crucial role in the development of superconducting spintronics. Acknowledgement =============== We are thankful for valuable discussions with Y. Tanaka, K. Yada and A. Golubov. This work is supported by the JSPS KAKENHI projects Topological Quantum Phenomena (JP22103002 and JP25103721) and Topological Materials Science (JP15H05851, JP15K21717 and JP15H05852), JSPS KAKENHI 17H04848, as well as by the JSPS-EPSRC core-to-core program “Oxide-Superspin (OSS)”. MSA is supported as an International Research Fellow of the JSPS. [99]{} Y. Maeno, S. Kittaka, T. Nomura, S. Yonezawa, and K. Ishida, [*Evaluation of Spin-Triplet Superconductivity in Sr$_2$RuO$_4$*]{}, J. Phys. Soc. Jpn. [**81**]{}, 011009 (2012). F. S. Bergeret, A. F. Volkov, and K. B. Efetov, *Long-range proximity effects in superconductor-ferromagnet structures*, Phys. Rev. Lett. [**86**]{}, 4096 (2001). R. S. Keizer, S. T. B. Goennenwein, T. M. Klapwijk, M. Miao, G. Xiao and A. Gupta, *A spin triplet supercurrent through the half-metallic ferromagnet CrO$_2$*, Nature [**439**]{}, 825 (2006). M. Eschrig and T.
Löfwander, *Triplet supercurrents in clean and disordered half-metallic ferromagnets*, Nat. Phys. [**4**]{}, 138 (2008). T. S. Khaire, M. A. Khasawneh, Jr. W. P. Pratt and N. O. Birge, *Observation of spin-triplet superconductivity in Co-based Josephson junctions*, Phys. Rev. Lett. [**104**]{}, 137002 (2010). J. W. A. Robinson, J. D. S. Witt and M. G. Blamire, *Controlled injection of spin-triplet supercurrents into a strong ferromagnet*, Science [**329**]{}, 59 (2010). M. S. Anwar, F. Czeschka, M. Hesselberth, M. Porcu and J. Aarts, *Long-range supercurrents through half-metallic ferromagnetic CrO$_2$*, Phys. Rev. B [**82**]{}, 100501(R) (2010). M. S. Anwar, M. Veldhorst, A. Brinkman and J. Aarts, *Long range supercurrents in ferromagnetic CrO$_2$ using a multilayer contact structure*, Appl. Phys. Lett. [**100**]{}, 052602 (2012). A. Di Bernardo, S. Diesch, Y. Gu, J. Linder, G. Divitini, C. Ducati, E. Scheer, M. G. Blamire and J. W. A. Robinson, *Signature of magnetic-dependent gapless odd frequency states at superconductor/ferromagnet interfaces*, Nat. Commun. [**6**]{}, 8053 (2015). F. S. Bergeret, A. F. Volkov, and K. B. Efetov, *Odd triplet superconductivity and related phenomena in superconductor-ferromagnet structures*, Rev. Mod. Phys. [**77**]{}, 1321 (2005). J. Linder, and J. W. A. Robinson, *Superconducting spintronics*, Nature Phys. [**11**]{}, 307 (2015). M. S. Anwar, Y. J. Shin, S. R. Lee, S. J. Kang, Y. Sugimoto, S. Yonezawa, T. W. Noh and Y. Maeno, *Ferromagnetic SrRuO$_3$ thin-film deposition on a spin-triplet superconductor Sr$_2$RuO$_4$ with a highly conducting interface*, Appl. Phys. Exp. [**8**]{}, 019202 (2015). M. S. Anwar, S. R. Lee, R. Ishiguro, Y. Sugimoto, Y. Tano, S. J. Kang, Y. J. Shin, S. Yonezawa, D. Manske, H. Takayanagi, T. W. Noh and Y. Maeno, *Direct penetration of spin-triplet superconductivity into a ferromagnet in Au/SrRuO$_3$/Sr$_2$RuO$_4$ junctions*, Nat. Commun. [**8**]{}, 13220 (2016). Y. Maeno, H. Hashimoto, K. Yoshida, S.
Nishizaki, T. Fujita, J. G. Bednorz and F. Lichtenberg, *Superconductivity in a layered perovskite without copper*, Nature (London) [**372**]{}, 532 (1994). G. M. Luke, Y. Fudamoto, K. M. Kojima, M. I. Larkin, J. Merrin, B. Nachumi, Y. J. Uemura, Y. Maeno, Z. Q. Mao, Y. Mori, H. Nakamura and M. Sigrist, *Time-reversal symmetry-breaking superconductivity in Sr$_2$RuO$_4$*, Nature (London) **394**, 558–561 (1998). K. Ishida, H. Mukuda, Y. Kitaoka, K. Asayama, Z. Q. Mao, Y. Mori and Y. Maeno, *Spin-triplet superconductivity in Sr$_2$RuO$_4$ identified by ${}^{17}$O Knight shift*, Nature (London) [**396**]{}, 658 (1998). K. D. Nelson, Z. Q. Mao, Y. Maeno and Y. Liu, *Odd-parity superconductivity in Sr$_2$RuO$_4$*, Science [**306**]{}, 1151 (2004). J. Xia, Y. Maeno, P. T. Beyersdorf, M. M. Fejer, and A. Kapitulnik, *High resolution polar Kerr effect measurements of Sr$_2$RuO$_4$: Evidence for broken time-reversal symmetry in the superconducting state*, Phys. Rev. Lett. [**97**]{}, 167002 (2006). F. Kidwingira, J. D. Strand, D. J. V. Harlingen and Y. Maeno, *Dynamical superconducting order parameter domains in Sr$_2$RuO$_4$*, Science [**314**]{}, 1267 (2006). M. S. Anwar, T. Nakamura, S. Yonezawa, M. Yakabe, R. Ishiguro, H. Takayanagi and Y. Maeno, *Anomalous switching in Nb/Ru/Sr$_2$RuO$_4$ topological junctions by chiral domain wall motion*, Sci. Rep. [**3**]{}, 2480 (2013). M. S. Anwar, R. Ishiguro, T. Nakamura, M. Yakabe, S. Yonezawa, H. Takayanagi, and Y. Maeno, [*Multicomponent order parameter superconductivity of Sr$_2$RuO$_4$*]{}, Phys. Rev. B [**95**]{}, 224509 (2017). C. W. Hicks, J. R. Kirtley, T. M. Lippman, N. C. Koshnick, M. E. Huber, Y. Maeno, W. M. Yuhasz, M. B. Maple, and K. A. Moler, *Limits on superconductivity-related magnetization in Sr$_2$RuO$_4$ and PrOs$_4$Sb$_{12}$ from scanning SQUID microscopy*, Phys. Rev. B [**81**]{}, 214501 (2010). S. Yonezawa, T. Kajikawa, and Y. Maeno, *First-Order Superconducting Transition of Sr$_2$RuO$_4$*, Phys. Rev. Lett.
[**110**]{}, 077003 (2013). M. Eschrig, J. Kopu, J. C. Cuevas, and Gerd Schon, *Theory of Half-Metal/Superconductor Heterostructures*, Phys. Rev. Lett. **90**, 137003 (2003). Y. Wang, L. Wen, Guo-Qiao Zha and Shi-Ping Zhou, *Odd-frequency spin-triplet pairing states in half-metal/$d$-wave superconductor junctions*, **161**, 38 (2013). A. Pal, J. A. Ouassou, M. Eschrig, J. Linder and M. G. Blamire, *Spectroscopic evidence of odd frequency superconducting order*, Sci. Rep. **7**, 40604 (2017). Y. Kalcheim, O. Millo, M. Egilmez, J. W. A. Robinson, and M. G. Blamire, [*Evidence for anisotropic triplet superconductor order parameter in half-metallic ferromagnetic La$_{0.7}$Ca$_{0.3}$Mn$_{3}$O proximity coupled to superconducting Pr$_{1.85}$Ce$_{0.15}$CuO$_{4}$*]{}, Phys. Rev. B [**85**]{}, 104504 (2012). Y. Kalcheim, I. Felner, O. Millo, T. Kirzhner, G. Koren, A. Di Bernardo, M. Egilmez, M. G. Blamire, and J. W. A. Robinson, [*Magnetic field dependence of the proximity-induced triplet superconductivity at ferromagnet/superconductor interfaces*]{}, Phys. Rev. B [**89**]{}, 180506(R) (2014). Y. Kalcheim, O. Millo, A. Di Bernardo, A. Pal, and J. W. A. Robinson, [*Inverse proximity effect at superconductor-ferromagnet interfaces: Evidence for induced triplet pairing in the superconductor*]{}, Phys. Rev. B [**92**]{}, 060501(R) (2015). A. Di Bernardo, O. Millo, M. Barbone, H. Alpern, Y. Kalcheim, U. Sassi, A. K. Ott, D. De Fazio, D. Yoon, M. Amado, A.C. Ferrari, J. Linder and J.W.A. Robinson, *$p$-wave triggered superconductivity in single-layer graphene on an electron-doped oxide superconductor*, Nat. Commu. [**8**]{}, 14024 (2017). N. E. Hussey, A. P. Mackenzie, J. R. Cooper, Y. Maeno, S. Nishizaki, T. Fujita, *Normal-state magnetoresistance of Sr$_2$RuO$_4$*, Phys. Rev. B [**57**]{}, 5505 (1998). G. Sheet, S. Mukhopadhyay, and P. Raychaudhuri, [*Role of critical current on the point-contact Andreev reflection spectra between a normal metal and a superconductor*]{}, Phys. 
Rev. B 69, 134507 (2004). F. Yang, Y. Ding, F. Qu, J. Shen, J. Chen, Z. Wei, Z. Ji, G. Liu, J. Fan, C. Yang, T. Xiang, and L. Lu, [*Proximity effect at superconducting Sn-Bi$_2$Se$_3$ interface*]{}, Phys. Rev. B [**85**]{}, 104508 (2012). See Supplemental Material for additional experimental data, analysis, and calculations. T. Nomura and K. Yamada, *Detailed Investigation of Gap Structure and Specific Heat in the $p$-wave Superconductor Sr$_2$RuO$_4$*, J. Phys. Soc. Jpn. **71**, 404 (2002). C. S. Alexander, S. McCall, P. Schlottmann, J. E. Crow and G. Cao, [*Angle-resolved de Haas-van Alphen study of SrRuO$_3$*]{}, Phys. Rev. B [**72**]{}, 024415 (2005). L. A. B. Olde Olthof, S.-I. Suzuki, A. A. Golubov, M. Kunieda, S. Yonezawa, Y. Maeno, and Y. Tanaka, *Theory of tunneling spectroscopy of normal metal/ferromagnet/spin-triplet superconductor junctions*, Phys. Rev. B [**98**]{}, 014508 (2018). A. Barone and G. Paterno, *Physics and application of the Josephson effect*, (Wiley, 1982).
---
abstract: 'We adopt a three-level bosonic model to investigate the quantum phase transition in an ultracold atom-molecule conversion system that includes one atomic mode and two molecular modes. Through thoroughly exploring the properties of the energy level structure, the fidelity, and the adiabatic geometric phase, we confirm that the system exhibits a second-order phase transition from an atom-molecule mixture phase to a pure molecule phase. We give the explicit expression of the critical point and obtain two scaling laws to characterize this transition. In particular we find that both the critical exponents and the behavior of the ground-state geometric phase differ markedly from those of a similar two-level model. Our analytical calculations show that the ground-state geometric phase jumps from zero to $\pi/3$ at the critical point. This discontinuous behavior has been checked by numerical simulations and it can be used to identify the phase transition in the system.'
author:
- 'Sheng-Chang Li$^{1}$'
- 'Li-Bin Fu$^{2,3}$'
- 'Fu-Li Li$^{1}$'
title: 'Quantum phase transition in a three-level atom-molecule system'
---

Introduction
============

Quantum phase transition (QPT) is one of the most important concepts in many-body quantum theory. As a fundamental transition phenomenon at the temperature of absolute zero, it describes an abrupt change in the ground state of a many-body system due to its quantum fluctuations [@sachdevs2003; @sondhisl1997]. The experimental observation of a QPT from a superfluid (SF) to a Mott insulator (MI) in a gas of ultracold atoms [@greinerm2002] inspired great interest in investigating clean, highly controllable, and strongly correlated bosonic systems [@tilahund2011]. Indeed, ultracold atomic gases [@pitaevskiil2003] have become an ideal platform to study many-body physics because of their numerous applications and the advanced experimental techniques available in the fields of atomic and optical physics [@ruseckasj2005].
In recent years, a remarkable development in the aforementioned field is to convert ultracold atoms to molecules via Feshbach resonance [@Donleyea2002; @xuk2003; @Herbigj2003] or photoassociation [@Wynarr2000; @Romt2004; @whinklerk2005] techniques. Compared with the fermionic model in this kind of system, the bosonic model is of interest for theoretically exploring QPTs. When both the ultracold atoms and the molecules are bosons, the system possesses a few degrees of freedom and a large particle number, which greatly simplifies the calculations. On the other hand, the atom-molecule conversion systems can be well described by a mean-field theory when the particle number is large enough, and this treatment leads to nonlinearity. These features have stimulated much effort to study the adiabatic evolution [@Gaubatzu1988; @Kuklinskijr1989; @Pazye2005; @linghy2007; @Shapiroea2007; @mengsy08], geometric phase [@fulb2010; @wub2011], and phase transitions [@santosg10; @lisc11a] of these systems. Instead of the traditional approaches for describing a QPT (i.e., using the concepts of order parameter and symmetry breaking), very recently Santos [*et al.*]{} adopted the concepts of entanglement and fidelity to investigate the QPT in a two-level bosonic atom-molecule system [@santosg10]. Motivated by this work, we discussed the same problems from the perspectives of scaling laws and Berry curvature [@lisc11a]. However, the connection between the mean-field Berry phase and the phase transition in this type of bosonic model, and the properties of the QPT in a three-level atom-molecule system, are still unresolved, which calls for further theoretical consideration. As a continuation of that work, in this paper we investigate the quantum phase transition in an ultracold atom-molecule conversion system by adopting a $\Lambda$-type three-level bosonic model. Based on this model, we first discuss the structure of the quantum energy levels and analyze the properties of the ground state.
In order to compare the phase-transition properties with those of a similar two-level model [@santosg10], we study the energy gap, the ground-state fidelity, and the mean-field geometric phase. We illustrate that, when the ratio of the coupling strength between the two molecular modes to that between the atomic mode and the upper molecular mode exceeds a critical value, a similar QPT from a mixed atom-molecule phase to a pure molecular phase is also observed in our system. To characterize this transition we obtain the analytical expression of the critical point by using the mean-field approach and derive two critical exponents via numerically studying scaling laws. In particular we calculate the ground-state geometric phase and find its discontinuous behavior at the phase transition point. Our paper is organized as follows: In Sec. II we give the second-quantized model and its mean-field description. In Sec. III we explore the properties of the energy levels and ground states. In Sec. IV we choose the characteristic scaling law, the fidelity, and the adiabatic geometric phase to describe the QPT. Section V presents our conclusion.

Three-level model and mean-field description
============================================

The system we consider here is illustrated schematically in Fig. \[fig1\]. It describes the process of creating ultracold diatomic molecules from bosonic condensed atoms, which constitutes a $\Lambda$-type three-level model. In the three-mode description, each mode $|\alpha\rangle$ ($\alpha=a,g$, and $e$ respectively represent the atomic mode, the ground-state molecular mode, and the excited-state molecular mode) is associated with an annihilation operator $\hat{\beta}$ ($\beta=a,b_g$, and $b_e$) due to the basic assumption that the spatial wavefunctions for these modes are fixed.
By setting the energy of the atomic mode to zero, the Hamiltonian of the system takes the following second-quantized form with $\hbar=1$ [@wub2011]: $$\begin{aligned}
\label{H1}
\hat{H}_S=&\omega_e\hat{b}_e^\dagger\hat{b}_e+\omega_g\hat{b}_g^\dagger\hat{b}_g +\notag\\&\Omega_de^{i\nu_dt}\hat{b}_e^\dagger\hat{b}_g+\frac{\Omega_pe^{-i\nu_pt}}{\sqrt{N}}\hat{b}_e^\dagger \hat{a}\hat{a}+\mathrm{H.c.},\end{aligned}$$ where the abbreviation $\mathrm{H.c.}$ denotes the Hermitian conjugate. $\nu_d$ and $\nu_p$ are the frequencies of the two laser pulses with amplitudes $\Omega_d$ and $\Omega_p$, respectively. The frequencies $\omega_g$ and $\omega_e$ measure the molecular ground-state and excited-state energies, respectively. The total atom number $N=N_a+2(N_g+N_e)$, with $N_a=\hat{a}^\dagger\hat{a}$, $N_g=\hat{b}_g^\dagger\hat{b}_g$, and $N_e=\hat{b}_e^\dagger\hat{b}_e$, commutes with the Hamiltonian (\[H1\]) and is therefore conserved. Notice that the laser pulse parameter $\Omega_p$ can be complex. To achieve this one can split the laser pulse into two beams and then recombine and focus them on the system. As a result, we can express the complex parameter $\Omega_p$ as $\Omega_p=\xi_1+\xi_2e^{-i\varphi}$ with $\xi_1$ and $\xi_2$ being real numbers, where the phase factor $\varphi$ is determined by the optical-path difference between the two laser beams [@wub2011].
For convenience, we rewrite the above Schrödinger-picture Hamiltonian as $\hat{H}_S=\hat{H}_0+\hat{H}_1$, where $$\begin{aligned}
\hat{H}_0=&\nu_p\hat{b}_e^\dagger\hat{b}_e+(\nu_p-\nu_d)\hat{b}_g^\dagger\hat{b}_g,\\
\hat{H}_1=&(\omega_e-\nu_p)\hat{b}_e^\dagger\hat{b}_e+(\omega_g-\nu_p+\nu_d) \hat{b}_g^\dagger\hat{b}_g+\notag\\&\Omega_de^{i\nu_dt}\hat{b}_e^\dagger\hat{b}_g+{\Omega_pe^{-i\nu_pt}\over\sqrt{N}}\hat{b}_e^\dagger \hat{a}\hat{a}+\mathrm{H.c.}\end{aligned}$$ We then choose $\omega_e=\omega_g+\nu_d$ and transform to the interaction picture, i.e., $\hat{H}_I=e^{i\hat{H}_0t}\hat{H}_1e^{-i\hat{H}_0t}$, so that the Hamiltonian finally becomes $$\begin{aligned}
\label{H2}
\hat{H}_I=\Delta(\hat{b}_e^\dagger\hat{b}_e+\hat{b}_g^\dagger\hat{b}_g)+z\hat{b}_e^\dagger\hat{b}_g+{\rho e^{-i\phi}\over\sqrt{N}}\hat{b}_e^\dagger \hat{a}\hat{a}+\mathrm{H.c.},\end{aligned}$$ where the new parameters $\Delta=\omega_e-\nu_p$, $z=\Omega_d$, $\rho=|\Omega_p|$, and $\phi=\arg(\Omega_p)$ have been introduced. To complement the quantum description and gain insight into the existence of a QPT in our model, we adopt a semiclassical description of the system by following the usual mean-field approach, which has been proven to be a powerful tool for studying ultracold atoms and Bose-Einstein condensates (BECs). In the semiclassical limit $N\rightarrow\infty$, the quantum model becomes classical and one can replace each operator $\hat{\beta}$ with a corresponding complex number $\beta$ ($\beta=a,b_g,b_e$), i.e., $\mathcal{H}=\Delta(|{b}_e|^2+|{b}_g|^2)+z({b}_e^\ast{b}_g+{b}_g^\ast{b}_e)+\rho [e^{-i\phi}{b}_e^\ast{a}^2+e^{i\phi}{b}_e(a^\ast)^2]$.
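The mean-field equations of motion are generated by Wirtinger derivatives of this classical Hamiltonian. As a quick numerical consistency check (a sketch we add here, not part of the original analysis; the parameter values and the random state are arbitrary), one can compare a finite-difference evaluation of $\partial\mathcal{H}/\partial\beta^\ast$ with the analytic derivatives, which are exactly what the mean-field matrix introduced in the following equations encodes:

```python
import numpy as np

# arbitrary test parameters and an arbitrary (not necessarily normalized) state
z, rho, Delta, phi = 0.7, 1.3, 0.4, 0.6
rng = np.random.default_rng(0)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)   # (a, b_g, b_e)
a, bg, be = psi

def H_classical(v):
    """Real-valued classical Hamiltonian of the text, v = (a, b_g, b_e)."""
    a, bg, be = v
    return (Delta * (abs(be) ** 2 + abs(bg) ** 2)
            + z * (np.conj(be) * bg + np.conj(bg) * be)
            + rho * (np.exp(-1j * phi) * np.conj(be) * a ** 2
                     + np.exp(1j * phi) * be * np.conj(a) ** 2)).real

def dH_dconj(i, eps=1e-6):
    """Numerical Wirtinger derivative dH/d(beta_i^*) = (dH/dx_i + i dH/dy_i)/2."""
    def shifted(d):
        v = psi.copy()
        v[i] += d
        return H_classical(v)
    dx = (shifted(eps) - shifted(-eps)) / (2 * eps)
    dy = (shifted(1j * eps) - shifted(-1j * eps)) / (2 * eps)
    return 0.5 * (dx + 1j * dy)

grad_num = np.array([dH_dconj(i) for i in range(3)])
# analytic Wirtinger derivatives of H with respect to (a^*, b_g^*, b_e^*)
grad_analytic = np.array([2 * rho * np.exp(1j * phi) * be * np.conj(a),
                          Delta * bg + z * be,
                          Delta * be + z * bg + rho * np.exp(-1j * phi) * a ** 2])
print(np.max(np.abs(grad_num - grad_analytic)))
```

The two gradients agree to finite-difference accuracy, confirming that $i\,d\beta/dt=\partial\mathcal{H}/\partial\beta^\ast$ is linear in $(b_g,b_e)$ but quadratic in $a$, which is the source of the nonlinearity discussed next.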
By using the equations $id{\beta}/dt={\partial\mathcal{H}}/{\partial\beta^\ast}$, we obtain the following Schrödinger equations, which together with the normalization condition $|a|^2+2(|b_g|^2+|b_e|^2)=1$ govern the dynamical behavior of the system, $$\begin{aligned}
\label{Hmf}
i\frac{d}{dt}|\psi\rangle=H_{mf}|\psi\rangle,\end{aligned}$$ where $$\begin{aligned}
H_{mf}=\left( \begin{array}{ccc} 0 & 0 & 2\rho{e}^{i\phi}{a}^\ast \\ 0 & \Delta & z \\ \rho{e}^{-i\phi}{a} & z & \Delta \\ \end{array} \right),\end{aligned}$$ and $|\psi\rangle=(a,b_g,b_e)^T$. It is worth emphasizing that, although the collisions between ultracold particles have been neglected in our model, nonlinearity still arises from the mean-field treatment of the process in which two atoms combine into one molecule. Mathematically, the mean-field Hamiltonian $H_{mf}$ is a function of the instantaneous wave function as well as its conjugate. It is a non-Hermitian matrix and it is invariant under the following transformation [@mengsy08]: $$\begin{aligned}
|\psi\rangle\rightarrow U_s|\psi\rangle={e}^{i\Theta(\theta)}|\psi\rangle={e}^{i{\left( \begin{smallmatrix} \theta & 0 & 0 \\ 0 & 2\theta & 0 \\ 0 & 0 & 2\theta \\ \end{smallmatrix} \right)}}|\psi\rangle.\end{aligned}$$ The lack of the usual $U(1)$ gauge symmetry is a particularly interesting feature of the above mean-field model, which may lead to some new properties of the system. In the subsequent sections, based on models (\[H2\]) and (\[Hmf\]), we will discuss the QPT in the system both from the fully quantum perspective and from the mean-field perspective.

Energy levels and Ground states
===============================

Taking advantage of the fact that $N$ is conserved, one can diagonalize the quantum Hamiltonian $\hat{H}_I$.
For simplicity, hereafter we assume that $N$ takes a fixed even value; the Hilbert space of the $N$-particle system then reduces to dimension $\tfrac{1}{2}(\tfrac{N}{2}+1)(\tfrac{N}{2}+2)$ in the Fock basis, i.e., $|n_a\rangle|n_g\rangle|n_e\rangle=(n_a!n_g!n_e!)^{-1/2}(\hat{a}^\dagger)^{n_a} (\hat{b}_g^\dagger)^{n_g}(\hat{b}_e^\dagger)^{n_e}|0\rangle$ with $|0\rangle$ being the vacuum state, where $n_g=0,1,\cdots,\tfrac{N}{2}$, $n_e=0,1,\cdots,\tfrac{N}{2}-n_g$, and $n_a=N-2(n_g+n_e)$ represent the particle populations in the states $|b_g\rangle$, $|b_e\rangle$, and $|a\rangle$, respectively. By directly diagonalizing the Hamiltonian matrix with a fixed $N$, we obtain the eigen-energy levels and the ground states of the system as shown in Figs. \[fig2\] and \[fig3\], respectively. It is well known that a typical $\Lambda$-type three-level system supports dark-state solutions with zero eigenvalue [@puh07; @linghy04; @linghy07]. This type of state can result in a phenomenon known as coherent population trapping (CPT). For our system, when $\Delta=0$, from Figs. \[fig2\](b)-\[fig2\](d) we see that the energy levels with zero energy are degenerate, while the other, nonzero-energy levels are nondegenerate and are symmetrically distributed on both sides of the central level. This symmetrical energy structure is determined by the symmetry of the Hamiltonian $\hat{H}_I$ with $\phi=0$, i.e., the change of variables $(z,\rho)\rightarrow -(z,\rho)$ is equivalent to the unitary transformation $\hat{b}_e\rightarrow-\hat{b}_e$. In this case, the degeneracy of the zero-energy level (i.e., $d$) is given by $$\begin{aligned}
d=\lceil(\frac{N}{2}+1)/2\rceil=\frac{1}{4}(N-\mathrm{Mod}[N,4])+1,\end{aligned}$$ where the symbol $\lceil~\rceil$ stands for the ceiling function, which maps a real number to the smallest integer not less than it. Notice that, if the parameter $\Delta$ is perturbed away from zero, the above-mentioned symmetry of the system with $\Delta=0$ will be broken.
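The Fock-basis construction and the degeneracy formula above are easy to check numerically. The following sketch (our illustration, not from the paper; the choices $N=8$ and $z=\rho=1$ are arbitrary) builds the Hamiltonian (\[H2\]) in the basis $|n_a\rangle|n_g\rangle|n_e\rangle$, diagonalizes it for $\Delta=\phi=0$, and verifies both the spectral symmetry about zero and the zero-energy degeneracy $d$:

```python
import numpy as np
from math import sqrt

def build_HI(N, z, rho, Delta=0.0):
    """Hamiltonian (H2) in the Fock basis, with phi = 0 so the matrix is real."""
    basis = [(ng, ne) for ng in range(N // 2 + 1) for ne in range(N // 2 - ng + 1)]
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, (ng, ne) in enumerate(basis):
        na = N - 2 * (ng + ne)
        H[k, k] = Delta * (ng + ne)
        if ng >= 1:                       # z * b_e^dag b_g
            H[index[(ng - 1, ne + 1)], k] += z * sqrt(ng * (ne + 1))
        if na >= 2:                       # (rho/sqrt(N)) * b_e^dag a a
            H[index[(ng, ne + 1)], k] += rho / sqrt(N) * sqrt((ne + 1) * na * (na - 1))
    return H + H.T - np.diag(np.diag(H))  # add the h.c. part without doubling the diagonal

N = 8
E = np.sort(np.linalg.eigvalsh(build_HI(N, z=1.0, rho=1.0)))
dim = (N // 2 + 1) * (N // 2 + 2) // 2          # Hilbert-space dimension
d_zero = int(np.sum(np.abs(E) < 1e-8))          # numerically found zero-energy degeneracy
d_formula = (N - N % 4) // 4 + 1                # = ceil((N/2 + 1)/2)
symmetric = bool(np.max(np.abs(E + E[::-1])) < 1e-8)  # spectrum symmetric about zero
print(dim, d_zero, d_formula, symmetric)
```

For $N=8$ this gives a $15$-dimensional space with a threefold-degenerate zero-energy level, consistent with the formula.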
This leads to a shift of the energy levels and a splitting of the zero-energy level. For example, when $\Delta=0.2$ and $N=8$ \[see Fig. \[fig2\](d)\], all energy levels are pushed up and the zero-energy level splits into three nonzero-energy levels. For different $N$, the maximum number of energy levels is $(\tfrac{N}{2}+1)(\tfrac{N}{2}+2)/2$. Now we discuss the ground-state properties, which are closely associated with the QPT in the system. On one hand, by diagonalizing the Hamiltonian $\hat{H}_I$ numerically, both for $\Delta=0$ and for $\Delta\neq 0$, we have calculated the ground states for different total atom numbers. The results for the atomic population fraction (i.e., $N_a/N$) in the ground state are shown in Fig. \[fig3\]. We find that the atomic fraction in the ground state decreases and gradually approaches zero as the ratio of the coupling strength between the two molecular modes to that between the atomic mode and the upper molecular mode increases. On the other hand, we study the ground state from the mean-field perspective. Based on the model (\[Hmf\]), we solve for the mean-field ground state from the eigen-equation $H_{mf}(\bar{a}^\ast,\bar{a})|\bar{\psi}\rangle=\Theta(\mu)|\bar{\psi}\rangle$ with $\mu$ being the chemical potential for atoms and $|\bar{\psi}\rangle$ being the eigenstate. For $\Delta=0$, $z\geq 0$, and $\rho>0$, we obtain the eigenvalue and the corresponding eigenfunction for the ground state as follows: $$\begin{aligned}
\mu_0=&\left\{ \begin{array}{cc} -{z\over 2}, & z>2\rho, \\ -{(z^2-2\rho^2)\sqrt{z^2+8\rho^2}\over 4\sqrt{3}\rho^2}, & z<2\rho; \\ \end{array} \right.\\ |\bar{\psi_0}\rangle=& \left\{ \begin{array}{cc} \left( \begin{array}{c} 0 \\ {1\over 2} \\ -{1\over 2} \\ \end{array} \right), & z>2\rho, \\ \left( \begin{array}{c} {\sqrt{4-z^2/\rho^2}\over\sqrt{6}} \\ {z\over 4\rho}e^{-i\phi} \\ -{\sqrt{z^2+8\rho^2}\over 4\sqrt{3}\rho}e^{-i\phi} \\ \end{array} \right), & z<2\rho. \\ \end{array} \right.\label{mfgs}\end{aligned}$$ For $\Delta\neq 0$, although the ground-state solutions can also be obtained analytically, the expressions are generally too messy to be instructive. We therefore simply display the results in Fig. \[fig3\]. From Fig. \[fig3\] we find that, both for $\Delta=0$ and for $\Delta\neq 0$, the results for the quantum model (\[H2\]) tend to the analytical mean-field results as the total particle number $N$ increases. It must be mentioned that, for our nonlinear system (\[Hmf\]), the classical energy $\mathcal{E}$ does not equal the chemical potential; the relation between them is $\mathcal{E}=\mu\pm\rho|\bar{b_e}||\bar{a}|^2$. We have calculated the ground-state energy $\mathcal{E}_0$ analytically and find that its second derivative has a discontinuity at a critical point $z_c=2\rho+\Delta$, as demonstrated in Fig. \[fig3a\]. This discontinuity implies that the system undergoes a second-order phase transition in the thermodynamic limit. When $z<z_c$ the system is in an atom-molecule mixture phase (i.e., $|\bar{a}|^2>0$), and when $z>z_c$ the system is in a pure molecule phase where $|\bar{a}|^2=0$. In the mixture phase, the asymptotic behavior of the ground state in the vicinity of the critical point with $\Delta=0$ is given by the variation of the parameter $s_0=|\bar{a}|^2$, i.e., $$\begin{aligned}
s_0|_{z\rightarrow z_c}=\frac{1}{6}[4 - z_c(2 z- z_c)].\end{aligned}$$

Quantum phase transition
========================

The previous calculations and analysis have demonstrated that the process of converting ultracold atoms to homonuclear diatomic molecules in a bosonic system involves a QPT, which differs from the well-known BCS-BEC crossover phenomena in fermionic systems [@regalca2004; @linksj2003]. Subsequently, we will describe and characterize this phase transition from different perspectives.
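The closed-form mixture-phase solution in Eq. (\[mfgs\]) can be checked directly against the nonlinear eigen-equation $H_{mf}|\bar{\psi}\rangle=\Theta(\mu)|\bar{\psi}\rangle$ with $\Theta(\mu)=\mathrm{diag}(\mu,2\mu,2\mu)$, the structure dictated by the gauge generator $\Theta(\theta)$ above. The sketch below (our illustration; the point $z=\rho=1$, $\Delta=\phi=0$, which satisfies $z<2\rho$, is an arbitrary choice) verifies the normalization $|a|^2+2(|b_g|^2+|b_e|^2)=1$ and that the molecular components see exactly twice the atomic chemical potential:

```python
import numpy as np

z, rho = 1.0, 1.0                       # mixture phase: z < 2*rho (Delta = phi = 0)
a = np.sqrt((4.0 - z ** 2 / rho ** 2) / 6.0)
bg = z / (4.0 * rho)
be = -np.sqrt(z ** 2 + 8.0 * rho ** 2) / (4.0 * np.sqrt(3.0) * rho)
psi = np.array([a, bg, be])

# normalization used in the text: |a|^2 + 2(|b_g|^2 + |b_e|^2) = 1
norm = a ** 2 + 2.0 * (bg ** 2 + be ** 2)

# mean-field Hamiltonian evaluated at this state (Delta = 0, phi = 0)
Hmf = np.array([[0.0, 0.0, 2.0 * rho * a],
                [0.0, 0.0, z],
                [rho * a, z, 0.0]])
Hpsi = Hmf @ psi

mu_atom = Hpsi[0] / a                   # should equal mu
mu_gmol = Hpsi[1] / bg                  # should equal 2*mu
mu_emol = Hpsi[2] / be                  # should equal 2*mu
print(norm, mu_atom, mu_gmol, mu_emol)
```

The same check on the $z>2\rho$ branch $(0,\tfrac{1}{2},-\tfrac{1}{2})^T$ immediately gives $\mu=-z/2$.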
Scaling laws
------------

In order to understand the QPT in the system, we begin our discussion by analyzing the dimensionless energy gap between the first excited state and the ground state, namely, $\Delta E=(E_1-E_0)/\rho$. By diagonalizing the Hamiltonian (\[H2\]) numerically, we have calculated the energy levels for different particle numbers. For a fixed $N$, the energy gap takes its minimum value at a point $z_N$ (which can be viewed as a pseudo-critical point of the $N$-particle system), and this point corresponds precisely to the position of the avoided level crossing (see Fig. \[fig2\]). Generally, QPTs occur at the positions of level crossings or avoided level crossings. For our system with avoided level crossings, the existence of a minimum of the energy gap is a basic signature of the phase transition. Similar to the phenomena studied in a two-level atom-molecule system [@santosg10] and in other systems, we find that the gap $\Delta E$ in our system also tends to zero at a single point, rather than over an interval of the dimensionless parameter $z/\rho$, as the particle number $N\rightarrow \infty$. This specific phenomenon implies that, when $N\rightarrow \infty$, the ground state becomes degenerate only at the point $z_c$, which is a requirement for the occurrence of a broken-symmetry phase [@santosg10]. To capture more features of the QPT in the system, we study the scaling behavior of the energy gap near the critical point. To this end, we first calculate the energy gap for different values of the parameter $\Delta$; the results are plotted in Fig. \[fig4\]. Either for the variation of the total atomic number $N$ versus $|z_N-z_c|$ \[see Fig. \[fig4\](a)\] or for the change of the minimum of the energy gap (i.e., $\Delta E_{\mathrm{min}}$) with respect to $N$ \[see Fig. \[fig4\](b)\], the same characteristic scaling laws are observed for different $\Delta$, as evidenced by the lines of equal slope in each figure. In the quantum model (\[H2\]), the total atom number $N$ can be regarded as a correlation length scale of the system, and one can then connect this length scale to the offset between the pseudo-critical point and the critical point. Quantitatively, we have $$\begin{aligned}
\kappa|z_c-z_N|^{\nu}\simeq N^{-1},\end{aligned}$$ where $\nu\simeq1.54764$ is a critical exponent and $\kappa\simeq 0.18273$ is an inessential constant. This scaling law shows that the pseudo-critical point tends toward the critical point as $N^{-1/\nu}$ and clearly approaches $z_c$ as $N\rightarrow\infty$ \[see Fig. \[fig4\](a)\]. From Fig. \[fig4\](b), we find another scaling law, namely $$\begin{aligned}
\Delta E_\mathrm{min}/N\simeq\Gamma N^{-\zeta},\end{aligned}$$ where $\Gamma\simeq 1.67506$ is a constant and $\zeta\simeq 1.32912$ gives another important exponent, namely, the dynamic critical exponent. It must be mentioned that all constants and exponents given in the above two formulas are obtained in the case of $\Delta=0$; for other cases their values may change slightly. Comparing the product of the two exponents in our system with that in a two-level atom-molecule system [@lisc11a], we find that the values differ markedly. This difference indicates that the two models belong to different universality classes.

Fidelity
--------

Similar to other concepts, the behavior of the fidelity can also be employed to identify the phase transition [@zanardip2006; @zhouhq2008]. The fidelity is a measure of the distance between two states, and this concept has been widely used in the field of quantum information [@nielsenma200].
One can define the fidelity through the modulus of the wave-function overlap between two states, i.e., $$\begin{aligned}
F(\psi_1,\psi_2)=|\langle\psi_1|\psi_2\rangle|.\end{aligned}$$ Here we only focus on the behavior of the fidelity between two ground states. One ground state is obtained when $\Delta=0$, namely $|\Delta=0\rangle$; the other ground state is calculated by treating $\Delta$ as a perturbation parameter and is denoted $|\Delta\neq 0\rangle$. We have estimated the wave-function overlap between the two ground states for different particle numbers $N$ and varying parameter $\Delta$. Figure \[fig8\] shows the fidelities between the ground state $|\Delta=0\rangle$ and the ground state $|\Delta=\alpha\rangle$ with $\alpha=0.1,0.2,0.3$, and $0.4$. Both for $N=50$ and for $N=100$ we see that the fidelity $|\langle\Delta=0|\Delta\neq 0\rangle|$ shows a dip at the point corresponding to the pseudo-critical point. The results imply that the two ground states are distinguishable and that there is an obvious signal of the QPT as long as $\alpha\neq 0$. Moreover, we observe that the dip of the ground-state fidelity becomes deeper and the point where the fidelity has its minimum shifts as the value of $\alpha$ increases; this phenomenon is very different from that in a two-level atom-molecule model, where the fidelity has its minimum at the same point [@santosg10]. The reason is that in our model the phase transition point $z_c$ is a function of the parameters $\rho$ and $\Delta$. For a larger value of $\alpha$, we find that the distinguishability of the two states increases and the minimum of the fidelity shifts noticeably. Now we compare the results obtained in the case of $N=50$ with the results for $N=100$. For the same $\Delta$, it is seen that the position of the minimum fidelity for $N=100$ is closer to the critical point $z_c$ than that for $N=50$.
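The fidelity diagnostic is straightforward to reproduce at small particle number. The sketch below (our illustration, not from the paper; $N=12$, $\rho=1$, $\alpha=0.2$, $\phi=0$, and the scan range are arbitrary choices) computes $F=|\langle\Delta=0|\Delta=\alpha\rangle|$ over a grid of $z/\rho$ and locates the dip:

```python
import numpy as np
from math import sqrt

def hamiltonian(N, z, Delta, rho=1.0):
    """Hamiltonian (H2) in the Fock basis |n_a>|n_g>|n_e> with phi = 0 (real matrix)."""
    basis = [(ng, ne) for ng in range(N // 2 + 1) for ne in range(N // 2 - ng + 1)]
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, (ng, ne) in enumerate(basis):
        na = N - 2 * (ng + ne)
        H[k, k] = Delta * (ng + ne)
        if ng >= 1:
            H[index[(ng - 1, ne + 1)], k] += z * sqrt(ng * (ne + 1))
        if na >= 2:
            H[index[(ng, ne + 1)], k] += rho / sqrt(N) * sqrt((ne + 1) * na * (na - 1))
    return H + H.T - np.diag(np.diag(H))

def ground_state(N, z, Delta):
    _, v = np.linalg.eigh(hamiltonian(N, z, Delta))
    return v[:, 0]                       # eigh returns eigenvalues in ascending order

N, alpha = 12, 0.2
zs = np.linspace(0.5, 4.0, 71)
F = np.array([abs(np.vdot(ground_state(N, zv, 0.0), ground_state(N, zv, alpha)))
              for zv in zs])
z_dip = float(zs[int(np.argmin(F))])     # location of the fidelity dip
print(z_dip, float(F.min()))
```

Even at this small $N$ the dip sits in the interior of the scan, near the pseudo-critical region, while $F$ stays close to $1$ far from it.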
In fact we have calculated the ground-state fidelity for varying particle number $N$ and find that, for a fixed $\alpha$, with increasing $N$ the wave-function overlap between the two states becomes smaller (i.e., the two states become more distinguishable) and the position where the minimum fidelity occurs moves toward $2\rho+\Delta$. Thus, in the finite-particle-number case, the occurrence of the minimum of the fidelity gives information about the phase transition in the system.

Geometric phase
---------------

In this subsection we turn to investigating the behavior of the ground-state geometric phase, starting from the mean-field model (\[Hmf\]). In the following study we only focus our attention on the situation where the detuning is absent (i.e., $\Delta=0$), because analytical results can be obtained in this case. To employ the procedures for calculating the geometric phase in nonlinear systems proposed in Refs. [@liuj2010; @fulb2010], we introduce new variables by setting $a=\sqrt{1-2(p_1+p_2)}e^{i\lambda}, b_g=\sqrt{p_1}e^{i (2\lambda+q_1)}$, and $b_e=\sqrt{p_2}e^{i(2\lambda+q_2)}$, where $\lambda=\arg(a)$ denotes the total phase, $p_1=|b_g|^2$ and $p_2=|b_e|^2$ are the population probabilities of the ground-state and excited-state molecules, respectively, and $q_1=\arg(b_g)-2\arg(a)$ and $q_2=\arg(b_e)-2\arg(a)$ measure the relative phases. With the help of these new variables, the three-level system can be cast into a classical Hamiltonian $$\begin{aligned}
\mathcal{H}=&2z\sqrt{p_1p_2}\cos(q_1-q_2)\notag\\&+2\rho\sqrt{p_2}[1-2 (p_1 + p_2)]\cos(q_2+\phi).\end{aligned}$$ The Schrödinger equations (\[Hmf\]) together with the normalization condition lead to $$\begin{aligned}
\frac{d\lambda}{dt}=-2\rho\sqrt{p_2}\cos(q_2+\phi),\label{lamt}\end{aligned}$$ and $$\begin{aligned}
\frac{dp_i}{dt}=-\frac{\partial\mathcal{H}}{\partial q_i},~~~ \frac{dq_i}{dt}=\frac{\partial\mathcal{H}}{\partial p_i},\label{pqt}\end{aligned}$$ with $i=1,2$.
These four equations have established a connection between the projected Hilbert space spanned by $\mathbf{S}(p_i,q_i)$ and the parameter space spanned by $\mathbf{R}(z,\rho,\phi)$. Now we calculate the geometric phase for the ground state of the system. For simplicity, we construct a closed loop $C$ in the parameter space by treating $z$ and $\rho$ as constants and varying $\phi$ with time from $0$ to $2\pi$. The system is assumed to evolve adiabatically along the cyclic path with a rate $\epsilon\sim|{d\phi\over dt}|\sim{\frac{1}{T}}$, where $T$ is the period of the cyclic evolution. Initially, we prepare the system in the ground state of $H_{mf}(\phi=0)$; after a cyclic adiabatic evolution, the state will have acquired a geometric phase in addition to the dynamical phase. Note that the adiabatic parameter satisfies $\epsilon\ll{1}$ and can thus be treated as a small expansion parameter in determining the geometric phase. Following the method in Ref. [@liuj2010], we expand the total phase $\lambda$ in a perturbation series in $\epsilon$, i.e., $$\begin{aligned}
\label{lamda}
{d\lambda\over dt}=\lambda_0(\epsilon^0)+\lambda_1(\epsilon^1)+O(\epsilon^2),\end{aligned}$$ to separate the pure geometric part from the total phase. The time integrals of the zero-order term and the first-order term in Eq. (\[lamda\]) respectively give the dynamic phase and the geometric phase in the adiabatic limit $\epsilon\rightarrow 0$ or $T\rightarrow\infty$, and the contribution from the higher-order terms vanishes. During the adiabatic evolution the system will fluctuate around the ground state due to the small but finite value of $\epsilon$. This fact allows us to make the expansions $p_i=\bar{p}_i(\mathbf{R})+\delta{p}_i(\mathbf{R})$ and $q_i=\bar{q}_i(\mathbf{R})+\delta{q}_i(\mathbf{R})$ with $i=1,2$, where $(\bar{p}_i,\bar{q}_i)$ stand for the instantaneous ground state, while $\delta p_i$ and $\delta q_i$ denote the fluctuations induced by the slow change of the system.
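Before carrying out this expansion, note that the loop integral of the Berry connection of the instantaneous mixture-phase ground state, Eq. (\[mfgs\]), can be evaluated numerically in a few lines. The sketch below (our illustration, not from the paper; $z=\rho=1$ with $\Delta=0$, so $z<2\rho$, is an arbitrary choice) uses finite differences and trapezoidal quadrature with the standard (unweighted) inner product, and recovers the closed-form value $(\pi/6)(2+z^2/\rho^2)$ quoted in the text as $\lambda_b$:

```python
import numpy as np

z, rho = 1.0, 1.0            # mixture phase: z < 2*rho, Delta = 0

def psi0(phi):
    """Mean-field ground state of Eq. (mfgs) for z < 2*rho."""
    a = np.sqrt((4.0 - z ** 2 / rho ** 2) / 6.0)
    bg = (z / (4.0 * rho)) * np.exp(-1j * phi)
    be = -(np.sqrt(z ** 2 + 8.0 * rho ** 2) / (4.0 * np.sqrt(3.0) * rho)) * np.exp(-1j * phi)
    return np.array([a, bg, be])

phis = np.linspace(0.0, 2.0 * np.pi, 2001)
h = 1e-6
# Berry connection A(phi) = i <psi0 | d psi0 / d phi> via central differences
A = np.array([(1j * np.vdot(psi0(p), (psi0(p + h) - psi0(p - h)) / (2.0 * h))).real
              for p in phis])
lam_b = float(np.sum(0.5 * (A[1:] + A[:-1]) * np.diff(phis)))   # trapezoidal rule
lam_b_formula = (np.pi / 6.0) * (2.0 + z ** 2 / rho ** 2)
print(lam_b, lam_b_formula)
```

For $z=\rho$ both expressions give $\pi/2$; this Berry-formula value should not be confused with the nonlinear geometric phase $\lambda_g$ obtained from the expansion below.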
Substituting these expressions back into Eq. (\[lamt\]), when $z<2\rho$, we have $$\begin{aligned}
\lambda_0=&-\mu_0(\mathbf{R}),\label{dynamicphase1}\\ \lambda_1=&{-2 \bar{p}_2\sqrt{\bar{p}_1}z+\rho[\bar{p}_2(6\bar{p}_2+6\bar{p}_1-1)+\delta{p}_2] \over \sqrt{\bar{p}_2}}, \label{berryphase1}\end{aligned}$$ where the chemical potential of the ground state can be expressed as $\mu_0=-2z\sqrt{\bar{p}_1\bar{p}_2}-3\rho\sqrt{\bar{p}_2}[1-2(\bar{p}_1+\bar{p}_2)]$. Moreover, from Eqs. (\[pqt\]), we have $$\begin{aligned}
\label{berryphase2}
\delta{p}_2=\frac{2\sqrt{\bar{p}_2}(z\bar{p}_2+z\bar{p}_1-4\rho\bar{p}_1^{3/2} )}{\rho[z(1+6\bar{p}_2+6\bar{p}_1)-16\rho\bar{p}_1^{3/2}]}\frac{d\phi}{dt}.\end{aligned}$$ To deduce Eqs. (\[berryphase1\]) and (\[berryphase2\]), we have used the fixed-point equations ${\partial\mathcal{H}\over\partial{p}_i}|_{(\bar{p}_i,\bar{q}_i)}=0$ and the condition ${d\over dt}\delta{q}_i\sim O(\epsilon^2)$. Combining Eq. (\[berryphase2\]) with Eq. (\[berryphase1\]), and then using the fixed-point values corresponding to the ground state, i.e., $(\bar{p}_1,\bar{q}_1;\bar{p}_2,\bar{q}_2)= (\tfrac{z^2}{16\rho^2},\bar{q}_2\pm\pi;\tfrac{z^2+8\rho^2}{48\rho^2},\pm\pi-\phi)$, we obtain $$\begin{aligned}
\lambda_1={1\over 6}{d\phi\over dt}.\label{berryphase3}\end{aligned}$$ By integrating Eq. (\[berryphase3\]) over one period $T$, we have $$\begin{aligned}
\lambda_g=\int_{0}^T\lambda_1dt={1\over 6}\int_0^{2\pi}{d\phi}=\frac{\pi}{3}.\label{berryphase4}\end{aligned}$$ As a comparison we give the result calculated directly from Berry's formula [@berrymv1984], i.e., $$\begin{aligned}
\lambda_b=i\int_0^{2\pi}\langle\bar{\psi_0}|\nabla_\phi|\bar{\psi_0}\rangle{d\phi}=\frac{\pi}{6}\left( 2+\frac{z^2}{\rho^2}\right).\end{aligned}$$ Now we consider the situation when $z>2\rho$. In this case, the ground state, i.e., $\bar{p}_1=\bar{p}_2=1/4$, is independent of the parameter $\mathbf{R}$. Simple calculations from Eqs.
(\[lamt\]) and (\[pqt\]) lead to $$\begin{aligned}
\label{normalphase}
{d\lambda\over{dt}}={z\over 2}=-\mu_0,\end{aligned}$$ and then $$\begin{aligned}
\label{normalphase1}
\lambda=\int_0^T\lambda_0dt=-\int_0^T\mu_0{dt}=\lambda_d.\end{aligned}$$ This result implies that the geometric phase $\lambda_g=0$ in the case $z>2\rho$. To sum up, our theoretical calculations demonstrate that the mean-field geometric phase of the ground state jumps from zero to $\pi/3$ when the system undergoes the phase transition from a mixture phase to a pure molecular phase. These analytical predictions have been checked by numerically solving Eq. (\[Hmf\]) or Eqs. (\[lamt\]) and (\[pqt\]), as illustrated in Fig. \[fig9\]. We see that, if the evolution period $T$ is large enough, the simulated results show good agreement with the analytical predictions. We use Fig. \[fig10\] to exhibit the convergence behavior of the ground state and the geometric phase with increasing $T$; rapid convergence is observed. It is worth emphasizing that the different values of the geometric phase in the different parameter regions can be an evident signature of the QPT in the system.

Conclusion
==========

We have investigated the quantum phase transition in a three-level atom-molecule conversion system. By using different approaches we show that the system exhibits a second-order phase transition similar to the QPT exhibited in a two-level bosonic model [@santosg10]. First, through analyzing the properties of the energy gap we derive two scaling laws and the corresponding critical exponents. It is found that the two-level model and our model belong to different universality classes. Second, we discuss the ground-state fidelity. A minimum value of the fidelity near the critical point has been found. Finally, we have calculated the ground-state geometric phase, and a discontinuous behavior at the critical point has been observed.
This phenomenon is similar to that studied in a system of a Bose-Einstein condensate in an optical cavity [@lisc11b]. It establishes a connection between the ground-state geometric phase and the QPT in an interacting atom-molecule bosonic model, following the early works on spin-chain systems [@Carolloacm2005; @zhusl2006]. In summary, we have demonstrated novel characteristic scaling laws and an abrupt change of the ground-state geometric phase in the vicinity of the critical point, which give pronounced signals of the existence of a QPT in the system. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== This work is supported by the National Fundamental Research Program of China (Contract No. 2011CB921503) and the National Natural Science Foundation of China (Contracts No. 10725521, No. 91021021, No. 11075020, and No. 11078001). [99]{} S. Sachdev, [*Quantum Phase Transitions*]{} (Cambridge University Press, Cambridge, 1999); M. Vojta, Rep. Prog. Phys. [**66**]{}, 2069 (2003). S. L. Sondhi, S. M. Girvin, J. P. Carini, and D. Shahar, Rev. Mod. Phys. [**69**]{}, 315 (1997). M. Greiner, O. Mandel, T. Esslinger, T. W. Hänsch, and I. Bloch, Nature [**415**]{}, 39 (2002). D. Tilahun, R. A. Duine, and A. H. MacDonald, Phys. Rev. A [**84**]{}, 033622 (2011). See, e.g., L. Pitaevskii and S. Stringari, [*Bose-Einstein Condensation*]{} (Clarendon Press, Oxford, 2003). J. Ruseckas, G. Juzeliūnas, P. Öhberg, and M. Fleischhauer, Phys. Rev. Lett. [**95**]{}, 010404 (2005). E. A. Donley, N. R. Claussen, S. T. Thompson, and C. E. Wieman, Nature (London) [**417**]{}, 529 (2002). K. Xu, T. Mukaiyama, J. R. Abo-Shaeer, J. K. Chin, D. E. Miller, and W. Ketterle, Phys. Rev. Lett. [**91**]{}, 210402 (2003). J. Herbig, T. Kraemer, M. Mark, T. Weber, C. Chin, H.-C. Nägerl, and R. Grimm, Science [**301**]{}, 1510 (2003). R. Wynar, R. S. Freeland, D. J. Han, C. Ryu, and D. J. Heinzen, Science [**287**]{}, 1016 (2000). T. Rom, T. Best, O. Mandel, A. Widera, M. Greiner, T. W.
Hänsch, and I. Bloch, Phys. Rev. Lett. [**93**]{}, 073002 (2004). K. Winkler, G. Thalhammer, M. Theis, H. Ritsch, R. Grimm, and J. H. Denschlag, Phys. Rev. Lett. [**95**]{}, 063202 (2005). U. Gaubatz, P. Rudecki, M. Becker, S. Schiemann, M. Külz, and K. Bergmann, Chem. Phys. Lett. [**149**]{}, 463 (1988). J. R. Kuklinski, U. Gaubatz, F. T. Hioe, and K. Bergmann, Phys. Rev. A [**40**]{}, 6741 (1989). E. Pazy, I. Tikhonenkov, Y. B. Band, M. Fleischhauer, and A. Vardi, Phys. Rev. Lett. [**95**]{}, 170403 (2005). H. Y. Ling, P. Maenner, W. Zhang, and H. Pu, Phys. Rev. A [**75**]{}, 033615 (2007). E. A. Shapiro, M. Shapiro, A. Pe’er, and J. Ye, Phys. Rev. A [**75**]{}, 013405 (2007). S.-Y. Meng, L.-B. Fu, and J. Liu, Phys. Rev. A [**78**]{}, 053410 (2008). L. B. Fu and J. Liu, Ann. Phys. [**325**]{}, 2425 (2010). F. Cui and B. Wu, Phys. Rev. A [**84**]{}, 024101 (2011). G. Santos, A. Foerster, J. Links, E. Mattei, and S. R. Dahmen, Phys. Rev. A [**81**]{}, 063621 (2010). S. C. Li and L. B. Fu, Phys. Rev. A [**84**]{}, 023605 (2011). H. Pu, P. Maenner, W. Zhang, and H. Y. Ling, Phys. Rev. Lett. [**98**]{}, 050406 (2007). H. Y. Ling, H. Pu, and B. Seaman, Phys. Rev. Lett. [**93**]{}, 250403 (2004). H. Y. Ling, P. Maenner, W. Zhang, and H. Pu, Phys. Rev. A [**75**]{}, 033615 (2007). C. A. Regal, M. Greiner, and D. S. Jin, Phys. Rev. Lett. [**92**]{}, 040403 (2004); G. B. Partridge, K. E. Strecker, R. I. Kamar, M. W. Jack, and R. G. Hulet, [*ibid*]{}. [**95**]{}, 020404 (2005). J. Links, H. Q. Zhou, R. H. McKenzie, and M. D. Gould, J. Phys. A [**36**]{}, R63 (2003); G. Santos, A. Tonel, A. Foerster, and J. Links, Phys. Rev. A [**73**]{}, 023609 (2006); J. Li, D. F. Ye, C. Ma, L. B. Fu, and J. Liu, [*ibid*]{}. [**79**]{}, 025602 (2009). P. Zanardi and N. Paunković, Phys. Rev. E [**74**]{}, 031123 (2006). H.-Q. Zhou and J. P. Barjaktarevic, J. Phys. A [**41**]{}, 412001 (2008). M. A. Nielsen and I. L.
Chuang, [*Quantum Computation and Quantum Information*]{} (Cambridge University Press, Cambridge, UK, 2000). J. Liu and L. B. Fu, Phys. Rev. A [**81**]{}, 052112 (2010). M. V. Berry, Proc. R. Soc. London, Ser. A [**392**]{}, 45 (1984). S. C. Li, L. B. Fu, and J. Liu, Phys. Rev. A [**84**]{}, 053610 (2011). A. C. M. Carollo and J. K. Pachos, Phys. Rev. Lett. [**95**]{}, 157203 (2005). S.-L. Zhu, Phys. Rev. Lett. [**96**]{}, 077206 (2006).
--- abstract: 'We investigate the stability and global existence of weak solutions to a free boundary problem governing the evolution of polymeric fluids. We construct weak solutions of the two-phase model by performing the asymptotic limit of a macroscopic model governing the suspensions of rod-like molecules (known as the Doi model) in compressible fluids as the adiabatic exponent $\gamma$ goes to $\infty.$ The convergence of these solutions, up to a subsequence, to the free-boundary problem is established using techniques in the spirit of Lions and Masmoudi [@LionsMasmoudi-1999].' address: - | Department of Engineering Computer Science and Mathematics\ University of L’Aquila\ 67100 L’Aquila, Italy. - | Department of Mathematics\ University of Maryland\ College Park, MD 20742-4015, USA. author: - Donatella Donatelli - Konstantina Trivisa title: 'On a free boundary problem for polymeric fluids: Global existence of weak solutions ' --- [^1] Introduction {#S1} ============ Motivation ---------- The evolution of rod-like molecules in polymeric fluids is of great scientific interest with a variety of applications in science and engineering. The present article deals with a free-boundary problem for the suspension of rod-like molecules in a dilute regime. This article is part of a research program whose objective is the investigation of fluids with embedded domains (large bubbles) filled with gas in the presence of rod-like molecules: standard models involve a threshold on the pressure beyond which one has the incompressible Navier-Stokes equations for the fluid and below which one has a compressible model for the gas. The model under consideration couples a Fokker-Planck-type equation on the sphere for the orientation distribution of the rods to the Navier-Stokes equations, which are now enhanced by additional stresses reflecting the orientation of the rods on the molecular level.
The coupled problem is five-dimensional (three dimensions in physical space and two degrees of freedom on the sphere) and it describes the interaction between the orientation of rod-like polymer molecules on the microscopic scale and the macroscopic properties of the fluid in which these molecules are contained. The macroscopic flow leads to a change of the orientation and, in the case of flexible particles, to a change in shape of the suspended microstructure. This process, in turn, yields the production of a fluid stress. The free-boundary problem is defined with the aid of a threshold for the pressure beyond which one has the incompressible Navier-Stokes equations for the fluid and below which one has a compressible model for the gas. Outline ------- The outline of this article is as follows: Section \[S1\] presents the main motivation for the upcoming investigation. Section \[S2\] introduces modeling aspects of the problem: the physical setting, constitutive relations, the free-boundary problem, and the statement of the problem, which outlines the main objective of the work, namely the establishment of the global existence of weak solutions to the free-boundary problem by rigorously showing that they can be obtained as the limit of weak solutions to the Doi problem for compressible fluids. Additionally, the main results of the article as well as the notion of solutions to the macroscopic system are introduced. Section \[S3\] is dedicated to the construction of a suitable approximate system and the definition of its weak solution. Section \[S4\] presents the existence of approximate solutions, which relies on the derivation of suitable a priori estimates. Section \[S5\] presents the proof of the main theorem, which relies on establishing the compactness of the sequence of solutions.
Formulation of the main problem {#S2} =============================== Notations: ---------- Before formulating the governing equations of our main problem we fix here some notation we are going to use in the paper. - $L^{p}(0,T;X)$ denotes the Banach space of Bochner measurable functions $f$ from $(0,T)$ to $X$ endowed with either the norm $\Big(\int ^{T}_{0} \|f(\cdot, t)\|^{p}_{X}dt\Big)^{\frac{1}{p}}$ for $1\leq p<\infty$ or $\displaystyle\sup_{t\in(0,T)} \|f(\cdot,t)\|_{X}$ for $p=\infty$. In particular, $f\in L^{p}(0, T; XY)$ means that the quantity $\Big(\int ^{T}_{0} \big\|\big(\|f(t)\|_{Y_{\tau}}\big)\big\|^{p}_{X}dt\Big)^{\frac{1}{p}}$ is finite for $1\leq p<\infty$, or that $\displaystyle\sup_{t\in(0,T)} \big\|\big(\|f(t)\|_{Y_{\tau}}\big)\big\|_{X}$ is finite for $p=\infty$. The notation $L^{p}_{t}L^{q}_{x}$ will abbreviate the space $L^{p}(0,T;L^{q}(\Omega))$. - ${\mathcal M}((0,T) \times \Omega)$ is the space of bounded measures on $(0,T) \times \Omega$. - $A \lesssim B $ means there is a constant $C$ such that $A \leq C B $. - $\mathbf{1}_{X}$ is the indicator function which is 1 for $x\in X$ and 0 otherwise. - $C(T)$ is a function depending only on the initial data and $T$; $C_{w}([0,T];X)$ is the space of continuous functions from $[0,T]$ to $X$ endowed with the weak topology. - $\rightharpoonup$ and $\rightarrow$ denote weak limit and strong limit, respectively. - We denote by $\overline{x_{n}}$ the weak limit of a sequence $x_{n}$. Governing equations ------------------- We start by introducing the basic equations of motion for polymeric fluids. We recall that a smooth motion of a body in continuum mechanics is described by a family of one-to-one mappings $$X(t, \cdot): \Omega \rightarrow \Omega, \quad t\in I.$$ The curve $X(t,x)$ represents the trajectory of the particle initially occupying the spatial position $x$.
A smooth motion $X$ is completely determined by a velocity field $u: I\times \Omega \rightarrow \mathbb{R}^{3}$ through $$\frac{\partial}{\partial t}X(t,x)=u(t, X(t,x)), \quad X(0,a)=a.$$ Then, the conservation of mass can be formulated as follows: $$\frac{d}{dt}\int_{X(t,B)}{\varrho}(t,x)dx=0, \quad B\subset \Omega.$$ This equation is equivalent to $$\frac{d}{dt}\int_{B}{\varrho}(t,x)dx +\int_{\partial B}{\varrho}(t,x)[u(t,x)\cdot \hat{n}]dS=0,$$ where $\hat{n}$ is the unit outer normal vector on $\partial B$. If ${\varrho}$ is smooth, one can use Green’s theorem to deduce the following continuity equation: $$\label{eq:1.1} {\varrho}_{t} +{\operatorname{div}}({\varrho}u)=0.$$ We next obtain the equation of motion by applying Newton’s second law of motion: $$\label{eq:1.3} \frac{d}{dt}\int_{B}{\varrho}(t,x)u(t,x)dx+\int_{\partial B}{\varrho}(t,x)u(t,x)[u(t,x)\cdot \hat{n}]dS=\int_{\partial B}\mathbb{T}(t,x)\hat{n}dS.$$ By applying Green’s lemma to (\[eq:1.3\]), we finally have $$\label{eq:1.4} ({\varrho}u)_{t} +{\operatorname{div}}({\varrho}u\otimes u)={\operatorname{div}}\mathbb{T}, \quad ({\operatorname{div}}\mathbb{T})_{i} =\sum_{j=1}^{3}\frac{\partial \mathbb{T}_{ij}}{\partial x_{j}}.$$ The stress tensor $\mathbb{T}$ obeys Stokes’ law: $$\mathbb{T}=\mathbb{S}-p\mathbb{I}.$$ Let us determine $\mathbb{S}$ and $p$ in our model. $\mathbb{S}$ consists of two parts: $$\mathbb{S}=\mathbb{S}_{1}+\mathbb{S}_{2},$$ where $\mathbb{S}_{1}$ is the viscous stress tensor generated by the fluid $$\mathbb{S}_{1}=\mu \big(\nabla u +(\nabla u)^{t}\big) +\lambda({\operatorname{div}}u)\mathbb{I},$$ and $\mathbb{S}_{2}$ is the macroscopic symmetric stress tensor derived from the orientation of the rods at the molecular level.
The microscopic rod orientations at time $t$ and macroscopic position $x$ are described by the probability $f(t,x,\tau) d\tau.$ The suspension stress tensor $\mathbb{S}_2$ is given by an expansion $$\mathbb{S}_2(x,t) = \sigma^{(1)}(x,t) + \sigma^{(2)}(x,t) + \sigma^{(3)}(x,t),$$ where $$\sigma^{(1)}(t,x)=\int_{S^{2}}(3\tau\otimes\tau-\mathbb{I}_{3\times3})f(t,x,\tau)d\tau,$$ $$\sigma^{(2)}(t,x) = -\sigma^{(2)}_{ij}(t,x) \mathbb{I}_{3 \times 3},\,\,\, \mbox{with} \,\,\,\sigma^{(2)}_{ij}(t,x)= \int_{S^{2}} \gamma_{ij}^{(2)}(\tau) f(t,x,\tau) d \tau,$$ and $$\sigma^{(3)}(t,x) = -\sigma^{(3)}_{ij}(t,x) \mathbb{I}_{3 \times 3},$$ with $$\sigma^{(3)}_{ij}(t,x)= \int_{S^{2}} \int_{S^{2}} \gamma_{ij}^{(3)}(\tau_1, \tau_2) f(t,x,\tau_1) f(t,x, \tau_2) d \tau_1 d\tau_2.$$ This and more general expansions for $\mathbb{S}_2$ are encountered in the polymer literature (cf. Doi and Edwards [@Doi1986]). We refer the reader to the articles by Constantin et al. [@Constantin2007], [@Constantin2008], where a general class of stress tensors is presented in the context of incompressible fluids.
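Two structural properties of $\sigma^{(1)}$ follow directly from its definition: it is symmetric, and it is trace-free, since $\operatorname{tr}(3\tau\otimes\tau-\mathbb{I}_{3\times3})=3|\tau|^{2}-3=0$ on $S^{2}$. A minimal numerical sketch of these two facts (the quadrature grid and the sample density $f$ below are our own illustration, not part of the model):

```python
import numpy as np

# Midpoint quadrature grid on the unit sphere S^2
# (theta: polar angle, phi: azimuthal angle).
n_th, n_ph = 200, 200
th = (np.arange(n_th) + 0.5) * np.pi / n_th
ph = (np.arange(n_ph) + 0.5) * 2.0 * np.pi / n_ph
TH, PH = np.meshgrid(th, ph, indexing="ij")
w = np.sin(TH) * (np.pi / n_th) * (2.0 * np.pi / n_ph)  # area weights

# Unit rod axis tau(theta, phi), shape (3, n_th, n_ph).
tau = np.stack([np.sin(TH) * np.cos(PH),
                np.sin(TH) * np.sin(PH),
                np.cos(TH)])

# A sample (hypothetical) nonnegative orientation density f(tau).
f = 1.0 + 0.5 * tau[2] ** 2

# sigma^{(1)}_{ij} = int_{S^2} (3 tau_i tau_j - delta_ij) f(tau) dtau
sigma1 = np.einsum("ikl,jkl,kl,kl->ij", tau, tau, 3.0 * f, w) \
         - np.eye(3) * np.sum(f * w)

print(np.trace(sigma1))                  # ~ 0, since 3|tau|^2 - 3 = 0 on S^2
print(np.abs(sigma1 - sigma1.T).max())   # 0, symmetric by construction
```

Any nonnegative density $f$ yields the same two properties; only the quadrature accuracy depends on the grid resolution.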
The structure coefficients in the expansion $\gamma_{ij}^{(2)}, \gamma_{ij}^{(3)}$ are in general smooth, time independent, $x$ independent, and do not depend on $f.$ Assuming for simplicity that $$\gamma_{ij}^{(2)} (\tau) = \gamma_{ij}^{(3)} (\tau_1, \tau_2) =1$$ and denoting $$\eta(t,x)=\int_{S^{2}}f(t,x,\tau)d\tau,$$ the suspension stress tensor $\mathbb{S}_2$ takes the form $$\label{eq:1.5} \mathbb{S}_2(x,t) = \sigma^{(1)}(x,t) - \eta \mathbb{I}_{3 \times 3} - \eta^2 \mathbb{I}_{3 \times 3}.$$ In this setting, $f(t,x,\tau) d\tau$ gives the time-dependent probability that a rod with center of mass at $x$ has its axis in the area element $d\tau$ around $\tau$; the evolution of $f$ is governed by a compressible Fokker-Planck-type equation, $$\label{eq:1.6} f_{t}+{\operatorname{div}}(f{{{\bm{u}}}})+\nabla_{\tau}\cdot(P_{\tau^{\perp}}\nabla u \tau f)-D_{\tau}\Delta_{\tau} f-D\Delta f=0,$$ where $ P_{\tau^{\perp}}(\nabla_{x} u \tau)=\nabla_{x}u \tau-(\tau\cdot \nabla_{x}u \tau)\tau$ is the projection of $\nabla u \tau$ on the tangent space of $S^{2}$ at $\tau \in S^{2}$. With $\nabla_\tau$ and $\Delta_\tau$ we denote the gradient and the Laplace operator on the unit sphere, while $\nabla$ and $\Delta$ represent the gradient and the Laplacian operator in $\mathbb{R}^3$. The second term ${\operatorname{div}}(f{{{\bm{u}}}})$ in (\[eq:1.6\]) describes the change of $f$ due to the displacement of the centers of mass of the rods by macroscopic advection. The term $\nabla_{\tau}\cdot(P_{\tau^{\perp}}\nabla u \tau f)$ is a drift term on the sphere, which represents the shear forces acting on the rods. The term $D_{\tau}\Delta_{\tau} f$ represents the rotational diffusion due to Brownian motion. This effect causes the rods to change their orientation spontaneously, whereas the term $D\Delta f$ is the translational diffusion due to Brownian effects.
By integrating (\[eq:1.6\]) over $S^{2}$, we obtain the equation for $\eta$: $$\label{eq:1.7} \eta_{t}+{\operatorname{div}}(\eta{{{\bm{u}}}})-D\Delta \eta=0.$$ The pressure due to the fluid is denoted by $p_F$ and is given by $$\label{eq:1.8} p_F =\pi.$$ The pressure due to the presence of the dispersed particles, denoted by $p_P$, is of the form $$\label{eq:1.9} p_P = \eta + \eta^2.$$ The overall pressure of the mixture is denoted by $P(\pi, \eta)$ and is given by $$P(\pi, \eta) = p_F + p_P = \pi+ \eta + \eta^2.$$ By substituting (\[eq:1.5\]) and (\[eq:1.8\]) into (\[eq:1.4\]), the equation of motion becomes $$\label{eq:1.9a} \partial_t ({\varrho}{{{\bm{u}}}})+{\operatorname{div}}({\varrho}{{{\bm{u}}}}\otimes {{{\bm{u}}}})-\mu \Delta {{{\bm{u}}}}- \lambda \nabla {\operatorname{div}}{{{\bm{u}}}}+\nabla (\pi+\eta+ \eta^{2})={\operatorname{div}}\sigma^{(1)}.$$ Finally, we have the following system of equations for the polymeric fluid in $(0,T)\times \Omega$: $$\partial_t {\varrho}+{\operatorname{div}}({\varrho}{{{\bm{u}}}})=0, \label{eq:1.10 a}$$ $$\partial_t({\varrho}{{{\bm{u}}}}) +{\operatorname{div}}({\varrho}{{{\bm{u}}}}\otimes {{{\bm{u}}}})-\Delta {{{\bm{u}}}}- \nabla {\operatorname{div}}{{{\bm{u}}}}+\nabla P({\varrho}, \eta) ={\operatorname{div}}\sigma^{(1)}, \label{eq:1.10 b}$$ $$\partial_t f +{\operatorname{div}}(f {{{\bm{u}}}}) +\nabla_{\tau} \cdot(P_{\tau^{\perp}}(\nabla_{x} u \tau)f)- \Delta_{\tau}f- \Delta_{x}f=0, \label{eq:1.10 c}$$ $$\label{eq:1.10 d} \eta_{t}+\nabla\cdot(\eta{{{\bm{u}}}})-\Delta \eta=0.$$ For the sake of simplicity, all the coefficients are normalized to 1.
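For completeness, the step from (\[eq:1.6\]) to (\[eq:1.7\]) can be sketched as follows; it uses only the fact that $S^{2}$ is a closed surface, so the tangential terms integrate to zero by the divergence theorem on the sphere:

```latex
% Integrating (eq:1.6) in tau over S^2: no boundary terms appear,
\int_{S^{2}}\nabla_{\tau}\cdot\big(P_{\tau^{\perp}}\nabla u\,\tau\, f\big)\,d\tau=0,
\qquad
\int_{S^{2}}\Delta_{\tau} f\,d\tau
  =\int_{S^{2}}\nabla_{\tau}\cdot\big(\nabla_{\tau} f\big)\,d\tau=0,
% while the x-derivatives commute with the tau-integration:
\int_{S^{2}}\big(f_{t}+\operatorname{div}(f\bm{u})-D\Delta f\big)\,d\tau
  =\eta_{t}+\operatorname{div}(\eta\bm{u})-D\Delta\eta .
```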
We consider (\[eq:1.10 a\]), (\[eq:1.10 b\]), (\[eq:1.10 c\]), (\[eq:1.10 d\]) on a bounded domain with Dirichlet boundary conditions, $$u=0, \,\, f=0,\, \mbox{and} \, \, \eta=0 \, \mbox{ on} \,\, \partial \Omega.$$ Statement of the problem ------------------------ In this article we are concerned with the free-boundary problem for the system (\[eq:1.10 a\])-(\[eq:1.10 d\]), which is defined with the aid of a threshold for the pressure beyond which one has the incompressible Navier-Stokes equations for the fluid and below which one has a compressible model for the gas. In sum, the free boundary problem [**(P)**]{} is governed by the following equations in $(0,T)\times \Omega$: $$\label{2.14} \partial_t {\varrho}+{\operatorname{div}}({\varrho}{{{\bm{u}}}})=0 $$ $$\quad 0 \le {\varrho}\le 1 \nonumber $$ $$\label{2.15} \partial_t ({\varrho}{{{\bm{u}}}}) +{\operatorname{div}}({\varrho}{{{\bm{u}}}}\otimes {{{\bm{u}}}})+ \nabla(\pi + \eta + \eta^2) = {\operatorname{div}}\mathbb{S} + {\operatorname{div}}\sigma $$ $$\label{2.16} \partial_t f+{\operatorname{div}}(f {{{\bm{u}}}}) +\nabla_{\tau} \cdot(P_{\tau^{\perp}}(\nabla_{x} {{{\bm{u}}}}\tau)f)- \Delta_{\tau}f- \Delta_{x}f=0 $$ $$\label{2.17} \partial_t \eta + {\operatorname{div}}(\eta {{{\bm{u}}}}) - \Delta \eta = 0$$ and the free boundary conditions $$\label{2.18} {\operatorname{div}}{{{\bm{u}}}}= 0 \quad \mbox{a.e.} \,\, \mbox{on} \,\, \{ {\varrho}= 1\}$$ $$\label{2.19} \pi \ge 0 \quad \mbox{a.e.} \,\, \mbox{in} \,\, \{ {\varrho}= 1\}$$ $$\label{2.20} \pi = 0 \quad \mbox{a.e.} \,\, \mbox{in} \,\, \{ {\varrho}< 1\}$$ The unknowns here are the density ${\varrho},$ the velocity vector field ${{{\bm{u}}}},$ the pressure $\pi,$ which is the Lagrange multiplier associated with the incompressibility constraint ${\operatorname{div}}{{{\bm{u}}}}= 0$ a.e. in $\{{\varrho}= 1\},$ the orientation distribution $f$ and the particle density $\eta$.
Note that $\pi$ is present only in the congested regions $\{{\varrho}=1\}.$ In fact, conditions (\[2.19\]) and (\[2.20\]) can be rewritten, in an equivalent way, as the single constraint $$\nonumber {\varrho}\pi = \pi\ge 0. $$ The physical properties of the mixture are reflected through the following constitutive relations. [**Constitutive relations**]{} $$\nonumber P(\pi, \eta) = p_F + p_P = \pi + \eta + \eta^2.$$ $$\nonumber \mathbb{S} = \mu (\nabla {{{\bm{u}}}}+ \nabla {{{\bm{u}}}}^{T}) + \xi {\operatorname{div}}{{{\bm{u}}}}\mathbb{I}$$ $$\nonumber {\operatorname{div}}\mathbb{S}=\mu \Delta {{{\bm{u}}}}- \xi \nabla {\operatorname{div}}{{{\bm{u}}}}$$ $$\nonumber \sigma(t,x) = \sigma^{(1)}(t,x)=\int_{S^{2}}(3\tau\otimes\tau-\mathbb{I}_{3\times3})f(t,x,\tau)d\tau,$$ where, for the sake of simplicity, all the coefficients will be normalized to 1. [**Boundary conditions**]{} We consider the problem [**(P)**]{} on a bounded domain with Dirichlet boundary conditions. $$\label{2.22} {{{\bm{u}}}}=0, \,\, f=0,\, \mbox{and} \, \, \eta=0, \, \mbox{ on} \,\ \partial \Omega.$$ [**Initial data**]{} The system must be complemented with initial conditions, namely $$\label{2.23} {\varrho}|_{t=0} = {\varrho}_{0},\quad {\varrho}{{{\bm{u}}}}|_{t=0} = m_{0}, \quad \eta|_{t=0}=\eta_{0},\quad f|_{t=0}=f_{0}$$ where $$\begin{aligned} & & 0 \le {\varrho}_{0} \le 1, \quad {\varrho}_{0} \in L^1(\Omega)\nonumber\\ & & m_{0} \in L^2 (\Omega), \quad m_{0} =0 \,\, \mbox{a.e.
on }\,\, \{{\varrho}_{0} = 0\}\nonumber\\ & & {\varrho}_{0} \not \equiv 0,\,\,\, \int_{\Omega}\!\!\!\!\!\!\!-{\varrho}_{0}=M<1\label{2.29}\nonumber \\ && {\varrho}_{0} |{{{\bm{u}}}}_{0}|^{2} \in L^2(\Omega),\quad {{{\bm{u}}}}_{0} = \frac{m_{0}}{{\varrho}_{0}}\,\, \mbox{on} \,\, \{{\varrho}_{0} >0\}\nonumber \\ && \eta_{0}\in L^{2}(\Omega)\nonumber\\ && f_{0}\in L^{1}(\Omega\times S^{2})\nonumber\end{aligned}$$ The goal of this paper is to prove the existence of weak solutions to the free-boundary problem (\[2.14\])-(\[2.20\]), so we introduce the notion of weak solution we are going to use throughout the paper. \[def1\] [**\[Weak solution of the problem (P)\]**]{}\ A vector $({\varrho}, {{{\bm{u}}}},\pi, f, \eta)$ is called a weak solution to (\[2.14\])-(\[2.20\]) with boundary data (\[2.22\]) and initial data (\[2.23\]) if the equations $$\nonumber \partial_t {\varrho}+{\operatorname{div}}({\varrho}{{{\bm{u}}}})=0$$ $$\nonumber \partial_t ({\varrho}{{{\bm{u}}}}) +{\operatorname{div}}({\varrho}{{{\bm{u}}}}\otimes {{{\bm{u}}}})+ \nabla(\pi + \eta + \eta^2) = {\operatorname{div}}\mathbb{S} + {\operatorname{div}}\sigma$$ $$\nonumber \partial_t f+{\operatorname{div}}(f {{{\bm{u}}}}) +\nabla_{\tau} \cdot(P_{\tau^{\perp}}(\nabla_{x} {{{\bm{u}}}}\tau)f)- \Delta_{\tau}f- \Delta_{x}f=0$$ $$\nonumber \partial_t \eta + {\operatorname{div}}(\eta {{{\bm{u}}}}) - \Delta \eta = 0$$ are satisfied in the sense of distributions, the divergence-free condition ${\operatorname{div}}{{{\bm{u}}}}= 0$ is satisfied a.e. in $\{ {\varrho}= 1\},$ the constraint $0\le {\varrho}\le 1$ is satisfied a.e.
in $(0,T) \times \Omega$ and the following regularity properties hold $${\varrho}\in C([0,T]; L^p(\Omega)), \,\, 1\le p < \infty,$$ $${{{\bm{u}}}}\in L^2(0,T;(W_0^{1,2}(\Omega))),\,\, {\varrho}|{\bf u}|^2 \in L^{\infty}(0, T;L^1(\Omega)),$$ $$\pi \in {\mathcal M}((0,T) \times \Omega)$$ $$\eta \in L^{\infty}(0,T; L^{2}(\Omega))\cap L^{2}(0,T; \dot{H}^{1}(\Omega)), \quad f\ln f \in L^{\infty}(0,T; L^{1}(\Omega\times S^{2}))$$ $$\nabla _{\tau}\sqrt{f} \in L^{2}((0,T)\times\Omega\times S^{2} ), \quad \nabla\sqrt{f} \in L^{2}((0,T)\times\Omega\times S^{2}).$$ Moreover, $\pi$ is sufficiently regular that the condition $$\pi({\varrho}-1)=0$$ is satisfied in the sense of distributions. The objective of this work is to prove the existence of weak solutions to the free-boundary problem (\[2.14\])-(\[2.20\]) by showing rigorously that they can be obtained as a limit of $({\varrho}_n, {{{\bm{u}}}}_n, f_n, \eta_n)$, the weak solutions to the Doi model for compressible fluids $$\partial_t {\varrho}_n +{\operatorname{div}}({\varrho}_n {{{\bm{u}}}}_n)=0, \label{2.32}$$ $$\partial_t({\varrho}_n {{{\bm{u}}}}_n) +{\operatorname{div}}({\varrho}_n {{{\bm{u}}}}_n \otimes {{{\bm{u}}}}_n)-\Delta {{{\bm{u}}}}_n-\nabla {\operatorname{div}}{{{\bm{u}}}}_n+\nabla (\pi_n+ \eta_n+ \eta_n^2) ={\operatorname{div}}\sigma_n, \label{2.33}$$ $$\partial_t f_n +{\operatorname{div}}(f_n {{{\bm{u}}}}_n) +\nabla_{\tau} \cdot(P_{\tau^{\perp}}(\nabla_{x} {{{\bm{u}}}}_n \tau)f_n)- \Delta_{\tau}f_n- \Delta_{x}f_n=0, \label{2.34}$$ $$\label{2.35} \partial_t \eta_n+\nabla\cdot(\eta_n{{{\bm{u}}}}_n)-\Delta \eta_n=0,$$ where $$\pi_{n}=({\varrho}_{n})^{\gamma_{n}},\ \gamma_{n}\to \infty,\ \text{as $n\to \infty$}.$$ Finally, we want to point out that the solution we are going to obtain satisfies the following energy inequality $$\label{energy} \begin{split} &\int_{\Omega} \Big(\frac{\rho|{{{\bm{u}}}}|^{2}}{2}+\eta^{2} +\psi \Big)(t)dx + 4\int^{t}_{0}\int_{\Omega}\int_{S^{2}} |\nabla_{\tau}\sqrt{f}|^{2}d\tau dxdt \\ &+4 \int^{t}_{0} \int_{\Omega}
\int_{S^{2}} |\nabla \sqrt{f}|^{2}d\tau dxdt +\int^{t}_{0} \int_{\Omega}\Big(|\nabla {{{\bm{u}}}}|^{2} + |{\operatorname{div}}{{{\bm{u}}}}|^{2} +2|\nabla \eta|^{2}\Big)dxdt \\ & \leq \int_{\Omega} \Big(\frac{\rho_{0}|u_{0}|^{2}}{2}+\eta^{2}_{0} +\psi_{0} \Big)dx, \end{split}$$ where $$\psi(t,x)=\int_{S^{2}} (f\ln f)(t,x,\tau) d\tau.$$ Main results ------------ Now we are ready to state the main existence results for our problem. Assume that the boundary conditions (\[2.22\]) and the initial conditions (\[2.23\]) are satisfied. Then, there exists a weak solution (in the sense of Definition \[def1\]) of the problem (\[2.14\])-(\[2.20\]). \[MT\] The main Theorem \[MT\] will be obtained as a consequence of the following result. For each fixed $n \in {\mathbb{N}}$ there exists a global weak solution $({\varrho}_n, {{{\bm{u}}}}_n, \pi_n, f_n, \eta_n)$ to (\[2.32\])-(\[2.35\]) in the sense of Definition \[def2\], such that, as $n \to \infty$, $$\label{2.1.1} ({\varrho}_{n}-1)_{+}\rightarrow 0\qquad \text{in $L^{\infty}(0,T;L^{p})$, for any $1\leq p<\infty.$}$$ Moreover, $$({\varrho}_{n})^{\gamma_{n}} \qquad \text{is bounded in $L^{1}$, for $n$ such that $\gamma_{n}\geq 3$,}$$ and up to a subsequence there exists $\pi\in \mathcal{M}((0,T)\times \Omega)$ such that $$({\varrho}_{n})^{\gamma_{n}} \rightharpoonup \pi, \qquad \text{as $n\to \infty$}.$$ If in addition ${\varrho}_{n0}\to{\varrho}_{0}$ in $L^{1}$, then the following convergences hold: $${\varrho}_n \rightharpoonup {\varrho}\,\, \mbox{weakly in} \,\, L^p((0,T) \times \Omega), \,\, 1\le p < +\infty,$$ $${\varrho}_n {{{\bm{u}}}}_n \rightharpoonup {\varrho}{{{\bm{u}}}}\,\, \mbox{weakly in} \,\, L^p(0,T; L^{r}(\Omega)),\ 1\le p < +\infty,\ 1\leq r<2,$$ $${\varrho}_n {{{\bm{u}}}}_n\otimes {{{\bm{u}}}}_n \rightharpoonup {\varrho}{{{\bm{u}}}}\otimes{{{\bm{u}}}}\,\, \mbox{weakly in} \,\, L^p(0,T; L^{1}(\Omega)),\ 1\le p < +\infty,$$ $$f_n \rightharpoonup f \,\, \mbox{weakly in} \,\, L^2(0,T; L^{6/5}(\Omega\times S^{2})),$$ $$\eta_n \rightarrow \eta \,\, \mbox{strongly in} \,\,
L^{2}(0,T;L^{2}(\Omega)),$$ $0\le {\varrho}\le 1$ and $({\varrho}, {{{\bm{u}}}}, \pi, f, \eta)$ is a weak solution to the problem (\[2.14\])-(\[2.20\]) in the sense of Definition \[def1\]. \[MT2\] The rest of the paper is devoted to the proof of Theorems \[MT\] and \[MT2\]. Approximating problem {#S3} ===================== We now describe the approximating scheme we are going to use. Let $\gamma_{n}$ be a sequence of real numbers such that $\gamma_{n}>\frac{3}{2}$ for any $n\in {\mathbb{N}}$ and $\gamma_{n}\to \infty$ as $n\to \infty$. We define $\{{\varrho}_{n}, {{{\bm{u}}}}_{n}, f_{n}, \eta_{n}\}$ as solutions of the following system $$\label{4.1} \partial_t {\varrho}_n+{\operatorname{div}}({\varrho}_n {{{\bm{u}}}}_n)=0, \quad {\varrho}_n \ge 0$$ $$\label{4.2} \partial_t ({\varrho}_n {{{\bm{u}}}}_n) +{\operatorname{div}}({\varrho}_n {{{\bm{u}}}}_n\otimes {{{\bm{u}}}}_n) - \Delta {{{\bm{u}}}}_n - \nabla {\operatorname{div}}{{{\bm{u}}}}_n + \nabla (\pi_{n} + \eta_n + \eta_n^2) = {\operatorname{div}}\sigma_n$$ $$\label{4.3} \partial_t f_n+{\operatorname{div}}(f_n {{{\bm{u}}}}_n) +\nabla_{\tau} \cdot(P_{\tau^{\perp}}(\nabla_{x} {{{\bm{u}}}}_n \tau)f_n)- \Delta_{\tau}f_n- \Delta_{x}f_n=0$$ $$\label{4.4} \partial_t \eta_n + {\operatorname{div}}(\eta_n {{{\bm{u}}}}_n) - \Delta \eta_n = 0,$$ where $$\pi_{n}=({\varrho}_n)^{\gamma_n}$$ and $$\label{4.5} \sigma_n(t,x) = \int_{S^{2}}(3\tau\otimes\tau-\mathbb{I}_{3\times3})f_n(t,x,\tau)d\tau$$ The approximating system must be complemented with boundary and initial data as follows.
[**Boundary data**]{} $$\label{b1} {{{\bm{u}}}}_{n}=0, \,\, f_{n}=0,\, \mbox{and} \, \, \eta_{n}=0, \, \mbox{ on} \,\ \partial \Omega.$$ [**Initial data**]{} $${\varrho}_n|_{t=0} = {\varrho}_{n_0}, \quad {\varrho}_n {{{\bm{u}}}}_n|_{t=0} = m_{n_0}, \quad \eta_{n}|_{t=0}=\eta_{n_0},\quad f_{n}|_{t=0}=f_{n_0} \label{i1}$$ where $$\begin{aligned} & & 0 \le {\varrho}_{n_0} \quad \mbox{a.e.}, \quad {\varrho}_{n_{0}} \in L^1(\Omega) \cap L^{\gamma_n}(\Omega), \nonumber\\ & & \int ({\varrho}_{n_0})^{\gamma_n} dx \le c \gamma_n \,\, \mbox{for some}\,\, c, \label{i5}\\ & & \quad \quad \quad m_{n_0} \in L^{\frac{2 \gamma_n}{\gamma_n + 1}} (\Omega), \nonumber \\ && {\varrho}_{n_0} |{{{\bm{u}}}}_{n_0}|^2 \,\, \mbox{is bounded in}\,\, L^1(\Omega), \nonumber\\ && \quad \quad \quad {{{\bm{u}}}}_{n_0} = \frac{m_{n_0}}{{\varrho}_{n_0}}\,\, \mbox{on} \,\, \{{\varrho}_{n_0} >0\}, \nonumber\\ && \quad \quad \quad {{{\bm{u}}}}_{n_0} = 0\,\, \mbox{on} \,\, \{{\varrho}_{n_0} = 0\}, \nonumber\\ && \quad \quad \quad f_{n_0} \in L^1(\Omega \times S^{2}), \nonumber\\ && \quad \quad \quad \eta_{n_0} \in L^2(\Omega).\nonumber\end{aligned}$$ Furthermore we assume that $$M_{n}=\int_{\Omega}\!\!\!\!\!\!\!-{\varrho}_{n_0},\quad 0<M_{n}<M<1,\quad M_{n}\to M. \label{i4}$$ $$\nonumber {\varrho}_{n_0}{{{\bm{u}}}}_{n_0}\rightharpoonup m_{0}\quad \text{weakly in $L^{2}(\Omega)$,}$$ $${\varrho}_{n_0}\rightharpoonup {\varrho}_{0}\quad \text{weakly in $L^{1}(\Omega)$.} \label{i3}$$ Definition of weak solution of the approximate system {#sec:2.2} ----------------------------------------------------- For any fixed $\gamma_{n}>3/2$ we now define the notion of weak solution of the system (\[4.1\])-(\[4.4\]), with initial data (\[i1\]) and boundary data (\[b1\]). \[def2\] For any fixed $\gamma_{n}>3/2$, we say $\{{\varrho}_n, {{{\bm{u}}}}_n, f_n, \eta_n, \sigma_n\}$ is a weak solution of the system (\[4.1\])-(\[4.4\]) if 1.
\(i) $${\varrho}_{n} \in L^{\infty}(0,T; L^{\gamma_{n}}(\Omega)), \quad \nabla {{{\bm{u}}}}_n \in L^{2}(0,T; L^{2}(\Omega)),$$ $${\varrho}_n|{{{\bm{u}}}}_n|^{2}\in L^{\infty}(0,T; L^{1}(\Omega)), \quad {\varrho}_n {{{\bm{u}}}}_n \in C_{w}([0,T];L^{\frac{2\gamma_{n}}{\gamma_{n}+1}}(\Omega)),$$ $$\eta_n \in L^{\infty}(0,T; L^{2}(\Omega))\cap L^{2}(0,T; \dot{H}^{1}(\Omega)), \quad f_n\ln f_n \in L^{\infty}(0,T; L^{1}(\Omega\times S^{2}))$$ $$\nabla _{\tau}\sqrt{f_n} \in L^{2}((0,T)\times\Omega\times S^{2}), \quad \nabla\sqrt{f_n} \in L^{2}((0,T)\times\Omega\times S^{2}),$$ 2. \(ii) the continuity equation (\[4.1\]) holds in the sense of renormalized solutions, i.e., $$\label{eq:2.10} \partial_{t}(b({\varrho}_n)) +{\operatorname{div}}(b({\varrho}_n){{{\bm{u}}}}_n)+\left(b'({\varrho}_n){\varrho}_n-b({\varrho}_n)\right){\operatorname{div}}{{{\bm{u}}}}_n=0$$ holds in the sense of distributions for any $b\in C^{1}$ such that $|b^{'}(z)z|+|b(z)| \leq C$ for all $z\in \mathbb{R}$, 3. \(iii) (\[4.2\]), (\[4.3\]), (\[4.4\]), and (\[4.5\]) hold in the sense of distributions, 4.
\(iv) the following energy inequality is satisfied: $$\label{4.19} \begin{split} &\int_{\Omega} \Big[\frac{{\varrho}_n |{{{\bm{u}}}}_n|^{2}}{2}+\frac{{\varrho}_n^{\gamma_{n}}}{\gamma_{n}-1}+\eta_n^{2} +\psi_n \Big](t) dx + 4 \int_0^t \int_{\Omega}\int_{S^{2}} |\nabla_{\tau}\sqrt{f_n}|^{2}d\tau dx dt + \\ &4 \int_0^t \int_{\Omega} \int_{S^{2}} |\nabla \sqrt{f_n}|^{2}d\tau dx dt + \int_0^t \int_{\Omega}\Big[|\nabla {{{\bm{u}}}}_n|^{2} + |{\operatorname{div}}{{{\bm{u}}}}_n|^{2} +2|\nabla \eta_n|^{2}\Big]dx dt \leq\\ & \int_{\Omega} \Big[\frac{{\varrho}_{n_0} |{{{\bm{u}}}}_{n_0}|^{2}}{2}+\frac{{\varrho}_{n_0}^{\gamma_{n}}}{\gamma_{n}-1}+\eta_{n_0}^{2} +\psi_{n_0} \Big] dx=E_{n_{0}}, \end{split}$$ where $$\psi_n(t,x)=\int_{S^{2}} (f_n\ln f_n)(t,x,\tau) d\tau.$$ Existence of approximate solutions {#S4} ================================== For any fixed $n\in {\mathbb{N}}$, the existence of weak solutions for the system (\[4.1\])-(\[4.4\]) has been proved by Bae and Trivisa in [@BT2012] (we refer the reader to [@BT2013] for the treatment of the Doi model for incompressible polymeric fluids). Their existence result can be summarized as follows. \[thm:2.2\] Let $\gamma_{n}>\frac{3}{2}$ and $\Omega$ be a $C^{1}$ bounded domain. Assume that the initial data $\{{\varrho}_{n_0}, {{{\bm{u}}}}_{n_0}, f_{n_0}, \eta_{n_0}\}$ satisfy (\[i1\])-(\[i3\]) and the boundary conditions (\[b1\]) hold. Then, there exists a weak solution (in the sense of Definition \[def2\]) $\{{\varrho}_n, {{{\bm{u}}}}_n, f_n, \eta_n, \sigma_n\}$ of the system (\[4.1\])-(\[4.4\]) satisfying (\[i1\]) at $t=0$. By following the same line of argument as in [@BT2012], we recall below the main compactness properties of the approximate solution $\{{\varrho}_n, {{{\bm{u}}}}_n, f_n, \eta_n, \sigma_n\}$. Energy estimates of the approximating system -------------------------------------------- Besides the bounds mentioned in (i) of Definition \[def2\] we can collect some further estimates satisfied by the solutions $\{{\varrho}_n, {{{\bm{u}}}}_n, f_n, \eta_n, \sigma_n\}$.
By the energy inequality and the Sobolev embedding $\dot{H}^{1}\subset L^{6}$, we can estimate $\sqrt{f_{n}}$ as $$\sqrt{f_{n}} \in L^{2}\big(0,T; L^{2}(\Omega)L^{6}(S^{2}) \cap L^{6}(\Omega)L^{2}(S^{2})\big).$$ This implies that $$\label{eq:2.7} f_{n}\in L^{1}\big(0,T; L^{1}(\Omega)L^{3}(S^{2}) \cap L^{3}(\Omega)L^{1}(S^{2})\big) \subset L^{1}(0,T; L^{2}(\Omega\times S^{2})).$$ We finally estimate $\sigma_{n}$. Since $|\sigma_{n}(t,x)| \leq 3 \displaystyle\int_{S^{2}}f_{n}(t,x,\tau) d\tau=3\eta_{n}(t,x)$, $$\nonumber \sigma_{n} \in L^{1}(0,T; L^{3}(\Omega)) \cap L^{\infty}(0,T; L^{2}(\Omega))$$ where the first space is derived by integrating $f_{n}$ over $S^{2}$ using (\[eq:2.7\]) and the second bound is from $\eta_{n} \in L^{\infty}(0,T; L^{2}(\Omega))$. We next estimate the derivative of $\sigma_{n}$. By using the entropy dissipation, $$\begin{split} |\nabla \sigma_{n}(t,x)| &\leq 3 \int_{S^{2}}|\nabla f_{n}(t,x,\tau)|d\tau \lesssim \left[\int_{S^{2}} |\nabla \sqrt{f_{n}}|^{2}d\tau \right]^{\frac{1}{2}} \left[\int_{S^{2}}(\sqrt{f_{n}})^{2}d\tau \right]^{\frac{1}{2}}\\ &=\left[\int_{S^{2}} |\nabla\sqrt{f_{n}}|^{2}d\tau \right]^{\frac{1}{2}} (\eta_{n})^{\frac{1}{2}}. \end{split}$$ Since $$(\eta_{n})^{\frac{1}{2}} \in L^{\infty}(0,T; L^{4}(\Omega)) \cap L^{2}(0,T; L^{6}(\Omega)), \quad \left[\int_{S^{2}} |\nabla\sqrt{f_{n}}|^{2}d\tau \right]^{\frac{1}{2}} \in L^{2}(0,T; L^{2}(\Omega)),$$ we have $$\nonumber \nabla \sigma_{n} \in L^{1}(0,T; L^{\frac{3}{2}}(\Omega)) \cap L^{2}(0,T; L^{\frac{4}{3}}(\Omega)).$$ Moreover, as we will see in Section \[S4\], we will be able to show the following uniform bound, in $n\in {\mathbb{N}}$, for ${\varrho}_{n}$ $$\nonumber {\varrho}_{n}\in L^{\infty}(0,T; L^{1}\cap L^{p}(\Omega)),\qquad 1\leq p<+\infty. 
$$ Extracting a subsequence, using the same notation, $\{{\varrho}_{n}, {{{\bm{u}}}}_{n}, f_{n}, \eta_{n}, \sigma_{n}\}_{n \ge 1}$, we have various limit functions such as $$\begin{split} & {\varrho}_{n} \rightharpoonup {\varrho}\hspace{0.2cm} \text{in} \hspace{0.2cm} L^{\infty}(0,T;L^{p}(\Omega)), \quad {\varrho}\in L^{\infty}(0,T; L^{1}\cap L^{p}(\Omega)), \hspace{0.2cm} 1\leq p<+\infty,\\ & \sqrt{{\varrho}_{n}} {{{\bm{u}}}}_{n} \rightharpoonup v \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(0,T; L^{2}(\Omega)), \quad v \in L^{\infty}(0,T; L^{2}(\Omega)),\\ & {{{\bm{u}}}}_{n} \rightharpoonup {{{\bm{u}}}}\hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(0,T; H^{1}(\Omega)),\\ & {\varrho}_{n} {{{\bm{u}}}}_{n} \rightharpoonup m \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{\frac{2p}{p+1}}(\Omega\times(0,T)), \quad m \in L^{\infty}(0,T; L^{\frac{2p}{p+1}}(\Omega)), \hspace{0.2cm} 1\leq p<+\infty, \\ & {\varrho}_{n}{{{\bm{u}}}}_{n_i}{{{\bm{u}}}}_{n_j} \rightharpoonup e_{ij} \hspace{0.2cm} \text{in the sense of measures}, \\ &\hspace{3cm} e_{ij} \hspace{0.2cm} \text{is a bounded measure}, \\ & f_{n} \rightharpoonup f \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(0,T; L^{\frac{6}{5}}(\Omega\times S^{2})),\\ & \eta_{n} \rightharpoonup \eta \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(0,T; H^{1}(\Omega)),\quad \eta \in L^{\infty}(0,T; L^{2}(\Omega)) \cap L^{2}(0,T; H^{1}(\Omega)),\\ & \sigma_{n} \rightharpoonup \sigma \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(0,T; L^{2}(\Omega)), \quad \sigma \in L^{\infty}(0,T; L^{2}(\Omega)) \cap L^{1}(0,T; L^{3}(\Omega)),\\ &\nabla \sigma_{n} \rightharpoonup \nabla \sigma \hspace{0.1cm} \text{in} \hspace{0.1cm} L^{2}(0,T; L^{\frac{4}{3}}(\Omega)), \quad \nabla \sigma \in L^{1}(0,T; L^{\frac{3}{2}}(\Omega)) \cap L^{2}(0,T; L^{\frac{4}{3}}(\Omega)). 
\end{split} \label{eq:2.16}$$ Finally, we state the following compactness results (for the proof we refer to [@BT2012], Proposition 2.1).

\[prop:2.3\] The limit functions in (\[eq:2.16\]) satisfy the following statements.

1. \(i) $v=\sqrt{{\varrho}}{{{\bm{u}}}}$, $m={\varrho}{{{\bm{u}}}}$, $e_{ij}={\varrho}{{{\bm{u}}}}_{i} {{{\bm{u}}}}_{j}$.

2. \(ii) $\eta_{n}$ converges strongly to $\eta$ in $L^{2}(\Omega\times (0,T))$, and $\sigma_{n}$ converges strongly to $\sigma$ in $L^{2}(\Omega \times (0,T))$.

3. \(iii) ${\varrho}_{n}(\eta_{n})^{2}$ converges to ${\varrho}\eta^{2}$ in the sense of distributions.

4. \(iv) ${\varrho}$ and ${{{\bm{u}}}}$ solve (\[eq:1.10 a\]) in the sense of renormalized solutions.

5. \(v) If in addition we assume that ${\varrho}_{n0}$ converges to ${\varrho}_{0}$ in $L^{1}(\Omega)$, then $$\nonumber {\varrho}_{n} \rightarrow {\varrho}\hspace{0.2cm} \text{in} \hspace{0.2cm} L^{1}(\Omega\times(0,T)) \cap C([0,T]; L^{p}(\Omega)) \hspace{0.2cm} \text{for all} \hspace{0.2cm} 1\leq p<+\infty.$$

6. \(vi) Finally, we have the following strong convergence: $$\nonumber \begin{split} & {\varrho}_{n}{{{\bm{u}}}}_{n} \rightarrow {\varrho}{{{\bm{u}}}}\hspace{0.2cm} \text{in} \hspace{0.2cm} L^{p}(0,T; L^{r}(\Omega)) \hspace{0.2cm} \text{for all} \hspace{0.2cm} 1\leq p<\infty, \quad 1\leq r<2, \\ & {{{\bm{u}}}}_{n} \rightarrow {{{\bm{u}}}}\hspace{0.2cm} \text{in} \hspace{0.2cm} L^{q}(\Omega\times (0,T))\cap\{{\varrho}_{n}>0\} \hspace{0.2cm} \text{for all} \hspace{0.2cm} 1\leq q<2, \\ & {{{\bm{u}}}}_{n} \rightarrow {{{\bm{u}}}}\hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(\Omega\times (0,T))\cap\{{\varrho}_{n} \ge \delta\} \hspace{0.2cm} \text{for all} \hspace{0.2cm} \delta>0, \\ & {\varrho}_{n}{{{\bm{u}}}}_{ni}{{{\bm{u}}}}_{nj} \rightarrow {\varrho}{{{\bm{u}}}}_{i}{{{\bm{u}}}}_{j} \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{p}(0,T; L^{1}(\Omega)) \hspace{0.2cm} \text{for all} \hspace{0.2cm} 1\leq p<\infty. 
\end{split}$$

Proof of the main theorem \[MT\] {#S5}
================================

This section is devoted to the proof of the Main Theorem \[MT\]. We start with the proof of Theorem \[MT2\], since, as we will see later on, Theorem \[MT\] is a consequence of it.

Proof of the Theorem \[MT2\]
----------------------------

For simplicity we divide the proof into several steps.\
[**Step 1: Convergence of $\mathbf{({\varrho}_{n}-1)_{+}}$ to $0$.**]{}\
By combining the energy inequality with we obtain $$\label{e1} \int_{\Omega}({\varrho}_{n})^{\gamma_{n}} dx\leq (\gamma_{n}-1)E_{n_{0}}+\int_{\Omega} ({\varrho}_{n_0})^{\gamma_n} dx \leq (\gamma_{n}-1)E_{n_{0}}+c \gamma_n \leq c\gamma_{n}.$$ Since $\gamma_{n}\to \infty$, for any fixed $1<p<+\infty$ we have $\gamma_{n}>p$ for $n$ large enough; then, by the Hölder inequality, we get $$\nonumber \|{\varrho}_{n}\|_{L^{\infty}_{t}L^{p}_{x}}\leq \|{\varrho}_{n}\|_{L^{\infty}_{t}L^{1}_{x}}^{\theta_{n}} \|{\varrho}_{n}\|_{L^{\infty}_{t}L^{\gamma_{n}}_{x}}^{1-\theta_{n}}\leq M_{n}^{\theta_{n}}(c\gamma_{n})^{\frac{1-\theta_{n}}{\gamma_{n}}},$$ where $M_{n}$ is defined in and $\displaystyle{\frac{1}{p}=\theta_{n}+\frac{1-\theta_{n}}{\gamma_{n}}}$. 
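The interpolation step above is the standard Lebesgue interpolation inequality. The following short sketch (the grid, the sample data and the exponents are our own illustrative choices, not part of the proof) verifies it numerically for $p=2$, $\gamma=10$:

```python
import numpy as np

# Numerical sanity check (illustrative only) of the interpolation inequality
#   ||f||_{L^p} <= ||f||_{L^1}^{theta} * ||f||_{L^{gamma}}^{1-theta},
# where 1/p = theta + (1 - theta)/gamma.
# Norms are computed for the discrete measure with uniform weight dx on (0, 1].

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10_001)[1:]     # uniform grid on (0, 1]
dx = x[1] - x[0]
f = rng.uniform(0.1, 5.0, size=x.size)    # a generic positive "density"

def lebesgue_norm(h, q, dx):
    # discrete L^q norm with weight dx
    return (np.sum(np.abs(h) ** q) * dx) ** (1.0 / q)

p, gamma = 2.0, 10.0
theta = (1.0 / p - 1.0 / gamma) / (1.0 - 1.0 / gamma)   # solves 1/p = theta + (1-theta)/gamma

lhs = lebesgue_norm(f, p, dx)
rhs = lebesgue_norm(f, 1.0, dx) ** theta * lebesgue_norm(f, gamma, dx) ** (1.0 - theta)
assert lhs <= rhs * (1.0 + 1e-12), (lhs, rhs)
```

Since the inequality is a consequence of Hölder's inequality, it holds for the discrete measure as well, which is why the assertion is exact up to rounding.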
As $n\to \infty$ we have that $\displaystyle{\theta_{n}\to \frac{1}{p}}$ and $$\nonumber \limsup_{n\to\infty}\|{\varrho}_{n}\|_{L^{\infty}_{t}L^{p}_{x}}\leq M^{1/p}.$$ Let us define the function $\phi_{n}$ as follows: $$\phi_{n}=({\varrho}_{n}-1)_{+}.$$ By using again the energy inequality we can compute $$\label{e4} \int_{\Omega}(1+\phi_{n})^{\gamma_{n}}\mathbf{1}_{\{\phi_{n}>0\}}dx \leq \int_{\Omega}({\varrho}_{n})^{\gamma_{n}} dx\leq c\gamma_{n}.$$ We recall the following inequality $$\nonumber (1+x)^{k}\geq 1+c_{p}k^{p}x^{p}, \quad p>1, \ k \ \text{large},$$ that holds for any nonnegative $x$, and we apply it with $k=\gamma_{n}$, $x=\phi_{n}$ to the left-hand side of , so we get $$\nonumber c_{p}\gamma_{n}^{p}\int_{\Omega}\phi_{n}^{p}dx\leq |\{\phi_{n}>0\}|+c_{p}\gamma_{n}^{p}\int_{\Omega}\phi_{n}^{p}dx\leq\int_{\Omega}(1+\phi_{n})^{\gamma_{n}}\mathbf{1}_{\{\phi_{n}>0\}}dx\leq c\gamma_{n}.$$ Hence we have $$\nonumber \int_{\Omega}\phi_{n}^{p}dx\leq \frac{c}{c_{p}\gamma_{n}^{p-1}},$$ and, as $n\to \infty$, we obtain $$\nonumber ({\varrho}_{n}-1)_{+}\rightarrow 0 \qquad \text{in $L^{\infty}(0,T;L^{p}(\Omega))$, $1\leq p<+\infty$.}$$\ [**Step 2: $\mathbf{L^{1}}$ uniform bound of $\mathbf{({\varrho}_{n})^{\gamma_{n}}}$.**]{}\ Assume that we know that $$({\varrho}_{n})^{\gamma_{n}+1} \quad \text{is uniformly bounded in $L^{1}(0,T;L^{1}(\Omega))$}; \label{e8}$$ then we have $$\begin{split} \int_{0}^{T}\!\!\int_{\Omega}({\varrho}_{n})^{\gamma_{n}}dxdt&=\int_{0}^{T}\!\!\left(\int_{\Omega\cap\{{\varrho}_{n}>1\}}({\varrho}_{n})^{\gamma_{n}}dx+\int_{\Omega\cap\{{\varrho}_{n}\leq1\}}({\varrho}_{n})^{\gamma_{n}}dx\right)dt\\ &\leq \int_{0}^{T}\!\!\left(\int_{\Omega}\left(({\varrho}_{n})^{\gamma_{n}+1} +{\varrho}_{n}\right)dx \right)dt. \end{split} \label{e9}$$ By using and the fact that ${\varrho}_{n}\in L^{\infty}(0,T;L^{1}(\Omega))$, from it follows the uniform $L^{1}$ bound for $({\varrho}_{n})^{\gamma_{n}}$. 
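The elementary inequality $(1+x)^{k}\geq 1+c_{p}k^{p}x^{p}$ invoked in Step 1 can be probed numerically. The sketch below (sample exponent $p$, the grids and the thresholds are our own choices, and a grid minimum is only evidence, not a proof) estimates the best constant and checks that it stays bounded away from zero as $k$ grows:

```python
import numpy as np

# Estimate c_p(k) = min_{x > 0} ((1 + x)^k - 1) / (k^p x^p) on a grid for
# several large k; the claimed inequality corresponds to c_p(k) being
# bounded below by a positive constant uniformly for k large.

p = 2.0
xs = np.logspace(-4, 1, 2000)          # grid for the minimisation in x
ks = [10, 20, 50, 100, 200]

c_vals = []
for k in ks:
    ratio = ((1.0 + xs) ** k - 1.0) / (k ** p * xs ** p)
    c_vals.append(ratio.min())

# the grid minimum stays bounded away from zero uniformly in k
assert min(c_vals) > 0.5, c_vals
```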
In order to complete this step we only have to prove . We recall that for ${\varrho}_{n}$ we do not have $L^{\infty}$ bounds; on the other hand, because of there exists a constant $\tilde{c}$ such that for any $n\in {\mathbb{N}}$ the following estimate holds: $$\|{\varrho}_{n}\|_{L^{\infty}_{t}L^{\gamma_{n}}_{x}}\leq \tilde{c}, \label{e10}$$ where $\displaystyle{\tilde{c}=\sup_{\gamma>0}(c\gamma)^{1/\gamma}}$. We now define the operator $\mathcal{B}$, the inverse of the divergence operator: we denote by $v=\mathcal{B}g$ the solution $v$ of $${\operatorname{div}}v=g \hspace{0.2cm} \text{in} \hspace{0.2cm} \Omega, \quad v=0 \hspace{0.2cm} \text{on} \hspace{0.2cm} \partial \Omega.$$ The operator $\mathcal{B}=( \mathcal{B}_{1}, \mathcal{B}_{2}, \mathcal{B}_{3})$ enjoys the following properties: $$\mathcal{B}: \Big\{g\in L^{p}; \int_{\Omega}g dx=0 \Big\} \rightarrow W^{1,p}_{0}(\Omega),$$ $$\|\mathcal{B}(g)\|_{W^{1,p}(\Omega)} \leq C \|g\|_{L^{p}(\Omega)}.$$ If $g$ can be written as $g={\operatorname{div}}h$ for a certain $h \in L^{r}$ with $h\cdot \hat{n}=0$ on $\partial\Omega$, then $$\|\mathcal{B}(g)\|_{L^{r}(\Omega)} \leq C \|h\|_{L^{r}(\Omega)}.$$ We will use this operator to obtain higher integrability of ${\varrho}_{n}$. By extending (\[eq:2.10\]) to zero outside $\Omega$ and regularizing it, we have $$\label{eq:4.1} \partial_{t}b({\varrho}_{n})_{\epsilon}+{\operatorname{div}}(b({\varrho}_{n})_{\epsilon}{{{\bm{u}}}}_{n})+\Big(\big[ b'({\varrho}_{n}){\varrho}_{n}-b({\varrho}_{n})\big] {\operatorname{div}}{{{\bm{u}}}}_{n} \Big)_{\epsilon}=r_{\epsilon},$$ where, as proved in Lions [@Lions1998], $r_{\epsilon} \rightarrow 0$ in $L^{2}((0,T)\times{\mathbb{R}}^{3})$. We are now ready to prove the following result. 
We take a test function of the form $$\phi_{i}=\chi(t)\mathcal{B}_{i}\Big[b({\varrho}_{n})_{\epsilon}- \oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy\Big],$$ where $$\oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy=\frac{1}{|\Omega|}\int_{\Omega}b({\varrho}_{n})_{\epsilon}dy, \quad \chi \in \mathcal{D}(0,T)$$ and test it against (\[eq:1.10 b\]). Then, with the aid of (\[eq:4.1\]), $$\begin{split} &\int^{T}_{0}\!\!\int_{\Omega} \chi {\varrho}_{n}^{\gamma_{n}}b({\varrho}_{n})_{\epsilon}dxdt \\ &= \int^{T}_{0}\!\!\int_{\Omega} \chi {\varrho}_{n}^{\gamma_{n}} \Big[\oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy \Big]dxdt -\int^{T}_{0}\!\!\int_{\Omega} \chi_{t}{\varrho}_{n} {{{\bm{u}}}}_{n} \cdot \mathcal{B}\Big[ b({\varrho}_{n})_{\epsilon}-\oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy\Big] dxdt\\ &+\int^{T}_{0}\!\!\int_{\Omega} \chi {\varrho}_{n} {{{\bm{u}}}}_{n} \cdot \mathcal{B}\Big[ \big( (b^{'}({\varrho}_{n}){\varrho}_{n}-b({\varrho}_{n})){\operatorname{div}}{{{\bm{u}}}}_{n} \big)_{\epsilon} -\oint_{\Omega} \big( (b^{'}({\varrho}_{n}){\varrho}_{n}-b({\varrho}_{n})){\operatorname{div}}{{{\bm{u}}}}_{n} \big)_{\epsilon}dy\Big]dxdt\\ & -\int^{T}_{0}\!\!\int_{\Omega} \chi {\varrho}_{n} {{{\bm{u}}}}_{n} \cdot \mathcal{B}\Big[ r_{\epsilon} -\oint_{\Omega}r_{\epsilon}dy\Big]dxdt + \int^{T}_{0}\!\!\int_{\Omega}\chi {\varrho}_{n}{{{\bm{u}}}}_{n} \cdot \mathcal{B}\Big[\nabla \cdot \big(b({\varrho}_{n})_{\epsilon} {{{\bm{u}}}}_{n} \big)\Big]dxdt\\ &- \int^{T}_{0}\!\!\int_{\Omega}\chi {\varrho}_{n} {{{\bm{u}}}}_{ni}{{{\bm{u}}}}_{nj} \partial_{i}\mathcal{B}_{j}\Big[ b({\varrho}_{n})_{\epsilon} -\oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy\Big]dxdt \\ & + \int^{T}_{0}\!\!\int_{\Omega}\chi \partial_{i}{{{\bm{u}}}}_{nj}\partial_{i} \mathcal{B}_{j} \Big[ b({\varrho}_{n})_{\epsilon} -\oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy\Big]dxdt\\ & + \int^{T}_{0}\!\!\int_{\Omega}\chi {\operatorname{div}}{{{\bm{u}}}}_{n} \Big[ b({\varrho}_{n})_{\epsilon} -\oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy\Big]dxdt 
- \int^{T}_{0}\!\!\int_{\Omega}\chi \eta^{2}_{n}\Big[ b({\varrho}_{n})_{\epsilon} -\oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy\Big]dxdt\\ & + \int^{T}_{0}\!\!\int_{\Omega}\chi \sigma_{nij}\partial_{i} \mathcal{B}_{j} \Big[ b({\varrho}_{n})_{\epsilon} -\oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy\Big]dxdt -\int^{T}_{0}\!\!\int_{\Omega}\chi \eta_{n} \Big[ b({\varrho}_{n})_{\epsilon} -\oint_{\Omega}b({\varrho}_{n})_{\epsilon}dy\Big]dxdt\\ &=I_{1} +\cdots +I_{11}. \end{split}$$ By taking into account and the bounds of the previous sections, we now estimate $I_{1}, \cdots, I_{11}$. For details, see Feireisl [@Feireisl2001].\ For $I_{1}$ we have $$I_{1} \lesssim C(T).$$ Concerning $I_{2}$ we get $$I_{2} \lesssim \|{\varrho}_{n} {{{\bm{u}}}}_{n} \|_{L^{\infty}(0,T; L^{\frac{2\gamma_{n}}{\gamma_{n}+1}}(\Omega))} \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T; L^{\frac{6\gamma_{n}}{5\gamma_{n}-3}}(\Omega))} \leq C(T) \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T; L^{\frac{6\gamma_{n}}{5\gamma_{n}-3}}(\Omega))}.$$ For $I_{3}$ and $I_{4}$ we get $$\begin{split} I_{3} &\lesssim \|{\varrho}_{n}\|_{L^{\infty}(0,T; L^{\gamma_{n}}(\Omega))} \|\nabla {{{\bm{u}}}}_{n} \|^{2}_{L^{2}(\Omega\times(0,T))} \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T;L^{\frac{3\gamma_{n}}{2\gamma_{n}-3}}(\Omega) )} \\ &\leq C(T) \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T;L^{\frac{3\gamma_{n}}{2\gamma_{n}-3}}(\Omega) )}. 
\end{split}$$ $$I_{4} \lesssim \|{\varrho}_{n} {{{\bm{u}}}}_{n} \|_{L^{\infty}(0,T;L^{\frac{2\gamma_{n}}{\gamma_{n}+1}}(\Omega))} \|r_{\epsilon}\|_{L^{2}(\Omega\times(0,T))} \leq C(T)\|r_{\epsilon}\|_{L^{2}(\Omega\times(0,T))}.$$ We estimate now $I_{5}+I_{6}$, $$\begin{split} I_{5} + I_{6} &\lesssim \|{\varrho}_{n}\|_{L^{\infty}(0,T; L^{\gamma_{n}}(\Omega))} \|\nabla {{{\bm{u}}}}_{n} \|^{2}_{L^{2}(\Omega\times(0,T))} \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T;L^{\frac{3\gamma_{n}}{2\gamma_{n}-3}}(\Omega) )} \\ &\leq C(T) \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T;L^{\frac{3\gamma_{n}}{2\gamma_{n}-3}}(\Omega) )}. \end{split}$$ For $I_{7} + I_{8}$ we have $$I_{7} + I_{8} \lesssim \|\nabla {{{\bm{u}}}}_{n} \|_{L^{2}(\Omega\times(0,T))} \|b({\varrho}_{n})_{\epsilon}\|_{L^{2}(\Omega\times(0,T))} \leq C(T)\|b({\varrho}_{n})_{\epsilon}\|_{L^{2}(\Omega\times(0,T))}.$$ and finally we get $$\begin{split} & I_{9} + I_{10} + I_{11} \\ & \lesssim \Big( \|\eta_{n}\|^{2}_{L^{2}(0,T; L^{6}(\Omega))} + \|\sigma_{n}\|_{L^{1}(0,T; L^{3}(\Omega))} + \|\eta_{n}\|_{L^{1}(0,T; L^{3}(\Omega))}\Big) \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T; L^{\frac{3}{2}} (\Omega))}\\ & \leq C(T) \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T; L^{\frac{3}{2}} (\Omega))}. \end{split}$$ In sum, $$\nonumber \begin{split} & \int^{T}_{0}\!\!\int_{\Omega} \chi {\varrho}_{n}^{\gamma_{n}}(b({\varrho}_{n}))_{\epsilon}dxdt \\ & \leq C(T) + \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T; L^{\frac{6\gamma_{n}}{5\gamma_{n}-3}}(\Omega))}+ \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T; L^{\frac{3\gamma_{n}}{2\gamma_{n}-3}}(\Omega))} \\ & + \|b({\varrho}_{n})_{\epsilon}\|_{L^{\infty}(0,T; L^{\frac{3}{2}}(\Omega))} + \|b({\varrho}_{n})_{\epsilon}\|_{L^{2}(\Omega\times(0,T))}+\|r_{\epsilon}\|_{L^{2}(\Omega\times(0,T))}. 
\end{split}$$ By taking the limit $\epsilon \rightarrow 0$, $$\nonumber \begin{split} & \int^{T}_{0}\!\!\int_{\Omega} \chi {\varrho}_{n}^{\gamma_{n}}b({\varrho}_{n})dxdt \\ & \leq C(T) + \|b({\varrho}_{n})\|_{L^{\infty}(0,T; L^{\frac{6\gamma_{n}}{5\gamma_{n}-3}}(\Omega))}+ \|b({\varrho}_{n})\|_{L^{\infty}(0,T; L^{\frac{3\gamma_{n}}{2\gamma_{n}-3}}(\Omega))} \\ & + \|b({\varrho}_{n})\|_{L^{\infty}(0,T; L^{\frac{3}{2}}(\Omega))} + \|b({\varrho}_{n})\|_{L^{2}(\Omega\times(0,T))}. \end{split}$$ We approximate the function $z\mapsto z$ by a sequence $\{b_{n}\}$ of functions admissible in (\[eq:2.10\]), and we let $\chi$ approximate the characteristic function of $(0,T)$. Then, $$\label{eq:4.5} \begin{split} \int^{T}_{0}\!\!\int_{\Omega} {\varrho}_{n}^{\gamma_{n}+1}dxdt &\leq C(T) + \|{\varrho}_{n}\|_{L^{\infty}(0,T; L^{\frac{6\gamma_{n}}{5\gamma_{n}-3}}(\Omega))}+ \|{\varrho}_{n}\|_{L^{\infty}(0,T; L^{\frac{3\gamma_{n}}{2\gamma_{n}-3}}(\Omega))} \\ & + \|{\varrho}_{n}\|_{L^{\infty}(0,T; L^{\frac{3}{2}}(\Omega))} + \|{\varrho}_{n}\|_{L^{2}(\Omega\times(0,T))}. 
\end{split}$$ By taking into account and the fact that ${\varrho}_{n}\in L^{\infty}(0,T;L^{1}(\Omega))$, and since $\gamma_{n}\to \infty$ we can always assume that $\gamma_{n}\geq 3$, we have that the right-hand side of is uniformly bounded and we can conclude that $$\int^{T}_{0}\!\!\int_{\Omega} {\varrho}_{n}^{\gamma_{n}+1}dxdt \leq C(T),$$ which completes the proof of .\ [**Step 3: Convergence of the approximating sequence $\mathbf{\{{\varrho}_{n}, {{{\bm{u}}}}_{n}, \eta_{n}, f_{n}\}}$.**]{}\ By using the compactness properties of the approximating sequence $\{{\varrho}_{n}, {{{\bm{u}}}}_{n}, \eta_{n}, f_{n}\}$ stated in Proposition \[prop:2.3\] and the bounds of Step 1 and Step 2, we get $$\nonumber \begin{split} & {\varrho}_{n}{{{\bm{u}}}}_{n} \rightarrow {\varrho}{{{\bm{u}}}}\hspace{0.2cm} \text{in} \hspace{0.2cm} L^{p}(0,T; L^{r}(\Omega)) \hspace{0.2cm} \text{for all} \hspace{0.2cm} 1\leq p<\infty, \quad 1\leq r<2, \\ & {{{\bm{u}}}}_{n} \rightarrow {{{\bm{u}}}}\hspace{0.2cm} \text{in} \hspace{0.2cm} L^{p}(\Omega\times (0,T))\cap\{{\varrho}_{n}>0\} \hspace{0.2cm} \text{for all} \hspace{0.2cm} 1\leq p<2, \\ & {{{\bm{u}}}}_{n} \rightarrow {{{\bm{u}}}}\hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(\Omega\times (0,T))\cap\{{\varrho}_{n} \ge \delta\} \hspace{0.2cm} \text{for all} \hspace{0.2cm} \delta>0, \\ & {\varrho}_{n}{{{\bm{u}}}}_{ni}{{{\bm{u}}}}_{nj} \rightarrow {\varrho}{{{\bm{u}}}}_{i}{{{\bm{u}}}}_{j} \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{p}(0,T; L^{1}(\Omega)) \hspace{0.2cm} \text{for all} \hspace{0.2cm} 1\leq p<\infty,\\ & ({\varrho}_{n})^{\gamma_{n}}\rightharpoonup \pi, \hspace{0.2cm} \text{where $\pi\in \mathcal{M}((0,T)\times\Omega),$}\\ & \eta_{n} \rightarrow \eta, \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(\Omega\times (0,T)),\\ & \sigma_{n} \rightharpoonup \sigma, \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(\Omega\times (0,T)),\\ & f_{n} \rightharpoonup f \hspace{0.2cm} \text{in} \hspace{0.2cm} L^{2}(0,T; L^{\frac{6}{5}}(\Omega\times S^{2})). 
\end{split}$$ With the above convergence results we can pass to the weak limit in the system -, and we get that ${\varrho}, {{{\bm{u}}}}, \eta, f$ is a weak solution of the problem - provided we prove the conditions -. This is equivalent to the proof of $$\label{e12} {\varrho}\pi=\pi.$$ Set $s_{n}={\varrho}_{n}\log{\varrho}_{n}$ and denote by $\bar{s}=\overline{{\varrho}\log{\varrho}}$ its weak limit. Using (\[4.1\]) we get $$\nonumber ({\varrho}_n \log {\varrho}_n)_{t} +\nabla \cdot({\varrho}_n \log {\varrho}_n {{{\bm{u}}}}_n)+ (\nabla \cdot {{{\bm{u}}}}_n){\varrho}_n=0.$$ Next, we apply the differential operator $(-\Delta)^{-1}\nabla \cdot $ to (\[4.2\]). Then $$\frac{d}{dt}\Big[(-\Delta)^{-1}\nabla \cdot({\varrho}_n {{{\bm{u}}}}_n)\Big]+(-\Delta)^{-1}\partial_{i}\partial_{j}({\varrho}_n {{{{\bm{u}}}}_n}_{i}{{{{\bm{u}}}}_n}_{j})+2\nabla \cdot {{{\bm{u}}}}_n -{\varrho}_n^{\gamma_{n}}-\eta_n^{2}=$$ $$(-\Delta)^{-1}\nabla \cdot (\nabla \cdot \sigma_n-\nabla \eta_n),$$ from which we have $$2\nabla \cdot {{{\bm{u}}}}_n= -\frac{d}{dt}\Big[(-\Delta)^{-1}\nabla \cdot({\varrho}_n {{{\bm{u}}}}_n)\Big]-(-\Delta)^{-1}\partial_{i}\partial_{j}({\varrho}_n {{{{\bm{u}}}}_n}_{i} {{{{\bm{u}}}}_n}_{j}) +{\varrho}_n^{\gamma_{n}}+\eta_n^{2}$$ $$+(-\Delta)^{-1}\nabla \cdot (\nabla \cdot \sigma_n-\nabla \eta_n).$$ The last two relations yield $$\nonumber \begin{split} & 2\Big[({\varrho}_{n} \log {\varrho}_{n})_{t}+\nabla \cdot({\varrho}_{n} \log {\varrho}_{n} {{{\bm{u}}}}_n)\Big]+({\varrho}_{n})^{\gamma_{n}+1} \\ &= -{\varrho}_{n}(\eta_{n})^{2}-{\varrho}_{n}\Big[(-\Delta)^{-1}\nabla \cdot(\nabla \cdot \sigma_{n}-\nabla \eta_{n})\Big] +\frac{d}{dt}\Big[{\varrho}_{n} (-\Delta)^{-1}\nabla \cdot({\varrho}_{n} {{{\bm{u}}}}_{n})\Big] \\ &+ \nabla \cdot\Big[{\varrho}_{n} {{{\bm{u}}}}_{n} (-\Delta)^{-1}\nabla \cdot({\varrho}_{n} {{{\bm{u}}}}_{n})\Big] \\ &+ {\varrho}_{n} \Big[(-\Delta)^{-1}\partial_{i}\partial_{j}({\varrho}_{n} {{{{\bm{u}}}}_n}_{i}{{{{\bm{u}}}}_n}_{j}) -{{{\bm{u}}}}_{n}\cdot \nabla 
(-\Delta)^{-1}\nabla \cdot({\varrho}_{n} {{{\bm{u}}}}_{n})\Big]. \end{split}$$ By taking the limit $n\rightarrow \infty$ in the last relation, we get $$\nonumber \begin{split} & 2\Big[\overline{s}_{t}+\nabla \cdot({{{\bm{u}}}}\overline{s})\Big]+\overline{({\varrho}_{n})^{\gamma_{n}+1}} \\ &= -{\varrho}\eta^{2}-{\varrho}\Big[(-\Delta)^{-1}\nabla \cdot(\nabla \cdot \sigma-\nabla \eta)\Big] +\frac{d}{dt}\Big[{\varrho} (-\Delta)^{-1}\nabla \cdot({\varrho}{{{\bm{u}}}})\Big] \\ &+ \nabla \cdot\Big[{\varrho}{{{\bm{u}}}}(-\Delta)^{-1}\nabla \cdot({\varrho}{{{\bm{u}}}})\Big] \\ &+ {\varrho}\Big[(-\Delta)^{-1}\partial_{i}\partial_{j}({\varrho}{{{\bm{u}}}}_{i}{{{\bm{u}}}}_{j}) -{{{\bm{u}}}}\cdot \nabla (-\Delta)^{-1}\nabla \cdot({\varrho}{{{\bm{u}}}})\Big], \end{split}$$ where we use Proposition \[prop:2.3\] (iii) to pass to the limit of ${\varrho}_{n}(\eta_{n})^{2}$. Next, we take the limit of (\[4.2\]). By Proposition \[prop:2.3\] (ii), $$\partial_{t}({\varrho}{{{\bm{u}}}}) +\nabla \cdot ({\varrho}{{{\bm{u}}}}\otimes {{{\bm{u}}}})-\Delta {{{\bm{u}}}}-\nabla (\nabla \cdot {{{\bm{u}}}})+\nabla \pi+\nabla \eta^{2}=\nabla \cdot \sigma-\nabla \eta.$$ Let $s={\varrho}\log {\varrho}$. By following the same calculations as above, we obtain $$\nonumber \begin{split} & 2\Big[{s}_{t}+\nabla \cdot({{{\bm{u}}}}{s})\Big]+ {\varrho}\pi \\ &=-{\varrho}\eta^{2} -{\varrho}\Big[(-\Delta)^{-1}\nabla \cdot(\nabla \cdot \sigma-\nabla \eta)\Big] +\frac{d}{dt}\Big[{\varrho}(-\Delta)^{-1}\nabla \cdot({\varrho}{{{\bm{u}}}})\Big] \\ &+ \nabla \cdot\Big[{\varrho}{{{\bm{u}}}}(-\Delta)^{-1}\nabla \cdot({\varrho}{{{\bm{u}}}})\Big] \\ &+ {\varrho}\Big[(-\Delta)^{-1}\partial_{i}\partial_{j}({\varrho}{{{\bm{u}}}}_{i}{{{\bm{u}}}}_{j}) -{{{\bm{u}}}}\cdot \nabla (-\Delta)^{-1}\nabla \cdot({\varrho}{{{\bm{u}}}})\Big]. 
\end{split}$$ Comparing the last two relations, we have $$\partial_{t}(\bar{s}-s)+{\operatorname{div}}\left ((\bar{s}-s){{{\bm{u}}}}\right)=-\overline{{\varrho}{\operatorname{div}}{{{\bm{u}}}}}+{\varrho}\,{\operatorname{div}}{{{\bm{u}}}} \label{e13}$$ and $$\partial_{t}(\bar{s}-s)+{\operatorname{div}}\left ((\bar{s}-s){{{\bm{u}}}}\right)=\frac{1}{2}\left({\varrho}\pi-\overline{({\varrho}_{n})^{\gamma_{n}+1}}\right). \label{e14}$$ Next, using that $$({\varrho})^{\gamma_{n}}\rightarrow \mathbf{1}_{\{{\varrho}=1\}} \quad \text{a.e. and in $L^{p}((0,T)\times \Omega)$ for every $1\leq p<+\infty$},$$ which yields $$({\varrho})^{\gamma_{n}}({\varrho}_{n}-{\varrho})\rightharpoonup 0,$$ we obtain $$\overline{({\varrho}_{n})^{\gamma_{n}+1}}-{\varrho}\overline{({\varrho}_{n})^{\gamma_{n}}}=\overline{({\varrho}_{n})^{\gamma_{n}}({\varrho}_{n}-{\varrho})}=\overline{(({\varrho}_{n})^{\gamma_{n}}-{\varrho}^{\gamma_{n}})({\varrho}_{n}-{\varrho})}\geq 0. \label{e15}$$ From we deduce $${\varrho}\pi={\varrho}\overline{({\varrho}_{n})^{\gamma_{n}}}\leq \overline{({\varrho}_{n})^{\gamma_{n}+1}}.\nonumber $$ Integrating in space we get $$\partial_{t}\int_{\Omega}(\bar{s}-s)dx\leq 0. \nonumber $$ Then, since $(\bar{s}-s)|_{t=0}=0$ and since, by the convexity of $z\mapsto z\log z$, we have $s\leq \bar{s}$, it follows that $s=\bar{s}$. Therefore, from we obtain $${\varrho}\pi=\overline{({\varrho}_{n})^{\gamma_{n}+1}}. \label{e18}$$ For any $\varepsilon>0$ there exists $\bar{n}$ such that, for any $n\geq \bar{n}$, the inequality $x^{\gamma_{n}+1}\geq x^{\gamma_{n}}-\varepsilon$ holds for every $x\geq 0$; applying it with $x={\varrho}_{n}$ we have $$({\varrho}_{n})^{\gamma_{n}+1}\geq ({\varrho}_{n})^{\gamma_{n}}-\varepsilon. \label{e19}$$ Passing to the weak limit in and by using we end up with $${\varrho}\pi\geq \pi-\varepsilon,$$ and, letting $\varepsilon\to 0$, we conclude that $${\varrho}\pi\geq \pi. \label{e20}$$ Now it remains to prove ${\varrho}\pi\leq \pi$. 
Since ${\varrho}\pi$ is not defined pointwise almost everywhere, in order to give a meaning to the inequality we want to prove, we denote by $\omega_{k}$ a smoothing sequence in the space and time variables, defined as follows: $$\omega_{k}=k^{4}\omega(k\cdot),$$ $$\omega\in C^{\infty}({\mathbb{R}}^{4}),\ \omega\geq 0, \quad \int_{{\mathbb{R}}^{4}}\omega\ dxdt=1,\quad \operatorname{supp}(\omega)\subset B_{1}({\mathbb{R}}^{4}).$$ We denote by ${\varrho}_{k}$ and $\pi_{k}$ the sequences of smooth functions defined as $${\varrho}_{k}={\varrho}\ast\omega_{k}, \qquad \pi_{k}=\pi\ast\omega_{k},$$ and we have that $${\varrho}_{k}\rightarrow {\varrho}\quad \text{in $C([0,T];L^{p})\cap C([0,T];H^{-1})$},$$ $$\pi_{k}\rightarrow \pi\quad \text{in $W^{-1,2}\cap L^{1}(L^{q})$},$$ for any $p,q$ such that $1/p +1/q=1$. Hence we can rewrite $({\varrho}-1)\pi$ as $$({\varrho}-1)\pi=({\varrho}_{k}-1)\pi_{k}+({\varrho}-{\varrho}_{k})\pi_{k}+({\varrho}-1)(\pi-\pi_{k}). \label{e21}$$ Since ${\varrho}_{k}\leq 1$, sending $k$ to $\infty$ in we obtain $${\varrho}\pi-\pi\leq 0. \label{e22}$$ Considering together and , we have , and finally we conclude the proof of Theorem \[MT2\].

Proof of the Theorem \[MT\]
---------------------------

We can observe that the proof of the Main Theorem is a consequence of Theorem \[MT2\]. The only thing we have to check is that the condition holds in the sense of distributions. This last issue is a consequence of the following lemma (for the proof we refer to [@LionsMasmoudi-1999], Lemma 2.1). Let ${{{\bm{u}}}}\in L^{2}(0,T;H^{1}_{loc}(\Omega))$ and ${\varrho}\in L^{2}_{loc}((0,T)\times \Omega)$ satisfy $$\partial_t {\varrho} +{\operatorname{div}}({\varrho} {{{\bm{u}}}})=0, \quad \text{in $(0,T)\times \Omega$},$$ $${\varrho}(0)={\varrho}_{0};$$ then the following two assertions are equivalent: - ${\operatorname{div}}{{{\bm{u}}}}=0$, a.e. on $\{ {\varrho}\geq 1\}$ and $0\leq {\varrho}_{0}\leq 1$. - $0\leq {\varrho}\leq 1$. 
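The normalisation $\omega_{k}=k^{4}\omega(k\cdot)$ used above reflects the fact that the mollification acts in the four space-time variables. The following toy check (the separable triangular bump is our own illustrative choice, not the smooth bump of the text) verifies numerically that the rescaling preserves unit mass:

```python
import numpy as np

# Check that omega_k = k^4 * omega(k .) has unit mass in R^4 for a separable
# omega(x) = prod_{i=1}^{4} g(x_i), with g a normalised triangular bump
# supported in [-1/2, 1/2] (so supp(omega) lies in the unit ball of R^4
# for the sup-norm).

def g(t):
    # 1D triangular bump with int g = 1 and supp(g) = [-1/2, 1/2]
    return 2.0 * np.maximum(0.0, 1.0 - 2.0 * np.abs(t))

t = np.linspace(-1.0, 1.0, 20_001)
dt = t[1] - t[0]

for k in [1, 2, 5, 10]:
    mass_1d = np.sum(k * g(k * t)) * dt   # Riemann sum for int_R k g(k s) ds
    mass_4d = mass_1d ** 4                # separable product over 4 variables
    assert abs(mass_4d - 1.0) < 1e-9, (k, mass_4d)
```

Since the bump is piecewise linear with kinks on grid nodes, the Riemann sum is exact up to rounding, so each rescaled factor integrates to one.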
We conclude this section with a final remark on how to obtain the energy inequality that our global weak solutions are required to satisfy. By using the convergences proved in Theorem \[MT2\] we can pass to the weak limit in the energy inequality and we obtain $$\label{energy-bis} \begin{split} &\int_{\Omega} \Big(\frac{{\varrho}|{{{\bm{u}}}}|^{2}}{2}+\eta^{2} +\psi \Big)(t)dx + 4\int^{t}_{0}\int_{\Omega}\int_{S^{2}} |\nabla_{\tau}\sqrt{f}|^{2}d\tau dxdt \\ &+4 \int^{t}_{0} \int_{\Omega} \int_{S^{2}} |\nabla \sqrt{f}|^{2}d\tau dxdt +\int^{t}_{0} \int_{\Omega}\Big(|\nabla {{{\bm{u}}}}|^{2} + |{\operatorname{div}}{{{\bm{u}}}}|^{2} +2|\nabla \eta|^{2}\Big)dxdt \\ & \leq \int_{\Omega} \Big(\frac{{\varrho}_{0}|{{{\bm{u}}}}_{0}|^{2}}{2}+\eta^{2}_{0} +\psi_{0} \Big)dx+\liminf_{n\to \infty}\int_{\Omega}\frac{({\varrho}_{n0})^{\gamma_{n}}}{\gamma_{n}}dx,\quad \text{a.e. in $t$}. \end{split}$$ Now, if we take, for any $n>2$, ${\varrho}_{n0}={\varrho}_{0}$, $m_{n0}=m_{0}$ and we recall that $0\leq{\varrho}_{0}\leq 1$, then $$\liminf_{n\to \infty}\int_{\Omega}\frac{({\varrho}_{n0})^{\gamma_{n}}}{\gamma_{n}}dx=0$$ and we get the energy inequality .

Acknowledgments
===============

The work of D.D. was supported by the Ministry of Education, University and Research (MIUR), Italy under the grant PRIN 2012 - Project N. 2012L5WXHJ, *Nonlinear Hyperbolic Partial Differential Equations, Dispersive and Transport Equations: theoretical and applicative aspects*. K.T. gratefully acknowledges the support in part by the National Science Foundation under the grant DMS-1614964 and by the Simons Foundation under the Simons Fellows in Mathematics Award 267399. Part of this research was performed during the visit of K.T. at the University of L’Aquila, which was supported under the grant PRIN 2012 - Project N. 2012L5WXHJ, *Nonlinear Hyperbolic Partial Differential Equations, Dispersive and Transport Equations: theoretical and applicative aspects*. [00]{} H. Bae and K. 
Trivisa, On the Doi model for the suspensions of rod-like molecules: Global-in-time existence, [*Commun. Math. Sci.*]{} [**11**]{} (2013), 831–850. H. Bae and K. Trivisa, On the Doi model for the suspensions of rod-like molecules in compressible fluids, [*Math. Models Methods Appl. Sci.*]{} [**22**]{} (2012), 1250027, 39 pp. P. Constantin, C. Fefferman, E.S. Titi and A. Zarnescu, Regularity of coupled two-dimensional nonlinear Fokker-Planck and Navier-Stokes systems, [*Comm. Math. Phys.*]{} [**270**]{} (2007), no. 3, 789–811. P. Constantin and N. Masmoudi, Global well-posedness for a Smoluchowski equation coupled with Navier-Stokes equations in 2D, [*Comm. Math. Phys.*]{} [**278**]{} (2008), no. 1, 179–191. M. Doi and S.F. Edwards, The theory of polymer dynamics, Oxford University Press, 1986. E. Feireisl, On compactness of solutions to the compressible isentropic Navier-Stokes equations when the density is not square integrable, [*Comment. Math. Univ. Carolin.*]{} [**42**]{} (2001), no. 1, 83–98. P.L. Lions, Mathematical topics in fluid dynamics, Vol. 2, Compressible models, Oxford University Press, 1998. P.L. Lions and N. Masmoudi, On a free boundary barotropic model, [*Ann. Inst. Henri Poincaré*]{} [**16**]{} (1999), no. 3, 373–410. [^1]:
--- abstract: 'We construct an irreducible holomorphic connection with ${\rm SL}(2,{\mathbb{R}})$–monodromy on the trivial holomorphic vector bundle of rank two over a compact Riemann surface. This answers a question of Calsamiglia, Deroin, Heu and Loray in [@CDHL].' address: - 'School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005, India' - 'Université Côte d’Azur, CNRS, LJAD, France' - 'Institute of Differential Geometry, Leibniz Universität Hannover, Welfengarten 1, 30167 Hannover' author: - Indranil Biswas - Sorin Dumitrescu - Sebastian Heller title: | Irreducible flat ${\rm SL}(2,{\mathbb{R}})$-connections on the trivial holomorphic bundle --- Introduction {#sec:intro} ============ Take a compact connected oriented topological surface $S$ of genus $g$, with $g \geq 2$. There is an equivalence between the flat $\rm{SL}(2, {\mathbb{C}})$–connections over $S$ and the conjugacy classes of group homomorphisms from the fundamental group of $S$ into $\rm{SL}(2, {\mathbb{C}})$ (two such homomorphisms are conjugate if they differ by an inner automorphism of $\rm{SL}(2, {\mathbb{C}})$). This equivalence sends a flat connection to its monodromy representation. When $S$ is equipped with a complex structure, a flat $\rm{SL}(2, {\mathbb{C}})$–connection on $S$ produces a holomorphic vector bundle of rank two and trivial determinant on the Riemann surface defined by the complex structure on $S$; this is because constant transition functions for a bundle are holomorphic. In fact, since a holomorphic connection on a compact Riemann surface $\Sigma$ is automatically flat, there is a natural bijection between the following two: 1. pairs of the form $(E,\, D)$, where $E$ is a holomorphic vector bundle of rank two on $\Sigma$ with $\bigwedge^2 E$ holomorphically trivial, and $D$ is a holomorphic connection on $E$ that induces the trivial connection on $\bigwedge^2 E$; 2. flat $\rm{SL}(2, {\mathbb{C}})$–connections on $\Sigma$. 
This bijection is a special case of the Riemann–Hilbert correspondence (see, for instance, [@De; @Ka]). Consider the flat $\rm{SL}(2, {\mathbb{C}})$–connections on a compact Riemann surface $\Sigma$ satisfying the condition that the corresponding holomorphic vector bundle of rank two on $\Sigma$ is holomorphically trivial; they are known as differential ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})$–systems on $\Sigma$ (see [@CDHL]), where ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})$ is the Lie algebra of $\rm{SL}(2, {\mathbb{C}})$. In view of the above Riemann–Hilbert correspondence, differential ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})$–systems on $\Sigma$ are parametrized by the vector space ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})\otimes H^0(\Sigma ,\, K_{\Sigma})$, where $K_{\Sigma}$ is the holomorphic cotangent bundle of $\Sigma$. The zero element of the vector space ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})\otimes H^0(\Sigma ,\, K_{\Sigma})$ corresponds to the trivial $\rm{SL}(2, {\mathbb{C}})$–connection on $\Sigma$. A differential ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})$–system is called irreducible if the corresponding monodromy representation of the fundamental group of $\Sigma$ is irreducible. We shall now describe a context where irreducible differential ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})$–systems appear. For any cocompact lattice $\Gamma\, \subset\, {\rm SL}(2,{\mathbb C})$, the compact complex threefold ${\rm SL}(2,{\mathbb C}) / \Gamma$ does not admit any compact complex hypersurface [@HM p. 239, Theorem 2], in particular, there is no nonconstant meromorphic function on ${\rm SL}(2,{\mathbb C}) / \Gamma$. It is easy to see that ${\rm SL}(2,{\mathbb C}) / \Gamma$ does not contain a ${\mathbb C}{\mathbb P}^1$. It is known that some elliptic curves do exist in those manifolds. A question of Margulis asks whether ${\rm SL}(2,{\mathbb C})/\Gamma$ can contain a compact Riemann surface of genus bigger than one. 
Ghys has the following reformulation of Margulis’ question: Is there a pair $(\Sigma,\, D)$, where $D$ is a differential ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})$–system on a compact Riemann surface $\Sigma$ of genus at least two, such that the image of the monodromy homomorphism for $D$ $$\pi_1(\Sigma)\, \longrightarrow\, \rm{SL}(2, {\mathbb{C}})$$ is a conjugate of $\Gamma$ ? Existence of such a pair $(\Sigma,\, D)$ is equivalent to the existence of an embedding of $\Sigma$ in ${\rm SL}(2,{\mathbb C}) / \Gamma$. Inspired by Ghys’ strategy, the authors of [@CDHL] study the Riemann–Hilbert mapping for the irreducible differential ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})$–systems (see also [@BD]). Although some (local) results were obtained in [@CDHL] and [@BD], the question of Ghys is still open. In this direction, it was asked in [@CDHL] (p. 161) whether discrete or real subgroups of $\rm{SL}(2, {\mathbb{C}})$ can be realized as the monodromy of some irreducible differential ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})$–system on some compact Riemann surface. Note that if the flat connection on a compact Riemann surface $\Sigma$ corresponding to a homomorphism $\pi_1(\Sigma)\, \longrightarrow\, {\rm SL}(2,{\mathbb C})$ with finite image is irreducible, then the underlying holomorphic vector bundle is stable [@NSe], in particular, it is not holomorphically trivial. Our main result (Theorem \[Main\]) is the construction of a pair $(\Sigma, \, D)$, where $\Sigma$ is a compact Riemann surface of genus bigger than one and $D$ is an irreducible differential ${\mathfrak s}{\mathfrak l}(2, {\mathbb{C}})$–system on $\Sigma$, such that the image of the monodromy representation for $D$ is contained in $\operatorname{SL}(2,{\mathbb{R}})$. 
Let us mention that the related question of characterizing rank two holomorphic vector bundles $\mathcal L$ over a compact Riemann surface such that for some holomorphic connection on $\mathcal L$ the associated monodromy is real was raised in [@Ka p. 556], where it is attributed to Bers. The Betti moduli space of a 1-punctured torus {#ADHS1t} ============================================= For $\tau\, \in\, \mathbb C$ with ${\rm Im}\, \tau\, >\, 0$, let $\Gamma\,=\, {\mathbb Z}+\tau{\mathbb Z}\,\subset\, \mathbb C$ be the corresponding lattice. Set $T^2\,:=\,{\mathbb{C}}/\Gamma$, and fix the point $o\,=\,[0]\,\in\, T^2.$ We shall always consider $T^2$ as a Riemann surface, and for simplicity we restrict to the case of $$\tau\,=\,\sqrt{-1}\, .$$ For a fixed $\rho\,\in \, [0,\, \tfrac{1}{2}[$, we are interested in the Betti moduli space $\mathcal M^\rho_{1,1}$ parametrizing flat $\operatorname{SL}(2,{\mathbb{C}})$–connections on the complement $T^2\setminus\{o\}$ whose local monodromy around $o$ lies in the conjugacy class of $$\label{locmon}{{\left(\begin{matrix} e^{2\pi \sqrt{-1} \rho} &0 \\ 0& e^{-2\pi\sqrt{-1} \rho}\end{matrix}\right)}}\,\in\, {\rm SL}(2,{\mathbb C})\, .$$ This Betti moduli space $\mathcal M^\rho_{1,1}$ does not depend on the complex structure of $T^2$. When $\rho\,=\,0$, it is the moduli space of flat $\operatorname{SL}(2,{\mathbb{C}})$–connections on $T^2$; in that case $\mathcal M^\rho_{1,1}$ is a singular affine variety. However, for every $0\,<\,\rho\,<\,\tfrac{1}{2}$, the space $\mathcal M^\rho_{1,1}$ is a nonsingular affine variety. We shall recall an explicit description of this affine variety.
Let $x,\, y,\, z$ be the algebraic functions on $\mathcal M^\rho_{1,1}$ defined as follows: for any homomorphism $$h\,\colon\, \pi_1(T^2\setminus\{o\},\, q)\,\longrightarrow\, \operatorname{SL}(2,{\mathbb{C}})$$ representing $[h]\,\in \, {\mathcal M}^\rho_{1,1}$, $$x([h]) \,=\, \operatorname{tr}(h(\alpha)),\ y([h]) \,=\, \operatorname{tr}(h(\beta)),\ z([h])\,=\, \operatorname{tr}(h(\beta\alpha))\, ,$$ where $\alpha,\,\beta$ are the standard generators of $\pi_1(T^2\setminus\{o\},\,q)$ (see Figure \[figure1\]). Then the variety ${\mathcal M}^\rho_{1,1}$ is defined by the equation $$\label{M11eq} {\mathcal M}^\rho_{1,1}\,=\,\{(x,y,z)\,\in\,{\mathbb{C}}^3\,\mid \, x^2+y^2+z^2-xyz-2-2\cos(2\pi \rho)\,=\,0\}\, ;$$ the details can be found in [@Gol], [@Magn]. \[irr\] Take any $\rho\,\in\,]0,\,\tfrac{1}{2}[$, and consider a representation $$h\,\colon\, \pi_1(T^2\setminus\{o\},\, q)\,\longrightarrow\, \operatorname{SL}(2,{\mathbb{C}})\, ,$$ with $[h]\,\in\, {\mathcal M}^\rho_{1,1}$. Then, the representation of the free group $F(s,t)$, with generators $s$ and $t$, defined by $$s\,\longmapsto\, X\,:=\,h(\alpha)h(\alpha) \ \ \text{ and } \ \ t\,\longmapsto\, Y\,:=\, h(\beta)h(\beta)$$ is reducible if and only if $$x([h])y([h])\, =\, 0\, ,$$ where $x,\,y$ are the functions in . It is known (see [@Gol]) that, up to conjugation, $$\label{repxy} h(\alpha)\,=\,\begin{pmatrix} x([h])&1\\-1&0\end{pmatrix}, \ \ h(\beta) \,=\, \begin{pmatrix} 0&-\zeta\\ \zeta^{-1}& y([h])\end{pmatrix}\, ,$$ where $$\label{zeta1} \zeta+\zeta^{-1}\,=\,z([h])\, .$$ Note that the two solutions for $\zeta$ satisfying actually give conjugate (= equivalent) representations. A representation generated by two $\operatorname{SL}(2,{\mathbb{C}})$ matrices $A,\,B$ is reducible if and only if $$AB-BA$$ has a non-trivial kernel.
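Before turning to the determinant computation, note that the identity it establishes can be double-checked by machine; the following sketch (using the `sympy` library; this check is ours and not part of the proof) encodes the matrices above and compares $\text{Det}(XY-YX)$ with $2x^2y^2(1-\cos(2\pi\rho))$ via the trace of the commutator:

```python
import sympy as sp

x, y, zeta = sp.symbols('x y zeta', nonzero=True)

# h(alpha) and h(beta) in the normal form of [Gol]
A = sp.Matrix([[x, 1], [-1, 0]])
B = sp.Matrix([[0, -zeta], [1/zeta, y]])

X, Y = A*A, B*B   # images of the free generators s and t

det_comm = (X*Y - Y*X).det()

# on M^rho_{1,1} the trace of the commutator equals 2*cos(2*pi*rho)
two_cos = (B.inv()*A.inv()*B*A).trace()

# Det(XY - YX) = x^2 y^2 (2 - 2 cos(2 pi rho))
assert sp.simplify(det_comm - x**2*y**2*(2 - two_cos)) == 0
```

Here the relation $2\cos(2\pi\rho)\,=\,\operatorname{tr}(h(\beta)^{-1}h(\alpha)^{-1}h(\beta)h(\alpha))$ is used in place of $\rho$ itself, so the check is purely algebraic in $x,\,y,\,\zeta$.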
Note that $$\text{Det}(XY-YX)\,=\, -x([h])^2y([h])^2\frac{1+\zeta^4-\zeta x([h]) y([h])- \zeta^3 x([h])y([h])+ \zeta^2(-2+x([h])^2+y([h])^2)}{\zeta^2}\, .$$ On the other hand, we have $$2 \cos(2\pi\rho)\,=\,\text{tr}(h(\beta)^{-1}h(\alpha)^{-1}h(\beta)h(\alpha)) \,=\,\zeta^{-2}+\zeta^2+x([h])^2-x([h])y([h])\zeta^{-1}-x([h])y([h])\zeta+y([h])^2\, .$$ Therefore, it follows that $$\text{Det}(XY-YX)\,=\,2x([h])^2y([h])^2(1-\cos(2\pi \rho))\, ,$$ and the proof of the lemma is complete. Parabolic bundles and holomorphic connections ============================================= Parabolic bundle {#sec2.1} ---------------- We briefly recall the notion of a parabolic structure, mainly for the purpose of fixing the notation. We are only concerned with the $\operatorname{SL}(2,{\mathbb{C}})$–case, so our notation differs from the standard references, e.g., [@MSe; @Biq; @B]. Instead, we follow the notation of [@Pir] (be aware that Pirola uses a scaling factor 2 of the parabolic weights); see also [@HeHe] for this notation. Let $V\,\longrightarrow\,\Sigma$ be a holomorphic vector bundle of rank two with trivial determinant bundle over a compact Riemann surface $\Sigma$. Let $p_1,\, \cdots ,\, p_n\,\in\,\Sigma$ be pairwise distinct points, and set the divisor $$D\,=\,p_1+\ldots +p_n\, .$$ For every $k\,\in\,\{1,\,\cdots ,\, n\}$, let $$L_k\,\subset\, V_{p_k}$$ be a line in the fiber of $V$ at $p_k,$ and also take $$\rho_k\,\in\, ]0,\, \tfrac{1}{2}[\, .$$ \[def:par\] A [*parabolic structure*]{} on $V$ is given by the data $${\mathcal P}\,:=\, (D,\, \{L_1,\,\cdots ,\, L_n\},\, \{\rho_1,\,\cdots ,\, \rho_n\})\, ;$$ we call $\{L_k\}_{k=1}^n$ the quasiparabolic structure, and the $\rho_k$ the parabolic weights. A parabolic bundle over $\Sigma$ is given by a rank two holomorphic vector bundle $V$, with $\bigwedge^2 V\,=\, {\mathcal O}_\Sigma$, equipped with a parabolic structure $\mathcal P$.
It should be emphasized that Definition \[def:par\] is very specific to the case of $\operatorname{SL}(2,{\mathbb{C}})$–bundles. The parabolic degree of a holomorphic line subbundle $$F\,\subset\, V$$ is defined to be $$\text{par-deg}(F)\,=\, {\rm degree}(F)+\sum_{k=1}^n \rho^F_k\, ,$$ where $\rho^F_k\,= \, \rho_k$ if $F_{p_k}\,=\,L_k$ and $$\rho^F_k\,=\, -\rho_k$$ if $F_{p_k}\,\neq\, L_k$. A parabolic bundle $(V,\,\mathcal P)$ is called [*stable*]{} if and only if $$\text{par-deg}(F)\, <\, 0$$ for every holomorphic line subbundle $F\,\subset \, V$. As before, ${\mathcal P}\,=\,(D\,=\, p_1+\ldots +p_n,\, \{L_1,\,\cdots ,\,L_n\}, \,\{\rho_1,\,\cdots ,\, \rho_n\})$ is a parabolic structure on a rank two bundle $V$ of trivial determinant. A strongly parabolic Higgs field on $(V,\,{\mathcal P})$ is a holomorphic section $$\Theta\, \in\, H^0(\Sigma,\, \text{End}(V)\otimes K_\Sigma\otimes {\mathcal O}_\Sigma(D))$$ such that - $\text{trace}(\Theta) \,=\, 0$, - $L_k\,\subset\,\text{kernel}(\Theta(p_k))$ for all $1\, \leq\, k\, \leq\, n$. This implies that all the residues of a strongly parabolic Higgs field are nilpotent. Deligne extension {#Delext} ----------------- Using the complex structure of $T^2\,=\, {\mathbb C}/\Gamma$, an open subset of the moduli space ${\mathcal M}^\rho_{1,1}$ can be realized as a fibration over a moduli space of parabolic bundles. This map, which will be described in Section \[sec3.e\], is constructed using the Deligne extension (introduced in [@De]).
Any flat $\operatorname{SL}(2,{\mathbb{C}})$–connection $\nabla$ on a holomorphic vector bundle $E_0$ over $T^2\setminus\{o\}$, corresponding to a point in ${\mathcal M}^\rho_{1,1}$, locally, around $o\,\in\, T^2$, is holomorphic $\operatorname{SL}(2,{\mathbb{C}})$–gauge equivalent to the connection $$\label{local-normal-form-connection} d+{{\left(\begin{matrix}\rho&0\\0&-\rho\end{matrix}\right)}}\frac{dw}{w}$$ on the trivial holomorphic bundle of rank two, where $w$ is a holomorphic coordinate function on $T^2$ defined around $o$ with $w(o)\,=\, 0$. Take such a neighborhood $U_o$ of $o$, and consider the trivial holomorphic bundle $U_o\times {\mathbb C}^2\, \longrightarrow\, U_o$ equipped with the connection in . Now glue the two holomorphic vector bundles, namely $U_o\times {\mathbb C}^2$ and $E_0$, over $U_o\setminus\{o\}$ such that the connection $\nabla\vert_{U_o\setminus\{o\}}$ is taken to the restriction of the connection in to $U_o\setminus\{o\}$. This gluing is holomorphic because it takes one holomorphic connection to another holomorphic connection. Consequently, this gluing produces a holomorphic vector bundle $$\label{dV} V\,\longrightarrow\, T^2$$ of rank $2$ and degree $0$. Furthermore, the connection $\nabla$ on $E_0\, \longrightarrow\, T^2\setminus\{0\}$ extends to a logarithmic connection on $V$ over $T^2$; this logarithmic connection on $V$ will also be denoted by $\nabla$. (See [@De] for details.) It can be shown that 1. $\bigwedge^2 V\, =\, {\mathcal O}_{T^2}$, where $V$ is the vector bundle in , and 2. the logarithmic connection on $\bigwedge^2 V$ induced by the logarithmic connection $\nabla$ on $V$ coincides with the holomorphic connection on ${\mathcal O}_{T^2}$ induced by the de Rham differential $d$. Indeed, the logarithmic connection on $U_o\times \bigwedge^2{\mathbb C}^2\,=\, U_o\times {\mathbb C}$ induced by the connection in coincides with the trivial connection on $U_o\times\mathbb C$ given by the de Rham differential $d$. 
On the other hand, the connection on $\bigwedge^2 E_0\,=\, {\mathcal O}_{T^2\setminus\{o\}}$ induced by the connection $\nabla$ on $E_0$ coincides with the trivial connection on ${\mathcal O}_{T^2\setminus\{o\}}$ given by the de Rham differential $d$. The above two statements follow from these. From Atiyah’s classification of holomorphic vector bundles over any elliptic curve, [@At], we know the possible types of the vector bundle $V$ in . \[cort\] The vector bundle $V$ in is one of the following three types: 1. $V\,=\, L\oplus L^{^*}$ with ${\rm degree}(L)\,=\,0$; 2. there is a spin bundle $S$ on $T^2$ (meaning a holomorphic line bundle of order two), such that $V$ is a nontrivial extension $$0\, \longrightarrow\, S\, \longrightarrow\, V\, \longrightarrow\, S \, \longrightarrow\,0$$ of $S$ by itself; and 3. $V\,=\, L\oplus L^{^*}$ with ${\rm degree}(L)\,>\,0$. \[lem3.3\] Consider the vector bundle $V$ in for $\tfrac{1}{2}\,>\,\rho\,>\,0$. Then the last one of the three cases in Corollary \[cort\], as well as the special situation of the first case where $L\,=\, S$ is a spin bundle, cannot occur. Assume that the third case occurs. Then consider the composition of homomorphisms $$L\, \hookrightarrow\, L\oplus L^{^*} \, \stackrel{\nabla}{\longrightarrow}\, (L\oplus L^{^*})\otimes K_{T^2}\otimes {\mathcal O}_{T^2}(o) \, \longrightarrow\, L^{^*}\otimes K_{T^2}\otimes {\mathcal O}_{T^2}(o)\,=\, L^{^*}\otimes {\mathcal O}_{T^2}(o)\, ,$$ where $K_{T^2}\,=\, {\mathcal O}_{T^2}$ is the holomorphic cotangent bundle of $T^2$ and the homomorphism $$(L\oplus L^{^*})\otimes K_{T^2}\otimes {\mathcal O}_{T^2}(o) \, \longrightarrow\, L^{^*}\otimes K_{T^2}\otimes {\mathcal O}_{T^2}(o)$$ is given by the projection $L\oplus L^{^*}\, \longrightarrow\, L^{^*}$. This composition of homomorphisms vanishes identically, because $$\text{degree}(L)\,>\, \text{degree}(L^*\otimes {\mathcal O}_{T^2}(o)) \,=\, 1- \text{degree}(L)$$ (recall that $\text{degree}(L)\, >\, 0$). 
Consequently, the logarithmic connection $\nabla$ on $V$ preserves the line subbundle $L$. For a holomorphic line bundle $\xi$ with a logarithmic connection singular over $o$, we have $$\label{rd} {\rm degree}(\xi)+\text{Residue}_{\xi}(o)\,=\, 0$$ [@Oh p. 16, Theorem 3]. Now, the logarithmic connection on $L$ induced by $\nabla$ contradicts , because ${\rm degree}(L)+\text{Residue}_L(o)\,> \, 0$; note that $\text{Residue}_L(o)\,\in \,\{\rho,\, -\rho\}$. Therefore, we conclude that the third case cannot occur. If $V\,=\, S\oplus S\,=\, S\otimes {\mathcal O}_{T^2}^{\oplus 2}$, where $S$ is a holomorphic line bundle on $T^2$ of order two, then for a suitable direct summand $S$ of $V$, the residue of the logarithmic connection on it, constructed using the above composition, is $\rho$. This again contradicts . Parabolic structure from a logarithmic connection {#sec3.e} ------------------------------------------------- Consider a logarithmic connection $\nabla$ on a holomorphic bundle $V$ of rank two and with trivial determinant over a compact Riemann surface $\Sigma$. We assume that $\nabla$ is an $\operatorname{SL}(2,{\mathbb{C}})$–connection, i.e., the logarithmic connection on $\bigwedge^2 V\,=\, {\mathcal O}_\Sigma$ induced by $\nabla$ is the trivial connection. Let $p_1,\,\cdots ,\,p_n\,\in\,\Sigma$ be the singular points of $\nabla$. We also assume that the residue $$\text{res}_{p_k}(\nabla)\in\text{End}_0(V_{p_k})$$ of the connection $\nabla$ at every point $p_k$ has two real eigenvalues $\pm\rho_k$ with $\rho_k\,\in\,]0,\, \tfrac{1}{2}[.$ For every $1\, \leq\, k\, \leq\, n$, let $$L_k\,:=\, \text{Eig}(\text{res}_{p_k}(\nabla),\, \rho_k)\, \subset\, V_{p_k}$$ be the eigenline of the residue of $\nabla$ at $p_k$ for the eigenvalue $\rho_k$.
The logarithmic connection $\nabla$ gives rise to the parabolic structure $${\mathcal P}\,=\,(D=p_1+\ldots +p_n,\, \{L_1,\, \cdots,\, L_n\},\, \{\rho_1,\,\cdots ,\, \rho_n\})\, .$$ It is straightforward to check that another such logarithmic connection $\nabla^1$ on $V$ induces the same parabolic structure ${\mathcal P}$ if and only if $\nabla -\nabla^1$ is a strongly parabolic Higgs field on $(V,\, {\mathcal P})$. It should be mentioned that in [@MSe], the local form $$d+{{\left(\begin{matrix}\rho&0\\0& 1-\rho\end{matrix}\right)}}\frac{dw}{w}$$ of the connection is used (instead of the local form in ). In that case the Deligne extension gives a rank two holomorphic vector bundle $W$ (instead of $V$) with $\bigwedge^2 W\,=\, {\mathcal O}_{\Sigma}(-D)$ (instead of $\bigwedge^2 V\,=\, {\mathcal O}_{\Sigma}$), while the parabolic weights at $p_k$ become $\rho_k,\, 1-\rho_k$ (instead of $\rho_k,\, -\rho_k$). A theorem of Mehta and Seshadri [@MSe p. 226, Theorem 4.1(2)], and Biquard [@Biq p. 246, Théorème 2.5] says that the above construction of a parabolic bundle $(V,\, {\mathcal P})$ from a logarithmic connection $\nabla$ produces a bijection between the stable parabolic bundles (in the sense of Section \[sec2.1\]) on $(\Sigma,\, D)$ and the space of isomorphism classes of irreducible flat ${\rm SU}(2)$–connections on the complement $\Sigma\setminus D$. See, for example, [@Pir Theorem 3.2.2] for our specific situation. As a consequence of the above theorem of [@MSe] and [@Biq], for every logarithmic connection $\nabla$ on $V$ which produces a stable parabolic structure $\mathcal P$, there exists a unique strongly parabolic Higgs field $\Theta$ on $(V,\, {\mathcal P})$ such that the holonomy of the flat connection $\nabla+\Theta$ is contained in ${\rm SU}(2)$. Moreover, this flat ${\rm SU}(2)$–connection $\nabla+\Theta$ is irreducible.
Abelianization ============== In [@He3], the connection $\nabla$ (or, more precisely, representatives for each gauge class in $\mathcal M_{1,1}^\rho$) is computed for the special case where $\rho\,=\,\tfrac{1}{6},$ $\tau\,=\,\sqrt{-1}$ and $L\,\in \,{\rm Jac}(T^2)\setminus\{S \,\mid\, S^{\otimes 2}\,=\, K_{T^2}\}$. We shall show (see Proposition \[explicit\_coeff\]) that for general $\rho$, but $\tau\,=\,\sqrt{-1}$ and $L\,\in \,{\rm Jac}(T^2)\setminus\{S \,\mid\, S^{\otimes 2}\,=\, K_{T^2}\}$, the corresponding connection $\nabla$ is of the form $$\label{abel-connection} \nabla\,=\,\nabla^{a,\chi,\rho}\,=\,{{\left(\begin{matrix}\nabla^L &\gamma^+_\chi\\ \gamma^-_\chi & \nabla^{L^*} \end{matrix}\right)}}\, ,$$ where $a,\, \chi\,\in\,{\mathbb{C}}$, $$\nabla^L\,=\,d+a\cdot dw+\chi\cdot d\overline{w}$$ is a holomorphic connection on $L$ and $\nabla^{L^*}$ is its dual connection on $L^{^*}$; here $w$ is a complex affine coordinate on $T^2\,=\, {\mathbb C}/\Gamma$. The off–diagonal terms in can be described explicitly in terms of theta functions as explained below. Before doing so, we briefly describe both the Jacobian and the rank one de Rham moduli space for $T^2$ in terms of some useful coordinates. Let $$d\,=\,\partial+\overline\partial$$ be the decomposition of the de Rham differential $d$ on $T^2$ into its $(1,0)$–part $\partial$ and $(0,1)$–part $\overline\partial$. It is well–known that every holomorphic line bundle of degree zero on $T^2$ is given by a holomorphic structure $$\overline{\partial}^\chi\,=\,\overline{\partial} +\chi\cdot d\overline{w}$$ on the $C^\infty$ trivial line bundle $T^2\times {\mathbb C}\,\longrightarrow\, T^2$ for some $\chi\,\in\,{\mathbb{C}}$, where $w$ is an affine coordinate function on ${\mathbb{C}}/({\mathbb{Z}}+\sqrt{-1}{\mathbb{Z}})\,=\,T^2$ (note that $d\overline{w}$ does not depend on the choice of the affine function $w$).
Clearly, two such differential operators $$\overline{\partial}^{\chi_1}\ \ \text{ and } \ \ \overline{\partial}^{\chi_2}$$ determine isomorphic holomorphic line bundles if and only if $\overline{\partial}^{\chi_1}$ and $\overline{\partial}^{\chi_2}$ are gauge equivalent. Now, they are gauge equivalent if and only if $$\chi_2-\chi_1\,\in\, \Gamma^*$$ where $$\Gamma^*\,=\,\pi{\mathbb{Z}}+\pi \sqrt{-1}{\mathbb{Z}}$$ (recall that $\tau\,=\, \sqrt{-1}$). \[rs\] The holomorphic line bundle $L(\overline{\partial}^{\chi})\, :=\, [\overline{\partial}^{\chi}]$, given by the Dolbeault operator $\overline{\partial}^\chi$, is a spin bundle if and only if $2\chi\,\in\,\Gamma^*.$ Similarly, flat line bundles on $T^2$ are given by the connection operator $$d^{a,\chi}\,=\,d+a\cdot dw+\chi\cdot d\overline{w}$$ on the line bundle $T^2\times {\mathbb C}\,\longrightarrow\, T^2$, for some $a,\, \chi\,\in\,{\mathbb{C}}$. Moreover, two connections $d^{a_1,\chi_1}$ and $d^{a_2,\chi_2}$ are isomorphic if and only if $$(a_2-a_1) + (\chi_2-\chi_1) \,\in\, 2\pi \sqrt{-1} {\mathbb{Z}}\ \ \text{ and }\ \ (a_2-a_1) - (\chi_2-\chi_1)\,\in\, 2\pi {\mathbb{Z}}\, ;$$ indeed, the relevant gauge transformations are $e^{\mu w+\nu\overline{w}}$, and the requirement that they be well defined on $T^2$ gives $\mu+\nu\,\in\,2\pi\sqrt{-1}{\mathbb{Z}}$ and $\mu-\nu\,\in\,2\pi{\mathbb{Z}}$. The (shifted) theta function for ${\mathbb{C}}/ \Gamma$, where as before $\Gamma\,=\,{\mathbb{Z}}+{\mathbb{Z}}\sqrt{-1}$, will be denoted by $\vartheta$.
In other words, $\vartheta$ is the unique (up to a multiplicative constant) entire function satisfying $\vartheta(0) \,=\, 0$ and $$\vartheta(w+ 1) \,=\, \vartheta (w),\,\, \vartheta(w+\sqrt{-1}) \,=\, - \vartheta (w)e^{-2\pi \sqrt{-1}w}\, .$$ Then the function $$t_{x}(w) \,:=\, \frac{\vartheta(w-x)}{\vartheta(w)}e^{-\pi x(w-\overline{w})}$$ is doubly periodic on ${\mathbb{C}}\setminus\Gamma$ with respect to $\Gamma$ and satisfies the equation $$(\operatorname{\overline\partial}-\pi xd\overline{w})t_{x}\,=\,0\, .$$ Thus $t_x$ is a meromorphic section of the holomorphic bundle $L(\overline{\partial}^{-\pi x})\, :=\,[\overline{\partial}^{-\pi x}]$ (it is the holomorphic line bundle given by the Dolbeault operator $\operatorname{\overline\partial}-\pi xd\overline{w}$). Notice that for $x\,\notin\,\Gamma$, the section $t_x$ has a simple zero at $w\,=\,x$ and a first order pole at $w \,=\, 0$. Moreover, up to scaling by a complex number, this $t_x$ is the unique meromorphic section of $L(\overline{\partial}^{-\pi x})\, :=\, [\overline{\partial}^{-\pi x}]$ with a simple pole at $o$. \[TdualJ\] Once a base point $o\,\in\, T^2$ has been chosen, we get the well–known isomorphism $$T^2\,\longrightarrow\, {\rm Jac}(T^2)\, ,\ \ [x]\,\longmapsto\, L(\overline{\partial}^{-\pi x})\, :=\,[\overline{\partial}^{-\pi x}]$$ that associates to $[x]$ the divisor of the meromorphic section $t_x$: $$(t_x)\,=\, [x]-o\, .$$ For $\frac{1}{2}\, >\, \rho\, >\, 0$, if $V$ in is of the form $V\,=\, L\oplus L^{^*}$, then from Corollary \[cort\] and Lemma \[lem3.3\] it follows that $\text{degree}(L)\,=\, 0$ and $L$ is not a spin bundle. In other words, $$L\,=\, L(\overline{\partial}+\chi\cdot d\overline{w})$$ for some $\chi\,\in\,\mathbb C$, and $$\chi\,\notin\,\tfrac{1}{2}\Gamma^*\, ;$$ see Remark \[rs\].
\[explicit\_coeff\] For any $\rho\,\in\, [0,\, \tfrac{1}{2}[$, take $[\nabla]\,\in\, {\mathcal M}_{1,1}^\rho$ such that its Deligne extension is given by the holomorphic vector bundle $$V\,=\, L\oplus L^*$$ (see ), where $L\,=\, L(\overline{\partial}+\chi d\overline{w})$ is a holomorphic line bundle on $T^2$ of degree zero which is not a spin bundle. Set $x\,=\,-\frac{1}{\pi}\chi$, so $x\,\notin\,\tfrac{1}{2}\Gamma.$ Then, there exists $$a\,\in\,{\mathbb{C}}$$ such that one representative of $[\nabla]$ is given by $$\nabla^{a,\chi,\rho}$$ as in , where the second fundamental forms $\gamma^+_\chi$ and $\gamma^-_\chi$ in are given by the meromorphic $1$–forms $$\label{gammamp} \gamma^+_\chi([w])\,=\,\rho \tfrac{\vartheta'(0)}{\vartheta(-2x)}t_{2x}(w)dw\ \ \text{ and }\ \ \gamma^-_\chi([w])\,=\,\rho \tfrac{\vartheta'(0)}{\vartheta(2x)}t_{-2x}(w)dw$$ with values in the holomorphic line bundles of degree zero $L([2x]-[0])\,=\,L(\operatorname{\overline\partial}+ 2\chi d\overline{w})$ and $L([-2x]-[0])\,=\,L(\operatorname{\overline\partial}-2\chi d\overline{w})$ respectively. Using Section \[Delext\] we know that there exists a representative $\nabla$ of $[\nabla]$ such that its $(0,1)$–part ${\overline\partial}^\nabla$ is given by $${\overline\partial}^\nabla\,=\,{\overline\partial}+\begin{pmatrix} \chi d\overline{w}&0 \\ 0& -\chi d\overline{w}\end{pmatrix}\, .$$ The $(1,0)$–part $\partial^\nabla$ is given by $$\partial^\nabla\,=\,\partial+\begin{pmatrix} A & B\\ C& -A \end{pmatrix}\, ,$$ where $$\Psi\,=\,\begin{pmatrix} A & B\\ C& -A \end{pmatrix}$$ is an $\text{End}(V)$–valued meromorphic $1$–form on $T^2$, with respect to the holomorphic structure ${\overline\partial}^\nabla$, such that $\Psi$ has a simple pole at $o$ and is holomorphic elsewhere. In particular, $A$ is a meromorphic $1$–form on $T^2$ with at most a simple pole at $o$, and hence by the residue theorem it is in fact holomorphic, i.e., $$A\,=\, adw$$ for some $a\,\in\,{\mathbb{C}}$.
Furthermore, $B$ and $C$ are meromorphic $1$–forms with values in the holomorphic bundles $L(\operatorname{\overline\partial}+2\chi d\overline{w})$ and $L(\operatorname{\overline\partial}-2\chi d\overline{w})$, respectively. Note that for $x\,\in\,\tfrac{1}{2}\Gamma$, $L(\operatorname{\overline\partial}+2\chi d\overline{w})$ would be the trivial holomorphic line bundle and $B$ and $C$ could not have non-trivial residues at $o$ by the residue theorem. The determinant of the residue of $\Psi$ at $o$ is $-\rho^2$ by . Therefore, from the holomorphicity of $A$ we conclude that the quadratic residue of the meromorphic quadratic differential $BC$ is $$\text{qres}_o(BC)\,=\,\rho^2\, .$$ From the discussion prior to Remark \[TdualJ\], there is a unique meromorphic section of $L(\operatorname{\overline\partial}\pm2\chi d\overline{w})$ with a simple pole at $o$. Thus, after a possible constant diagonal gauge transformation, it follows from the uniqueness, up to scaling, of the meromorphic section of $L(\operatorname{\overline\partial}\pm2\chi d\overline{w})$ with a simple pole at $o$ that $$B\,=\,\gamma^+_\chi \ \ \text{ and } \ \ C\,=\,\gamma^-_\chi\, ,$$ where $\gamma^+_\chi$ and $\gamma^-_\chi$ are the second fundamental forms ; here the assumption that $L$ is not a spin bundle is used. This completes the proof. \[rem:strongparaHiggs\] The off–diagonal parts $\gamma^+_\chi$ and $\gamma^-_\chi$ depend only on $\chi$. Note that $\chi$ also uniquely determines the parabolic structure unless $L(\operatorname{\overline\partial}+\chi d\overline{w})$ is a spin bundle, or equivalently, $2\chi\,\in\, \Gamma^*$. Also note that $L(\operatorname{\overline\partial}-\chi d \overline{w})$ is the dual of $L(\operatorname{\overline\partial}+\chi d\overline{w})$.
We also see from Proposition \[explicit\_coeff\] that every strongly parabolic Higgs field on the parabolic bundle corresponding to the connection $\nabla$ there is of the form $$c \begin{pmatrix} dw &0\\0&-dw\end{pmatrix}$$ for some constant $c\,\in\,{\mathbb{C}}$. \[Pro-stab\] Assume that $\rho\,\in\, ]0,\, \tfrac{1}{2}[$. Take $[\nabla]\, \in\, {\mathcal M}^\rho_{1,1}$ such that the corresponding bundle $V$ in is of the form $L\oplus L^{^*}$ (so $L$ is not a spin bundle but its degree is zero by Corollary \[cort\] and Lemma \[lem3.3\]). Then, the rank two parabolic bundle corresponding to $[\nabla]$ (see Section \[sec3.e\]) is parabolic stable. The two holomorphic line bundles $L$ and $L^{^*}$ are not isomorphic, because $L$ is not a spin bundle. From this it can be shown that any holomorphic subbundle of degree zero $$\xi\, \subset\, V\,=\, L\oplus L^{^*}$$ is either $L$ or $L^{^*}$. Indeed, this follows by considering the two compositions of homomorphisms: $$\xi\, \hookrightarrow\, L\oplus L^{^*}\, \longrightarrow\, L \ \text{ and }\ \xi\, \hookrightarrow\, L\oplus L^{^*}\, \longrightarrow\, L^{^*}\, ;$$ one of them has to be the zero homomorphism and the other an isomorphism. As the residue in is off–diagonal (with respect to the holomorphic decomposition $V\,=\,L\oplus L^*$), the above observation implies that every holomorphic line subbundle $\xi\, \subset\, V$ of degree zero has parabolic degree $-\rho$. On the other hand, the parabolic degree of a holomorphic line subbundle of negative degree is less than or equal to $$-1+\rho\,<\,0\,.$$ Consequently, the parabolic bundle is stable. Outlook: Exceptional bundles ---------------------------- The exceptional cases of non-trivial extensions of a spin bundle $S$ by itself (the second case in Corollary \[cort\]) can be described as follows.
After a normalization, the holomorphic structure of the vector bundle is given by the Dolbeault operator on the $C^\infty$ trivial bundle $T^2\times {\mathbb C}^2\, \longrightarrow\, T^2$ $$\operatorname{\overline\partial}\,=\,{{\left(\begin{matrix}\operatorname{\overline\partial}^S & d\overline{w}\\ 0 & \operatorname{\overline\partial}^S\end{matrix}\right)}}\, ,$$ where $w$ is the global coordinate on the universal covering ${\mathbb{C}}\, \longrightarrow\, {\mathbb{C}}/\Gamma\,=\,T^2$. The $(1,0)$–type component $\operatorname{\partial}$ of the connection is then given by $$\operatorname{\partial}={{\left(\begin{matrix} \operatorname{\partial}^S+a dw& bdw \\ c dw & \operatorname{\partial}^S-a dw\end{matrix}\right)}}\, ,$$ where $a,\,b,\,c\,\colon \,T^2\setminus\{o\}\,\longrightarrow\, {\mathbb{C}}$ are smooth functions with a first order pole like singularity at $o\,\in\, T^2.$ The connection $\nabla\,=\,\operatorname{\partial}+\operatorname{\overline\partial}$ is flat if and only if $$\label{exceptional_flatness} {{\left(\begin{matrix}\operatorname{\overline\partial}a+c d\overline{w}&\operatorname{\overline\partial}b-2 a d\overline{w}\\ \operatorname{\overline\partial}c& -\operatorname{\overline\partial}a-c d\overline{w}\end{matrix}\right)}}\,=\,0\, .$$ Since $c$ has at most a first order pole at $o\,\in\, T^2$, and satisfies the equation $\operatorname{\overline\partial}c\,=\,0$, it must be a constant. This constant turns out to be related to the weight $\rho$ in the following way.
If $a$ has a first order pole like singularity at $o$ of the form $$a(w)\,\sim \,\frac{a_1}{w}+a_0+\ldots\, ,$$ then integration by parts yields $$2\pi\sqrt{-1} a_1\,=\,\int_{T^2} \operatorname{\overline\partial}a\wedge dw\,=\,\int_{T^2}c d\overline{w}\wedge dw\, .$$ The connection $\nabla$ is locally gauge equivalent, by a holomorphic gauge that extends smoothly to $o\,\in\, T^2$, to the connection in ; using this it follows that $$a_1\,=\,\pm\rho\, ,$$ and therefore $$\label{exceptional-c-constant} c\,=\,\pm\frac{2\pi\sqrt{-1} \rho}{\int_{T^2}d\overline{w}\wedge dw} \,=\,\pm \pi\rho$$ (recall that $\tau\,=\, \sqrt{-1}$). The sign in tells us whether the induced parabolic structure is stable or not. More precisely, if $0\,<\,\rho\,<\,\tfrac{1}{2},$ then we have for the plus “$+$” sign an unstable parabolic structure, as the parabolic degree of the unique holomorphic line subbundle $L\,=\,S\oplus\{0\}$ of degree $0$ is $$\text{par-deg}(L)\,=\,{\rm degree}(L)+\rho\,>\,0\, .$$ Analogously, the parabolic structure is stable for the minus “$-$” sign in . We have not yet shown that there is actually a flat connection $\nabla$ for each choice of sign in $\pm\rho.$ The complex number $c$ is determined by $\rho$ using , and there is a unique solution for $a$, up to an additive constant, of the equation in . Then, for each solution of $a$, there is again a unique solution for $b$, with a first order pole like singularity at $o\,\in\, T^2$, of the equation $$\operatorname{\overline\partial}b-2 a d\overline{w}\,=\,0\, ;$$ indeed, this can easily be deduced from Serre duality. Hence, up to two additive constants, the flat connection is unique.
But owing to the constant gauge transformations $$G\,=\, {{\left(\begin{matrix} 1& h\\0&1\end{matrix}\right)}}$$ of the $C^\infty$ trivial bundle $T^2\times {\mathbb C}^2\, \longrightarrow\, T^2$, where $h\,\in\,{\mathbb{C}}$ is any constant, the isomorphism class of the flat connection does not depend on the choice of the additive constant in the solution $a$. Note that in the unstable case, the gauge transformation $G$ does not alter the parabolic structure, but in the case of the stable parabolic structure we obtain different, but nevertheless gauge equivalent, parabolic structures. Flat connections on the 4-punctured torus ========================================= Consider $$\widehat{T}^2\,=\,{\mathbb{C}}/(2{\mathbb{Z}}+2\sqrt{-1}{\mathbb{Z}})$$ and the 4–fold covering $$\label{mPi} \Pi\,\colon\, \widehat{T}^2\,\longrightarrow\, T^2\,=\,{\mathbb{C}}/({\mathbb{Z}}+\sqrt{-1}{\mathbb{Z}})$$ produced by the identity map of $\mathbb C$. Let $$\{p_1,\,p_2,\, p_3,\, p_4\}\,:=\, \Pi^{-1}(o) \, \subset\, \widehat{T}^2$$ be the preimage of $o\, \in\, T^2$. Fix $$\rho\,=\, 0\, .$$ We use $\Pi$ in to pull back the connection in to $\widehat{T}^2$.
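For $\rho\,=\,0$ the pulled-back connection is diagonal, so its monodromy is obtained by integrating $a\cdot dw+\chi\cdot d\overline{w}$ along the loops $2$ and $2\sqrt{-1}$ and exponentiating. The trace formulas stated next can be verified symbolically by this bookkeeping; a sketch using the `sympy` library (the helper `exponent` is our notation, and the check is ours):

```python
import sympy as sp

a, chi = sp.symbols('a chi')

def exponent(lam):
    # integral of a*dw + chi*dwbar along the straight loop w -> w + lam
    return a*lam + chi*sp.conjugate(lam)

# traces of the diagonal SL(2, C) monodromies exp(-/+ exponent)
T1 = sp.exp(-exponent(2)) + sp.exp(exponent(2))
T2 = sp.exp(-exponent(2*sp.I)) + sp.exp(exponent(2*sp.I))

# these agree with the stated formulas for rho = 0
assert sp.simplify(T1 - (sp.exp(-2*(a + chi)) + sp.exp(2*(a + chi)))) == 0
assert sp.simplify(T2 - (sp.exp(2*sp.I*(-a + chi)) + sp.exp(2*sp.I*(a - chi)))) == 0
```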
The traces $$T_1(\chi,a)\,=\, \text{tr}(h(\widehat{\alpha}))\ \ \text{ and }\ \ T_2(\chi,a)\,=\, \text{tr}(h(\widehat{\beta}))\, ,$$ of the monodromy representation $h$ for $\Pi^*\nabla^{a,\chi,\rho=0}$ along $$\label{tab} \widehat{\alpha}\,=\, 2\,\in\, 2{\mathbb{Z}}+2 \sqrt{-1}{\mathbb{Z}}\, \subset\, \pi_1(\widehat{T}^2\setminus\{p_1,\cdots ,p_4\}\, ,q)\ \ \text{ and}$$ $$\widehat{\beta}\,=\,2\sqrt{-1}\,\in\, 2{\mathbb{Z}}+2 \sqrt{-1}{\mathbb{Z}}\, \subset\, \pi_1(\widehat{T}^2\setminus\{p_1,\cdots ,p_4\},\, q)$$ (see Figure \[figure2\]), are given by $$T_1(\chi,a)\,=\,e^{-2(a+\chi)}+e^{2(a+\chi)}\ \ \text{ and}$$ $$T_2(\chi,a)\,=\,e^{2\sqrt{-1}(-a+\chi)}+e^{2\sqrt{-1}(a -\chi)}$$ respectively, while the local monodromy of $\Pi^*\nabla^{a,\chi,\rho=0}$ around each of $p_1,\,\cdots ,\,p_4$ is trivial, because $\rho\,=\,0$. In the following, fix $$\label{chifix} \chi\,=\,\frac{\pi}{4}(1-\sqrt{-1})\, ,$$ and consider $$a_k\,=\,-\frac{\pi}{4}(1+\sqrt{-1})+k\pi(1+\sqrt{-1})$$ for all $k\,\in\,{\mathbb{Z}}.$ Then we have $$\label{T1eq}T_1(\chi,a_k)\,=\,-(e^{-2k\pi}+e^{2k\pi})\,\in\, {\mathbb{R}}$$ $$\label{T2eq}T_2(\chi,a_k)\,=\,-(e^{-2k\pi}+e^{2k\pi})\,\in\, {\mathbb{R}}\, ;$$ as before, $T_1(\chi,a_k)$ and $T_2(\chi,a_k)$ are the traces of holonomies of $\Pi^*\nabla^{a_k,\chi,0}$ along $\widehat\alpha$ and $\widehat\beta$ respectively (see ). Moreover, $$\label{der1} \begin{split} \frac{\partial}{\partial s}T_1(\chi,a_k+s+\sqrt{-1}t)&\,=\,-2e^{-2k\pi}(-1+e^{4k\pi})\,\in\, {\mathbb{R}}\\ \frac{\partial}{\partial t}T_1(\chi,a_k+s+\sqrt{-1} t)&\,=\,-2\sqrt{-1}e^{-2k\pi}(-1+e^{4k\pi}) \,\in\, \sqrt{-1}{\mathbb{R}}\setminus\{0\} \end{split}$$ and $$\label{der2} \begin{split} \frac{\partial}{\partial s}T_2(\chi,a_k+s+\sqrt{-1} t)&\,=\,2\sqrt{-1}e^{-2k\pi}(-1+e^{4k\pi}) \,\in\, \sqrt{-1}{\mathbb{R}}\setminus\{0\}\\ \frac{\partial}{\partial t}T_2(\chi,a_k+s+\sqrt{-1} t)&\,=\,-2e^{-2k\pi}(-1+e^{4k\pi})\,\in\, {\mathbb{R}}\, . 
\end{split}$$ \[real-mon-hatT2\] Let $k\,\in\,{\mathbb{Z}}\setminus\{0\}$, $\chi\,=\,\frac{\pi}{4}(1-\sqrt{-1})$ and $a_k \,=\,-\frac{\pi}{4}(1+\sqrt{-1})+k\pi(1+\sqrt{-1})$. Then there exists $\epsilon\,>\,0$ such that for each $\rho\,\in\,]0,\,\epsilon[$, there is a unique number $a\,\in\,{\mathbb{C}}$ near $a_k$ satisfying the condition that the monodromy of the flat connection $$\Pi^*\nabla^{a,\chi,\rho}$$ on $\widehat{T}^2\setminus\{p_1,\cdots ,p_4\}$ is irreducible and the image of the monodromy homomorphism is conjugate to a subgroup of $\operatorname{SL}(2,{\mathbb{R}})$. Using and , and applying the implicit function theorem to the imaginary parts of the traces $T_1$ and $T_2$, we find for each sufficiently small $\rho$ a unique complex number $a$ such that the traces $T_1$ and $T_2$, of holonomies of $\nabla^{a,\chi,\rho}$ along $\widehat\alpha$ and $\widehat\beta$ respectively, are real. Because $k\,\neq\, 0$ and $\rho$ is small, we obtain from and that these traces satisfy $$T_1\,<\,-2\ \ \text{ and }\ \ T_2\,<\,-2\, .$$ Recall the general formula $$\label{tXY}\text{tr}(X)\text{tr}(Y)\,=\,\text{tr}(XY)+\text{tr}(XY^{-1})$$ for $X,\,Y\,\in\, \text{SL}(2,{\mathbb{C}})$. Let $$x\,=\,\text{tr}(h(\alpha))\ \ \text{ and } \ \ y\,=\,\text{tr}(h(\beta))$$ be the traces of the monodromy homomorphism $h$ of the connection $\nabla^{a,\chi,\rho}$ on $T^2\setminus\{o\}$ along $\alpha$ and $\beta$ (recall the notation of Section \[ADHS1t\]). Applying to $$X\,=\, h(\alpha)\,=\, Y \ \ \text{ (respectively, }\ \ X\,=\, h(\beta)\,=\,Y)$$ we obtain that $x^2\,=\,T_1+2\,<\,0$ (respectively, $y^2\,=\,T_2+2\,<\,0$), so $x$ (respectively, $y$) must be purely imaginary. Then it can be checked directly that the trace along any closed curve in the 4–punctured torus is real: in fact, that $$z\,=\,\text{tr}(h(\alpha\circ\beta))$$ is real is a direct consequence of and the above observation that $x,\,y\,\in\,\sqrt{-1}{\mathbb{R}}$.
Using \[tXY\] repeatedly (compare with [@Gol]), it is deduced that the trace of the monodromy along any closed curve on $\widehat{T}^2$ is real. For $\rho\,\neq\,0$ sufficiently small, the connection $\Pi^*\nabla^{a,\chi,\rho}$ on $\widehat{T}^2$ is irreducible as a consequence of Lemma \[irr\] — note that the condition $xy\,\neq\, 0$ follows directly from the fact that $\rho\,\neq\, 0$ — applied to $h(\widehat{\alpha})$ and $h(\widehat{\beta})$ (see \[tab\]). We will prove that the image of the monodromy homomorphism $h$ is conjugate to a subgroup of $\text{SL}(2,{\mathbb{R}})$. Since the monodromy is irreducible and all its traces are real, the homomorphism $h$ is conjugate to its complex conjugate representation $\overline h$, meaning there exists $C\,\in\, \operatorname{SL}(2,{\mathbb{C}})$ such that $$C^{-1}\overline{h} C\,=\, h\, .$$ Applying this equation twice, we get that $$\overline{C}C\,=\, \pm \text{Id}$$ because $h$ is irreducible. Assume that $\overline{C}C\,=\, -\text{Id}$. Then a straightforward computation shows that there exists $D\,\in\, \operatorname{SL}(2,{\mathbb{C}})$ such that $$C\,=\,\pm \overline{D}^{-1} \delta D\, ,$$ with $$\delta\,=\, \begin{pmatrix}0&1\\ -1&0\end{pmatrix}\, .$$ Therefore, the conjugated representation $$H\, :=\, DhD^{-1}$$ is unitary, as $$(\overline{H}^t)^{-1}\,=\, \delta^{-1}\overline{H}\delta\,=\, (\pm1)^2\delta^{-1}\overline{D} \overline{h} \overline{D}^{-1}\delta\,=\, H\, .$$ Now, since the traces of some elements in the image of the monodromy are not contained in $[-2,\, 2]$, we get a contradiction. Thus, $$\overline{C}C\,=\, \text{Id}\, ,$$ and a direct computation then implies that $$C\,=\,\overline{D}^{-1} D$$ for some $D\,\in\, \operatorname{SL}(2,{\mathbb{C}})$. Consequently, we have $$DhD^{-1}\,=\, \overline{D}\overline{h} \overline{D}^{-1}\, .$$ Hence the image of the monodromy homomorphism $h$ is conjugate to a subgroup of $\text{SL}(2,{\mathbb{R}})$. 
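The implication $C\,=\,\pm\overline{D}^{-1}\delta D\Rightarrow \overline{C}C\,=\,-\mathrm{Id}$ used above follows from $\overline{\delta}\delta\,=\,-\mathrm{Id}$; the following sketch (with random $D\,\in\,\operatorname{SL}(2,{\mathbb{C}})$, helper names ours) checks it numerically:

```python
import random

def mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conj(X):
    return [[X[i][j].conjugate() for j in range(2)] for i in range(2)]

def inv_sl2(X):
    # the inverse of a determinant-one matrix is its adjugate
    return [[X[1][1], -X[0][1]], [-X[1][0], X[0][0]]]

def rand_sl2(rng):
    a = complex(rng.uniform(1, 2), rng.uniform(1, 2))
    b = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
    c = complex(rng.uniform(-1, 1), rng.uniform(-1, 1))
    return [[a, b], [c, (1 + b*c)/a]]          # det = 1

delta = [[0, 1], [-1, 0]]

rng = random.Random(1)
for _ in range(50):
    D = rand_sl2(rng)
    C = mul(mul(inv_sl2(conj(D)), delta), D)   # C = conj(D)^{-1} delta D
    CC = mul(conj(C), C)                       # equals D^{-1} delta^2 D = -Id
    assert all(abs(CC[i][j] - (-1 if i == j else 0)) < 1e-9
               for i in range(2) for j in range(2))
```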
Once we know that $x$ and $y$ are purely imaginary and $z$ is real with $|z|\,>\, 2$ (that $|z|\,>\,2$ follows from $k\,\neq\,0$), here is an alternative argument showing that the monodromy representation in Theorem \[real-mon-hatT2\] is conjugate to an $\operatorname{SL}(2,{\mathbb{R}})$-representation. First observe that both solutions of $$\zeta+\zeta^{-1}\,=\,z$$ are real. A direct calculation shows that for $h(\alpha)$ and $h(\beta)$ as above, all the matrices for a set of generators of the fundamental group of the 4–punctured torus, for example $$\begin{split} &h(\alpha)^2,\\ &h(\beta)^2,\\ &h(\beta)^{-1}h(\alpha)^{-1}h(\beta)h(\alpha),\\ &h(\alpha)^{-1}h(\beta)^{-1}h(\alpha)^{-1}h(\beta)h(\alpha)h(\alpha),\\ &h(\beta)^{-1}h(\beta)^{-1}h(\alpha)^{-1}h(\beta)h(\alpha)h(\beta),\\ &h(\alpha)^{-1}h(\beta)^{-1}h(\beta)^{-1}h(\alpha)^{-1}h(\beta)h(\alpha)h(\beta)h(\alpha), \end{split}$$ have the property that the off–diagonal entries are purely imaginary and the diagonal entries are real. Conjugating by $$\begin{pmatrix} e^{\tfrac{\pi \sqrt{-1}}{4}}&0\\0& e^{-\tfrac{\pi \sqrt{-1}}{4}}\end{pmatrix}$$ directly gives a representation into $\operatorname{SL}(2,{\mathbb{R}})$. We shall use the following theorem. \[thetrivcon\] Let $\chi\,=\,\frac{\pi}{4}(1-\sqrt{-1})$. For every $\rho\,\in\, [0,\, \tfrac{1}{2}[$, there exists $a^u\,\in\, {\mathbb{C}}$ such that $$\Pi^*\nabla^{a^u,\chi,\rho}$$ is a reducible unitary connection satisfying the following condition: the monodromies of $\Pi^*\nabla^{a^u,\chi,\rho}$ along $$\widehat\alpha\,=\,2 \,\in\, \pi_1(\widehat{T}^2\setminus\{p_1,\cdots ,p_4\},\, q) \ \ \text{ and }\ \ \widehat\beta\,=\,2\sqrt{-1} \,\in\, \pi_1(\widehat{T}^2\setminus\{p_1,\cdots ,p_4\},\, q)$$ (see \[tab\]) are both $-{\rm Id}$. First, the parabolic bundle on $T^2$ determined by $\chi\,=\,\frac{\pi}{4}(1-\sqrt{-1})$ is stable; this stable parabolic bundle on $T^2$ will be denoted by $W_*$. 
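The conjugation by $\operatorname{diag}(e^{\pi\sqrt{-1}/4},e^{-\pi\sqrt{-1}/4})$ used above works in general: for any matrix with real diagonal and purely imaginary off-diagonal entries, the off-diagonal entries get multiplied by $e^{\pm\pi\sqrt{-1}/2}\,=\,\pm\sqrt{-1}$ and become real. A minimal numerical sketch (the sample matrix is an arbitrary choice of ours):

```python
import cmath
import math

zeta = cmath.exp(1j*math.pi/4)
G     = [[zeta, 0], [0, 1/zeta]]
G_inv = [[1/zeta, 0], [0, zeta]]

def mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# real diagonal, purely imaginary off-diagonal entries
M = [[2.0, 3.0j], [-5.0j, -1.0]]
R = mul(mul(G, M), G_inv)
# the conjugated matrix is real
assert all(abs(R[i][j].imag) < 1e-12 for i in range(2) for j in range(2))
```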
Note that all the strongly parabolic Higgs fields on this parabolic bundle are given by constant multiples of $$\begin{pmatrix} dw & 0\\ 0&-dw\end{pmatrix}\, .$$ In view of the theorem of Mehta–Seshadri and Biquard ([@MSe], [@Biq]) mentioned in Section \[sec3.e\], there exists $a^u\,\in\,{\mathbb{C}}$ such that $$\nabla^{a^u,\chi,\rho}$$ has unitary monodromy on $T^2$. Then, the flat connection $\Pi^*\nabla^{ a^u,\chi,\rho}$ on $\widehat{T}^2$ has unitary monodromy as well, where $\Pi$ is the covering map defined earlier. On the other hand, the pulled-back parabolic bundle $\Pi^*W_*$ on $\widehat{T}^2$ is strictly semi-stable, because $\chi\,=\,\frac{\pi}{4}(1-\sqrt{-1})$ and $\widehat{T}^2\,=\,{\mathbb{C}}/(2\Gamma)$ for the specific lattice $2\Gamma\,=\,2{\mathbb{Z}}+2\sqrt{-1}{\mathbb{Z}}$ (this can be proved by a direct computation, but it also follows from [@HeHe]), so that the unitary connection $\Pi^*\nabla^{ a^u,\chi,\rho}$ is automatically reducible. We give an alternative explanation for the semi-stability of the parabolic bundle $\Pi^*W_*$. Take $x \,=\, y \,= \,0$ and the unique positive solution for $z$. Note that if $\rho\,=\,0$, then $z\,=\,2$ and $a^u\,=\,-\overline\chi$, with $\chi$ given by \[chifix\]. Then we see that the representation $h$ of the fundamental group of the $1$–punctured torus given by $x(h)\,=\, 0\, =\, y(h)$ and $z(h)\,=\, z$ induces a unitary reducible representation of the fundamental group of the 4–punctured torus for any real $\rho$. The corresponding monodromies along $\widehat\alpha$ and $\widehat\beta$ are given by $h(\alpha) h(\alpha)$ and $h(\beta) h(\beta)$, and both are equal to $-\text{Id}$. 
It is easy to see that, for $\rho\,<\,\tfrac{1}{4}$ (this case suffices for our proof), the parabolic structure on the holomorphic bundle $$L\oplus L^*\,\longrightarrow\, \widehat{T}^2$$ cannot be strictly semi-stable if $L^2$ is not trivial; this is because the lines giving the quasiparabolic structure are not contained in $L$ or $L^*$, and these two, namely $L$ and $L^*$, are the only holomorphic subbundles of degree zero by the assumption that $L^2\,\neq\, {\mathcal O}_{\widehat{T}^2}$. By continuity of the monodromy representation of $\Pi^*\nabla^{a^u,\chi,\rho}$ with respect to the parameters $(a^u,\,\chi,\,\rho)$, the representation of $\Pi^*\nabla^{ a^u,\chi,\rho}$ must be the unitary reducible representation $h$ with $x(h)\,=\,0\,=\,y(h)$ and positive $z(h)\,=\,z$. As we already know that the monodromies of $h$ along $\widehat\alpha$ and $\widehat\beta$ are both $-\text{Id}$, this finishes the proof. Flat irreducible $\operatorname{SL}(2,{\mathbb{R}})$–connections on compact surfaces ==================================================================================== We assume that $$\rho\,=\,\frac{1}{2p}\, ,$$ for some odd $p\,\in\,{\mathbb{N}}$, with $\rho$ small enough so that Theorem \[real-mon-hatT2\] is applicable. 
The torus $\widehat{T}^2$ is of square conformal type, and it is given by the algebraic equation $$y^2\,=\, \frac{z^2-1}{z^2+1}\, .$$ Without loss of generality, we can assume that the four points $$\{p_1,\,\cdots ,\,p_4\} \,=\, \Pi^{-1}(\{o\})\, ,$$ where $\Pi$ is the covering map introduced earlier, are the branch points of $z$, i.e., the $(y,\, z)$ coordinates of $p_1,\,\cdots ,\, p_4$ are $$p_1\,=\,(0,\,1), \ \ p_2\,=\,(\infty,\,\sqrt{-1}),\ \ p_3\,=\,(0,\,-1), \ \ p_4\,=\,(\infty,\,-\sqrt{-1})\, .$$ Define the compact Riemann surface $\Sigma$ by the algebraic equation $$\label{sigmayz} x^{2p}\,=\,\frac{z^2-1}{z^2+1}\, .$$ Consider the $p$–fold covering $$\Phi_p\,\colon\, \Sigma\,\longrightarrow\, \widehat{T}^2\, ,\ \ (x,\, z) \, \longmapsto \, (x^p,\, z)\, ,$$ which is totally branched over $p_1,\, \cdots,\, p_4$. Denote the inverse image $\Phi^{-1}_p(p_i)$, $1\, \leq\, i\, \leq\, 4$, by $P_i$ (see Figure \[figure3\]). For a connection $\nabla^A$ (respectively, $\nabla^B$) on a vector bundle $A$ (respectively, $B$), the induced connection $(\nabla^A\otimes\text{Id}_B)\oplus (\text{Id}_A\otimes\nabla^B)$ on $A\otimes B$ will be denoted by $\nabla^A\otimes\nabla^B$ for notational convenience. There are holomorphic line bundles $$S\,\longrightarrow\, \Sigma$$ of degree $-2$ such that $$S\otimes S\,=\,{\mathcal O}_\Sigma(-P_1-P_2-P_3-P_4)\, .$$ For every such $S$, there is a unique meromorphic connection $\nabla^S$ on $S$ with the property that $$\nabla^S\otimes\nabla^S s_{-P_1-P_2-P_3-P_4}\,=\,0\, ,$$ where $s_{-P_1-P_2-P_3-P_4}$ is the meromorphic section of ${\mathcal O}_\Sigma(-P_1-P_2-P_3-P_4)$ given by the constant function $1$ on $\Sigma$ (this section has simple poles at $P_1,\,\cdots,\,P_4$). 
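The compatibility of the two algebraic equations with $\Phi_p$ can be checked pointwise: if $(x,z)$ satisfies \[sigmayz\], then $(y,z)\,=\,(x^p,z)$ satisfies $y^2\,=\,(z^2-1)/(z^2+1)$. A small numerical sketch (with $p\,=\,3$ and sample values of $z$; we simply take one branch of the $2p$-th root for $x$):

```python
import cmath

p = 3
for z in (0.5, 2.0, 1 + 1j, -3 + 0.25j):
    z = complex(z)
    w = (z*z - 1) / (z*z + 1)
    x = cmath.exp(cmath.log(w) / (2*p))   # one branch of the 2p-th root
    assert abs(x**(2*p) - w) < 1e-9       # (x, z) lies on Sigma
    y = x**p                              # Phi_p(x, z) = (x^p, z)
    assert abs(y*y - w) < 1e-9            # (y, z) lies on hat{T}^2
```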
Observe that the monodromy representation of $\nabla^S$ takes values in ${\mathbb{Z}}/2{\mathbb{Z}}.$ Also, note that $(S,\,\nabla^S)$ is unique up to tensoring with an order two holomorphic line bundle $\xi$ equipped with the (unique) canonical connection that induces the trivial connection on $\xi\otimes\xi$. \[trivialmon\] For given $\rho\,=\,\tfrac{1}{2p}$ and $\Sigma$, consider $a^u$ and $\chi$ as in Theorem \[thetrivcon\]. There exists a unique pair $(S,\,\nabla^S)$ such that the monodromy of the connection $$\nabla^S\otimes (\Pi\circ\Phi_p)^*\nabla^{a^u,\chi,\rho}$$ is trivial. Since $p$ is odd, $\rho\,=\, \tfrac{1}{2p}$, and $\Phi_p$ is a totally branched covering, the local monodromies of $$(\Pi\circ\Phi_p)^*\nabla^{a^u,\chi,\rho}$$ around the points of $P_i$, $1\,\leq\, i\, \leq\, 4$, are all $-\text{Id}.$ Moreover, from Theorem \[thetrivcon\] it follows easily that the monodromy along any closed curve is $$\pm\text{Id}.$$ The lemma follows from these. The connection $$\nabla^S\otimes (\Pi\circ\Phi_p)^*\nabla^{a^u,\chi,\rho}$$ is defined on the vector bundle $$S\otimes (L\oplus L^*)\, \longrightarrow\, \Sigma\, ,$$ where $L$ is the pull-back, by $\Pi\circ\Phi_p$, of the $C^\infty$ trivial line bundle $T^2\times{\mathbb C}\, \longrightarrow \,T^2$ equipped with holomorphic structure $$\overline{\partial}+\chi d\overline{w}\, .$$ For each $1\, \leq\,i\, \leq\, 4$, the residues of the connection $\nabla^S\otimes (\Pi\circ\Phi_p)^*\nabla^{a^u,\chi,\rho}$ at the points of $P_i\,=\, \Phi^{-1}_p(p_i)$ are $$\label{rescon} \tfrac{1}{2}\begin{pmatrix} 1&-1\\-1&1\end{pmatrix}$$ with respect to any frame at points of $P_i$ compatible with the decomposition $S\otimes (L\oplus L^*)\, =\, (S\otimes L)\oplus (S\otimes L^*)$. 
As in [@He3 § 3], there exists a holomorphic rank two bundle $V$ on $\Sigma$ with trivial determinant, equipped with a holomorphic connection $D$, together with a holomorphic bundle map $$\label{df} F\,\colon\, S\otimes (L\oplus L^*)\, \longrightarrow\, V\, ,$$ which is an isomorphism away from $P_1,\,\cdots ,\, P_4$, such that $$\nabla^S\otimes (\Pi\circ\Phi_p)^*\nabla^{a^u,\chi,\rho}\,=\,F^{-1}\circ D\circ F\, .$$ From Lemma \[trivialmon\] we know that $(V,\, D)$ is trivial. \[Lemired\] Assume $p\,\geq\, 3$. Consider the strongly parabolic Higgs field $$\Psi\,=\, \begin{pmatrix} dw&0\\0&-dw\end{pmatrix}$$ with respect to the parabolic structure induced by $\nabla^{a^u,\chi,\rho}$. Then, $$\Theta\,= \, F\circ(\Pi\circ\Phi_p)^*\Psi\circ F^{-1}$$ is a holomorphic Higgs field on the trivial holomorphic bundle $(V,\, D^{0,1}) \,=\, (V,\, D'')$ (the Dolbeault operator for the trivial holomorphic structure is denoted by $D''$). Consider the holomorphic Higgs field $$(\Pi\circ\Phi_p)^*\Psi\,\colon\, S\otimes (L\oplus L^*)\,\longrightarrow\, K_\Sigma\otimes S\otimes (L\oplus L^*)$$ on the rank two holomorphic bundle $S\otimes (L\oplus L^*)$. It vanishes of order $p-1\,\geq\,2$ at the singular points $P_1,\,\cdots ,\,P_4$. Performing the local analysis (as in [@He3 § 3.2]), near $P_k$, of the normal form of the homomorphism $F$ in \[df\], we directly see that $\Theta\,=\, F\circ(\Pi\circ\Phi_p)^*\Psi\circ F^{-1}$ has no singularities, i.e., it is a holomorphic Higgs field on the trivial holomorphic bundle $(V,\, D'')$. Indeed, the homomorphism $F$ in \[df\] has the local form $$\begin{pmatrix} 1&-\tfrac{z}{2}\\1&\tfrac{z}{2}\end{pmatrix}$$ with respect to the frame corresponding to \[rescon\] and with respect to a holomorphic coordinate $z$ centered at $P_k$; so by conjugating with $F^{-1}$, the entries of $\Psi$ (with respect to a holomorphic frame) get multiplied, at worst, by $$\frac{1}{z}\, ;$$ consequently, $\Theta$ does not have poles. 
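Using the stated local form of $F$, the absence of poles in $\Theta$ can be seen concretely: conjugating a diagonal Higgs field by $F$ multiplies entries by at worst $1/z$, so a coefficient vanishing to order $p-1\,\geq\,2$ stays bounded. A numerical sketch with the model choice $\psi(z)\,=\,z^{p-1}$, $p\,=\,3$ (helper names are ours):

```python
def mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def F(z):
    # stated local form of F near P_k
    return [[1, -z/2], [1, z/2]]

def F_inv(z):
    # det F(z) = z, so F^{-1} = (1/z) * adjugate
    return [[(z/2)/z, (z/2)/z], [-1/z, 1/z]]

p = 3
for t in (1e-1, 1e-3, 1e-6):
    z = complex(t, t)
    psi = z**(p - 1)                 # Higgs coefficient vanishing of order p-1
    Psi = [[psi, 0], [0, -psi]]
    Theta = mul(mul(F(z), Psi), F_inv(z))
    # entries of Theta are O(z^{p-2}); in particular bounded as z -> 0
    assert max(abs(Theta[i][j]) for i in range(2) for j in range(2)) \
        <= 2*abs(z)**(p - 2) + 1e-12
```

(A direct computation even gives $\Theta\,=\,\psi\begin{pmatrix}0&1\\1&0\end{pmatrix}$ in this model, so no $1/z$ actually survives.)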
\[Main\] There exists a compact Riemann surface $\Sigma$ of genus $g\,>\,1$ with an irreducible holomorphic connection $\nabla$ on the trivial holomorphic rank two vector bundle ${\mathcal O}^{\oplus 2}_\Sigma$ such that the image of the monodromy homomorphism for $\nabla$ is contained in $\operatorname{SL}(2,{\mathbb{R}})$. For $\rho\,=\,\tfrac{1}{2p}$ with $p$ an odd integer, consider the connection $\nabla^{a,\chi,\rho}$ on $T^2$ given by Theorem \[real-mon-hatT2\]. Since the image of the monodromy homomorphism for $\Pi^*\nabla^{a,\chi,\rho}$ is conjugate to a subgroup of $\operatorname{SL}(2,{\mathbb{R}})$, and $\nabla^S$ has ${\mathbb{Z}}/2{\mathbb{Z}}$–monodromy, the image of the monodromy homomorphism for the connection $$D\, :=\, \nabla^S\otimes(\Pi\circ\Phi_p)^*\nabla^{a,\chi,\rho}$$ can be conjugated into $\operatorname{SL}(2,{\mathbb{R}})$ as well. The same holds for the connection $$\nabla\,:=\,F\circ(\nabla^S\otimes(\Pi\circ\Phi_p)^*\nabla^{a,\chi,\rho}) \circ F^{-1}$$ because $F$ is a (singular) gauge transformation. By Lemma \[Lemired\], $$\nabla-D$$ is a holomorphic Higgs field on the trivial holomorphic vector bundle $(V,\,D'')$. It remains to show that the monodromy homomorphism for $\nabla$ is an irreducible representation of the fundamental group. Since $\rho\,\neq\,0$ is small, this follows again from Lemma \[irr\]. Indeed, observe that the monodromies along the curves $$\widetilde{\alpha},\, \widetilde{\beta}\,\in\,\pi_1(\Sigma,q)$$ (see Figure \[figure3\]) are given by $$h(\alpha)h(\alpha) \ \ \text{ and }\ \ h(\beta)h(\beta)$$ up to a possible sign. Because $xy\,\neq\, 0$, in view of the trace computations above and continuity in $\rho$, the monodromy representation must be irreducible by Lemma \[irr\]. 
Figures ======= ![The 1-punctured torus.[]{data-label="figure1"}](1torus.pdf){width="63.50000%"} ![The 4–punctured torus.[]{data-label="figure2"}](4torus.pdf "fig:"){width="37.50000%"} ![The 4–punctured torus.[]{data-label="figure2"}](immersion.pdf "fig:"){width="50.00000%"} ![The Riemann surface $\Sigma$ for $q\,=\,3$, shown with vertical and horizontal trajectories of $(\Pi\circ\pi_3)^*(dw)^2$. Picture by Nick Schmitt.[]{data-label="figure3"}](lawson5.pdf){width="50.00000%"} M. F. Atiyah, Vector bundles over an elliptic curve, [*Proc. Lond. Math. Soc.*]{} [**7**]{} (1957), 414–452. O. Biquard, Fibrés paraboliques stables et connexions singulières plates, [*Bull. Soc. Math. Fr.*]{} [**119**]{} (1991), 231–257. I. Biswas, A criterion for the existence of a parabolic stable bundle of rank two over the projective line, [*Internat. J. Math.*]{} [**9**]{} (1998), 523–533. I. Biswas and S. Dumitrescu, Riemann-Hilbert correspondence for differential systems over Riemann surfaces, arxiv.org/abs/2002.05927. G. Calsamiglia, B. Deroin, V. Heu and F. Loray, The Riemann-Hilbert mapping for $sl(2)$-systems over genus two curves, [*Bull. Soc. Math. France*]{} [**147**]{} (2019), 159–195. P. Deligne, *Equations différentielles à points singuliers réguliers*, Lecture Notes in Mathematics, Vol. 163, Springer-Verlag, Berlin-New York, 1970. W. Goldman, [*An exposition of results of Fricke and Vogt*]{}, https://arxiv.org/abs/math/0402103v2. R. C. Gunning, [*Lectures on vector bundles over Riemann surfaces*]{}, University of Tokyo Press, Tokyo; Princeton University Press, Princeton, (1967). L. Heller and S. Heller, Abelianization of Fuchsian systems and applications, [*Jour. Symp. Geom.*]{} [**14**]{} (2016), 1059–1088. S. Heller, A spectral curve approach to Lawson symmetric CMC surfaces of genus $2$, [*Math. Ann.*]{} [**360**]{} (2014), 607–652. N. J. Hitchin, The self-duality equations on a Riemann surface, [*Proc. London Math. Soc.*]{} [**55**]{} (1987), 59–126. A. H. 
Huckleberry and G. A. Margulis, Invariant analytic hypersurfaces, [*Invent. Math.*]{} [**71**]{} (1983), 235–240. N. M. Katz, An overview of Deligne’s work on Hilbert’s twenty-first problem, [*Mathematical developments arising from Hilbert problems*]{} (Proc. Sympos. Pure Math., Vol. XXVIII, Northern Illinois Univ., De Kalb, Ill., 1974), pp. 537–557. Amer. Math. Soc., Providence, R. I., 1976. W. Magnus, Rings of Fricke characters and automorphism groups of free groups, [*Math. Zeit.*]{} [**170**]{} (1980), 91–103. V. B. Mehta and C. S. Seshadri, Moduli of vector bundles on curves with parabolic structures, [*Math. Ann.*]{} [**248**]{} (1980), 205–239. M. S. Narasimhan and C. S. Seshadri, Stable and unitary bundles on a compact Riemann surface, [*Ann. of Math.*]{} [**82**]{} (1965), 540–564. M. Ohtsuki, A residue formula for Chern classes associated with logarithmic connections, *Tokyo Jour. Math.* **5** (1982), 13–21. G. Pirola, Monodromy of constant mean curvature surface in hyperbolic space, [*Asian Jour. Math.*]{} [**11**]{} (2007), 651–669. A. Pressley and G. Segal, [*Loop groups*]{}, Oxford Mathematical Monographs. Oxford University Press, New York, 1986. C. T. Simpson, The Hodge filtration on nonabelian cohomology. [*Algebraic geometry*]{} – Santa Cruz 1995, 217–281, Proc. Sympos. Pure Math., 62, Part 2, Amer. Math. Soc., Providence, RI, 1997. C. T. Simpson, A weight two phenomenon for the moduli of rank one local systems on open varieties, [*From Hodge theory to integrability and TQFT $tt^*$–geometry*]{}, 175–214, Proc. Sympos. Pure Math., 78, Amer. Math. Soc., Providence, RI, 2008.
--- abstract: 'For a generic vector field robustly without horseshoes, and an aperiodic chain recurrent class with singularities whose saddle values have different signs, the extended rescaled Poincaré map is associated with a central model. We estimate this central model and show that it must have chain recurrent central segments over the singularities. This obstructs the application of the central model to create horseshoes, and indicates that, differing from the case of $C^1$ diffeomorphisms, using the central model method alone is insufficient as a strategy to prove the weak Palis conjecture for higher dimensional ($\geq 4$) singular flows. Our computation is based on a simplified way of addressing the blowup construction. As a byproduct, we are able to directly compute the extended rescaled Poincaré map up to second order derivatives, which we believe is of independent interest.' author: - Qianying Xiao and Yiwei Zhang title: A remark on the central model method for the weak Palis conjecture of higher dimensional singular flows --- Introduction ============ One goal of modern differential dynamical system theory is to classify dynamical behaviours for most systems. Under this framework, Palis [@Pa00; @Pa08] proposed several famous density conjectures in the late 20th century, and they have attracted great interest ever since [@B; @BDV; @CP; @Ts]. One of the density conjectures concerns two extremely different kinds of systems: Morse-Smale systems, which are simple in that they have robustly finitely many periodic orbits; and systems displaying horseshoes, which are chaotic because they have robustly infinitely many periodic orbits. To be more precise, the conjecture is stated as: The collection of Morse-Smale systems and horseshoe systems is $C^r$ open and dense in a suitable space of dynamical systems with $r\geq 1$. Here the systems can be either continuous or discrete. There have been a number of attempts to prove this conjecture. 
For $C^1$ diffeomorphisms and $C^1$ nonsingular flows, the conjecture is proved positively in [@BGW07; @Cr10; @PS00] and [@AH; @Xiao] respectively, while progress on the singular flow case has been comparatively slow. The core problem is to eliminate generic singular aperiodic chain recurrent classes. The presence of singularities adds huge difficulties, for example, the matching between the hyperbolic splittings of singularities and those of nearby periodic points. Recently, Gan-Yang [@GY] proved that the conjecture holds for three dimensional $C^1$ singular flows. Besides the idea of the generalized linear Poincaré flow introduced in [@LGW], the dimension restriction is crucial in their proof. In fact, the chain recurrent classes under their consideration must be Lyapunov stable (with respect to the flow or its inverse), so that they are able to construct a singular return map to deduce a contradiction. As the dimension increases, the discussion becomes much more complicated (see for example [@Zheng] and the references therein for detailed arguments). In fact, the difficulties lie in ruling out generic aperiodic singular chain recurrent classes that are not Lyapunov stable. For example, there might exist aperiodic chain recurrent classes which are partially hyperbolic with one dimensional center (with respect to the linear Poincaré flow) and therefore are not singular hyperbolic. The motivation of this paper is to eliminate these singular aperiodic chain recurrent classes. To this end, a plausible machinery is the central model method. This method was first introduced by Crovisier [@Cr10], and has successfully dealt with the neutral one dimensional center to create horseshoes in the proof of the weak Palis conjecture for $C^1$ diffeomorphisms. 
To be more precise, a central model is a pair $ (\hat{K},\hat{f}) $, where $ \hat{K} $ is a compact metric space and $ \hat{f} $ is a continuous map from $\hat{K}\times[0,1] $ to $ \hat{K}\times[0,+\infty) $ such that: - $ \hat{f}(\hat{K}\times\{0\})= \hat{K}\times\{0\}$, and $ \hat{f} $ is a local homeomorphism in a small neighborhood of $ \hat{K}\times\{0\} $; - $ \hat{f} $ is a skew-product: there exist two maps $ \hat{f}_1:\hat{K}\rightarrow\hat{K} $ and $ \hat{f}_2:\hat{K}\times[0,1]\rightarrow[0,+\infty) $ such that for any $ (x,t)\in \hat{K}\times[0,1]$, one has $\hat{f}(x,t)=(\hat{f}_1(x),\hat{f}_2(x,t))$. Suppose the base $\hat{K}\times \{0\} $ is chain transitive. For $\hat{x}\in \hat{K}$ and $0<a<1$, the segment $\{\hat{x}\}\times [0,a] $ is called a *chain recurrent central segment* if it is in the same chain recurrent class as $\hat{K}\times \{0\} $. The birth of horseshoes via the central model is based on a dichotomy in the flavor of Conley theory. Namely, either the base is a chain recurrent class and therefore admits arbitrarily small attracting/repelling neighborhoods, or there exists a chain recurrent central segment. Differing from the nonsingular flow case, one cannot apply the central model directly to the Poincaré maps, because the sizes of the domains of the Poincaré maps tend to zero near the singularities. Instead, we are inspired by the ideas of Gan-Yang [@GY] and consider rescaled Poincaré maps. In fact, the idea of rescaling by the flow speed dates back to Liao [@Liao74; @Liao89; @Liaobk]. Let us recall the definition quickly. Let $X$ be a $C^1$ vector field on a compact manifold $M$. The flow of $X$ is denoted by $\phi_t$. Given a regular point $x$, $\langle X(x)\rangle^\perp$ is denoted by $\mathcal{N}_x$. For $T>0$ and $0<r\ll 1$, let us denote $\mathcal{N}_x(r)=\{v\in \mathcal{N}_x:\Vert v\Vert <r \}$. 
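A toy instance of the central model definition above may be helpful (an illustration of our own, not used in the proofs): take $\hat K\,=\,\mathbb{R}/\mathbb{Z}$, $\hat f_1$ an irrational rotation, and a positive fibre factor $g$, so that $\hat f(x,t)\,=\,(\hat f_1(x),\,t\,g(x))$ is a skew-product fixing the base.

```python
import math

theta = (math.sqrt(5) - 1) / 2            # irrational rotation on the base

def g(x):
    # positive fibre factor, so f_hat is a local homeomorphism near the base
    return 1 + 0.5*math.sin(2*math.pi*x)

def f_hat(x, t):
    # skew-product central model: f(x, t) = (f1(x), f2(x, t))
    return ((x + theta) % 1.0, t * g(x))

# the base K x {0} is invariant:
for x in (0.0, 0.3, 0.71):
    assert f_hat(x, 0.0)[1] == 0.0
# fibres are mapped into [0, +infinity):
assert f_hat(0.2, 0.5)[1] >= 0.0
```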
The rescaled Poincaré map $\mathcal{P}^*_{T,x} :\mathcal{N}_x(r)\rightarrow \mathcal{N}_{\phi_T(x)}$ is defined as: $$\begin{aligned} \mathcal{P}^*_{T,x}(v)= \frac{\exp^{-1}_{\phi_T(x)}\circ P_{T,x}\circ \exp_x(\Vert X(x)\Vert v)}{\Vert X(\phi_T(x))\Vert},\end{aligned}$$ where $P_{T,x}$ is the Poincaré map. By blowing up the singularities, the rescaled Poincaré maps are uniformly continuous and therefore well-defined on domains whose sizes are uniformly bounded from below. One can refer to [@CY; @GY] for the construction of the extended rescaled Poincaré map $P^*_T $. For the singular aperiodic chain recurrent classes, we show the extended rescaled Poincaré maps are associated with central models. Meanwhile, there must be chain recurrent central segments over singularities. Let us state our result more mathematically. Suppose $X$ is a generic $C^1$ vector field robustly without horseshoes, $\mathrm{Sing}(X)$ is the collection of singularities of $X$. For $\sigma\in \mathrm{Sing}(X)$, the chain recurrent class and saddle value of $\sigma$ are denoted by $C(\sigma)$ and $\mathrm{sv}(\sigma)$ respectively. Let us define: $$\begin{aligned} G_\sigma=\{L\in PT_\sigma M:&\exists X_n\rightarrow X~\text{in}~C^1,x_n\in \mathrm{Per}(X_n)\\ & \text{such that}~\mathcal{O}(x_n)\hookrightarrow C(\sigma),\langle\exp^{-1}_{\sigma}(x_n) \rangle\rightarrow L\} ,\end{aligned}$$ and $ K_\sigma=G_\sigma\cup( C(\sigma)\setminus \mathrm{Sing}(X))$. Suppose there exists $\rho\in C(\sigma)\cap \mathrm{Sing}(X)$ such that $\mathrm{sv}(\sigma)\mathrm{sv}(\rho)<0$. Then the extended rescaled Poincaré map $P^*_1$ over $ K_\sigma$ is partially hyperbolic with one dimensional center. In the same way as [@Xiao Proposition 4.6], there exist a finite cover $\ell:\hat{K}_\sigma\rightarrow K_\sigma$ and a central model $(\hat{K}_\sigma,\hat{f} ) $ to depict the dynamics of the extended rescaled Poincaré map restricted to the one dimensional locally invariant central manifolds. 
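The effect of the rescaling is transparent for the linear radial field $X(v)\,=\,v$ on $\mathbb{R}^2$ (a worked toy example of our own; exponential maps are the identity here): through $x\,=\,(r,0)$ the Poincaré map to the section at $\phi_T(x)\,=\,(e^Tr,0)$ is $w\mapsto e^Tw$, while the rescaled map $\mathcal{P}^*_{T,x}(v)\,=\,e^T\Vert X(x)\Vert v/\Vert X(\phi_T(x))\Vert\,=\,v$ is the identity, uniformly in $r$; this is why the rescaled maps survive the blowup of the singularity at the origin.

```python
import math

def rescaled_poincare(T, r, v):
    # X(x) = x, phi_t(x) = e^t x; base point x = (r, 0), normal direction e2.
    norm_X_x  = r                        # |X(x)|
    norm_X_Tx = math.exp(T) * r          # |X(phi_T(x))|
    P = math.exp(T) * (norm_X_x * v)     # Poincare map applied to |X(x)| v
    return P / norm_X_Tx

for r in (1.0, 1e-3, 1e-9):              # base point approaching the singularity
    for T in (0.5, 1.0, 3.0):
        assert abs(rescaled_poincare(T, r, 0.1) - 0.1) < 1e-12
```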
With these conventions, our main result can be stated as follows: In the central model $(\hat{K}_\sigma,\hat{f} ) $, there exists $\hat{x}\in \ell^{-1}(G_\sigma)$ and a chain recurrent central segment over $\hat{x}$. In the statement of the main theorem, the central model does not have arbitrarily small trapping/repelling neighborhoods. On the other hand, the existence of chain recurrent central segments over singularities does not increase the dimension of the chain recurrent set along the center, because the zero flow speed needs to be taken into account. Therefore, differing from the nonsingular flow case, the chain recurrent central segment in this central model does not give birth to horseshoes. Thus neither alternative of the dichotomy for the central model creates horseshoes. Therefore, the central model strategy fails to eliminate the non-Lyapunov stable singular aperiodic chain recurrent classes. *This implies that using the central model alone is insufficient to solve the weak Palis conjecture for higher dimensional ($\geq 4$) singular flows*. The proof of the main theorem contains three steps. The first step is devoted to the construction of extended rescaled Poincaré maps (Proposition \[beta\]). To this end, the blowup of the singularity is introduced. Though there are available references on this topic, for instance [@CY; @T], we are able to address the blowup construction in a more elementary way, so that the construction of the extended rescaled Poincaré map is simplified. It is worth remarking that the novelty lies in the reduction to linear vector fields (the second step of the proof). To be more precise, we prove that the extended rescaled Poincaré map over a singularity equals the counterpart of the linearized vector field (Lemma \[l\]). 
Furthermore, the linearized vector field is hyperbolic with the stable and unstable subspaces each containing a one dimensional weak direction, and one can choose a unit vector $u$ in the two dimensional center such that $\langle u \rangle\in G_\sigma$. Finally, we show the machinery to associate a central model to the chain recurrent class. The estimation of the extended rescaled Poincaré maps at $\langle u \rangle$ implies the existence of a chain recurrent central segment over $\ell^{-1}(\langle u \rangle) $ in the central model, as desired. In addition, as a byproduct of the blowup construction, we are able to compute the second order derivatives of the extended rescaled Poincaré maps, which we believe is of independent interest. For example, for linear vector fields on the two dimensional Euclidean space, we show that the extended rescaled Poincaré maps are generally nonlinear. This work is organized as follows: In Section \[blowup\], we address the blowup construction. In Section \[linear\] we prove the main theorem, deducing that using the central model alone is insufficient to solve the weak Palis conjecture for higher dimensional ($\geq 4$) singular flows. In the appendix, we compute the second order derivatives of the extended rescaled Poincaré maps of two dimensional linear vector fields. Blowup of singularities {#blowup} ======================= In this section we readdress the blowup construction in a more elementary way compared to the available references, for instance [@CY; @T]. Based on this tool, the construction of the extended rescaled Poincaré maps in proving the main theorem is simplified. Meanwhile, as a byproduct, we are able to compute the second order derivatives of the extended rescaled Poincaré map. To our knowledge this result is new and interesting, so we put it in the appendix. 
Local: polar coordinate transformation {#local} -------------------------------------- In this subsection we interpret the local construction of the blowup of a singularity as the polar coordinate transformation. Suppose $n\in\mathbb{N}$, $n\geq 2$. Let $X$ be a $C^1$ vector field on $\mathbb{R}^n$ with $X(0)=0$. The flow and tangent flow are denoted by $\phi_t$ and $\Phi_t$ respectively. Let us consider the polar coordinate transformation: $$\begin{aligned} J:S^{n-1}\times[0,+\infty) & \rightarrow \mathbb{R}^n\\ (u,s) & \mapsto s\cdot u.\end{aligned}$$ \[pullback\] There exists a continuous vector field $\tilde{X}$ on $S^{n-1}\times [0,+\infty)$ such that for any $(u,s)\in S^{n-1}\times [0,+\infty)$, $$DJ\tilde{X}(u,s)=X(s\cdot u).$$ Meanwhile, the flow of $\tilde{X}$ on $S^{n-1}\times \{0\}$ is equal to the normalization of $\Phi_t(0)$. Let us first compute the tangent map $DJ_{(u,s)}:T_uS^{n-1}\times\mathbb{R}\rightarrow\mathbb{R}^n $. Suppose $\{e_1,\cdots,e_{n-1} \}$ is a basis of $T_uS^{n-1}$, and $e_n$ is the unit vector of $\mathbb{R}$. This implies $\{e_1,\cdots,e_n \}$ and $\{e_1,\cdots,e_{n-1},u \}$ are bases of $T_uS^{n-1}\times\mathbb{R}$ and $\mathbb{R}^n$ respectively. 
With respect to these two bases, the following holds: $$DJ_{(u,s)}=\mathrm{diag}\{s,\cdots,s,1\}.$$ For $s\neq 0$, the vector $X(s\cdot u)$ has an orthogonal decomposition: $$X(s\cdot u)=\big(X(s\cdot u)-\big\langle X(s\cdot u),u \big\rangle u\big)+\big\langle X(s\cdot u),u \big\rangle u.$$ There exists a vector $\tilde{X}(u,s)$ on $T_uS^{n-1}\times\mathbb{R}$ such that $DJ\tilde{X}(u,s)=X(s\cdot u)$: $$\begin{aligned} \tilde{X}(u,s) & =\frac{X(s\cdot u)-\big\langle X(s\cdot u),u \big\rangle u}{s}+\big\langle X(s\cdot u),u \big\rangle e_n\\ & =\int^1_0DX(t\cdot s \cdot u)u-\big\langle DX(t\cdot s\cdot u)u,u \big\rangle u dt +\big\langle X(s\cdot u),u \big\rangle e_n.\end{aligned}$$ Let us define: $$\tilde{X}(u,0)=DX(0)u-\big\langle DX(0)u,u \big\rangle u.$$ By (2.2), the vector field $\tilde{X}$ is continuous on $S^{n-1}\times [0,+\infty)$. Let us consider the flow of $\tilde{X}$. For $s\neq 0$, $$\phi_t(s\cdot u)=\int^1_0\Phi_t(w\cdot s\cdot u)s\cdot u d w.$$ Suppose $\Vert \phi_t(s\cdot u)\Vert=s_t,~\frac{\phi_t(s\cdot u)}{\Vert \phi_t(s\cdot u)\Vert}=u_t $. According to (2.4), the following holds: $$\frac{s_t}{s}\cdot u_t=\int ^1_0\Phi_t(w\cdot s \cdot u)udw.$$ By taking $(u,s)\rightarrow (u_0,0) $, the RHS of (2.5) tends to $\Phi_t(0)u_0$. Therefore, $$\begin{aligned} u_t \rightarrow \frac{\Phi_t(0)u_0}{\Vert \Phi_t(0)u_0 \Vert},~ \frac{s_t}{s} \rightarrow \Vert \Phi_t(0)u_0 \Vert.\end{aligned}$$ Let us define: $$\begin{aligned} &\tilde{\phi}_t(u,s)=(u_t,s_t)=(\frac{\phi_t(s\cdot u)}{\Vert \phi_t(s\cdot u)\Vert},\Vert \phi_t(s\cdot u)\Vert),~s\neq 0,\\ &\tilde{\phi}_t(u,0)= (\frac{\Phi_t(0)u}{\Vert \Phi_t(0)u \Vert},0 ).\end{aligned}$$ From (2.6), (2.7) and (2.8), one can see $ \tilde{\phi}_t$ is a continuous flow on $S^{n-1}\times [0,+\infty)$ that is tangent to $ \tilde{X}$. Meanwhile, (2.8) indicates that on $S^{n-1}\times \{0\}$, the flow $ \tilde{\phi}_t$ is the normalization of $\Phi_t(0)$. The proof of the lemma is finished. 
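For a linear vector field the displayed formula for $\tilde{X}(u,s)$ is easy to test: with $X(v)\,=\,Av$ the spherical component equals $Au-\langle Au,u\rangle u$ independently of $s$, it is tangent to $S^{n-1}$, and $DJ\tilde{X}(u,s)\,=\,X(s\cdot u)$. A numerical sketch in $\mathbb{R}^2$ (the matrix $A$ is an arbitrary choice of ours):

```python
import math

A = [[1.0, 2.0], [3.0, -1.0]]

def Xf(v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1]

def tilde_X(u, s):
    # spherical and radial components of the pulled-back vector field
    if s > 0:
        x = Xf([s*u[0], s*u[1]])
        radial = dot(x, u)
        sph = [(x[0] - radial*u[0])/s, (x[1] - radial*u[1])/s]
    else:            # boundary value: orthogonal projection of DX(0)u = Au
        au = Xf(u)
        sph = [au[0] - dot(au, u)*u[0], au[1] - dot(au, u)*u[1]]
        radial = 0.0
    return sph, radial

for theta in (0.1, 1.0, 2.5):
    u = [math.cos(theta), math.sin(theta)]
    sph0, _ = tilde_X(u, 0.0)
    assert abs(dot(sph0, u)) < 1e-12           # spherical part tangent to S^1
    for s in (1e-6, 1e-2, 1.0):
        sph, radial = tilde_X(u, s)
        # DJ scales the spherical part by s and keeps the radial part:
        rec = [s*sph[0] + radial*u[0], s*sph[1] + radial*u[1]]
        x = Xf([s*u[0], s*u[1]])
        assert abs(rec[0] - x[0]) < 1e-9 and abs(rec[1] - x[1]) < 1e-9
        # for a linear field the spherical part does not depend on s
        assert abs(sph[0] - sph0[0]) < 1e-9 and abs(sph[1] - sph0[1]) < 1e-9
```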
\[2\] By (2.3), the vector $ \tilde{X}(u,0)$ is the orthogonal projection of $DX(0)u$ onto $\langle u\rangle^\perp$, and therefore the unit eigenvectors of $DX(0)$ are singularities of $ \tilde{X}$. In order to obtain a boundaryless manifold, let us define an equivalence relation $\sim$ on $S^{n-1}\times [0,+\infty)$ as follows: $$(u,s)\sim (u,s),~(u,0)\sim (-u,0).$$ The quotient space $S^{n-1}\times [0,+\infty)/\sim$ is a $C^\infty$ boundaryless manifold. Meanwhile, the map $J$ induces a map from $S^{n-1}\times [0,+\infty)/\sim$ to $\mathbb{R}^n$. Let us denote it by $\hat{J}$. \[hat\] Since $\tilde{X}(u,0)=- \tilde{X}(-u,0)$, the vector field $\tilde{X}$ induces a continuous vector field $\hat{X}$ on $S^{n-1}\times [0,+\infty)/\sim$. According to Lemma \[pullback\], the quotient vector field $\hat{X}$ generates a continuous flow on $S^{n-1}\times [0,+\infty)/\sim$. Global: compactification of manifold minus singularities -------------------------------------------------------- Given a $C^1$ vector field $X$ with non-degenerate singularities on the manifold $M$, the global construction of the blowup of singularities is a way to compactify the manifold minus the singularities. There exist a compact boundaryless manifold $\hat{M}$, a $C^\infty$ surjective map $\Pi:\hat{M}\rightarrow M$ and a continuous vector field $\hat{X}$ on $\hat{M}$ such that 1. $\Pi|\hat{M}\setminus \Pi^{-1}(\mathrm{Sing}(X))$ is a diffeomorphism onto $ M\setminus \mathrm{Sing}(X)$, $\Pi^{-1}(M\setminus \mathrm{Sing}(X) ) $ is dense in $\hat{M}$; 2. for any $\sigma\in \mathrm{Sing}(X)$, there exists a neighborhood $U$ such that $\Pi: \Pi^{-1}(U)\rightarrow U$ is equal to $\hat{J}$ modulo coordinate charts. 3. $D\Pi(\hat{X})=X $, $\hat{X}$ generates a continuous flow $\hat{\phi}_t$ on $\hat{M}$. Suppose $\mathrm{Sing}(X)=\{\sigma_1,\cdots,\sigma_k \}$. 
For $i=1,\cdots, k $, let $s_i>0$ be small enough such that - $\exp_{\sigma_i}:B(0,s_i)\subset T_{\sigma_i }M \rightarrow U_i $ is a diffeomorphism; - $U_i=\exp_{\sigma_i}( B(0,s_i)),i=1,\cdots,k$ are pairwise disjoint. Let us define: $\hat{M}=(M\setminus \mathrm{Sing}(X))\cup PT_{\sigma_1}M\cup\cdots\cup PT_{\sigma_k}M$. The space $\hat{M}$ is endowed with a topology such that: - the map $j:M\setminus \mathrm{Sing}(X)(\subset M)\rightarrow M\setminus \mathrm{Sing}(X)(\subset\hat{M})$ with $j(x)=x$ is a homeomorphism; - for $i=1,\cdots,k$, the map $I_i:PT_{\sigma_i}M\rightarrow PT_{\sigma_i}M\subset \hat{M}$ such that $I_i(\langle u\rangle)=\langle DX(\sigma_i)u\rangle $ is an embedding. Let us define a map $\Pi:\hat{M}\rightarrow M$ such that $$\begin{aligned} \Pi(x)&=x,~x\in M\setminus\mathrm{Sing} (X),\\ \Pi(\langle u\rangle)&=\sigma_i,~u\in T^1_{\sigma_i }M,~i=1,\cdots,k.\end{aligned}$$ The nondegeneracy of $\sigma_i$ implies the neighborhood $V_i=U_i\setminus \{\sigma_i\}\cup PT_{\sigma_i }M$ of $PT_{\sigma_i }M $ is homeomorphic to $T^1_{\sigma_i }M\times [0,s_i)\diagup (v,0)\sim (-v,0)$ by the following map: $$\begin{aligned} \varphi_i:T^1_{\sigma_i }M\times [0,s_i)\diagup \sim & \rightarrow V_i\\ (u,s) & \mapsto \exp_{\sigma_i }(s\cdot u),s>0,\\ (u,0) & \mapsto \langle u\rangle.\end{aligned}$$ Therefore $\Pi:V_i\rightarrow U_i $ is equal to $\exp_{\sigma_i}\circ \hat{J}\circ\varphi_i^{-1} $ for $i=1,\cdots,k$, and Item 2 of this lemma holds. On the other hand, the coordinate charts of $M\setminus \mathrm{Sing}(X)$ are $C^\infty$ compatible with $\{ \varphi_i\}$. Hence $\hat{M}$ is a $C^\infty$ compact manifold under these coordinate charts and $\varphi_i,i=1,\cdots,k$. Item 1 is deduced directly from the choice of the topology of $\hat{M} $. Item 3 follows from Remark \[hat\] and Item 2. The proof of this lemma is finished. Suppose $\xi:TM\rightarrow M$ is the tangent bundle. 
Let $\Pi^\ast(\xi):\Pi^\ast(TM)\rightarrow \hat{M}$ be the pullback of $\xi$ by $\Pi$: $$\xymatrix{ \Pi^\ast(TM)\ar[d]^{\Pi^\ast(\xi)} \ar[r]^{} & TM \ar[d]^{\xi}\\ \hat{M} \ar[r]^{\Pi} &M}$$ By the choice of the topology on $\hat{M}$, $\Pi^\ast(\xi)$ admits a continuous line field $\mathcal{L}$ such that $$\begin{aligned} \mathcal{L}_x=\langle X(x)\rangle~for~x\in M\setminus \mathrm{Sing}(X),~\mathcal{L}_{\langle u\rangle }=\langle DX(\sigma_i)u\rangle.\end{aligned}$$ Let us recall the definition of the normal bundle $\mathcal{N}$ of $X$: $$\mathcal{N}=\{v\in T_xM:x\in M\setminus \mathrm{Sing}(X),\langle v,X(x) \rangle=0\}.$$ Let $\hat{N}$ be the orthogonal complement of $\mathcal{L}$. Then the restriction of $\hat{N}$ to $M\setminus \mathrm{Sing}(X) $, namely $\hat{N}_{M\setminus \mathrm{Sing}(X)}$, is isomorphic to $\mathcal{N}$ by (2.9). The definition of $\mathcal{L}$ in (2.9) implies that the Nash blowup of singularities in [@LGW] is homeomorphic to our blowup construction. With $\mathcal{L}$ as reference lines, the generalized linear Poincaré flow introduced in [@LGW] is well-defined on $\hat{N}$ as follows: $$\psi_t:\hat{N}\rightarrow\hat{N},~\psi_t(v)=\pi(\Phi_t(v) ),$$ with $\pi$ the orthogonal projection from $\Pi^\ast(TM)$ to $\hat{N}$. Proof of the main theorem {#linear} ========================= The proof of the main theorem contains three steps. The first step is the construction of the extended rescaled Poincaré map. This construction is not new, but it is simple and important for the construction of the central model. Second, we show the reduction to linear vector fields. Third, we show the existence of a central model, and that in this central model there must be chain recurrent central segments over singularities, through estimates of the extended rescaled Poincaré map. Extended rescaled Poincaré map ------------------------------ It is proved in [@CY; @GY; @Wx] that the rescaled Poincaré maps are defined on domains whose sizes are uniformly bounded from below, and that they can be compactified. 
To be more precise, \[beta\] For any $T> 0$, there exists $\beta>0$ such that the rescaled Poincaré map $ \mathcal{P}^*_T$ is well-defined on the normal bundle $\mathcal{N}(\beta)$ and can be extended to a continuous map $P^*_T:\hat{N}(\beta)\rightarrow\hat{N}$. This elementary approach to the blowup construction clarifies the construction of the extended rescaled Poincaré map. Meanwhile, Proposition \[beta\] is crucial for the construction of the central model in the proof of the main theorem, so let us give a proof of this proposition. Let us consider a neighborhood of a singularity, modulo local coordinate transformations. Suppose $X$ is a $C^1$ vector field on $\mathbb{R}^d$ such that $X(0)=0$. For $x\neq 0,t\in \mathbb{R}$, $0<r\ll 1$ and $y\in N_x(r)$, let $\tau+t$, with $\tau=\tau(t,x,y)$, be the first time for $y$ to reach $N_{\phi_t(x)}$. The rescaled Poincaré map satisfies: $$\begin{aligned} \mathcal{P}^*_{t,x}(y)=&\frac{1}{\Vert X(\phi_t(x))\Vert}\mathcal{P}_{t,x}(\Vert X(x) \Vert y)\\ =&\frac{1}{\Vert X(\phi_t(x))\Vert}\exp^{-1}_{\phi_t(x) }\circ P_{t,x}\circ \exp_x(\Vert X(x) \Vert y )\\ =&\frac{1}{\Vert X(\phi_t(x))\Vert}\exp^{-1}_{\phi_t(x) }\circ \phi_{\tau+t}\circ\exp_x( \Vert X(x) \Vert y)\\ =&\frac{1}{\Vert X(\phi_t(x))\Vert}\big(\phi_{\tau+t}(x+\Vert X(x) \Vert y)-\phi_t(x) \big).\end{aligned}$$ For $(u,s)\in S^{d-1}\times (0,+\infty)$, $\tau\in \mathbb{R}$, $x=s\cdot u$ and $y\in N_x$, let us define: $$F(t,u,s,\tau,y ) =\frac{1}{\Vert X(\phi_t(x))\Vert}\big(\phi_{\tau+t}(x+\Vert X(x) \Vert y)-\phi_t(x) \big),$$ $$=\frac{\Vert X(x) \Vert}{\Vert X(\phi_t(x))\Vert}\int^1_0d\phi_{\tau+t}\big(s\cdot u+w\cdot\Vert X(x) \Vert y \big)ydw+\frac{\phi_{\tau+t}(x)-\phi_t(x)}{\Vert X(\phi_t(x))\Vert}.$$ For $s=0$, let us define $F(t,u,0,\tau,y)$ such that $$F(t,u,0,\tau,y)=\frac{\Vert DX(0)u \Vert}{\Vert DX(0)d\phi_t(0)u \Vert}d\phi_{\tau+t }(0)y+\frac{d\phi_{\tau+t }(0)u-d\phi_t(0)u}{\Vert DX(0)d\phi_t(0)u \Vert}.$$ Equations (3.1) and (3.2) imply $F$ is a continuous 
map. For $s\neq 0$, the first order derivatives of $F$ are: $$\begin{aligned} \frac{\partial F}{\partial y}(t,u,s,\tau,y)=&\frac{ \Vert X(x) \Vert}{\Vert X(\phi_t(x))\Vert}d\phi_{\tau+t}\big(x+\Vert X(x) \Vert y\big),\\ \frac{\partial F}{\partial \tau}(t,u,s,\tau,y)=&\frac{1}{\Vert X(\phi_t(x))\Vert}X\big(\phi_{\tau+t}(x+\Vert X(x) \Vert y) \big)\\ =&\frac{\Vert X(x) \Vert}{\Vert X(\phi_t(x))\Vert}\int^1_0DX\big(\phi_{\tau+t}(x+w\cdot\Vert X(x) \Vert y) \big)\\ &\cdot d\phi_{\tau+t}\big(x+w\cdot\Vert X(x) \Vert y\big)ydw+\frac{X(\phi_{\tau+t}(x) )}{\Vert X(\phi_t(x))\Vert}.\end{aligned}$$ Let us define: $$\begin{aligned} \frac{\partial F}{\partial \tau}(t,u,0,\tau,y )=&\frac{\Vert DX(0)u \Vert}{\Vert DX(0)d\phi_t(0)u \Vert}DX(0)d\phi_{\tau+t }(0)y+\frac{DX(0)d\phi_{\tau+t}(0)u}{\Vert DX(0)d\phi_t(0)u \Vert},\\ \frac{\partial F}{\partial y}(t,u,0,\tau,y)=&\frac{\Vert DX(0)u \Vert}{\Vert DX(0)d\phi_t(0)u \Vert}d\phi_{\tau+t }(0).\end{aligned}$$ By (3.3)-(3.8), the first order derivatives $\frac{\partial F}{\partial \tau}$ and $\frac{\partial F}{\partial y} $ are continuous. Let us define $H(t,u,s,\tau,y)$ as follows: $$H(t,u,s,\tau,y)=\big\langle F(t,u,s,\tau,y ),\frac{X\big(\phi_t(x)\big)}{\big\Vert X\big(\phi_t(x)\big)\big\Vert} \big\rangle.$$ From (3.2) and (3.4) one can see: $$\begin{aligned} H(t,u,s,0,0)&=0,\\ \frac{\partial H}{\partial \tau}(t,u,s,0,0)&=\big\langle \frac{X\big(\phi_t(x)\big)}{\big\Vert X\big(\phi_t(x)\big)\big\Vert} ,\frac{X\big(\phi_t(x)\big)}{\big\Vert X\big(\phi_t(x)\big)\big\Vert}\big\rangle=1.\end{aligned}$$ By the Implicit Function Theorem, there exists a map $\tau=\tau(t,u,s,y)$ such that $$H\big(t,u,s,\tau( t,u,s,y),y\big)=0.$$ Meanwhile, the following holds: - $\frac{\partial\tau}{\partial y }$ is continuous; - for fixed $t=T$ and $S>0$, there exists $\alpha>0$ such that for any $s\leq S$ the sizes of the domains of $\tau=\tau(T,u,s,\cdot)$ are greater than $\alpha$. 
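As an illustrative numerical check of the implicit solve above (not part of the proof), for the hypothetical linear field $X(x)=Ax$ with $A=\mathrm{diag}(-1,2)$ one can verify that $H(t,u,s,\cdot,y)$ has a zero $\tau(y)$ near $0$, and that $\frac{\partial H}{\partial \tau}(t,u,s,0,0)=1$ as in the displayed identities. All numerical parameters below are arbitrary sample values.

```python
import math

# Linear planar field X(x) = Ax with A = diag(-1, 2); the exact flow is
# phi_r(p) = e^{rA} p, so F and H from (3.1) and (3.9) are computable.
l1, l2 = -1.0, 2.0
t, s = 0.7, 0.3
u = (3/5, 4/5)
x = (s*u[0], s*u[1])

def eA(r, p):  return (math.exp(l1*r)*p[0], math.exp(l2*r)*p[1])
def Ap(p):     return (l1*p[0], l2*p[1])
def dot(a, b): return a[0]*b[0] + a[1]*b[1]
def norm(a):   return math.hypot(a[0], a[1])

Xx = Ap(x)                                   # X(x)
nhat = (-Xx[1]/norm(Xx), Xx[0]/norm(Xx))     # unit normal at x
xt = eA(t, x)                                # phi_t(x)
w = Ap(xt)                                   # X(phi_t(x))

def H(tau, y):                               # <F, X(phi_t(x))/|X(phi_t(x))|>
    p = (x[0] + norm(Xx)*y*nhat[0], x[1] + norm(Xx)*y*nhat[1])
    q = eA(tau + t, p)                       # phi_{tau+t}(x + |X(x)| y)
    F = ((q[0]-xt[0])/norm(w), (q[1]-xt[1])/norm(w))
    return dot(F, (w[0]/norm(w), w[1]/norm(w)))

y = 0.05
lo, hi = -0.5, 0.5                           # bisection for H(tau, y) = 0
for _ in range(80):
    mid = (lo + hi)/2
    if H(lo, y) * H(mid, y) <= 0: hi = mid
    else: lo = mid
tau = (lo + hi)/2

dH = (H(1e-6, 0.0) - H(-1e-6, 0.0)) / 2e-6   # dH/dtau at (0, 0)
print(tau, H(tau, y), dH)                    # H ~ 0, dH ~ 1
```

The bisection plays the role of the Implicit Function Theorem here; the finite difference confirms $\frac{\partial H}{\partial\tau}(t,u,s,0,0)=1$.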
By (3.10), the time for $y$ to reach $N_{\phi_t(x)}$ for $s\neq 0$ is $t+\tau(t,u,s,y)$, and therefore $$F\big(t,u,s,\tau( t,u,s,y),y\big)=\mathcal{P}^*_{t,x}(y).$$ Let us define: $$\tilde{P}^*(t,u,s,y)=F\big(t,u,s,\tau( t,u,s,y),y\big).$$ By (3.11) and (3.12), $\tilde{P}^*(t,u,s,y)=\mathcal{P}^*_{t,x}(y)$ for $s\neq 0$. By (3.2) and (3.9), one has $\tau( t,u,0,y)=\tau( t,-u,0,-y) $, and therefore $$F\big(t,u,0,\tau( t,u,0,y),y\big)=-F\big(t,-u,0,\tau( t,-u,0,-y),-y\big).$$ According to the definition of $\hat{N}$ and (3.12), $\tilde{P}^*$ induces a map $P^* $ on the neighborhood $\hat{N}_{\Pi^{-1}(0)}(\alpha)$. Meanwhile, (3.11) implies $P^*$ is the extension of $\mathcal{P}^*_T$ near the singularity. On the other hand, given a regular point $x$, for any $y$ close to $x$ the domains of the rescaled Poincaré maps $\mathcal{P}^*_{T,y}$ have sizes uniformly bounded from below. Therefore there exists $0<\beta\leq \alpha$ such that the rescaled Poincaré map $ \mathcal{P}^*_T$ is well-defined on $\mathcal{N}(\beta)$. So we have proved that the rescaled Poincaré map $ \mathcal{P}^*_T$ can be extended to a continuous map $P^*_T:\hat{N}(\beta)\rightarrow\hat{N}$. \[d\] The generalized rescaled linear Poincaré flow $\psi_t^*:\hat{N}\rightarrow\hat{N}$ is defined as: $$\psi_t^*(v)=\frac{\psi_t(v)}{\Vert \Phi_t|\mathcal{L}_x \Vert}~for~v\in\hat{N}_x.$$ \[r\] The derivative of the extended rescaled Poincaré map $P^*_t$ is equal to the generalized rescaled linear Poincaré flow $\psi^*_t$. Let us compute directly from (3.12). 
For $s\neq 0$, $$\begin{aligned} \frac{\partial P^*}{\partial y}(t,u,s,0)(v)=&\frac{\partial F}{\partial y}(t,u,s,0,0)v+\frac{\partial F}{\partial \tau}(t,u,s,0,0 )\big\langle\frac{\partial \tau}{\partial y}(0),v\big\rangle\\ =&\frac{\big\Vert X(x)\big\Vert}{\big\Vert X\big(\phi_t(x) \big)\big\Vert}\Phi_t(x)v+\frac{X\big(\phi_t(x)\big)}{\big\Vert X\big(\phi_t(x)\big)\big\Vert}\big\langle\frac{\partial \tau}{\partial y}(0),v\big\rangle\\ =&\frac{\big\Vert X(x)\big\Vert}{\big\Vert X\big(\phi_t(x) \big)\big\Vert}\big(\Phi_t(x)v+\frac{\big\langle\frac{\partial \tau}{\partial y}(0),v\big\rangle}{\big\Vert X(x)\big\Vert} X\big(\phi_t(x)\big)\big)\in \mathcal{N}_{\phi_t(x)} \\ =&\frac{\big\Vert X(x)\big\Vert}{\big\Vert X\big(\phi_t(x) \big)\big\Vert}\pi\big(\Phi_t(x)(v)\big)=\psi^*_t(v).\end{aligned}$$ Since $\frac{\partial P^*}{\partial y} $ is continuous, one has for $s=0$ $$\begin{aligned} \frac{\partial P^*}{\partial y}(t,u,0,0)(v)= \frac{\big\Vert DX(0)u\big\Vert}{\big\Vert DX(0)\Phi_t(0)u \big\Vert}\pi\big(\Phi_t(0)(v)\big)=\psi^*_t(v).\end{aligned}$$ The proof of this lemma is finished. Reduction to linear vector fields --------------------------------- As stressed in the introduction, we want to eliminate singular aperiodic chain recurrent classes of vector fields that are robustly without horseshoes. In fact, these chain recurrent classes are usually not Lyapunov stable when the dimension is greater than 3. Suppose $\dim M\geq 4$ and $X$ is a $C^1$ generic vector field robustly without horseshoes. For $\sigma\in \mathrm{Sing}(X)$, $T_\sigma M=E^s_\sigma\oplus E^u_\sigma $ is the hyperbolic splitting, and the Lyapunov exponents are $\lambda_1\leq \cdots\leq\lambda_i<0<\lambda_{i+1}\leq\cdots\leq \lambda_d$. The saddle value $\mathrm{sv}(\sigma)$ is defined as $\mathrm{sv}(\sigma)=\lambda_i+\lambda_{i+1}$. Suppose there exists $\rho\in \mathrm{Sing}(X)\cap C(\sigma)$ such that $\mathrm{sv}(\sigma)\mathrm{sv}(\rho) <0$. 
Let us recall the definition of $G_\sigma$ in the introduction: $$\begin{aligned} G_\sigma=\{L\in PT_\sigma M:&\exists X_n\rightarrow X~in~C^1,x_n\in \mathrm{Per}(X_n)\\ & such~that~\mathcal{O}(x_n)\hookrightarrow C(\sigma),\langle\exp^{-1}_{\sigma}(x_n) \rangle\rightarrow L\} ,\end{aligned}$$ and $ K_\sigma=G_\sigma\cup( C(\sigma)\setminus \mathrm{Sing}(X))$. \[G\] 1. The hyperbolic splitting of $\sigma$ satisfies: $$E^s_\sigma=E^{ss}_\sigma\oplus E^{cs}_\sigma,~E^u_\sigma=E^{cu}_\sigma\oplus E^{uu}_\sigma,$$ with $\dim E^{cs}_\sigma=\dim E^{cu}_\sigma =1$. Moreover, $G_\sigma\subset E^c_\sigma=E^{cs}_\sigma\oplus E^{cu}_\sigma $. 2. $K_\sigma$ admits a partially hyperbolic splitting with respect to the generalized rescaled linear Poincaré flow: $$\hat{N}_{K_\sigma}=N^s\oplus N^c\oplus N^u,~\dim N^c=1.$$ The definition of $G_\sigma$ and Item 1 of Lemma \[G\] imply that the periodic points of nearby vector fields whose orbits are close to $C(\sigma) $ accumulate on $\sigma$ only along the two dimensional center direction. According to [@Zheng Lemma 3.3.4], - the singularity $\sigma$ has a splitting $E^{ss}_\sigma\oplus E^{cs}_\sigma\oplus E^{cu}_\sigma\oplus E^{uu}_\sigma $ with $\dim(E^{cs}_\sigma)=\dim(E^{cu}_\sigma)=1 $; - $\dim E^{ss}_\sigma\neq 0,\dim E^{uu}_\sigma\neq 0$, $W^{ss}(\sigma)\cap C(\sigma)=\{\sigma\}$, and $W^{uu}(\sigma)\cap C(\sigma)=\{\sigma\}$; - $K_\sigma$ has a partially hyperbolic splitting with respect to the generalized linear Poincaré flow: $$\hat{N}_{K_\sigma}=N^s\oplus N^c\oplus N^u,~\dim N^c=1.$$ By the same arguments as in [@LGW Lemma 4.4], one has the following: $$\begin{aligned} G_\sigma\subset( E^{ss}_\sigma\oplus E^{cs}_\sigma\oplus E^{cu}_\sigma)\cap (E^{cs}_\sigma\oplus E^{cu}_\sigma\oplus E^{uu}_\sigma)= E^{cs}_\sigma\oplus E^{cu}_\sigma.\end{aligned}$$ Therefore Item 1 is proved. The proof of Item 2 is based on the following claim: $N^s$ is dominated by $\mathcal{L}_{K_\sigma}$ and $\mathcal{L}_{K_\sigma}$ is dominated by $N^u$. 
For $L\in G_\sigma\subset E^c$, one has $N^s_L=E^{ss}_\sigma,~N^u_L=E^{uu}_\sigma$. Then the claim follows by similar arguments as in [@LGW Lemma 5.3]. By Definition \[d\] and the claim, $N^s$/$N^u$ is contracted/expanded by the generalized rescaled linear Poincaré flow. Therefore the splitting (3.15) is as stated in Item 2. The proof of this lemma is finished. In order to estimate the generalized rescaled Poincaré maps over $G_\sigma$, let us fix a local chart of $\sigma$ as in the global construction of blowing up singularities. Let us first compare the extended rescaled Poincaré map over a singularity with its counterpart for the linearized vector field. Suppose $0\in \mathbb{R}^d$ corresponds to the singularity $\sigma$, $E^{ss}_\sigma,E^{cs}_\sigma,E^{cu}_\sigma,E^{uu}_\sigma$ are pairwise orthogonal, and $X(x)=Ax+f(x)$ with $f(0)=0,Df(0)=0$. Moreover, there exist $A^{ss}\in \mathrm{Gl}(i-1,\mathbb{R})$ and $A^{uu}\in \mathrm{Gl}(d-i-1,\mathbb{R})$ such that for any $x=(x^{ss},x^{cs},x^{cu},x^{uu})$, $$Ax=(A^{ss}x^{ss},\lambda_i x^{cs},\lambda_{i+1} x^{cu},A^{uu}x^{uu} ).$$ \[l\] The extended rescaled Poincaré maps of $X$ over $PT_\sigma M$ are equal to those of the vector field $Y=Ax$. 
Recall the extended rescaled Poincaré map $P^*$ satisfies: $$\begin{aligned} &P^*(t,u,s,y)=F\big(t,u,s,\tau( t,u,s,y),y\big),\\ & H\big(t,u,s,\tau( t,u,s,y),y\big)=0,\\ &H(t,u,s,\tau,y)=\big\langle F(t,u,s,\tau,y ),\frac{X\big(\phi_t(x)\big)}{\big\Vert X\big(\phi_t(x)\big)\big\Vert} \big\rangle .\end{aligned}$$ From (3.2), (3.18) and (3.19), one can see $\tau(t,u,0,y )$ satisfies: $$\big\langle \frac{\big\Vert DX(0)u \big\Vert}{\big\Vert DX(0)d\phi_t(0)u \big\Vert}d\phi_{\tau+t }(0)y+\frac{d\phi_{\tau+t }(0)u-d\phi_t(0)u}{\big\Vert DX(0)d\phi_t(0)u \big\Vert}, \frac{ DX(0)d\phi_t(0)u}{\big\Vert DX(0)d\phi_t(0)u \big\Vert} \big\rangle=0.$$ Since (3.20) is independent of $f(x)$, one has that $\tau(t,u,0,y )$ and therefore $P^*(t,u,0,y ) $ are also independent of $f$. The proof of Lemma \[l\] is finished. \[k\] Lemma \[l\] indicates that the extended rescaled Poincaré maps over a singularity are independent of the nearby regular orbits. Chain recurrent central segment over singularity ------------------------------------------------ Let us first show the construction of the central model. \[c\] There exists a central model $(\hat{K}_\sigma ,\hat{f})$, a finite cover $ \ell:\hat{K}_\sigma\rightarrow K_\sigma$ and a continuous map $\alpha: \hat{K}_\sigma\times[0,+\infty)\rightarrow \hat{N}$ such that 1. the map $\ell$ is at most two-to-one, $\hat{K}_\sigma$ is chain transitive, and the derivative of $\alpha$ with respect to the second variable is continuous; 2. for any $\hat{x}\in \hat{K}_\sigma$ and $x=\ell(\hat{x})$, the center plaque $F_x=\alpha(\{\hat{x}\}\times[0,1) )\subset \hat{N}_x$ is tangent to $N^c_x $, the family $\{F_x\}_{x\in K_\sigma}$ is locally invariant under the extended rescaled Poincaré map $P^*_1$; 3. the map $\alpha$ semi-conjugates $\hat{f}$ and $\hat{P}^*_1 $: $\alpha\circ\hat{f}|_{\{\hat{x} \}\times[0,1)}=\hat{P} ^*_{1,x}\circ \alpha|_{\{\hat{x} \}\times[0,1)}$. 
As indicated by Lemma \[c\], the central model $(\hat{K}_\sigma ,\hat{f})$ depicts the dynamics of the extended rescaled Poincaré map $\hat{P} ^*_1$ along the one dimensional center direction $N^c$. According to Lemma \[r\] and Item 2 of Lemma \[G\], the extended rescaled Poincaré map $P^*_1 $ is partially hyperbolic with one dimensional center. Then one can see the lemma holds by following the arguments in [@Xiao Proposition 4.6]. With these preparations, let us begin the proof of the main theorem. By Item 1 of Lemma \[G\], there exists $$u=(0^{ss},\cos\theta,\sin\theta,0^{uu}),$$ such that $\langle u\rangle \in G_\sigma$, $\theta\neq \frac{k\pi}{2}$. According to Item 2 of Lemma \[G\], one has $$N^c_{\langle u\rangle }=\langle v\rangle,~with~v=(0^{ss},-\sin\theta,\cos\theta,0^{uu} ) .$$ By Lemma \[l\], the following holds for $\lambda_i=-1,~\lambda_{i+1}=1$: $$\begin{aligned} \psi_t^*(v)=&\frac{\big\Vert Au\big\Vert}{\big\Vert Ae^{tA}u \big\Vert}\big(e^{tA}v-\frac{\langle e^{tA}v,Ae^{tA}u\rangle}{\langle Ae^{tA}u,Ae^{tA}u\rangle}Ae^{tA}u \big)\\ =&\frac{\cos^2\theta-\sin^2\theta}{\big(\sqrt{e^{-2t}\cos^2\theta+e^{2t}\sin^2\theta}\big)^3}(0^{ss}, e^t\sin\theta,e^{-t}\cos\theta,0^{uu}).\end{aligned}$$ From (3.22) one can see $$\lim_{t\rightarrow\pm\infty}\psi_t^*(v)= 0 .$$ Therefore $N^c_{\langle u\rangle }$ is contracted exponentially by the generalized rescaled linear Poincaré flow as $t\rightarrow\pm\infty$. By Item 2 of Lemma \[c\] and (3.14), the center plaque $F_{\langle u\rangle}$ is contracted exponentially by both the extended rescaled Poincaré map $P^*_1$ and its inverse. From Item 3 of Lemma \[c\], for any $\hat{x}\in\ell^{-1}(\langle u\rangle)$, the fiber $\{ \hat{x}\}\times[0,1]$ contains a segment $\gamma$ that is contracted by both $\hat{f}$ and $\hat{f}^{-1}$. The segment $\gamma$ is in the same chain recurrent class as $\hat{K}_\sigma $ and therefore is a chain recurrent central segment. The proof of the main theorem is finished. 
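The computation (3.22) and the limit (3.23) can be checked numerically on the two dimensional center plane. The sketch below is illustrative: it takes $\lambda_i=-1$, $\lambda_{i+1}=1$, i.e. $A=\mathrm{diag}(-1,1)$ on the center plane, an arbitrary sample angle $\theta$, and compares $\psi^*_t(v)$ evaluated from its definition with the closed form on the right hand side of (3.22).

```python
import math

# Center-plane check of (3.22): A = diag(-1, 1), u = (cos t, sin t),
# v = (-sin t, cos t); note |Au| = 1 for this A.
theta = 0.4                      # any angle with theta != k*pi/2
c, s = math.cos(theta), math.sin(theta)

def psi_star(t):
    eu = (math.exp(-t)*c, math.exp(t)*s)       # e^{tA} u
    ev = (-math.exp(-t)*s, math.exp(t)*c)      # e^{tA} v
    w = (-eu[0], eu[1])                        # A e^{tA} u
    m2 = w[0]*w[0] + w[1]*w[1]
    k = (ev[0]*w[0] + ev[1]*w[1]) / m2         # projection coefficient
    return ((ev[0]-k*w[0])/math.sqrt(m2), (ev[1]-k*w[1])/math.sqrt(m2))

def rhs(t):                                    # closed form in (3.22)
    m = math.sqrt(math.exp(-2*t)*c*c + math.exp(2*t)*s*s)
    f = (c*c - s*s)/m**3
    return (f*math.exp(t)*s, f*math.exp(-t)*c)

errs = [max(abs(a-b) for a, b in zip(psi_star(t), rhs(t)))
        for t in (-3.0, -1.0, 0.5, 2.0)]
decay = math.hypot(*psi_star(25.0))            # cf. the limit (3.23)
print(max(errs), decay)
```

The two expressions agree to machine precision, and $\Vert\psi^*_t(v)\Vert$ is already negligible at moderate $t$, consistent with (3.23).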
The assumption $\lambda_i=-1,~\lambda_{i+1}=1 $ is not essential in the proof of the main theorem, but it simplifies the computation. In fact, (3.23) holds for any $\lambda_i<0,~\lambda_{i+1}>0 $. The central model is insufficient to solve the weak Palis conjecture for higher dimensional singular flows ------------------------------------------------------------------------------------------------ The main theorem illustrates that the central model is insufficient to eliminate non-Lyapunov stable singular aperiodic chain recurrent classes. Let us explain this explicitly. Suppose $(\hat{K},\hat{f}) $ is a central model and the base $\hat{K}\times \{0\} $ is chain transitive. The creation of horseshoes via the central model is based on the following dichotomy with the flavor of Conley theory: - either there exists a chain recurrent central segment; - or the base $ \hat{K}\times \{0\}$ admits arbitrarily small attracting/repelling neighborhoods. But in the central model $(\hat{K}_\sigma ,\hat{f})$ given by Lemma \[c\], neither aspect of the dichotomy creates horseshoes. First, the central model given by Lemma \[c\] admits no chain recurrent central segments over regular orbits. To be more precise, \[n\] In the central model $(\hat{K}_\sigma ,\hat{f})$ of Lemma \[c\], for any $\hat{x}\in\ell^{-1}( C(\sigma)\setminus \mathrm{Sing}(X))$ and any $0<a<1$, the segment $\{\hat{x}\}\times [0,a]$ is not a chain recurrent central segment. Suppose $\hat{x}\in\ell^{-1}( C(\sigma)\setminus \mathrm{Sing}(X))$ is such that $\{\hat{x}\}\times [0,a]$ is a chain recurrent central segment, $x=\ell(\hat{x})$, $\hat{\gamma}=\alpha(\{\hat{x}\}\times [0,a] )$, and $$\gamma=\exp_x(\Vert X(x)\Vert \hat{\gamma }).$$ By Lemma \[c\], one has $$T_x\gamma=N_x,~\gamma\subset C(\sigma).$$ Let $\mathcal{O}(p)$ be a periodic orbit close to $C(\sigma )$ and passing nearby $\gamma$. Let us show that $\mathcal{O}(p)$ and $\gamma$ form a heteroclinic cycle by Figure 1. 
As indicated by the figure, there exists a pseudo-orbit from $\mathcal{O}(p)$ to the strong unstable manifold of $\mathcal{O}(p)$, reaching the strong stable manifold of a point $x\in \gamma$, then going inside $C(\sigma)$ from $x$ to a certain point $y\in \gamma$, going on along the strong unstable manifold of $y$, until reaching the strong stable manifold of $\mathcal{O}(p) $, and along the strong stable manifold of $\mathcal{O}(p)$ back to $\mathcal{O}(p)$. Therefore $\mathcal{O}(p) $ is contained in the same chain recurrent class as the segment $\gamma$. Meanwhile, (3.25) implies $\mathcal{O}(p)\subset C(\sigma) $, a contradiction to the assumption of $C(\sigma)$ being aperiodic. Therefore there exist no chain recurrent central segments over regular points in the central model $(\hat{K}_\sigma ,\hat{f})$. [Figure 1: the pseudo-orbit connecting $\mathcal{O}(p)$, the points $x,y\in\gamma$, and the strong stable ($ss$) and strong unstable ($uu$) directions.] Second, as one can infer from (3.24), once the zero flow speed is taken into account, the existence of the chain recurrent central segment in the main theorem does not increase the dimension of the chain recurrent class along the center direction. Therefore in the central model $(\hat{K}_\sigma,\hat{f} ) $, the chain recurrent central segment guaranteed by the main theorem does not create horseshoes. Third, the dichotomy for the central model implies $(\hat{K}_\sigma,\hat{f})$ does not have arbitrarily small trapping/repelling neighborhoods. Therefore the other mechanism for the birth of horseshoes via the central model does not work. So we come to the conclusion: The strategy of the central model does not work to eliminate generic aperiodic chain recurrent classes with singularities whose saddle values have different signs. 
In contrast to $C^1$ diffeomorphisms and nonsingular flows, using the central model alone is insufficient to solve the weak Palis conjecture for higher dimensional ($\geq4$) singular flows. Appendix {#appendix .unnumbered} ======== As indicated by Remark \[k\], the extended rescaled Poincaré maps over a singularity are determined exclusively by the linearized vector field at the singularity. Therefore we believe it is interesting to calculate the extended rescaled Poincaré maps of linear vector fields. We compute up to the second order derivatives. It turns out that the extended rescaled Poincaré maps of two dimensional linear vector fields are generally nonlinear. A1. Extended rescaled Poincaré map under moving orthogonal frame {#a1.-extended-rescaled-poincaré-map-under-moving-orthogonal-frame .unnumbered} ---------------------------------------------------------------- For $A\in \mathrm{Gl}(2,\mathbb{R})$, the solution of the differential equation $$\dot{x}=Ax,$$ is $\phi_t(x)=e^{tA}x$. Assume $u=(x_1,x_2)$ is a unit vector, $y\in \mathbb{R}$, and $(Au )^\perp $ is the rotation of $Au$ by $\frac{\pi}{2}$. 
Let us define the *extended rescaled Poincaré map under moving orthogonal frame* by the following equation: $$\begin{aligned} P^*\big(t,u,0,y\frac{(Au )^\perp}{\Vert Au \Vert }\big)=F^*_{t,u}(y)\frac{(Ae^{tA}u )^\perp}{\Vert (Ae^{tA}u )^\perp \Vert }.\end{aligned}$$ The extended rescaled Poincaré map $P^*$ satisfies: $$\begin{aligned} P^*\big(t,u,0,y\frac{(Au )^\perp}{\Vert Au \Vert }\big)=\frac{e^{\big(\tau(y)+t\big)A }y(Au )^\perp}{\Vert Ae^{tA}u \Vert}+\frac{e^{\big(\tau(y)+t\big)A }u-e^{tA}u}{\Vert Ae^{tA}u \Vert},\end{aligned}$$ with a $\tau=\tau(y)$ such that $$\big\langle \frac{e^{\big(\tau(y)+t\big)A }y(Au )^\perp}{\Vert Ae^{tA}u \Vert}+\frac{e^{\big(\tau(y)+t\big)A }u-e^{tA}u}{\Vert Ae^{tA}u \Vert},\frac{Ae^{tA}u}{\Vert Ae^{tA}u \Vert} \big\rangle =0.$$ *The second order derivative of the extended rescaled Poincaré map satisfies: $$\begin{aligned} \frac{d^2F^*_{t,u}}{dy^2}(0)=2\frac{\big\langle Ae^{tA }(Au )^\perp,(Ae^{tA}u )^\perp \big\rangle}{\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle}\frac{d\tau}{dy}(0)+\frac{\big\langle A^2e^{tA }u,(Ae^{tA}u )^\perp \big\rangle}{\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle}\big(\frac{d\tau}{dy}(0) \big)^2.\end{aligned}$$* Let us define $H(t,u,\tau,y)$ by: $$H(t,u,\tau,y)=\big\langle \frac{e^{(\tau+t)A }y(Au )^\perp}{\Vert Ae^{tA}u \Vert}+\frac{e^{(\tau+t)A }u-e^{tA}u}{\Vert Ae^{tA}u \Vert},\frac{Ae^{tA}u}{\Vert Ae^{tA}u \Vert} \big\rangle.$$ Then $H(t,u,0,0)=0 $. 
Meanwhile, $\frac{d\tau}{dy}(0) $ satisfies: $$\begin{aligned} \frac{d\tau}{dy}(0)&=-\frac{\frac{\partial H}{\partial y}(t,u,0,0)}{\frac{\partial H}{\partial \tau}(t,u,0,0)}=-\frac{\big\langle e^{tA }(Au )^\perp , Ae^{tA}u \big\rangle}{\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle}.\end{aligned}$$ We can see the following holds: $$\begin{aligned} F^*_{t,u}(y)=&\big\langle P^*\big(t,u,0,y\frac{(Au )^\perp}{\Vert Au \Vert }\big) ,\frac{(Ae^{tA}u )^\perp}{\Vert (Ae^{tA}u )^\perp \Vert } \big\rangle=\big\langle Q\big(t,u,\tau(y),y\big),\frac{(Ae^{tA}u )^\perp}{\Vert (Ae^{tA}u )^\perp \Vert } \big\rangle,\end{aligned}$$ with $Q(t,u,\tau,y)$ defined as follows $$\begin{aligned} Q(t,u,\tau,y)=\frac{e^{(\tau+t)A }y(Au )^\perp}{\Vert Ae^{tA}u \Vert}+\frac{e^{(\tau+t)A }u-e^{tA}u}{\Vert Ae^{tA}u \Vert}.\end{aligned}$$ Therefore the second order derivative of $F^*_{t,u}(y) $ satisfies: $$\begin{aligned} \frac{d^2F^*_{t,u}}{dy^2}(0)=&\big\langle 2\frac{\partial^2 Q(t,u,0,0)}{\partial y\partial\tau}\frac{d\tau}{dy}(0),\frac{(Ae^{tA}u )^\perp}{\Vert (Ae^{tA}u )^\perp \Vert } \big\rangle\\ &+\big\langle \frac{\partial^2 Q(t,u,0,0)}{\partial \tau^2}(0)\big(\frac{d\tau}{dy}(0) \big)^2 ,\frac{(Ae^{tA}u )^\perp}{\Vert (Ae^{tA}u )^\perp \Vert } \big\rangle\\ =&2\frac{\big\langle Ae^{tA }(Au )^\perp,(Ae^{tA}u )^\perp \big\rangle}{\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle}\frac{d\tau}{dy}(0)+\frac{\big\langle A^2e^{tA }u,(Ae^{tA}u )^\perp \big\rangle}{\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle}\big(\frac{d\tau}{dy}(0) \big)^2.\end{aligned}$$ To see whether the second order derivatives vanish, we need to compute the following four inner products: $$\begin{aligned} &\big\langle e^{tA }(Au )^\perp , Ae^{tA}u \big\rangle,~\big\langle Ae^{tA }(Au )^\perp , (Ae^{tA}u )^\perp \big\rangle,\\ &\big\langle A^2e^{tA }u , (Ae^{tA}u )^\perp \big\rangle,~\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle.\end{aligned}$$ A2. 
The non-vanishing second order derivatives {#a2.-the-non-vanishing-second-order-derivatives .unnumbered} ----------------------------------------------- Suppose $A\in \mathrm{Gl(2,\mathbb{R})}$. Then $A$ is similar to one of the following three types: (1):$\left( \begin{array}{cc} \lambda_1 & 0 \\ 0 & \lambda_2 \\ \end{array} \right)$, (2):$\left( \begin{array}{cc} \lambda &0\\ 1& \lambda\\ \end{array} \right)$, (3):$\left( \begin{array}{cc} \alpha & -\beta \\ \beta & \alpha\\ \end{array} \right)$. ($\lambda_1\neq \lambda_2,\lambda\neq 0,\alpha^2+\beta^2>0$.) \[4.1\] Since a unit eigenvector is a singularity of the extended flow by Remark \[2\], the extended rescaled Poincaré map over the eigenvector is the identity. The sense we mean by `generally' in item (3) will be illustrated in the proof. **The third type:** Suppose $A=\left( \begin{array}{cc} \alpha & -\beta\\ \beta & \alpha\\ \end{array} \right)$, $x=r(\cos \theta,\sin \theta)$. Then $e^{tA}x= re^{t\alpha}\big(\cos(\theta+t\beta ),\sin(\theta+t\beta)\big)$. This implies that $\phi_t=e^{tA}$ is conformal. Therefore, for any unit vector $u$, the orthogonal section to $Ax$ at $u$ is mapped by $\phi_t$ to the orthogonal section at $e^{tA}u$. Consequently, one has $\tau(t,u,y)=0$ and the following holds: $$\begin{aligned} F^*_{t,u}(y)=&\big\langle \frac{e^{tA }y(Au )^\perp}{\Vert Ae^{tA}u \Vert},\frac{(Ae^{tA}u )^\perp}{\Vert (Ae^{tA}u )^\perp \Vert } \big\rangle\\ =&y\big\langle \frac{(Ae^{tA }u )^\perp}{\Vert Ae^{tA}u \Vert},\frac{(Ae^{tA}u )^\perp}{\Vert (Ae^{tA}u )^\perp \Vert } \big\rangle\\ =&y.\end{aligned}$$ So we have shown that the extended rescaled Poincaré map under moving frame $ F^*_{t,u}$ is linear if the singularity is a focus. **The first type:** Suppose $A=\left( \begin{array}{cc} \lambda_1 & 0 \\ 0 & \lambda_2 \\ \end{array} \right) $, $ \lambda_1\neq\lambda_2,\lambda_1\lambda_2\neq 0 $. 
For any unit vector $u=(x_1,x_2)$, the following equations hold: $$\begin{aligned} \frac{d^2F^*_{t,u}}{dy^2}(0)=&2\frac{\big\langle Ae^{tA }(Au )^\perp , (Ae^{tA}u )^\perp \big\rangle}{\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle}\frac{d\tau}{dy}(0)+\frac{\big\langle A^2e^{tA }u , (Ae^{tA}u )^\perp \big\rangle}{\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle}\big(\frac{d\tau}{dy}(0) \big)^2\\ =&\frac{\lambda_1^2\lambda_2^2x_1x_2e^{t(\lambda_1+\lambda_2) }(e^{ 2t\lambda_1}-e^{2t\lambda_2 } )}{(\lambda_1^2x_1^2e^{2t\lambda_1 }+\lambda_2^2x_2^2e^{2t\lambda_2 } )^3}\\ &\cdot\big(S(\lambda_1,x_1,\lambda_2,x_2 )e^{ 2t\lambda_1}+ S(\lambda_2,x_2,\lambda_1,x_1 )e^{ 2t\lambda_2}\big),\end{aligned}$$ with $S(\lambda_1,x_1,\lambda_2,x_2 )=(2\lambda_1^2x_1^2+\lambda_2^2x_2^2+\lambda_1\lambda_2x_2^2 )\lambda_1x_1^2$. Let us define: $R(\lambda_1,x_1,\lambda_2,x_2 )=2\lambda_1^2x_1^2+\lambda_2^2x_2^2+\lambda_1\lambda_2x_2^2$. For $u=(x_1,x_2)$ such that $x_1x_2\neq 0$, the condition $\lambda_1\neq\lambda_2$ implies $R(\lambda_1,x_1,\lambda_2,x_2 ) $ and $R(\lambda_2,x_2,\lambda_1,x_1 ) $ cannot vanish simultaneously. By (3.27) and (3.28), the second order derivative of the extended rescaled Poincaré map $F^*_{t,u}$ does not vanish. **The second type:** Suppose $A=\left( \begin{array}{cc} \lambda &0\\ 1& \lambda\\ \end{array} \right)$, $\lambda\neq 0$. Let $u=(x_1,x_2) \in S^1$. 
The second order derivative of the extended rescaled Poincaré map is $$\begin{aligned} \frac{d^2F^*_{t,u}}{dy^2}(0)=&2\frac{\big\langle Ae^{tA }(Au )^\perp , (Ae^{tA}u )^\perp \big\rangle}{\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle}\frac{d\tau}{dy}(0)+\frac{\big\langle A^2e^{tA }u , (Ae^{tA}u )^\perp \big\rangle}{\big\langle Ae^{tA}u,Ae^{tA}u \big\rangle}\big(\frac{d\tau}{dy}(0) \big)^2\\ =&\frac{\lambda^2\big(\lambda^2tx_1^2-\lambda^2tx_2^2-2\lambda tx_1x_2-tx_1^2-\lambda x_1(\lambda x_2+x_1 )t^2 \big)}{\big(\lambda^2x_1^2+(\lambda x_2+\lambda tx_1+x_1)^2 \big)^3}\cdot P,\end{aligned}$$ with the coefficient of $t^2$ in the polynomial $P $ equal to $- \lambda x_1^2\big((2\lambda^2+1 )x_1^2+3\lambda x_1 x_2+2\lambda^2x_2^2 \big)$. The coefficient of the highest order term of $t$ in $\frac{d^2F^*_{t,u}}{dy^2}(0)$ is $$\frac{\lambda^4 x_1^3(\lambda x_2+x_1 )\big((2\lambda^2+1 )x_1^2+3\lambda x_1 x_2+2\lambda^2x_2^2 \big)}{\big(\lambda^2x_1^2+(\lambda x_2+x_1)^2 \big)^3 }.$$ Since the polynomial $ (2\lambda^2+1 )x_1^2+3\lambda x_1 x_2+2\lambda^2x_2^2 $ is positive definite, (3.29) does not vanish if $x_1\neq 0,\lambda x_2+x_1\neq 0 $. For $x_1=0$, $u$ is an eigenvector. For $ \lambda x_2+x_1= 0$, the computation is involved, so we prefer not to check whether $\frac{d^2F^*_{t,u}}{dy^2}(0)$ vanishes. In conclusion, the extended rescaled Poincaré map $F^*_{t,u}$ is generally nonlinear. Therefore the proof of the proposition is finished. Acknowledgements {#acknowledgements .unnumbered} ================ The authors express their deep gratitude to Prof. Lan Wen and Prof. Shaobo Gan for useful discussions and encouragement. Y. Zhang is partially supported by the NSFC grant 11701200 and Hubei Province Youth Science and Technology Scholar funding. [99]{} A. Arroyo and F. Rodriguez Hertz, Homoclinic bifurcations and uniform hyperbolicity for three dimensional flows, *Ann. I. H. Poincaré-AN*, **20** (2003), 805–841. C. 
Bonatti, Survey, towards a global view of dynamical systems, for the $C^1$ topology, *Ergodic Theory Dynam. Systems*, **31** (2011), 959–993. C. Bonatti, L. Díaz and M. Viana, *Dynamics beyond uniform hyperbolicity*, A global geometric and probabilistic perspective, Encyclopaedia of Mathematical Sciences, 102. Mathematical Physics, III. Springer-Verlag, Berlin, 2005. C. Bonatti, S. Gan and L. Wen, On the existence of non-trivial homoclinic classes, *Ergodic Theory Dynam. Systems*, **26** (2007), 1473–1508. S. Crovisier, Birth of homoclinic intersections: a model for the central dynamics of partially hyperbolic systems, *Ann. Math.*, **172** (2010), 1641–1677. S. Crovisier and E. Pujals, Essential hyperbolicity and homoclinic bifurcations: a dichotomy phenomenon/mechanism for diffeomorphisms, *Invent. Math.*, **201** (2015), 385–517. S. Crovisier and D. Yang, Homoclinic tangencies and singular hyperbolicity for three-dimensional vector fields, *arXiv: 1702.05994v1*. S. Gan and D. Yang, Morse-Smale systems and horseshoes for three dimensional singular flows, *J. Eur. Math. Soc., to appear*. M. Hirsch, C. Pugh and M. Shub, *Invariant manifolds*, Lecture Notes in Mathematics, Vol. 583. Springer-Verlag, Berlin-New York, 1977. S. Liao, Standard systems of differential equations, *Acta Math. Sinica*, **17** (1974), 100–109, 175–196, 270–295. (in Chinese) S. Liao, On $(\eta,d)$-contractible orbits of vector fields, *Systems Science and Math. Sciences*, **2** (1989), 193–227. S. Liao, *Qualitative Theory of Differentiable Dynamical Systems*, Science Press of China, Beijing, 1996. M. Li, S. Gan and L. Wen, Robustly transitive singular sets via approach of extended linear Poincaré flow, *Discrete Contin. Dyn. Syst.*, **13** (2005), 239–269. J. Palis, A global view of dynamics and a conjecture on the denseness of finitude of attractors, *Géométrie complexe et systems dynamiques, Astérisque*, **261** (2000), 335–347. J. 
Palis, Open questions leading to a global perspective in dynamics, *Nonlinearity*, **21** (2008), T37–T43. E. Pujals and M.Sambarino, Homoclinic tangencies and hyperbolicity for surface diffeomorphisms, *Ann. Math.*, **151** (2000), 961–1023. F. Takens, Singularities of vector fields, *Publ. Math. Inst. Hautes Études Sci.*, **43** (1974), 47–100. M. Tsujii, Physical measures for partially hyperbolic surface endomorphisms, *Acta Math.*, **194** (2005), 37–132. L. Wen and X. Wen, A rescaled expansiveness for flows, *Tran. Amer. Math. Soc., to appear*. Q. Xiao and Z. Zheng, $C^1$ weak Palis conjecture for nonsingular flows, *Discrete Contin. Dyn. Syst., to appear*. R. Zheng, *Partial Hyperbolicity of Vector Fields Away from Horseshoes*, Ph.D Thesis, Peking University, 2015.
--- abstract: | Collaborative machine learning algorithms are developed both for efficiency reasons and to ensure the privacy protection of sensitive data used for processing. Federated learning is the most popular of these methods, where 1) learning is done locally, and 2) only a subset of the participants contribute in each training round. Although no data is shared explicitly, recent studies showed that models trained with federated learning (FL) could still leak some information. In this paper we focus on the quality property of the datasets and investigate whether the leaked information could be connected to specific participants. Via a differential attack we analyze the information leakage using a few simple metrics, and show that reconstruction of the quality ordering among the training participants' datasets is possible. Our scoring rules use only oracle access to a test dataset and no further background information or computational power. We demonstrate two implications of such a quality ordering leakage: 1) we utilize it to increase the accuracy of the model by weighting the participants' updates, and 2) we use it to detect misbehaving participants. author: - 'Balázs Pejó [^1]' bibliography: - 'ArXiv.bib' title: '*The Good*, *The Bad*, and *The Ugly*: Quality Inference in Federated Learning' --- Federated Learning; Inference Attack; Data Quality Introduction {#sec:intro} ============ Machine Learning (ML) has received much attention over the last decades. For ML tasks, it is well known that more training data leads to a more accurate model. Unfortunately, in reality, the data is scattered among different entities; hence, data holders could potentially increase their local model's accuracy by training a common model together with others [@pejo2019together]. Several methods have been proposed in the literature to tackle this problem.
Probably the least privacy-friendly method is centralized learning, where a server pools all participants' data and trains the desired model. On the other end of the privacy spectrum is multi-party computation [@mpc], a cryptographic technique which guarantees that only the final model is revealed to the legitimate collaborators and nothing more. Neither of these extremes is acceptable for real-world use-cases: while the first requires participants to share their datasets directly, the latter requires too much computational resource to be a reasonable solution. Somewhere between these (in terms of privacy protection) is collaborative learning, where the central node first initializes the model and broadcasts it to all participants, and then the following repeats until convergence: 1) the participants update the model based on their training data and send it back to the server, and 2) the server averages the received updates to improve the global model and broadcasts it to the participants. **Federated Learning** (FL) [@konecny_federated_2016; @suresh_distributed_2016] is similar, but mitigates the communication bottleneck of collaborative learning by **selecting a random subset of participants in each round who calculate and send their model updates** instead of all participants. These methods provide some privacy protection by design, as the actual data never leaves the hardware located within the participants' premises. Yet, a considerable amount of literature shows that from these updates (i.e., gradients) a handful of things can be learned about the underlying training dataset, as detailed in the related works. Several techniques have been developed to conceal the participants' updates from the aggregator server, such as adding pairwise noise to them [@mcmahan2016communication], or using MPC [@goldreich1998secure], which could eliminate the need for a central server in the first place.
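A minimal sketch of one such training round may clarify the setting (plain Python; `local_update`, the list-of-floats "model", and the toy datasets are illustrative stand-ins of our own, not the paper's implementation):

```python
import random

def local_update(model, dataset, lr=0.1):
    # Toy stand-in for local training: nudge each weight toward the local data.
    return [w + lr * (d - w) for w, d in zip(model, dataset)]

def fl_round(global_model, datasets, b, rng):
    # 1) sample b of the participants, 2) each computes a local update,
    # 3) the server averages the received models coordinate-wise (FedAvg-style).
    selected = rng.sample(sorted(datasets), b)
    updates = [local_update(global_model, datasets[n]) for n in selected]
    averaged = [sum(ws) / len(ws) for ws in zip(*updates)]
    return averaged, selected

rng = random.Random(0)
datasets = {n: [float(n)] * 3 for n in range(6)}  # six toy participants
model = [0.0] * 3
for _ in range(5):
    model, selected = fl_round(model, datasets, b=3, rng=rng)
```

The key point for what follows is the return value `selected`: every observer learns *which* subset contributed to each broadcast model, even when the individual updates are hidden.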
These techniques protect the participants' updates, while the aggregated average enjoys no protection. Without specific background knowledge, **it is unlikely that in the collaborative learning scenario an attacker could link the leaked information with a specific participant**, as the aggregation provides a 'hiding in the crowd' type of protection. Differential Attack ------------------- On the other hand, **if the training uses FL**, where a different set of participants contributes in each round, **via a differential attack it is possible to tie the extracted information to specific participants**. This is especially important, as the aggregated model is broadcast to all participants, so besides the aggregator server (if it exists) this information is available to everyone participating in FL, **independently of any secure aggregation protocol**. ### Example {#example .unnumbered} In Table \[tab:diffattack\] we illustrate the differential attack for membership and quality inference attacks. In this example 6 participants train a word-predictor model together, where in each round 3 randomly selected participants contribute. The membership attack indicates the presence of a specific location and e-mail address in the 1st round. Due to secure aggregation, without any background knowledge it is not possible to single out the participants to whom these data belong. The same attack does not indicate the presence of the e-mail address in the 2nd round; hence, the e-mail address supposedly belongs to F's dataset.[^2] The location appears in the 2nd and 4th rounds while it does not in the 3rd and 5th, but so do both A and E; hence, only after the 6th round can we connect the location to E.

  Round   A   B   C   D   E   F   Location   E-Mail   Quality
  ------- --- --- --- --- --- --- ---------- -------- ---------
  1       A               E   F   x          x        +
  2       A   B           E       x                   +
  3           B   C   D                               -
  4       A   B           E       x                   +
  5               C   D       F                       -
  6           B       D   E       x

  : Example Federated Learning scenario to illustrate our Differential Attack. Participants: A-F.
Leaked information: location$\rightarrow$E, e-mail address$\rightarrow$F, high/low quality data$\rightarrow$A/C or E/D.[]{data-label="tab:diffattack"} Concerning the dataset qualities, within a specific round the selected participants' updates are hidden, but their aggregated update is public. If a particular round improves the model poorly (or significantly), it can be postulated that some of the participants contributing in that round have low (or high) quality data. By keeping track of such events, the participants can be separated into low/medium/high quality data holders with various confidence. In the example above, in the 1st, 2nd, and 4th rounds the model improved significantly (so either A or E has high quality data, as both participated in these rounds), while in the 3rd and 5th rounds the model did not improve; hence, either C or D has low quality data. Since the last round is neither good nor bad, either both low and high quality data are present or neither of them. Consequently, either A and C or E and D have high and low quality data, respectively.[^3] Contributions ------------- In this paper we employ rigorous statistical analysis by adopting a stochastic viewpoint of the updates; however, due to the complexity of the task at hand, we turn towards empirical evaluation. We **utilize the information leakage from the aggregated update when a secure aggregation mechanism is in place**, i.e., where the participants' updates (i.e., individual gradients) are hidden. We focus our attention on honest-but-curious attackers with limited power and resources, i.e., assuming **the attacker can only eavesdrop, and it has no background information (besides access to an evaluation oracle) or any computational resources** which would enable her to do intricate calculations (concerning the attack).
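The set logic behind the membership part of the differential attack in the example can be sketched as follows (plain Python; the round memberships mirror Table \[tab:diffattack\], and the strict intersect-and-subtract rule is our own simplification of the reasoning above):

```python
def attribute_leak(rounds, leak_flags):
    """rounds: per-round participant sets; leak_flags[i]: leak seen in round i.
    Candidate owners appear in every leaking round and in no non-leaking round."""
    present = set.intersection(*(set(r) for r, f in zip(rounds, leak_flags) if f))
    absent = set().union(*(set(r) for r, f in zip(rounds, leak_flags) if not f))
    return present - absent

# Rounds 1-6 of the running example (participants A-F, 3 selected per round).
rounds = [{"A", "E", "F"}, {"A", "B", "E"}, {"B", "C", "D"},
          {"A", "B", "E"}, {"C", "D", "F"}, {"B", "D", "E"}]
location_seen = [True, True, False, True, False, True]

owner_location = attribute_leak(rounds, location_seen)    # {"E"} after round 6
owner_email = attribute_leak(rounds[:2], [True, False])   # {"F"} after round 2
```

The quality-inference attack in the rest of the paper replaces the boolean leak flags with per-round improvement scores, but relies on the same round-membership bookkeeping.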
For this reason, **we do not consider any existing attacks, as they all require either some computational resources** (e.g., training shadow models [@shokri2017membership], utilizing GANs [@hitaj2017deep], etc.) **or some background information** (e.g., the data distribution or a subset of the training sample [@nasr2019comprehensive], etc.). **Our novel attack aims to** recover the quality of the aggregated updates and, consequently, the quality of the contributing participants' datasets. To obtain this quality information, we take advantage of the inferred information across multiple rounds' aggregated updates and the subsets of participants associated with the corresponding aggregates. Of course, such a quality measure is relative to the particular task and to the other participants' datasets, so we aim to **retrieve a relative quality ordering of the participants** (compared to each other for the particular use-case). The quality inference (i.e., relative quality ordering reconstruction) attack works by evaluating the aggregated updates in each round (based on a test dataset which is available to all participants and easily obtainable for the server) and assigning scores to the contributors based on three simple rules called *The Good*, *The Bad*, and *The Ugly*. These accumulated scores (after many rounds) form a quality-wise ordering of the participants. Although the inferred ordering is only partially correct according to our experiments, it **successfully separates the participants** into less fine-grained quality bins **such as low, medium, and high** quality. We run experiments on two architectures and two datasets (we assume the participants have IID datasets, which also simplifies the simulation and the measurement of the dataset qualities). We conclude that the quality inference accuracy depends on the complexity of the FL task itself as well as on the complexity of the model being trained: **with more complexity comes higher quality inference.
In the most simple case** (MNIST - MLP), **our inferred quality ordering is barely better than a random guess**, while **in the most complex case** (CIFAR - CNN), **it is more than two times ($\approx2.2$) better than a random guess**. We consider two applications of quality inference: misbehavior detection and training efficiency boosting. Concerning misbehavior, we investigated two attacks: gradient inverting and freeriding. While the first actively pulls back the learning, the second is neutral to the learning process. This is reflected in their detection rates as well: **the detection rate is at least twice as good as a random guess for gradient-inverting participants** after a few rounds **and for freeriders** after many rounds. Concerning training efficiency boosting, we found that **weighting the participants' contributions** based on the inferred quality scores **improves the accuracy of the simple cases more** ($>1.1\%$) **than that of the complex ones** ($<0.25\%$). Organization ------------ In Section \[sec:model\] we introduce the variables used throughout the paper and model the data quality leakage in FL. In Section \[sec:QI\] we describe how we simulate different dataset qualities and detail our three quality scoring rules. In Section \[sec:qimeasure\], besides elaborating on the experiments' settings, we present our quality inference metric and the base attack performance. In Section \[sec:fine\] we consider further increasing the quality inference accuracy by parameter fine-tuning. In Section \[sec:app\] we dive into the details of some possible applications of the inferred dataset qualities. In Section \[sec:defense\] we discuss some possible mechanisms to mitigate quality inference leakage. In Section \[sec:rw\] we mention a handful of related works, while in Section \[sec:con\] we conclude the paper and mention some possible future works.
The Theoretical Model {#sec:model}
===================

In this section we introduce the theoretical model of quality inference and highlight its complexity. We denote by $n$ a participant in FL, while $N$ denotes the number of all participants. Similarly, $i$ denotes a round in FL, while $I$ denotes the number of all rounds. $S_i$ contains the randomly selected participants for round $i$, and $b=|S_i|$ captures the number of selected participants. $D_n$ is the $n$th participant's dataset, which consists of $(x,y)\in D_n$ data-label pairs. A summary of the variables used in this paper is listed in Table \[tab:variables\].

  Variable               Description
  ---------------------- -------------------------------------------------
  $n\in [1,2,\dots,N]$   Participants
  $i\in [1,2,\dots,I]$   Training rounds
  $S_i$                  Selected participants for round $i$
  $b$                    Number of selected participants
  $(x,y)\in D_n$         Participant $n$'s dataset
  $q(n)$                 Participant $n$'s inferred quality-wise position
  $\hat{q}$              Quality inference accuracy
  $\alpha$               Number of cheating participants
  $r$                    Number of (last) observed positions
  $c$                    Cheater detection rate
  $\kappa$               Weight updating rate

  : The notation used in the paper.[]{data-label="tab:variables"}

We assume **participant $\pmb{n}$ is associated with a single scalar quantity, measuring the quality of its dataset**, named $\pmb{u_n}$. Essentially, the qualities of the aggregated gradients (denoted $v_i$ for the $i$th round) form a linear equation system $Au=v$, where $u=[u_1,\dots,u_N]$, $v=[v_1, \dots, v_I]$, and $a_{n,i}\in A_{I\times N}$ indicates whether participant $n$ is selected for round $i$. Depending on the dimensions of $A$, the system can be under- or over-determined. In case $I>N$ (i.e., in general no exact solution exists) the problem and its solution are shown in Equation (\[eq:over\]), while if $I<N$ (i.e., many solutions exist) the problem and its solution are shown in Equation (\[eq:under\]) [@selesnick2013least].
$$\label{eq:over} \min_u||v-Au||_2^2 \hspace{0.5cm}\Rightarrow\hspace{0.5cm} u=(A^TA)^{-1}A^Tv$$ $$\label{eq:under} \min_u||u||_2^2 \text{ s.t. } Au=v \hspace{0.25cm}\Rightarrow\hspace{0.25cm} u=A^T(AA^T)^{-1}v$$ The above equations do not take the randomness into account explicitly. Since the training is stochastic, we consider **the quality of the $\pmb{n}$th participant's update** (i.e., gradient) as a **random variable** $\pmb{\theta_n}$ sampled from a distribution with parameter $u_n$. Moreover, we can represent $\pmb{\theta_n=u_n+e_n}$, where $\pmb{e_n}$ corresponds to **a random variable sampled from a distribution with zero mean**. We can further assume the expected characteristics of the noise (i.e., error), namely that $e_n$ and $e_{n'}$ are IID for $n\not=n'$. As a result, we can express $v_i=\sum_na_{n,i}u_n+E$ for $E$ sampled from the convolution of the PDFs of the $e_n$. In this case, due to the Gauss–Markov theorem [@harville1976extension], the solution in Equation (\[eq:over\]) is the best linear unbiased estimator (BLUE), with error $||v-Au||_2^2=v^T(\textbf{I}-A(A^TA)^{-1}A^T)v$ (where $\textbf{I}$ is the identity matrix), whose expected value is $b(I-N)$. Note that with more iterations, more information is leaking, making the error smaller and smaller. However, this is not captured by the Gauss–Markov theorem, as it considers every round as a new constraint. On the other hand, in our case there are only $\binom{N}{b}$ different constraints (with noise), which is the number of possible rounds with different participants. All in all, this problem lies within estimation theory [@ludeman2003random], from which we already know that **estimating a single random variable with added noise is already hard**, not even mentioning the fact that in our setting **we have multiple, forming an equation system**. Moreover, **these random variables vary round-wise**, which we have been ignoring so far.
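The over-determined case of this linear-system view can be sketched with NumPy (synthetic data; the sizes, noise level, and variable names are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, I, b = 5, 40, 2                     # participants, rounds, selected per round
u_true = np.linspace(0.0, 1.0, N)      # hidden per-participant dataset qualities

# Selection matrix A (I x N): row i marks the b participants chosen in round i.
A = np.zeros((I, N))
for i in range(I):
    A[i, rng.choice(N, size=b, replace=False)] = 1.0

# Observed round qualities: sum of the selected qualities plus zero-mean noise.
v = A @ u_true + rng.normal(0.0, 0.05, size=I)

# Over-determined case (I > N): least-squares estimate u = (A^T A)^{-1} A^T v.
u_hat = np.linalg.solve(A.T @ A, A.T @ v)
```

With many rounds and little noise, `u_hat` closely tracks `u_true`; the difficulty discussed next is that real round-wise improvements are neither stationary nor a simple sum of per-participant qualities.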
Nevertheless, in each iteration a different contribution level is expected, as the early iterations improve the model's accuracy more than later ones. Consequently, **to estimate the dataset qualities we must know the expected learning curve, which depends on exactly that**. For this reason, we do not wish to pursue this theoretical direction; instead, we **focus on the empirical direction to break this circle**. Quality Scoring Rules {#sec:QI} ===================== In this section we describe how we simulate different dataset qualities and detail our three quality scoring rules. Quality Simulation ------------------ Data quality could mean several things; [@batini2016data] defined 8 dimensions of it (accuracy, completeness, redundancy, readability, accessibility, consistency, usefulness, and trust), several having their own subcategories. Even restricted to images (used for the experiments), it spans multiple dimensions. Image quality is relative for two reasons: it can only be considered in terms of the proposed use, and in relation to other examples. Visual perception is a complex process; hence, **we do not manipulate the images themselves to simulate different qualities**. **Rather**, since we focus on supervised machine learning, **we modify the label** $y$ corresponding to a specific image $x$.[^4] For our experiments, we assume the aggregator has a test dataset (e.g., a publicly available dataset) or at least query access to an evaluator oracle. Consequently, we split the dataset randomly into $N+1$ parts, representing the $N$ datasets of the participants and the test set used to determine the quality of the aggregated updates. The splitting is done in such a way that the resulting datasets are IID; otherwise the splitting itself would introduce some quality difference between the participants.
Since the **participants' datasets are from the same underlying distribution**, their quality is assumed to be identical.[^5] To have a clear quality-wise ordering between the datasets, we perturb the labels of the participants differently: the $N$th participant's dataset is not perturbed, while the $1$st participant's dataset is fully perturbed (i.e., all labels are randomized). Each label of the remaining participants' datasets is randomized with a probability decreasing linearly from 1 to 0. Mathematically this is described in Equation (\[eq:scramble\]). $$\label{eq:scramble} \Pr(y_k\text{ is randomized }|(x_k,y_k)\in D_n)=\frac{N-n}{N-1}$$ Assigning the qualities linearly following the participant IDs does not introduce any bias in our experiments, since both the initial dataset splitting and the round-wise participant selection are random. Scoring Rules ------------- Based on the round-wise improvements, we created three fairly simple heuristic scoring rules to reward or punish the participants. We name them *The Good*, *The Bad*, and *The Ugly*, as the first rewards the more useful contributions, the second punishes the less useful ones, while the last punishes contributions that are just plain useless: - *The Good*: all the **participants who** contribute in a round which **improves** the model **more** than the previous round **receive** $\pmb{+1}$ score. - *The Bad*: all the **participants who** contribute in a round which **improves** the model **less** than the following round **receive** $\pmb{-1}$ score. - *The Ugly*: all the **participants who** contribute in a round which **does not improve** the model (i.e., decreases the accuracy) **receive** $\pmb{-1}$ score. It is expected that consecutive rounds' improvements are decreasing: first the model improves rapidly, while in later rounds it improves at a much lower pace.
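The three rules above can be sketched as follows (plain Python; the function name and the toy data are our own illustrative choices, not the paper's code):

```python
def quality_scores(improvements, selections, participants):
    """Accumulate The Good / The Bad / The Ugly scores per participant.
    improvements[i]: accuracy change of round i; selections[i]: who contributed."""
    scores = {n: 0 for n in participants}
    for i, (imp, sel) in enumerate(zip(improvements, selections)):
        if i > 0 and imp > improvements[i - 1]:                      # The Good
            for n in sel:
                scores[n] += 1
        if i + 1 < len(improvements) and imp < improvements[i + 1]:  # The Bad
            for n in sel:
                scores[n] -= 1
        if imp < 0:                                                  # The Ugly
            for n in sel:
                scores[n] -= 1
    return scores

improvements = [0.30, 0.10, -0.02, 0.05, 0.01]
selections = [{"A", "E"}, {"A", "B"}, {"C", "D"}, {"A", "B"}, {"C", "D"}]
scores = quality_scores(improvements, selections, "ABCDE")
```

In this toy run, C and D are punished for the negative round (by both *The Bad* and *The Ugly*), A and B are rewarded for the rebound in round 4, and the never-selected E stays at zero.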
The first two scoring rules (*The Good* and *The Bad*) capture deviations from this expected decreasing pattern: we can postulate that 1) high dataset quality increases the improvement more than in the previous round, and 2) low dataset quality decreases the improvement, which is then compensated in the following round. Our last scoring rule (*The Ugly*) assumes that if a particular round does not improve the model, there is a higher chance that some of the contributors' dataset qualities are low. Independently of the contributors' dataset qualities, 1) the round-wise improvements could deviate from this pattern due to the stochastic nature of the learning, and 2) the improvement could be negative after sufficiently many training rounds as the model starts to overfit. We assume both of these affect all participants evenly, so the relation between the scores is not significantly affected by this 'noise'. Measuring the Quality Inference {#sec:qimeasure} =============================== In this section, besides elaborating on the experiments' settings (i.e., which datasets and model structures are used with what parameters), we present our quality inference metric and the base attack performance. Datasets & Models & Experiment Setup ------------------------------------ For our experiments, we used the MNIST [@deng2012mnist] and the CIFAR [@krizhevsky2014cifar] datasets. MNIST contains 70,000 hand-written digits in the form of 28x28 gray-scale pictures, while CIFAR consists of 60,000 32x32 colour images of airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks. For training we use multi-layer perceptron (MLP) and convolutional neural network (CNN) architectures. For MLP, we used a three-layered structure with hidden layer size 64, while for CNN, we used two convolutional layers with 10 and 20 kernels of size 5x5 followed by two fully connected hidden layers of sizes 120 and 84. For the optimizer we used SGD with learning rate 0.01 and dropout rate 0.5.
In the rest of the paper, we refer to these four use-cases as MM for MNIST-MLP, CM for CIFAR-MLP, MC for MNIST-CNN, and CC for CIFAR-CNN. We ran every experiment 10-fold. The implementation can be found at [@git]. The exact parameters used for our experiments are presented in Table \[tab:param\].

  $N$   $b$   $i$                        $\alpha$    $r$
  ----- ----- -------------------------- ----------- -----------
  5     2     $\{10,20,30,40,50\}$       $\{1\}$     $\{1,2\}$
  25    5     $\{10,20,30,40,50\}$       $\{1,2\}$   $\{2,4\}$
  100   10    $\{50,100,150,200,250\}$   $\{2,4\}$   $\{5,10\}$

  : The parameters used for the experiments.[]{data-label="tab:param"}

The round-wise accumulated quality scores (averaged over all use-cases, i.e., over MM, CM, MC, and CC) using all 3 rules for the 3 experiments (detailed in Table \[tab:param\]) are presented in Figure \[fig:QI\], where each participant's dataset quality is degraded according to its ID (i.e., noise is added according to Equation (\[eq:scramble\]): participant 1's dataset has the lowest quality, while participant $N$'s has the highest). ![The average scores across the 4 use-cases (i.e., MM, CM, MC, and CC) for each participant when $N=5, b=2, i=\{10,20,30,40,50\}$ (top left), $N=25,b=5,i=\{10,20,30,40,50\}$ (top right) and $N=100,b=10,i=\{50,100,150,200,250\}$ (bottom). []{data-label="fig:QI"}](qi){width="8cm"} It is visible that **after only a few rounds there is no significant difference** between the quality scores of participants with low and high dataset quality (i.e., the highest, light-blue curve). On the other hand, **the difference keeps growing as the number of rounds grows**, mostly because the scores of the low-ID participants (who correspond to low dataset quality) decrease more than those of the other participants. Note that even the $N$th participant's quality score (corresponding to the highest dataset quality) decreases with more rounds.
This is an expected characteristic of the scoring rules for two reasons: 1) as the model overfits, all participants' quality scores decrease due to *The Ugly* scoring rule, and 2) there is only one rule increasing the score (*The Good*) while two decrease it (*The Bad* and *The Ugly*). It is visible that the three heuristic scoring rules combined **recover the original dataset quality order of the participants fairly well**. As the number of participants grows, the difference in dataset quality between the $n$th and the $(n+1)$th participant shrinks. Consequently, it is harder and harder to order them correctly: it is **unrealistic to recover the exact quality ordering** for more than a handful of participants. On the other hand, our quality inference method is suitable for giving a high-level view (e.g., low/medium/high) of the participants' dataset qualities in relation to each other. In turn, the difference in dataset quality between two participants with very different IDs is significant, so our heuristic scoring rules are capable of differentiating the two: **if we define 3 dataset quality classes** (e.g., low, medium, and high), **we can perfectly classify the lowest and highest dataset quality participants**. For instance, in case of $N=25$ ($N=100$) participants the 8 (26) lowest dataset quality participants' scores are always below -30 (-100), while the 11 (23) highest dataset quality participants' scores are always above -20 (-80), so the best and worst quarter of participants can be separated from each other. In Figure \[fig:QI-100-case\], we show the scores with $N=100$ participants for the four use-cases (i.e., MM, CM, MC, CC) separately. One can see that for simple models such as MLP, the quality scores are less precise than for more complex architectures such as CNN. It is also visible that the complexity of the task (i.e., MNIST or CIFAR) plays only a minor role.
![The case-wise quality scores of each participant when $N=100$ and $b=10$ for $i=\{50,100,150,200,250\}$.[]{data-label="fig:QI-100-case"}](100-case){width="8cm"} Quantifying the Quality Inference --------------------------------- To quantify the inferred quality ordering of the participants, we need to convert the relation between the quality scores into a single value. For this purpose, we use **Spearman's distance $d_S$** [@diaconis1977spearman], which **measures the sum of the absolute differences between all participants' inferred** (i.e., $q(n)$) **and correct positions** (i.e., $n$, due to Equation (\[eq:scramble\])) in the quality ordering. Note that Spearman's distance treats any misalignment equally, irrespective of the position. It is calculated according to the left side of Equation (\[eq:qi\]). $$\label{eq:qi} d_S = \sum_{n=1}^N |n-q(n)| \hspace{1cm} \hat{q}=1-\frac{d_S}{\frac{N^2}{2}}$$ Since $d_S$ depends on the number of participants, we **normalize it to obtain a uniform quality metric $\hat{q}$**: we divide it by its maximum value, $\frac{N^2}{2}$ (corresponding to $[1,2,\dots,N]\rightarrow[N,\dots,2,1]$), and invert the result, so it **lies within \[0,1\], where 1 represents perfect inference**, as shown on the right side of Equation (\[eq:qi\]). The $\hat{q}$ values averaged over the 4 use-cases corresponding to 5 and 25 participants (i.e., as in Figure \[fig:QI\]) are $\{1,1,1,1,1\}$ and $\{0.77, 0.89, 0.87, 0.85, 0.85\}$ for $i=\{10,20,30,40,50\}$, respectively. The $\hat{q}$ values for the 100-participant case (i.e., as in Figure \[fig:QI-100-case\]) are presented case-wise in Figure \[fig:qiter\].
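The metric of Equation (\[eq:qi\]) can be sketched as follows (plain Python; we use 0-based positions, so `inferred_order[k]` holding the true rank of the participant placed $k$-th is an assumption of this sketch):

```python
def q_hat(inferred_order):
    # Spearman's (footrule) distance between the inferred and the true ordering,
    # normalized by its maximum N^2/2 and inverted so that 1 means perfect.
    N = len(inferred_order)
    d_s = sum(abs(true_pos - k) for k, true_pos in enumerate(inferred_order))
    return 1.0 - d_s / (N * N / 2.0)
```

For example, a perfectly recovered ordering gives `q_hat(list(range(100))) == 1.0`, while the fully reversed ordering gives `0.0`; a random ordering lands near the $1/3$ baseline discussed next.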
![The case-wise quality inference performance when $N=100$ and $b=10$ for $i=\{50,100,150,200,250\}$.[]{data-label="fig:qiter"}](Qiter){width="8cm"} It is visible that for the complex architecture (i.e., CNN) the quality inference accuracy grows almost linearly with the number of rounds, while for the simpler architecture (i.e., MLP) it rather decreases. Note that, since the expected value of $d_S$ for a random ordering is roughly $\frac{N^2}{3}$ [@diaconis1977spearman], **the baseline is** $\pmb{\hat{q}=0.\dot{3}}$, which we highlight on all figures with a square on the scale. Fine-tuning the Scoring Rules {#sec:fine} ============================= In this section we consider further increasing the quality inference accuracy $\hat{q}$ by parameter fine-tuning. We vary the thresholds which determine when to trigger which rule, measure the accuracy for different combinations of the rules, consider ignoring the first few rounds, and use the actual improvement difference as a score. Threshold Optimization ---------------------- Since the learning is stochastic, it is expected that participants with low (high) dataset quality by chance receive positive (negative) scores in some rounds. To mitigate these effects, we consider using a threshold: for *The Ugly*, we score only if the improvement is below some negative value (instead of 0). In contrast, for *The Good* and *The Bad* we score only if the improvement difference is above or below such a threshold, respectively. In Figure \[fig:treshold\] we show $\hat{q}$ for all the use-cases with 100 participants after 250 rounds for each scoring rule separately and accumulated. As the performance is similar for all the considered thresholds across all use-cases, we present the average value as a line on a shorter scale on the right.
*The Good* performs best with threshold $t=0.05$, *The Bad* with $t=0.15$, and *The Ugly* with $t=0.15$.[^6] The bottom right reflects **the accuracy** when all the scoring rules are combined, which performs **best when** $\pmb{t=0.15}$[^7]. ![$\hat{q}$ with various thresholds for $N=100$, $b=10$, and $i=250$. The left scale is for the columns, the right is for the average line.[]{data-label="fig:treshold"}](treshold){width="8cm"} Rule Combinations ----------------- Not surprisingly, **the combination of all the scoring rules outperforms all** the single rules. This holds for all **possible combinations**, as we show in Table \[tab:compare\] below.

  $t=$   Good    Bad     Ugly    G&B     G&U     B&U     All
  ------ ------- ------- ------- ------- ------- ------- -------
  0.00   0.458   0.546   0.533   0.556   0.548   0.547   0.560
  0.15   0.490   0.587   0.541   0.620   0.588   0.600   0.632

  : The average $\hat{q}$ scores for the 4 use-cases with different combinations of the 3 scoring rules when $N=100$, $b=10$, $i=250$ and $t=\{0.00, 0.15\}$. []{data-label="tab:compare"}

Improvement Value as Scores --------------------------- Scoring the participants' contributions with $\pm1$ ignores the actual accuracy changes: for example, in case of *The Ugly*, it does not matter how negative a round's improvement is, the corresponding participants receive $-1$ uniformly. Taking such information into account might improve the quality inference, so we consider alternative rule variants where the improvement values are used instead of $\pm1$. We compared $\hat{q}$ **using the actual improvement differences** (referred to as Value) within the 3 rules **instead of** essentially **counting by** $\pmb{\pm1}$ how many times the rules have been applied (referred to as Count). Hence, we considered adding to the participants' scores the actual negative improvement in case of *The Ugly*, and the improvement differences with the previous and following round in case of *The Good* and *The Bad*, respectively.
As seen in Figure \[fig:valuescore\] (which shows the average $\hat{q}$ over all use-cases corresponding to 100 participants after 250 rounds), these **results are inconsistent**: $\hat{q}$ improves slightly for *The Ugly*, is inconclusive in case of *The Bad*, and is counterproductive in case of *The Good*. ![The average $\hat{q}$ over MM, CM, MC, and CC using the Count and the Value as scoring rules for $N=100$, $b=10$, $i=250$, and $t=\{0, 0.05, 0.1, 0.15, 0.2, 0.25\}$. []{data-label="fig:valuescore"}](comparebase){width="8cm"} Round Skipping -------------- The above result is surprising, since by considering the actual values instead of counting the events we do take more information into account in our scoring rules. We suspect this is due to the learning curve: **in the first few rounds the improvement is so vast that the scores** of the participants selected in those rounds **barely change afterwards**, as in the succeeding rounds the improvements (i.e., scores) are insignificant in magnitude compared to them. Hence, **we consider not scoring the participants during the early rounds** to mitigate this effect. We show the corresponding $\hat{q}$ for both Count and Value when all 3 rules are applied, with and without threshold optimization, in Table \[tab:ignorefirst\].[^8]

                    0       2       4       6       8       10
  ------- --------- ------- ------- ------- ------- ------- -------
  Count   $t=0$     0.560   0.556   0.551   0.544   0.536   0.529
  Count   $t=.15$   0.632   0.640   0.632   0.628   0.628   0.616
  Value   $t=0$     0.510   0.630   0.630   0.632   0.634   0.635
  Value   $t=.15$   0.509   0.625   0.631   0.638   0.640   0.634

  : $\hat{q}$ for Count and Value with $t=\{0.00,0.15\}$ for various numbers of skipped first rounds, for $N=100$, $b=10$ and $i=250$.[]{data-label="tab:ignorefirst"}

Several things are visible: **concerning Value, skipping the first 2 rounds improves** $\pmb{\hat{q}}$ **considerably** ($51\%\rightarrow63\%$), while skipping later rounds corresponds only to minor improvements.
**Concerning Count, skipping is actually counter-productive**, as the scores are essentially normalized to $\pm1$, so skipping rounds only results in information loss. It is also visible that parameter fine-tuning does not really affect Value, while for Count it is non-negligible ($56\%\rightarrow63\%$). Finally, we can conclude that **neither is superior to the other**, as the same accuracy could be reached via Count with parameter tuning and Value with iteration skipping.

Discussion {#discussion .unnumbered}
----------

We started from 3 straightforward scoring rules, which, if combined, achieved $0.36$ for MM, $0.47$ for CM, $0.72$ for MC, and $0.79$ for CC without any parameter fine-tuning (i.e., without any background information). Note that the baseline is $0.33$, so in the simplest case (MM), $\hat{q}$ is barely better than a random guess. On the other hand, **as the complexity of the model and the task grows, so does the quality inference performance $\pmb{\hat{q}}$**: in the more complex cases (MC, CC), it is more than two times better than a random guess. These values could be improved to $0.46$ for MM, $0.49$ for CM, $0.80$ for MC, and $0.82$ for CC via parameter tuning. Hence, **fine-tuning helps when the task is simple** (i.e., MNIST) while it barely improves in case of complex data/tasks (i.e., CIFAR). Without fine-tuning, on average $\pmb{\hat{q}}$ **performs** $\pmb{+25\%}$ **better than the baseline** $33\%$. **Fine-tuning increases this further by** $\pmb{+7\%}$. This extra $7\%$ comes with a cost though: **fine-tuning the parameters is possible via shadow models [@shokri2017membership], which does require access to computational resources and datasets** (instead of only an evaluation oracle). For these reasons, in the **rest of the paper we use the base Count method**.

Application of QI {#sec:app}
=================

In this section, we dive into the details of some possible applications of the inferred dataset qualities.
Although we foresee many, we detail only two: misbehavior detection and training efficiency boosting.

Catching Attackers & Freeriders
-------------------------------

The leaked information about the participants' dataset quality could be used to isolate potential misbehavior. We consider two kinds of deviation.

- *Inverting*: The participant’s goal is to **worsen the quality of the aggregated model** actively. One way to achieve this is to **submit the additive inverse of the** calculated correct **gradient**.

- *Freeride*: The participant’s goal is to **benefit from the aggregated model passively**. One way to achieve this is not to calculate the correct gradients but instead **submit zero gradients**.

We assume the **rest of the participants' datasets are of equal quality**; hence, we expect that the cheating participants should be at the bottom of the inferred quality order. We denote the catching probability of the cheaters with $c(r)$, which depends on the number of the last observed positions $r$ rather than on the number of cheaters $\alpha$. $\pmb{c(r)}$ **measures the fraction of the cheaters who are isolated in the last** $\pmb{r}$ **places of the inferred quality ordering**, i.e., the accuracy is shown in Equation (\[eq:cheater\]), where $a_j$ is the $j$th attacker and $n_r$ is the participant with the $r$th lowest quality score. $$\label{eq:cheater} \begin{split} c(r)=\frac{\#\{a_j\hspace{0.1cm}|\hspace{0.1cm}q(a_j)\le q(n_r)\}_{j=1}^\alpha}{\alpha}\\ BaseLine_r=\sum_{j=1}^\alpha\frac{\binom{\alpha}{j}\cdot\binom{N-\alpha}{r-j}}{\binom{N}{r}}\cdot\frac{j}{\alpha}\approx \frac{r}{N} \end{split}$$ The baseline (i.e., the value of $c$ with random ordering) is also shown above, which is independent of the number of cheaters. For instance, $c(10)=0.8$ means that a $0.8$ fraction of the cheaters were in the last ten places (e.g., 4 in case of 5 cheaters). Obviously $c(0)=0$ and $c(N)=1$.
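Both the catch fraction $c(r)$ and its random-ordering baseline from Equation (\[eq:cheater\]) can be computed directly. The sketch below is ours (function and variable names are hypothetical):

```python
from math import comb

def catch_fraction(quality_order, attackers, r):
    """Fraction of attackers among the r lowest-ranked participants.
    quality_order: participant ids sorted from highest to lowest inferred quality."""
    worst_r = set(quality_order[-r:])
    return len(worst_r & set(attackers)) / len(attackers)

def baseline(n, alpha, r):
    """Expected catch fraction under a random ordering (hypergeometric mean),
    which simplifies to exactly r / n by symmetry."""
    return sum(comb(alpha, j) * comb(n - alpha, r - j) / comb(n, r) * j / alpha
               for j in range(1, min(alpha, r) + 1))
```

As the closed-form in Equation (\[eq:cheater\]) indicates, the sum collapses to $r/N$ regardless of $\alpha$.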
The average $c$ values of the four use-cases based on the 10-fold experiments with the settings defined in Table \[tab:param\] are shown in Figure \[fig:QI\_attack\]. The first corresponds to $N=5$ with $\alpha=1$, the second and third to $N=25$ with $\alpha=1$ and $2$ respectively, while the fourth and fifth correspond to $N=100$ with $\alpha=2$ and $4$ respectively.

![The average $c$ values of the experiments (columns, left scale) and the average and the highest position of the cheater (lines, right scale) for each participant when $N=5,b=2,r=\{1,2\},i=\{10,20,30,40,50\},\alpha=1$ (first), $N=25,b=5,r=\{2,4\},i=\{10,20,30,40,50\},\alpha=1$ (second) and $\alpha=2$ (third), and $N=100,b=10,r=\{5,10\},i=\{50, 100, 150, 200, 250\},\alpha=2$ (fourth) and $\alpha=4$ (fifth).[]{data-label="fig:QI_attack"}](5-2_attack "fig:"){width="7cm"} ![The average $c$ values of the experiments (columns, left scale) and the average and the highest position of the cheater (lines, right scale) for each participant when $N=5,b=2,r=\{1,2\},i=\{10,20,30,40,50\},\alpha=1$ (first), $N=25,b=5,r=\{2,4\},i=\{10,20,30,40,50\},\alpha=1$ (second) and $\alpha=2$ (third), and $N=100,b=10,r=\{5,10\},i=\{50, 100, 150, 200, 250\},\alpha=2$ (fourth) and $\alpha=4$ (fifth).[]{data-label="fig:QI_attack"}](25-5_attack "fig:"){width="7cm"} ![The average $c$ values of the experiments (columns, left scale) and the average and the highest position of the cheater (lines, right scale) for each participant when $N=5,b=2,r=\{1,2\},i=\{10,20,30,40,50\},\alpha=1$ (first), $N=25,b=5,r=\{2,4\},i=\{10,20,30,40,50\},\alpha=1$ (second) and $\alpha=2$ (third), and $N=100,b=10,r=\{5,10\},i=\{50, 100, 150, 200, 250\},\alpha=2$ (fourth) and $\alpha=4$ (fifth).[]{data-label="fig:QI_attack"}](100-10_attack "fig:"){width="7cm"}

Although the baseline of $c$ (which is highlighted with a square on the scale with the corresponding color for both $r$) does not depend on $\alpha$, it is negatively affected by it: **it gets
harder and harder to detect the cheaters when there are more and more of them** (i.e., see the difference between the 2nd and 3rd, and the 4th and 5th figure). It is visible that in case of *inverting*, after a few rounds $c$ already outperforms the baseline. On the other hand, in case of *freeride*, $c$ is not better than a random guess after a few rounds. Not surprisingly, **the detection** gets more accurate when the quality scores are based on more rounds. According to our experiments, **in case of *inverting*** (which is obviously easier to detect than *freeride*) $c$ **gets 3-4 times higher than the baseline. Even for *freeride* the detection rate is still twice as high**. These figures also show the average and highest inferred positions of the cheaters: as expected, the position decreases with more rounds and increases with more cheaters. Note that **the highest positions reached by a cheater were never in the top** $\pmb{20\%}$ of the participants, even for *freeride*.

Boosting the Training
---------------------

Based on the data quality, it is expected that both the training speed and the obtained accuracy could be improved by putting more emphasis on high-quality data. Hence, we consider **weighting the participants' updates based on their quality scores**. We adopt a **multiplicative weight update approach** [@arora2012multiplicative], which multiplies the weights (initially uniformly 1) with a fixed rate $\kappa$ when any of the scoring mechanisms applies. This method is shown in Algorithm \[alg:weight2\], where $S_i$ denotes the selected participants for the $i$th round (declared in line 4), and $imp$ captures the round-wise improvements (declared in line 8 using the accuracy $Acc$ difference of the current and previous model). The weights ($w_1,\dots,w_N$) are updated in the $i$th round with $\kappa<1$ each time one of the three scoring mechanisms applies[^9] (lines 10, 11, and 13 for *The Good*, *The Bad*, and *The Ugly* respectively).
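This weighting scheme can be condensed as follows; the sketch is our own rendering (names are hypothetical), assuming the rules fire under the same conditions as the Count scoring with threshold $t$:

```python
def compute_weights(improvements, selections, n, kappa=0.95, t=0.0):
    """Multiplicative weight update driven by the three scoring rules."""
    w = [1.0] * n
    for i, imp in enumerate(improvements):
        # The Good: reward round i's contributors (multiply by 1/kappa > 1).
        if i > 0 and imp - improvements[i - 1] > t:
            for c in selections[i]:
                w[c] /= kappa
        # The Bad: punish if the following round improved more than this one.
        if i + 1 < len(improvements) and improvements[i + 1] - imp > t:
            for c in selections[i]:
                w[c] *= kappa
        # The Ugly: punish rounds with improvement below -t.
        if imp < -t:
            for c in selections[i]:
                w[c] *= kappa
    return w
```

With $\kappa=1$ the weights stay uniformly $1$, recovering the unweighted baseline.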
For our experiments we set $\kappa=\{1.00, 0.95, 0.90\}$, where the first corresponds to the baseline without participant weighting.

    Data: $S_{b\times I}$; $imp=[imp_1,\dots,imp_I]$; $W=[W_1,\dots,W_N]$
    $W=[1,\dots,1]$
    for each round $i$:
        Select $b$ contributors ($S_i$)
        Update model ($Model_{i-1}\rightarrow Model_i(c)$)
        Aggregate ($Model_i=Avg([W_c\cdot Model_i(c)]_{c\in S_i})$)
        $imp_i=Acc(Model_i)-Acc(Model_{i-1})$
        if The Good applies:  $W_c=W_c\cdot\kappa^{-1}$
        if The Bad applies:   $W_c=W_c\cdot\kappa$
        if The Ugly applies:  $W_c=W_c\cdot\kappa$

**Concerning** the size of $\pmb{\kappa}$, we observe that **low dataset quality participants’ weights are more sensitive** than others: a decrease in $\kappa$ results in more weight drop for them. As this is exactly the same effect we already captured in Figure \[fig:QI\] for quality scores in relation to the round number, we do not visualize our results here. Besides these expected characteristics, we did not find any universal findings in relation to the size of $\kappa$: **neither higher nor lower rates consistently outperform the other**, and the achieved accuracy varies greatly, primarily with the used architecture and secondarily with the dataset. One thing which is conclusive though is that **using weights based on our scoring rules improves the original accuracy** in most of the studied cases. We present our results (averaged over 10 executions) corresponding to the accuracy improvement when $\kappa=\{0.95,0.90\}$ in Table \[tab:weights\] for the four use-cases and for the 3 experiments from Table \[tab:param\] with $i=\{50,50,250\}$ respectively.
  ----------- ------- ------- ------- ------- ------- -------
                  $N=5$           $N=25$          $N=100$
  $\kappa$    0.95    0.90    0.95    0.90    0.95    0.90
  MNIST/MLP   1.70    2.56    3.80    1.62    2.93    1.15
  CIFAR/MLP   0.19    0.36    0.99    1.24    1.32    1.02
  MNIST/CNN   0.05    0.02    0.04    -0.02   0.00    0.02
  CIFAR/CNN   0.13    0.24    -0.49   -0.32   -0.63   -0.49
  ----------- ------- ------- ------- ------- ------- -------

  : The accuracy improvements due to the multiplicative weighting with $\kappa=\{0.95,0.90\}$ on the 4 use-cases when $N=5,b=2,i=50$, $N=25,b=5,i=50$ and $N=100,b=10,i=250$.[]{data-label="tab:weights"}

**These results are** surprising and **counter-intuitive**: one would **expect more gain from the weighting when the quality scoring captures the dataset qualities better** (i.e., in case of a more complex model such as a CNN). On the other hand, **precisely the opposite can be seen**. While the quality inference performs consistently better on the most complex case (i.e., CIFAR/CNN), the weighting barely or not at all improves the accuracy (i.e., $<0.25\%$), while the highest improvement (i.e., $>1.1\%$) by weighting is achieved in the most straightforward task (i.e., MNIST/MLP), where the quality inference is barely better than the baseline random ordering.

Mitigation Strategies {#sec:defense}
=====================

In this section, we discuss some possible mechanisms to mitigate the quality inference. Note that **this leakage is** not intended, so this **is a bug, rather than a feature** in FL. The simplest and most straightforward way to mitigate this risk is to enforce all participants to contribute in each round. However, this is not feasible for thousands of participants. The leakage of the data’s quality is inevitably present in the aggregated updates. How often this information is available to the participants plays a significant role in the success of the QI attack, as Figures \[fig:QI\], \[fig:QI-100-case\], and \[fig:qiter\] already demonstrated. Hence, one way to mitigate this leakage is to decrease the access to these updates.
It can be done in many ways, for instance 1) **limiting the number of rounds** by allowing the participants to train multiple epochs within one round, or 2) instead of broadcasting the updated model, the server sends it only to the participants who are going to contribute in the next round. Another technique is **adaptive selection of the participants** for each round (instead of random); however, this is a double-edged sword: adaptive selection can be used to mitigate the information leakage as well as to increase it. Yet another approach is to **hide the participants’ IDs, so no-one knows which participant collaborated in which round** besides the participants themselves. This can be achieved with various techniques such as mix nets [@danezis2003mix] and MPC [@goldreich1998secure]. Finally, the aggregation itself could be done in a differentially private manner as well, where a carefully calculated noise is added in each round. Moreover, **client-level DP** [@geyer2017differentially] would by default **hide the dataset quality** of the participants, although that requires a large volume of noise. On the other hand, **using the shuffle model** [@balle2019privacy; @cheu2019distributed] could solve this problem.

Related Works {#sec:rw}
=============

In this section we list the related works, including but not limited to well known privacy attacks against machine learning and data quality. Concerning freeriding, [@feldman2006free] deals with this problem in peer-to-peer systems and introduces a penalty mechanism, which could be built on top of our scoring rules. A more recent work [@lin2019free] presents a freerider detection mechanism for collaborative learning which works only if no secure aggregation is in place.

Privacy Attacks
---------------

There are several indirect threats concerning ML models.
According to a recent survey [@Mireshghallah2020privacy], these could be categorized into model inversion or attribute inference (e.g., [@fredrikson2015model]), membership inference and reconstruction attacks (e.g., [@shokri2017membership; @zhu2019deep]), (hyper)parameter inference (e.g., [@tramer2016stealing; @wang2018stealing]), and **property inference** (e.g., [@melis2019exploiting]). Our **quality inference could be considered as an instance of** the last. Another property inference attack is the quantity composition attack [@wang_eavesdrop_2019], whose aim is to infer the proportion of training labels among the participants in FL. The authors showed that an attacker participating in the training (with minimal power) could extract valuable information from the training data without requiring access to the individual updates. Consequently, the attack is successful even with secure aggregation protocols or under the protection of DP. Our setting is similar, as we require even less knowledge and computational resources from the attacker while allowing secure aggregation to be in place.

Privacy Defenses
----------------

As we simulate different dataset qualities with the amount of added noise, essentially what we want is to prevent the leakage of the added noise size. Consequently, this **problem also relates to the private privacy parameter selection**, as label perturbation [@papernot2016semi; @papernot2018scalable] (which is used to mimic different dataset quality levels) is one of the 5 known techniques [@Mireshghallah2020privacy] to achieve differential privacy (DP) [@dwork2006calibrating; @desfontaines2019sok]. In previous works the authors set the privacy parameter for DP using economic incentives [@hsu2014differential; @pejo2019together] or offer the selection as a service [@krehbiel2019choosing]. We are not aware of any research (within or outside the DP literature) which considers defining the privacy parameter itself privately as well.
Data Quality
------------

In this work we naively assumed that data quality is in direct relation with the amount of noise added to the data. This served our purpose well; however, there is an entire computer science discipline devoted to data quality. For a comprehensive survey we refer the reader to the book [@batini2016data]. A complementary work is **Data Shapley** [@ghorbani2019data], which **determines the value of datasets** used for FL. Originally the Shapley value [@shapley1953value] was designed to allocate goods to players proportionally to their contributions. The Shapley value is **the only fair payment rule**, i.e., it satisfies the four properties: efficiency (all the gain is distributed among the players), symmetry (players with the same contributions receive the same payment), linearity (additivity of the Shapley value between games) and null player (players contributing nothing receive no payment). The main drawback of this payment distribution is that it is computationally infeasible, as **the number of required computations grows exponentially** with the number of participants. Moreover, besides this computational burden, to calculate the Shapley values one **must have access to all datasets**. Although the first problem could be solved by approximating the Shapley value via sampling [@ghorbani2019data], accessing the datasets remains an issue. Consequently, **our scoring mechanism could be interpreted as an approximation of a solution concept**.

Conclusion {#sec:con}
==========

Federated learning is the most popular collaborative learning framework, wherein in each round only a subset of participants updates a common model. In this paper, we devised three quality scoring rules which could successfully recover the relative ordering of the participants' dataset qualities using the size of the improvement of each training round.
Our method requires neither any computational power (such as shadow models) nor any background information besides a small dataset (or access to an evaluator oracle) in order to be able to evaluate the improvement of the model accuracy after each round. Our results are twofold: first, we conclude that the quality inference accuracy does depend on the complexity of the model which is being trained: **with more complexity comes higher quality inference accuracy** (i.e., for a simple case it is barely better than a random guess, while for more complex ones it is more than twice that). Second, paradoxically to the first, **weighting the participants** based on the inferred quality scores **has a minor effect in the complex case while it improves the** final **accuracy of the simple case** consistently by more than one percent. Such a quality inference within federated learning could have several applications. Besides the already mentioned weighting, it could be used to identify freeriders and cheaters of the supposedly commonly trained model. In this paper we also showed that **catching such cheaters based on the scoring rules is twice as effective as random guessing**.

Future Work {#future-work .unnumbered}
-----------

The paper barely scratched the surface of a potentially fruitful direction, namely quality inference using aggregated updates. Besides the already mentioned two directions (misbehavior detection and participant weighting) there are several others, such as **approximating the Shapley value using the introduced scoring rules**. The privacy implications of this information leakage are also of interest: could such **quality information be considered private?** The scoring rules themselves could also be a subject of further research as they can be improved, replaced, weighted, etc.
Finally, the **theoretical analysis of** the quality inference is an orthogonal direction to this empirical study: **attempting to reconstruct the dataset quality order** is similar to the problem studied in [@dinur2003revealing], which aims to reconstruct the entire dataset based on query outputs.

Acknowledgment {#acknowledgment .unnumbered}
--------------

This work has received support from the EU/EFPIA Innovative Medicines Initiative 2 Joint Undertaking (MELLODDY grant nr 831472).

[^1]: Email: pejo@crysys.hu, Affiliation: Laboratory of Cryptography and System Security (CrySyS Lab), Department of Networked Systems and Services (HIT), Faculty of Electrical Engineering and Informatics (VIK), Budapest University of Technology and Economics (BME)

[^2]: We assume within a round the participants train for an entire epoch, i.e., use all their data in the rounds they are selected.

[^3]: This is a hypothetical scenario; in real life no such claims can be made without longer observations of the round-wise improvements.

[^4]: Label perturbation [@papernot2016semi; @papernot2018scalable] could also be used to achieve differential privacy [@dwork2006calibrating]; hence, in this case data quality could be interpreted as the noise size or the privacy parameter.

[^5]: There can be slight variations due to the random splitting; however, we run our experiments 10-fold, which mitigates this issue sufficiently.

[^6]: For *The Ugly* the threshold was considered to be negative.

[^7]: We set the thresholds separately for each scoring rule; however, the final accuracy was not better than when we set it uniformly to $t=0.15$, so we only present the latter.

[^8]: $t=0.15$ performs the best in case of Value as well.

[^9]: We multiply with $\frac1\kappa>1$ in case of *The Good* as that is a reward, not a punishment.
---
abstract: 'Most existing methods for the biomedical entity recognition task rely on explicit feature engineering, where many features either are specific to a particular task or depend on the output of other existing NLP tools. Neural architectures have shown across various domains that efforts for explicit feature design can be reduced. In this work we propose a unified framework using a bi-directional long short term memory network (BLSTM) for named entity recognition (NER) tasks in the biomedical and clinical domains. Three important characteristics of the framework are as follows: (1) the model learns contextual as well as morphological features using two different BLSTMs in hierarchy, (2) the model uses a first-order linear conditional random field (CRF) in its output layer in cascade with the BLSTMs to infer the label or tag sequence, (3) the model does not use any domain specific features or dictionary, i.e., in other words, the same set of features is used in the three NER tasks, namely, disease name recognition ([*Disease NER*]{}), drug name recognition ([*Drug NER*]{}) and clinical entity recognition ([*Clinical NER*]{}). We compare the performance of the proposed model with existing state-of-the-art models on the standard benchmark datasets of the three tasks. We show empirically that the proposed framework outperforms all existing models. Further, our analysis of the CRF layer and of the word embeddings obtained using character-based embedding shows their importance.'
address: |
    Department of Computer Science and Engineering\
    Indian Institute of Technology Guwahati, India\
    [{sunil.sahu, anand.ashish}@iitg.ernet.in]{}
author:
- 'Sunil Kumar Sahu, Ashish Anand'
bibliography:
- 'acl2016.bib'
title: 'Unified Neural Architecture for Drug, Disease and Clinical Entity Recognition'
---

`Drug Name Recognition, Disease Name Recognition, Clinical Entity Recognition, Recurrent Neural Network, LSTM Network`

Introduction
============

Biomedical and clinical named entity recognition (NER) in text is one of the important steps in several biomedical and clinical information extraction tasks [@Rosario04; @segura2015exploring; @uzuner2010]. State-of-the-art methods formulate the NER task as a sequence labeling problem, where each word is labeled with a tag and entities of interest are identified based on the tag sequence. It has been observed that named entity recognition in the biomedical and clinical domains is difficult [@leaman09; @uzuner10a] compared to the generic domain. There are several reasons behind this, including the use of non-standard abbreviations or acronyms, multiple variations of the same entities, etc. Further, clinical notes are noisier, grammatically error prone, and contain less context due to shorter and incomplete sentences [@uzuner2010]. The most widely used models, such as CRF, the maximum entropy Markov model (MEMM) or the support vector machine (SVM), use manually designed rules to obtain morphological, syntactic, semantic and contextual information of a word or of a piece of text surrounding a word, and use them as features for identifying the correct label [@Lafferty:2001; @MahbubChowdhury10; @jiang2011study; @rocktaschel2013wbi; @bjorne2013uturku]. It has been observed that the performance of such models is limited by the choice of explicitly designed features, which are generally specific to the task and its corresponding domain.
For example, Chowdhury and Lavelli [@MahbubChowdhury10] explained several reasons why features designed for biological entities such as proteins or genes are not equally important for disease name recognition. Deep learning based models have been used to reduce the manual effort of explicit feature design in [@collobert11a]. Here distributional features were used in place of manually designed features, and a multilayer neural network was used in place of a linear model to overcome the need for task specific meticulous feature engineering. Although the proposed method performed well on several generic-domain sequence tagging tasks, it fails to reach the state of the art in the biomedical domain [@LinYao2015]. There are two plausible reasons behind this: first, it learned features only from a word-level embedding, and second, it took into account only a fixed-length context of the word. It has been observed that word-level embeddings preserve syntactic and semantic properties of a word, but may fail to preserve morphological information, which can also play an important role in biomedical entity recognition [@dos2014; @lample2016neural; @MahbubChowdhury10; @LeamanG08]. For instance, the drug names [*Cefaclor, Cefdinir, Cefixime, Cefprozil, Cephalexin*]{} have a common prefix and [*Doxycycline, Minocycline, Tetracycline*]{} have a common suffix. Further, a window-based neural architecture can only consider contexts falling within the user-decided window size and will fail to pick up important clues lying outside the window. This work aims to overcome the above mentioned two issues. To address the first, i.e., to obtain morphologically as well as syntactically and semantically rich embeddings, two BLSTMs are used in hierarchy. The first BLSTM works on the characters of each word to obtain a morphologically rich word embedding. The second BLSTM works at the word level of a sentence to learn contextually rich feature vectors.
To address the second, i.e., to make sure that context lying anywhere in the sentence is utilized, we consider the entire sentence as input and use a first-order linear-chain CRF in the final prediction layer. The CRF layer accommodates dependency information about tags. We evaluate the proposed model on three standard biomedical entity recognition tasks, namely [*Disease NER*]{}, [*Drug NER*]{} and [*Clinical NER*]{}. To the best of our knowledge this is the first work which explores a single model using character-based word embeddings in conjunction with word embeddings for the drug and clinical entity recognition tasks. We compare the proposed model with the existing state-of-the-art models for each task and show that it outperforms them. Further analysis of the model indicates the importance of using character-based word embeddings along with word embeddings and a CRF layer in the final output layer.

Method {#ner_method}
======

Bidirectional Long Short Term Memory {#sec:rnn}
------------------------------------

A recurrent neural network (RNN) is a variant of neural networks which utilizes sequential information and maintains history through its recurrent connections [@Graves:2009; @Graves13]. An RNN can be used for a sequence of any length; however, in practice it fails to maintain long-term dependencies due to the vanishing and exploding gradient problems [@bengio2013; @bengio2013advances]. The long short term memory (LSTM) network [@Hochreiter97] is a variant of RNN which takes care of the issues associated with the vanilla RNN by using three gates (input, output and forget) and a memory cell. We formally describe the basic equations pertaining to the LSTM model.
Let $h^{(t-1)}$ and $c^{(t-1)}$ be the hidden and cell states of the LSTM respectively at time $t-1$; then the computation of the current hidden state at time $t$ can be given as: $$\begin{aligned} &i^{(t)} = \sigma ( U^{(i)} x^{(t)} + W^{(i)} h^{(t-1)} + b^i)\\ &f^{(t)} = \sigma (U^{(f)} x^{(t)} + W^{(f)} h^{(t-1)} + b^f)\\ &o^{(t)} = \sigma (U^{(o)} x^{(t)} + W^{(o)} h^{(t-1)} + b^o)\\ &g^{(t)} = \tanh(U^{(g)} x^{(t)} + W^{(g)} h^{(t-1)} + b^{g}) \\ &c^{(t)} = c^{(t-1)} * f^{(t)} + g^{(t)} * i^{(t)} \\ &h^{(t)} = \tanh(c^{(t)}) * o^{(t)},\end{aligned}$$ where $\sigma$ is the sigmoid activation function, $*$ is an element-wise product, $x^{(t)} \in \mathbb{R}^d$ is the input vector at time $t$, and $U^{(i)}$, $U^{(f)}$, $U^{(o)}$, $U^{(g)} \in \mathbb{R}^{N \times d}$, $W^{(i)}$, $W^{(o)}$, $W^{(f)}$, $W^{(g)} \in \mathbb{R}^{N \times N}$, $b^i$, $b^f$, $b^o$, $b^g \in \mathbb{R}^{N}$, $h^{(0)}$, $c^{(0)} \in \mathbb{R}^N$ are the learnable parameters of the LSTM. Here $d$ is the dimension of the input feature vector, $N$ is the hidden layer size and $h^{(t)}$ is the output of the LSTM at time step $t$. It has become common practice to use LSTMs in both forward and backward directions to capture past and future contexts respectively. The first LSTM computes its hidden states in the forward direction of the input sequence and the second does so in the backward direction. This way of using two LSTMs is referred to as bidirectional LSTM or simply BLSTM. We also use a bidirectional LSTM in our model. The final output of the BLSTM at time $t$ is given as: $$h^{(t)} = \overrightarrow{h^{(t)}} \oplus \overleftarrow{h^{(t)}}$$ where $\oplus$ is the concatenation operation and $\overrightarrow{h^{(t)}}$ and $\overleftarrow{h^{(t)}}$ are the hidden states of the forward and backward LSTMs at time $t$. ![Bidirectional recurrent neural network based model for biomedical entity recognition. Here $w_1 w_2 ... w_m$ is the word sequence of the sentence and $t_1$ $t_2$ ...
$t_m$ is its computed label sequence and $m$ represents the length of the sentence.[]{data-label="fig:ner_model"}](rnn_ner1.png){width="90.00000%"}

Model Architecture {#sec:ner_model}
------------------

Similar to any named entity recognition task, we formulate the biomedical entity recognition task as a token-level sequence tagging problem. We use the BIO tagging scheme in our experiments [@settles2004]. The architecture of the proposed model is presented in Figure \[fig:ner\_model\]. Our model takes a whole sentence as input and computes a label sequence as output. The first layer of the model learns local feature vectors for each word in the sentence. We use the concatenation of a word embedding, a PoS tag embedding and a character-based word embedding as the local features of every word. The character-based word embedding is learned by applying a BLSTM on the character vectors of a word. We call this layer [*Char BLSTM*]{} (\[sec:crnn\]). The subsequent layer, called *Word BLSTM* (\[sec:global\]), incorporates contextual information through a separate BLSTM network. Finally we use a CRF to infer the correct label sequence from the output of [*Word BLSTM*]{} (\[sec:crf\]). From now on, the proposed framework will be referred to as *CWBLSTM*. All network parameters are trained in an end-to-end manner through the cross-entropy loss function. We next describe each part of the model in detail.

Features Layer {#sec:feat}
--------------

A word embedding or distributed word representation is a compact vector representation of a word which preserves its lexico-semantic properties [@Bengio03]. It is a common practice to initialize word embeddings with pre-trained vector representations of words. Apart from word embeddings, in this work PoS tag and character-based word embeddings are used as features. The output of the feature layer is a sequence of vectors, say $x_1, \cdots x_m$, for a sentence of length $m$.
Here $x_i \in \mathbb{R}^d$ is the concatenation of the word embedding, PoS tag embedding and character-based word embedding. We next explain how the character-based word embedding is learned.

### Char BLSTM {#sec:crnn}

Word embeddings are a crucial component of all deep learning based NLP tasks. The capability to preserve lexico-semantic properties in the vector representation of a word makes them a powerful resource for NLP [@collobert11a; @Turian10]. In biomedical and clinical entity recognition tasks, apart from semantic information, morphological structure such as a prefix, a suffix or some standard patterns of words also gives important clues [@MahbubChowdhury10; @leaman09]. The motivation behind using character-based word embeddings is to incorporate morphological information of words into the feature vectors. To learn character-based embeddings, we maintain a vector for every character in an embedding matrix [@dos2014; @lample2016neural]. These vectors are initialized with random values in the beginning. To illustrate, suppose [*cancer*]{} is a word for which we want to learn an embedding (represented in Figure \[fig:charRNN\]); we apply a BLSTM on the vectors of the characters of [*cancer*]{}. As mentioned earlier, the forward LSTM maintains information about the past in the computation of the current hidden state and the backward LSTM captures future contexts; therefore, after reading the entire sequence, the last hidden states of both LSTMs must have knowledge of the whole word with respect to their directions. The final embedding of a word would be: ![Learning character based word embedding[]{data-label="fig:charRNN"}](charRNN.png){width="60.00000%"} $$v_{cw} = \overrightarrow{h^{(m)}} \oplus \overleftarrow{h^{(m)}}$$ where $\overrightarrow{h^{(m)}}$ and $\overleftarrow{h^{(m)}}$ are the last hidden states of the forward and backward LSTMs respectively.

Word BLSTM Layer {#sec:global}
----------------

The output of the feature layer is a sequence of vectors, one for each word of the sentence.
These vectors carry local or individual information about the words. Although local information plays an important role in identifying entities, a word can have different meanings in different contexts. Earlier works [@collobert11a; @LeamanG08; @MahbubChowdhury10; @LinYao2015] use a fixed length window to incorporate contextual information; however, important clues can lie anywhere in the whole sentence, and a fixed window limits the learned vectors' knowledge of the complete sentence. To overcome this, we use a separate BLSTM network which takes the local feature vectors as input and outputs, for every word, a vector based on both contexts and the current feature vector.

CRF Layer {#sec:crf}
---------

The output of the [*Word BLSTM*]{} layer is again a sequence of vectors, which now carry contextual as well as local information. One simple way to decode the feature vector of a word into its corresponding tag is to use word level log likelihood (WLL) [@collobert11a]. Similar to [*MEMMs*]{}, it maps the feature vector of a word to a score vector over the tags by a linear transformation, and every word gets its label based on its own scores, independently of the labels of other words. One limitation of this way of decoding is that it does not take into account the dependencies among tags. For instance, in the [*BIO tagging*]{} scheme a word can be tagged with [*I-Entity*]{} (standing for Intermediate-Entity) only after a [*B-Entity*]{} (standing for Beginning-Entity). We use a CRF [@Lafferty:2001] on the feature vectors to include this dependency information in decoding and to decode the tag sequence of the whole sentence at once. The CRF maintains two parameters for decoding: a linear mapping $W_{u} \in \mathbb{R}^{k\times h}$ and a pairwise transition score matrix $T \in \mathbb{R}^{h\times h}$. Here $k$ is the size of the feature vector, $h$ is the number of labels present in the task and $T_{i,j}$ is the pairwise transition score for moving from label $i$ to label $j$.
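Given the unary scores and the transition matrix just described, the highest-scoring tag sequence can be recovered by Viterbi dynamic programming. A minimal numpy sketch follows; the scores and the three-tag label set are illustrative only:

```python
import numpy as np

# Viterbi decoding for a linear-chain CRF: given per-token unary scores
# z (|s| x h) and a transition matrix T (h x h, with T[i, j] the score
# of moving from tag i to tag j), recover the highest-scoring tag
# sequence by dynamic programming over tag sequences.

def viterbi(z, T):
    n, h = z.shape
    score = z[0].copy()                 # best score of each tag at position 0
    back = np.zeros((n, h), dtype=int)  # back-pointers
    for i in range(1, n):
        cand = score[:, None] + T + z[i][None, :]   # prev tag x next tag
        back[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]        # follow back-pointers from best end tag
    for i in range(n - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return path[::-1]

# Tags: 0 = O, 1 = B-Entity, 2 = I-Entity. Forbid the transition
# O -> I-Entity with a large negative transition score.
T = np.zeros((3, 3)); T[0, 2] = -1e4
z = np.array([[0.1, 2.0, 0.0],    # strong evidence for B-Entity
              [1.0, 0.0, 1.2],    # weak preference for I-Entity
              [2.0, 0.0, 0.0]])   # strong evidence for O
print(viterbi(z, T))  # -> [1, 2, 0]
```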
Let $[v]_{1}^{|s|}$ be the sequence of feature vectors for a sentence $[w]_{1}^{|s|}$ and let $[z]_{1}^{|s|}$ be the unary potential scores obtained after applying the linear transformation to the feature vectors (here $z_i \in \mathbb{R}^h$). The CRF then assigns a tag sequence $[y]_{1}^{|s|}$ the probability $$P( [y]_{1}^{|s|} \mid [w]_{1}^{|s|} ) = \frac{ \exp \Psi ( [z]_{1}^{|s|} , [y]_{1}^{|s|})} { \sum_{t^{\psi} \in Q^{|s|}} \exp \Psi ( [z]_{1}^{|s|} , [t^{\psi}]_{1}^{|s|} ) },$$ where $$\Psi( [z]_{1}^{|s|} , [t]_{1}^{|s|}) = \sum_{1 \le i \le |s|} (T_{t_{i-1},t_{i}} + z_{i,t_i}).$$ Here $Q^{|s|}$ is the set of all possible tag sequences of length $|s|$ and $t_j$ is the tag of the $j^{th}$ word. The highest scoring tag sequence is found using the Viterbi algorithm [@rabiner1989; @collobert11a].

Training and Implementation
---------------------------

We use the cross entropy loss function to train the model. The Adam optimization technique [@adam2014] is used to update the entire set of neural network and embedding parameters of our model. We use a mini batch size of $50$ in training for all tasks. The entire implementation is done in Python using the [*TensorFlow*]{}[^1] package. In all our experiments, we use pre-trained word embeddings of length $100$, trained on the PubMed corpus using GloVe [@pennington14; @muneeb15], PoS tag embeddings of length $10$ and character based word embeddings of length $20$. We use $l_2$ regularization with $0.001$ as the corresponding parameter value. These hyperparameters were obtained on the validation set of the [*Disease NER*]{} task; the corresponding training, validation and test sets for the *Disease NER* task are available as separate files with the NCBI disease corpus. For the other two tasks, we use the same set of hyperparameters as obtained on *Disease NER*.

The Benchmark Tasks {#sec:dataset}
===================

In this section, we briefly describe the three standard tasks on which we examine the CWBLSTM model.
Statistics of the corresponding benchmark datasets are given in Table \[tab:ner\_stats\].

Disease NER
-----------

Identifying disease named entities in text is crucial for disease related knowledge extraction [@bundschus2008; @agarwal2008]. Furthermore, it has been observed that diseases are among the entities most widely searched for by users of PubMed [@Dogan12]. We use the [*NCBI disease corpus*]{}[^2] to investigate the performance of the model on the [*Disease NER*]{} task. This dataset was annotated by a team of $12$ annotators (2 persons per annotation) on 793 PubMed abstracts [@Dogan12; @Dogan14].

Drug NER
--------

Identifying drug names or pharmacological substances is an important first step for drug-drug interaction extraction and for other drug related knowledge extraction tasks. With this in mind, a challenge for the recognition and classification of pharmacological substances in text was organized as part of SemEval 2013. We use the SemEval-2013 task 9.1 [@segura2013] dataset for this task. The dataset shared in this challenge was annotated from two sources: [*DrugBank*]{}[^3] documents and [*MedLine*]{}[^4] abstracts. The dataset has four kinds of drug entities, namely [*drug*]{}, [*brand*]{}, [*group*]{} and [*drug\_n*]{}. Here [*drug*]{} represents a generic drug name, [*brand*]{} is the brand name of a drug, [*group*]{} is the family name of a class of drugs and [*drug\_n*]{} is an active substance not approved for human use [@segura2011chal]. While processing the dataset, $79$ entities ($56$ [*drug*]{}, $18$ [*group*]{} and $5$ [*brand*]{}) from the training set and $5$ entities ($4$ [*drug*]{} and $1$ [*group*]{}) from the test set were missed. Missed entities of the test set are treated as false negatives in our evaluation scheme.

Clinical NER
------------

For clinical entity recognition we use the publicly available (under license) i2b2/VA[^5] challenge dataset [@uzuner2010; @uzuner10a].
This dataset is a collection of discharge summaries obtained from Partners Healthcare, Beth Israel Deaconess Medical Center, and the University of Pittsburgh Medical Center. The dataset was annotated for three kinds of entities, namely [*problem*]{}, [*treatment*]{} and [*test*]{}. Here *problems* indicate phrases that contain observations made by patients or clinicians about the patient’s body or mind that are thought to be abnormal or caused by a disease. *Treatments* are phrases that describe procedures, interventions, and substances given to a patient in an effort to resolve a medical problem. *Tests* are procedures, panels, and measures that are done to a patient or a body fluid or sample in order to discover, rule out, or find more information about a medical problem. The downloaded dataset for this task was only partially available (only the discharge summaries from Partners Healthcare and Beth Israel Deaconess Medical Center) compared to the full dataset originally used in the challenge. We performed our experiments on the currently available partial dataset. The dataset is available in pre-processed form, where sentence and word segmentation were already done. We removed the patient information from each discharge summary before training and testing, because it never contains entities of interest.

Results and Discussion
======================

Experiment Design
-----------------

We perform separate experiments for each task. We use the [*train set*]{} for learning the optimal parameters of the model for each dataset, and evaluation is performed on the [*test set*]{}. The performance of each trained model is evaluated in the strict matching sense, where both the exact boundaries and the class must be correctly identified for a prediction to count as a true positive. For this strict matching evaluation scheme, we use the CoNLL 2004[^6] evaluation script to calculate precision, recall and F1 score in each task.
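Strict matching can be implemented by comparing predicted and gold (span, class) triples as sets; the evaluation script computes the same quantities (per entity type as well as overall). A small sketch with illustrative entities:

```python
# Strict-match precision/recall/F1: a predicted entity counts as a true
# positive only if both its exact span and its class match a gold
# entity. Entities are (start, end, type) tuples; the data below is
# illustrative only.

def strict_prf(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [(2, 4, "Disease"), (5, 6, "Drug"), (8, 9, "Drug")]
pred = [(2, 4, "Disease"), (5, 7, "Drug")]   # second span boundary is wrong
p, r, f = strict_prf(gold, pred)
print(round(p, 3), round(r, 3), round(f, 3))  # -> 0.5 0.333 0.4
```

Note that the boundary error on the second prediction costs both a false positive and a false negative, which is why strict matching is a demanding evaluation scheme.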
Baseline Methods
----------------

We use the following methods as common baselines for comparison with the proposed model in all of the considered tasks. The selected baseline methods were implemented by us:

[**SENNA:**]{} SENNA uses a window based neural network on the embeddings of a word and its context to learn global features [@collobert11a]. For inference it also uses a CRF on the output of the window based neural network. We set the window size to $5$ based on hyperparameter tuning on a validation set ($20\%$ of the training set); all other hyperparameters are set as in our model.

[**CharWNN:**]{} This model [@dos2014] is similar to SENNA but uses word as well as character based embeddings in the chosen context window [@dos2015]. Here character based embeddings are learned through a convolution neural network with a max pooling scheme.

[**CharCNN:**]{} This method [@sunil16b] is similar to the proposed model *CWBLSTM*, but instead of a BLSTM it uses a convolution neural network for learning character based embeddings.

Comparison with Baseline
------------------------

Table \[tab:comp\_base\] presents a comparison of *CWBLSTM* with the different baseline methods on the disease, drug and clinical entity recognition tasks. We can observe that it outperforms all three baselines on each of the three tasks. In particular, when comparing with *CharCNN*, the differences are significant for the *Drug NER* and *Disease NER* tasks but insignificant for *Clinical NER*. The proposed model improved recall by $5\%$ to gain about $2.5\%$ relative improvement in F1 score over the second best method, *CharCNN*, on the *Disease NER* task. For the *Drug NER* task, a relative improvement of more than $3\%$ over the *CharCNN* model is observed for all three measures: precision, recall and F1 score.
The relatively weaker performance on the *Clinical NER* task could be attributed to the use of many non-standard acronyms and abbreviations, which makes it difficult for character based embedding models to learn appropriate representations. One can also observe that, even though [*Drug NER*]{} has a sufficiently large training dataset, all models gave relatively poor performance compared to their performance on the other two tasks. One reason for the poor performance could be the nature of the dataset. As discussed, the [*Drug NER*]{} dataset comprises texts from two sources, [*DrugBank*]{} and [*MedLine*]{}. Sentences from *DrugBank* are shorter and comprehensive, as they are written by medical practitioners, whereas *MedLine* sentences come from research articles, which generally tend to be longer. Further, the *training set* contains $5675$ sentences from [*DrugBank*]{} and $1301$ from [*MedLine*]{}, whereas in the *test set* this distribution is reversed, i.e., more sentences are from *MedLine* ($520$, in comparison to $145$ sentences from *DrugBank*). The smaller number of training instances from *MedLine* sentences does not give the model sufficient examples to learn from.

Comparison with Other Methods
-----------------------------

In this section we compare our results with other existing methods in the literature. We do not compare results on *Clinical NER*, as the complete dataset (as was available in the i2b2 challenge) is not available and the results in the literature are with respect to the complete dataset.

### Disease NER {#disease-ner-1 .unnumbered}

Table \[tab:sta\_disease\] shows a performance comparison of different existing methods with *CWBLSTM* on the NCBI disease corpus. *CWBLSTM* improves on the performance of BANNER by $1.89\%$ in terms of F1 score. BANNER is a CRF based method which primarily uses orthographic, morphological and shallow syntactic features [@LeamanG08]. Many of these features are specially designed for biomedical entity recognition tasks.
The proposed model also performs better than another BLSTM based model [@sunil16b], improving recall by around $12\%$. The BLSTM model of [@sunil16b] uses a BLSTM network with word embeddings only, whereas the proposed model makes use of extra features in terms of PoS tag as well as character based word embeddings.

### Drug NER {#drug-ner-1 .unnumbered}

Table \[tab:sta\_drug\] reports a performance comparison on the [*Drug NER*]{} task with the results submitted to the SemEval-2013 drug named entity recognition challenge [@segura2013]. *CWBLSTM* outperforms the best result obtained in the challenge (WBI-NER [@rocktaschel2013wbi]) by a margin of $1.8\%$. [*WBI-NER*]{} is an extension of the ChemSpot chemical NER system [@chemspot2012], a hybrid method for chemical entity recognition. ChemSpot primarily uses dictionary features to build a sequence classifier using a CRF. Apart from that, WBI-NER also uses features obtained from different domain dependent ontologies. The performance of the proposed model is also better than that of the LASIGE [@grego2013lasige] and UTurku [@bjorne2013uturku] systems by a significant margin. LASIGE is likewise a CRF based method, and UTurku uses the Turku Event Extraction System (TEES), a kernel based model for entity and relation extraction tasks.

Feature Ablation Study
----------------------

We analyze the importance of each feature type by performing feature ablation. The corresponding results are presented in Table \[tab:effect\_feature\]. In this table the first row presents the performance of the proposed model using all feature types on all three tasks, and the second, third and fourth rows show the performance when character based word embeddings, PoS tag embeddings and pre-trained word embeddings are successively removed from the model. Removal of the pre-trained word embeddings implies the use of random vectors in place of pre-trained vectors.
From the table we can observe that removing the character based word embeddings leads to relative decrements in F1 score of $3.6\%$, $5.8\%$ and $1.1\%$ on the [*Disease NER*]{}, [*Drug NER*]{} and [*Clinical NER*]{} tasks respectively. This demonstrates the importance of character based embeddings. As mentioned earlier, character based word embeddings help our model in two ways: first, they give a morphologically rich vector representation, and second, they provide vector representations for OoV (out of vocabulary) words as well. OoV words constitute $9.9\%$, $13.85\%$ and $20.13\%$ of the [*Drug NER*]{}, [*Disease NER*]{} and [*Clinical NER*]{} datasets respectively (shown in Table \[tab:oov\]). As discussed earlier, the decrement is smaller for [*Clinical NER*]{} because of the high frequency of acronyms and abbreviations, which does not allow the model to take advantage of character based word embeddings. From the third row we can also observe that using PoS tag embeddings as a feature is not crucial in any of the three tasks. This is because distributed word embeddings implicitly preserve that kind of information. In contrast to PoS tag embeddings, we observe that the pre-trained word embeddings are one of the most important feature types in our model for each task. Pre-trained word embeddings help the model obtain better representations for words that are rare in the training dataset.

Effects of CRF and BLSTM
------------------------

We also analyze the unified framework to gain insight into the effect of using different loss functions in the output layer (CRF vs. WLL) as well as the effect of using a bi-directional or uni-directional (forward) LSTM. For this analysis, we modify our framework and name the model variants as follows: the bi-directional LSTM with a WLL output layer is called *BLSTM+WLL*, and the uni-directional or regular LSTM with a WLL layer is called *LSTM+WLL*. In other words, the [*BLSTM+WLL*]{} model uses all the features of the proposed framework but with WLL in place of the CRF.
Similarly, [*LSTM+WLL*]{} also uses all features, but with a forward LSTM instead of the bidirectional LSTM and WLL in place of the CRF. The results are presented in Table \[tab:effect\_model\]. A relative decrement of $7.5\%$, $3.4\%$ and $5.5\%$ in F1 score on [*Disease NER*]{}, [*Drug NER*]{} and [*Clinical NER*]{} respectively, obtained by [*BLSTM+WLL*]{} compared to the proposed model, demonstrates the importance of the CRF layer. This suggests that identifying tags independently is not favorable and that it is better to utilize the implicit tag dependency. Further, the average token length of an entity in the three tasks suggests a plausible reason for the differences in performance across the tasks. The average token lengths are 1.2 for drug entities, 2.1 for clinical and 2.2 for disease named entities; the longer the average length of the entities, the better the performance of the model utilizing tag dependency. Similarly, relative improvements of $12.89\%$, $4.86\%$ and $20.83\%$ in F1 score on the [*Disease NER*]{}, [*Drug NER*]{} and [*Clinical NER*]{} tasks respectively are observed when comparing with [*LSTM+WLL*]{}. This clearly indicates that the use of a bi-directional LSTM is always advantageous.

Analysis of Learned Word Embeddings
-----------------------------------

Next we analyze characteristics of the learned word embeddings after training the proposed model. As mentioned earlier, we learn two different representations of each word, one through its characters and the other through its distributional contexts. Our expectation is that the word embedding obtained through character embeddings will focus on morphological aspects, whereas the distributional word embedding will focus on semantic and syntactic contexts. We obtain the character based word embedding for each word of the *Drug NER* dataset after training. We pick $5$ words from the test set vocabulary and observe their $5$ nearest neighbors in the training set vocabulary.
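A nearest-neighbor lookup of this kind is typically done by cosine similarity in the embedding space. A small sketch follows; the vocabulary and 3-dimensional vectors are toy values, not learned embeddings:

```python
import numpy as np

# Nearest-neighbor lookup in an embedding space by cosine similarity,
# as used for the qualitative embedding analysis. Vocabulary and
# vectors below are illustrative toy values only.

def nearest(word, vocab, vecs, k=2):
    v = vecs[vocab.index(word)]
    sims = vecs @ v / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(v))
    order = [i for i in np.argsort(-sims) if vocab[i] != word]
    return [vocab[i] for i in order[:k]]

vocab = ["cancer", "tumor", "aspirin", "ibuprofen"]
vecs = np.array([[1.0, 0.1, 0.0],    # disease-like direction
                 [0.9, 0.2, 0.0],
                 [0.0, 0.1, 1.0],    # drug-like direction
                 [0.1, 0.0, 0.9]])
print(nearest("cancer", vocab, vecs))  # -> ['tumor', 'ibuprofen']
```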
The nearest neighbors are selected using both word embeddings, and the results are shown in Table \[tab:emb\_neg\]. We can observe that the character based word embedding primarily focuses on morphologically similar words, whereas the distributional word embedding preserves semantic properties. This clearly suggests that it is important to exploit the complementary nature of the two embeddings.

Conclusion
==========

In this work we present a unified model for drug, disease and clinical entity recognition tasks. Our model, called CWBLSTM, uses BLSTMs in a hierarchy to learn better feature representations and a CRF to infer the correct labels for all words of a sentence at once. To the best of our knowledge, this is the first work using character based embeddings for drug and clinical entity recognition tasks. CWBLSTM outperforms task specific as well as task independent baselines in all three tasks. Through various analyses we demonstrated the importance of each feature type used by CWBLSTM. Our analyses suggest that pre-trained word embeddings and character based word embeddings play complementary roles and, along with the incorporation of tag dependency, are important ingredients for improving the performance of NER tasks in the biomedical and clinical domains.

References {#references .unnumbered}
==========

[^1]: https://www.tensorflow.org

[^2]: https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/

[^3]: https://www.drugbank.ca/

[^4]: https://www.nlm.nih.gov/bsd/pmresources.html

[^5]: https://www.i2b2.org/NLP/Relations/Main.php

[^6]: http://www.cnts.ua.ac.be/conll2002/ner/bin/conlleval.txt
---
abstract: 'We study a simple analytic solution to Einstein’s field equations describing a thin spherical shell consisting of collisionless particles in circular orbit. We then apply two independent criteria for the identification of circular orbits, which have recently been used in the numerical construction of binary black hole solutions, and find that both yield equivalent results. Our calculation illustrates these two criteria in a particularly transparent framework and provides further evidence that the deviations found in those numerical binary black hole solutions are not caused by the different criteria for circular orbits.'
author:
- 'Monica L. Skoge$^{1}$'
- 'Thomas W. Baumgarte$^{1,2}$'
title: Comparing Criteria for Circular Orbits in General Relativity
---

Binary black holes are among the most promising sources of gravitational radiation for the new generation of gravitational wave detectors LIGO, VIRGO, GEO and TAMA. Motivated by the need for theoretical models for the identification and interpretation of future gravitational wave signals, several researchers have solved the constraint equations of Einstein’s field equations to construct initial data describing binary black holes in quasi-circular orbit [@c94; @b00; @ptc00; @ggb02; @b02; @tbcd02]. Constructing such initial data requires making several choices, including the decomposition of the initial value problem and the background geometry and topology. Moreover, solving the constraint equations provides the gravitational fields for black holes with arbitrary separation and momenta, and an additional criterion has to be applied to identify circular orbits. It is not surprising that different choices lead to physically different data.
While all of these different data may be correct solutions to the constraint equations of general relativity, some may be more relevant astrophysically than others, in that they better represent a binary black hole system as it arises from inspiral from large separation. The results of Cook [@c94] and Baumgarte [@b00] (which we will jointly refer to as CB) and Grandclément, Gourgoulhon and Bonazzola [@ggb02] (hereafter GGB) differ by about a factor of two in the orbital frequency for the innermost stable circular orbit. This discrepancy raises two questions, namely which results are more relevant astrophysically and which choices in the respective approaches are responsible for the deviations. The better agreement of the GGB results with post-Newtonian results [@dgg02] suggests that these represent binary black holes in circular orbits more accurately [@footnote1]. There is also increasing evidence that the differences between CB and GGB are related to the different decompositions of the constraint equations [@pct02; @py02]. CB adopt the conformal transverse-traceless decomposition, which allows for an analytic solution of the momentum constraint [@by80], while GGB adopt the conformal thin-sandwich decomposition [@wm95; @y99; @c00] (see also [@py02]). It has been demonstrated that the two decompositions may lead to physically different data [@pct02], and it has also been suggested that the thin-sandwich decomposition together with maximal slicing may provide a more natural framework for constructing quasi-equilibrium solutions [@c00]. In this Brief Report we explore the effect of another difference in the approaches of CB and GGB, namely the criterion for locating circular orbits. CB adopt a turning-point method, in which circular orbits are identified with extrema of the binding energy (see eq. (\[tp\_crit\]) below), while GGB identify circular orbits by equating the ADM [@adm62] and Komar masses [@k59] (eq. (\[m\_crit\])).
Since the two mass definitions agree only for stationary spacetimes, this criterion is closely related to imposing a relativistic virial theorem [@bg94]. To explore the effect of these different criteria for circular orbits in a particularly simple and transparent framework, we apply them to an analytic solution of Einstein’s equations describing a thin, spherical shell of identical collisionless particles. At every point on the shell the particles move isotropically, but all with the same speed in the plane perpendicular to the radius. In an oscillating shell, each particle moves about the center in a bound orbit. In the Newtonian limit, each orbit is a closed ellipse, and for static shells each orbit is circular (compare [@ybs01]). Since each particle follows a geodesic, circular orbits can be identified without ambiguity. These orbits can then be compared with those obtained from the turning-point and mass methods. In the following we will focus on a moment of time symmetry, when at least momentarily each particle is in a purely tangential orbit $u^r = 0$ (where $u^a$ is the four-velocity). The spherically symmetric line element can then be written as $$\label{metric} ds^2 = -\alpha^2 dt^2 + \psi^4 (dr^2 + r^2 (d \theta^2 + \sin^2 \theta d\phi^2)),$$ where $\alpha$ is the lapse function and $\psi$ the conformal factor. The rest mass $M_0$ of the shell can be computed from $$M_0 = \int \rho_0 u^t \sqrt{-g} d^3x = 4 \pi \int \rho_0 W \psi^6 r^2 dr,$$ where $g$ is the determinant of the spacetime metric and where we have defined the particles’ Lorentz factor $W \equiv \alpha u^t$.
Since the shell’s co-moving density $\rho_0$, which is a sum of the individual particle densities $\rho_0^A$, vanishes everywhere except at the radius $R$ of the shell, we find $$\rho_0 = \sum_A \rho_0^A = \frac{M_0}{4 \pi R^2 W \psi^6} \, \delta(r - R).$$ The conformal factor $\psi$ in (\[metric\]) can now be found from the Hamiltonian constraint $$\label{ham1} \nabla^2 \psi = -2 \pi \psi^5 \rho_N,$$ and, following GGB, the lapse $\alpha$ from the maximal slicing condition $$\label{maxslicing} \nabla^2 (\alpha \psi) = 2 \pi \alpha \psi^5 (\rho_N + 2S).$$ Here $\rho_N$ is the density measured by a normal observer $n^a$ $$\rho_N = n^a n^b T_{ab} = n^a n^b \sum_A \rho_0^A u_a^A u_b^A = \rho_0 W^2,$$ (compare [@djs00; @st]), and $S$ is the trace of the spatial stress $$S = \gamma^{ij} T_{ij} = \rho_0 \gamma^{ij} u_i u_j = \rho_0 (W^2 - 1),$$ where we have used the normalization condition $$\label{norm} 1 = W^2 - \gamma^{ij} u_i u_j.$$ For time symmetry both the momentum density $j^a = - \gamma^{ab} n^c T_{bc}$ and the extrinsic curvature vanish, so that a zero shift $\beta^i = 0$ identically satisfies the shift equation obtained in the conformal thin-sandwich decomposition. The Hamiltonian constraint (\[ham1\]) and the maximal slicing condition (\[maxslicing\]) can readily be solved analytically by matching two vacuum solutions at the shell’s radius $R$. Choosing the vacuum solutions such that the interior solution is regular at the center, while the exterior solution is regular at infinity, we find for the conformal factor $$\label{cf1} \psi = \left\{ \begin{array}{ll} \displaystyle 1 + \frac{W}{2\psi|_{\bar R} \bar R} & \mbox{~~~for~~} 0 \leq \bar r < \bar R \\[3mm] \displaystyle 1 + \frac{W}{2\psi|_{\bar R} \bar r} & \mbox{~~~for~~} \bar r \geq \bar R. \end{array} \right.$$ Here and in the following we non-dimensionalize all quantities with respect to $M_0$, e.g. $\bar r \equiv r/M_0$. 
The value of $\psi|_{\bar R}$ can be found by evaluating the conformal factor at $\bar r = \bar R$, which yields a quadratic equation with the solution $$\label{psir} \psi|_{\bar R} = \frac{1}{2} + \sqrt{\frac{1}{4} + \frac{W}{2 \bar R}}.$$ The sign has been chosen so that $\psi$ approaches the gravitational potential $\phi_{\rm Newt}$ in the Newtonian limit. In terms of the ADM mass [@adm62; @my74] $$\label{m_adm} \bar M_{\rm ADM} = - \frac{1}{2\pi M_0 } \oint_\infty D^i \psi d^2S_i = \frac{W}{\psi|_{\bar R}},$$ the exterior conformal factor (\[cf1\]) can be written $$\label{cf2} \psi = 1 + \frac{\bar M_{\rm ADM}}{2 \bar r} \mbox{~~~for~~}\bar r \geq \bar R.$$ The maximal slicing condition (\[maxslicing\]) can be solved analogously to the Hamiltonian constraint, yielding $$\alpha \psi = \left\{ \begin{array}{ll} \displaystyle 1 - \frac{\alpha|_{\bar R} (3W^2-2)}{2W\psi|_{\bar R}\bar R} & \mbox{~~~for~~} 0 \leq \bar r < \bar R \\[3mm] \displaystyle 1 - \frac{\alpha|_{\bar R} (3W^2-2)}{2W\psi|_{\bar R}\bar r} & \mbox{~~~for~~} \bar r \geq \bar R. \end{array} \right.$$ Dividing by $\psi$, we find in the exterior $$\alpha = \frac{-\alpha|_{\bar R} (3W^2-2)+2W\psi|_{\bar R}\bar r}{W^2+2W\psi|_{\bar R}\bar r} \mbox{~~~for~~}\bar r \geq \bar R.$$ Evaluating this expression at $\bar r = \bar R$ determines the coefficient $\alpha|_{\bar R}$ $$\alpha|_{\bar R} = \left( 1+\frac{2W^2-1}{\psi|_{\bar R} \bar R W} \right)^{-1}.$$ Following GGB we now compute the Komar mass [@k59] $$\label{m_komar} \bar M_{\rm K} = \displaystyle\frac{1}{4\pi M_0} \oint_\infty D^i \alpha \, d^2 S_i = \frac{\alpha|_{\bar R} (3W^2-2) + W^2}{2W \psi|_{\bar R}.}$$ We note that the Komar mass is a slicing dependent quantity, and that this particular form results from having imposed maximal slicing.
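As a quick consistency check, the root (\[psir\]) can be verified against the self-consistency relation $\psi|_{\bar R} = 1 + W/(2\psi|_{\bar R}\bar R)$ obtained by evaluating (\[cf1\]) at $\bar r = \bar R$. A short numerical sketch, with illustrative values of $W$ and $\bar R$:

```python
import math

# Numerical check of eq. (psir): psi|_R = 1/2 + sqrt(1/4 + W/(2 Rbar))
# solves the self-consistency relation psi|_R = 1 + W / (2 psi|_R Rbar)
# obtained by evaluating the conformal factor at r = R.
# The (W, Rbar) pairs below are illustrative values only.

def psi_R(W, Rbar):
    return 0.5 + math.sqrt(0.25 + W / (2.0 * Rbar))

for W, Rbar in [(1.0, 10.0), (1.3, 4.0), (1.607, 1.532)]:
    p = psi_R(W, Rbar)
    assert abs(p - (1.0 + W / (2.0 * p * Rbar))) < 1e-12
print("psi|_R satisfies its defining relation")
```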
In terms of the Komar and ADM masses, the exterior lapse $\alpha$ can be written $$\alpha = \frac{2 \bar r - (2\bar M_{\rm K}-\bar M_{\rm ADM})} {2\bar r + \bar M_{\rm ADM}} \mbox{~~~for~~}\bar r \geq \bar R.$$ This expression reduces to the lapse as identified from the Schwarzschild metric in isotropic coordinates (see, e.g., exercise 31.7 in [@mtw73]) only if the two masses agree, $\bar M_{\rm K} = \bar M_{\rm ADM}$ (compare criterion (\[m\_crit\]) below). So far, the shell’s radius $\bar R$ and Lorentz factor $W$ appear independently in the above equations. It is intuitively clear that searching for circular orbits will yield a relation between the particles’ angular velocity and the gravitational field, and hence between $\bar R$ and $W$. Since our model consists of collisionless particles, circular orbits can be determined directly by solving the geodesic equations. Since all particles are identical, it is sufficient to evaluate the equation of motion for one particle, which we take to orbit in the equatorial plane. We therefore have $u^{\theta} = u^r = 0$, so that the normalization condition (\[norm\]) yields a relation between $u^{\phi}$ and $W$ $$\label{uphi} (u^{\phi})^2 = \frac{W^2-1}{\psi^4|_R R^2},$$ where we temporarily drop the bar notation. We now evaluate the geodesic equation, $$\frac{d u^a}{d\lambda} + \Gamma^a_{bc} u^b u^c = 0,$$ for $a = r$ to find a condition for the particles to remain in a purely tangential orbit ($d u^r/d \lambda = 0$) $$\label{gam} \Gamma^r_{tt}(u^t)^2 + \Gamma^r_{\phi \phi}(u^\phi)^2 = 0.$$ Combining this with (\[uphi\]) and $W = \alpha u^t$ gives $$\label{w2} W^2 = \left(1+ \frac{\psi^4|_R R^2 \Gamma^r_{tt}} {\alpha^2 \Gamma^r_{\phi \phi}}\right)^{-1}.$$ When evaluating the Christoffel symbols, we must take into account the discontinuity in the first derivative of the metric coefficients at $r = R$. 
By averaging such a quantity over an extended shell and letting the thickness of the shell go to zero, we find that the derivative has to be replaced with $$\psi_{,r} \rightarrow \frac{1}{2}(\psi_{,r}|_+ + \psi_{,r}|_-) = \frac{1}{2} \psi_{,r}|_+.$$ Using this rule for both $\psi$ and $\alpha$ we find $$\Gamma^r_{\phi \phi} = \displaystyle \frac{M_{\rm ADM}}{2}\left(1+\displaystyle\frac{M_{\rm ADM}}{2R}\right)^{-1} - R$$ and $$\Gamma^r_{tt} = \displaystyle \frac{M_{\rm K}}{2R^2}\left(1 + \frac{M_{\rm ADM}}{2R}\right)^{-6} \left(1 - \frac{M_{\rm K}}{R + M_{\rm ADM}/2} \right)$$ at $r=R$. Inserting these into eq. (\[w2\]) yields $$\label{geo} W^2 = \left(1-\frac{M_{\rm K}}{2R -2M_{\rm K} + M_{\rm ADM}}\right)^{-1}.$$ After some algebraic manipulation and dividing out the unphysical root $W=0$, eq. (\[geo\]) can be expanded into $$\label{5p} 4 W^5 - 6\bar R W^4 - 4 W^3 + 10 \bar R W^2 + W - 4 \bar R = 0,$$ where we have reintroduced the bar notation. This is the condition relating $W$ and $\bar R$ for circular orbits. It is easy to show that this equation reduces to $\bar \Omega^2 = \bar R^{-3}/2$ in the Newtonian limit (with $\bar R \gg 1$, $v \ll 1$ and $W \simeq 1 + v^2/2 = 1 + \bar R^2 \bar \Omega^2/2$). For black holes, alternative criteria have to be used to identify circular orbits. In the following we will compare the turning-point method adopted by CB and the mass criterion adopted by GGB. In the turning-point method, a circular orbit is identified by finding an extremum of the ADM mass (or equivalently the binding energy) at constant angular momentum $\bar u_{\phi}$ $$\label{tp_crit} \left. \frac{d \bar M_{\rm ADM}}{d \bar R} \right|_{\bar u_{\phi}} = 0.$$ In a Newtonian context, this condition arises naturally from Hamilton’s equations of motion.
We start by differentiating the normalization condition, $(\bar u_\phi)^2 = \psi^4|_{\bar R} M_0^2 \bar R^2 (W^2-1)$, with respect to $\bar R$ to find $$\label{dw} \frac{dW}{d \bar R} = \frac{-(W^2-1)(1+b)} {\bar RW(1+b)+4W^2-2}$$ for sequences of constant angular momentum, where for convenience we have abbreviated $b = (1 + 2 W/\bar R)^{1/2}$. We now locate an extremum of the ADM mass (\[m\_adm\]) by setting its derivative with respect to $\bar R$ equal to zero $$\label{dw2} \frac{dW}{d\bar R}\left(\frac{W}{\bar R b(1+b)}-1\right) = \frac{W^2}{\bar R^2 b(1+b)}.$$ Combining (\[dw\]) and (\[dw2\]) then yields the condition $$\label{tp} W^2 = \frac{-\bar R(W^2-1)(1+b)(W-\bar R b(1+b))}{\bar RW(1+b)+4W^2-2}.$$ Inserting $b$ and eliminating the unphysical root $W = -\bar R/2$, eq. (\[tp\]) can be expanded identically into eq. (\[5p\]). In the mass method of GGB, the condition for circular orbits is obtained by equating the ADM and Komar mass (as obtained from maximal slicing) $$\label{m_crit} \bar M_{\rm ADM} = \bar M_{\rm K}.$$ Inserting (\[m\_adm\]) and (\[m\_komar\]) yields, after some manipulation and elimination of the unphysical root $W=0$, again the condition (\[5p\]). Thus we have established that both criteria yield the correct condition for circular orbits in our model problem. Since (\[m\_crit\]) only holds for stationary spacetimes, this criterion is closely related to a relativistic virial theorem. This relation is also evident from the expansions of the ADM and Komar masses to first order in $\epsilon \sim 1/\bar R \sim v^2$, $$\label{m_adm_newt} \bar M_{\rm ADM} \simeq 1 - \frac{1}{2 \bar R} + \frac{1}{2} v^2 = 1 + \bar U + \bar T$$ and $$\label{m_komar_newt} \bar M_{\rm K} \simeq 1 - \frac{1}{\bar R} + \frac{3}{2} v^2 = 1 + 2 \bar U + 3 \bar T,$$ where $U$ and $T$ are the Newtonian potential and kinetic energies of the spherical shell. 
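The equivalence can also be confirmed numerically: solving the quintic (\[5p\]) for $W$ at a given $\bar R$ and inserting the result into (\[m\_adm\]) and (\[m\_komar\]) reproduces $\bar M_{\rm ADM} = \bar M_{\rm K}$ to machine precision. A short sketch, with an illustrative value of $\bar R$ and all quantities in units of the rest mass $M_0$:

```python
# Numerical check that the circular-orbit condition, the quintic in W,
# implies equality of the ADM mass W / psi|_R and the Komar mass
# (alpha|_R (3 W^2 - 2) + W^2) / (2 W psi|_R). The shell radius
# Rbar = 10 is an illustrative choice.

def quintic(W, R):
    return 4*W**5 - 6*R*W**4 - 4*W**3 + 10*R*W**2 + W - 4*R

def solve_W(R, lo=1.0 + 1e-9, hi=1.6):
    for _ in range(200):                    # bisection on the quintic
        mid = 0.5 * (lo + hi)
        if quintic(lo, R) * quintic(mid, R) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

R = 10.0
W = solve_W(R)
psi = 0.5 + (0.25 + W / (2*R))**0.5          # conformal factor at the shell
M_adm = W / psi                              # ADM mass
alpha = 1.0 / (1.0 + (2*W**2 - 1) / (psi * R * W))   # lapse at the shell
M_k = (alpha * (3*W**2 - 2) + W**2) / (2 * W * psi)  # Komar mass
assert abs(M_adm - M_k) < 1e-8
print("M_ADM = M_K at the circular-orbit root")
```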
The two expansions (\[m\_adm\_newt\]) and (\[m\_komar\_newt\]) are equal only if the Newtonian virial theorem $T = - U/2$ holds. For completeness, we evaluate the relativistic virial theorem in spherical symmetry as derived by [@bg94] $$\int \left(4\pi S - \frac{1}{\psi^4}\left( \big(\frac{d\ln \alpha}{dr} \big)^2 -\frac{1}{2} \big(\frac{d\ln \psi^2}{dr} \big)^2\right)\right)\psi^6 r^2 dr = 0.$$ Computing the above integral in terms of the Komar and ADM masses yields $$\frac{(W^2-1)}{W} - \frac{2\bar M_{\rm K}^2}{\bar M_{\rm ADM}-2\bar M_{\rm K}+2\bar R} + \frac{\bar M_{\rm ADM}^2}{2 \bar R} = 0,$$ which can again be brought into the form (\[5p\]). We now briefly discuss the physical implications of the condition (\[5p\]). Solving for $\bar R$ we find $$\bar R = \frac{4 W^5 - 4 W^3 + W}{6 W^4 - 10 W^2 + 4}.$$ To find a minimum value for the radius of our shell, we extremize the above equation with respect to $W$, which yields $$(2W^2 - 1)(6W^6 - 21W^4 + 15W^2 - 2) = 0.$$ The only physical root (i.e. $W$ real and $W \geq 1$) is $W = 1.607$, corresponding to $\bar R_{\rm min} = 1.532$. Expressing this in terms of $M_{\rm ADM}$ and circumferential radius $R_{\rm C}$ we find $$\left( \frac{R_{\rm C}}{M_{\rm ADM}} \right)_{\rm min} = 2.506 \mbox{~~~~(equilibrium)}.$$ This value should be compared with the Buchdahl limit $(R_{\rm C}/M_{\rm ADM})_{\rm min} = 9/4 = 2.25$ [@b59] for static fluid balls and $(R_{\rm C}/M_{\rm ADM})_{\rm min} = 3$ [@mtw73] for test particles in circular orbit in Schwarzschild spacetimes. Requiring the particles’ orbits to be stable leads to a more stringent limit on the compaction, which we find by requiring the second derivative of $ \bar M_{\rm ADM}$ with respect to $\bar R$ to vanish in addition to (\[tp\_crit\]). 
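The limiting values quoted above are easy to reproduce numerically. The short sketch below (our notation) finds the roots of the sextic factor, written as a cubic in $x = W^2$, and evaluates $\bar R(W)$.

```python
import numpy as np

# Roots of 6 W^6 - 21 W^4 + 15 W^2 - 2 = 0, as a cubic in x = W^2.
x = np.roots([6.0, -21.0, 15.0, -2.0])
W = np.sqrt(max(x.real))          # only root with W real and W >= 1

# Radius of the innermost circular orbit, from solving eq. (5p) for Rbar.
Rbar = (4*W**5 - 4*W**3 + W) / (6*W**4 - 10*W**2 + 4)
print(W, Rbar)    # ~1.607, ~1.532
```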
This yields an equation for $W$ with the physical root $W=1.108$ corresponding to $\bar R_{\rm min} = 3.053$, or $$\left( \frac{R_{\rm C}}{M_{\rm ADM}} \right)_{\rm min} = 4.265 \mbox{~~~~(stability)},$$ which should be compared with the innermost stable circular orbit $(R_{\rm C}/M_{\rm ADM})_{\rm min} = 6$ of test particles in Schwarzschild spacetimes. To summarize, we construct an analytic solution to Einstein’s field equations describing a thin spherical shell consisting of collisionless particles in circular orbits. We apply the turning-point criterion (\[tp_crit\]) used by CB and the mass criterion (\[m_crit\]) used by GGB and find that both conditions correctly identify circular orbits. The latter criterion is intimately related to adopting maximal slicing, which is a natural choice for constructing quasi-equilibrium spacetimes (compare [@c00]). Our calculation illustrates these two criteria in the context of a very transparent, analytical framework and provides further evidence that the differences between the findings of CB and GGB result from the different initial value decompositions. MLS gratefully acknowledges support through the Surdna Foundation Undergraduate Research Fellowship Program. We would also like to thank R. H. Price and K. S. Thorne for useful conversations, as well as the Visitors Program in Numerical Relativity at Caltech, where this project was initiated, for extending their hospitality. This work was supported in part by NSF Grant PHY 01-39907 to Bowdoin College. [99]{} G. B. Cook, Phys. Rev. D [**50**]{}, 5025 (1994). T. W. Baumgarte, Phys. Rev. D [**62**]{}, 024018 (2000). H. Pfeiffer, S. A. Teukolsky and G. B. Cook, Phys. Rev. D [**62**]{}, 104018 (2000). E. Gourgoulhon, P. Grandclément and S. Bonazzola, Phys. Rev. D [**65**]{}, 044020 (2002); P. Grandclément, E. Gourgoulhon and S. Bonazzola, Phys. Rev. D [**65**]{}, 044021 (2002). B. D. Baker, submitted (also gr-qc/0205082). W. Tichy, B. Brügmann, M. Campanelli and P.
Diener, submitted (also gr-qc/0207011). L. Blanchet, Phys. Rev. D [**65**]{}, 124009 (2002); T. Damour, E. Gourgoulhon and P. Grandclément, Phys. Rev. D [**66**]{}, 024007 (2002). In principle it is possible that the choices and approximations made in GGB and the post-Newtonian calculations (e.g. [@djs00; @bd99]) lead to similar errors, so that their agreement might be misleading. Fully self-consistent numerical relativity evolution calculations would therefore provide a more reliable indication as to which data-sets indeed lead to circular orbits. T. Damour, P. Jaranowski and G. Schäfer, Phys. Rev. D [**62**]{}, 084011 (2000). A. Buonanno and T. Damour, Phys. Rev. D [**59**]{}, 084006 (1999). H. Pfeiffer, G. B. Cook and S. A. Teukolsky, Phys. Rev. D [**66**]{}, 024047 (2002). H. Pfeiffer and J. W. York, Jr., submitted (also gr-qc/0207095). J. M. Bowen and J. W. York, Jr., Phys. Rev. D [**21**]{}, 2047 (1980). J. Wilson and G. Mathews, Phys. Rev. Lett. [**75**]{}, 4161 (1995). J. W. York, Jr., Phys. Rev. Lett. [**82**]{}, 1350 (1999). G. B. Cook, Living Rev. Rel. [**5**]{}, 1 (2000). R. Arnowitt, S. Deser and C. W. Misner, in [*Gravitation*]{}, edited by L. Witten (New York: Wiley, 1962). A. Komar, Phys. Rev. [**113**]{}, 934 (1959). E. Gourgoulhon and S. Bonazzola, Class. Quantum Grav. [**11**]{}, 443 (1994). H.-J. Yo, T. W. Baumgarte and S. L. Shapiro, Phys. Rev. D [**63**]{}, 064035 (2001). S. L. Shapiro and S. A. Teukolsky, Astrophys. J. [**298**]{}, 34 (1985); Phys. Rev. D [**47**]{}, 1529 (1993). N. Ó Murchadha and J. W. York, Jr., Phys. Rev. D [**10**]{}, 2345 (1974). C. W. Misner, K. S. Thorne and J. A. Wheeler, [*Gravitation*]{} (New York: W. H. Freeman and Company, 1973). H. A. Buchdahl, Phys. Rev. [**116**]{}, 1027 (1959).
--- abstract: | Supermassive primordial stars are expected to form in a small fraction of massive protogalaxies in the early universe, and are generally conceived of as the progenitors of the seeds of supermassive black holes (BHs). Supermassive stars with masses of $\sim55,000\,$M$_{\odot}$, however, have been found to explode and completely disrupt in a supernova (SN) with an energy of up to $\sim10^{55}\,$erg instead of collapsing to a BH. Such events, $\sim10,000$ times more energetic than typical SNe today, would be among the biggest explosions in the history of the universe. Here we present a simulation of such a SN in two stages. Using the [RAGE]{} radiation hydrodynamics code we first evolve the explosion from an early stage through the breakout of the shock from the surface of the star until the blast wave has propagated out to several parsecs from the explosion site, which lies deep within an atomic cooling dark matter (DM) halo at $z\simeq15$. Then, using the [GADGET]{} cosmological hydrodynamics code we evolve the explosion out to several kiloparsecs from the explosion site, far into the low-density intergalactic medium. The host DM halo, with a total mass of $4\times 10^7\,$M$_{\odot}$, much more massive than typical primordial star-forming halos, is completely evacuated of high density gas after $\la 10\,$Myr, although dense metal-enriched gas recollapses into the halo, where it will likely form second-generation stars with metallicities of $\simeq 0.05\,$Z$_{\odot}$ after $\ga70\,$Myr. The chemical signature of supermassive star explosions may be found in such long-lived second-generation stars today. author: - | Jarrett L. Johnson, Daniel J. Whalen, Wesley Even, Chris L. 
Fryer,\ Alex Heger, Joseph Smidt and Ke-Jung Chen title: The Biggest Explosions in the Universe --- Introduction ============ Recently, there has been renewed interest in the long-standing theoretical possibility that supermassive stars (SMSs), with masses of $10^4$–$10^6\,$M$_{\odot}$, inhabited the early universe (see e.g., Volonteri 2012), and in their possible fates (e.g., Iben 1963; Fowler & Hoyle 1964; Appenzeller & Fricke 1972; Shapiro & Teukolsky 1979; Bond et al. 1984; Fuller et al. 1986). One of the main motivations for their study comes from observations of quasars at $z$ $\simeq 6$–$7$ which are inferred to be powered by black holes (BHs) with masses exceeding $10^9\,$M$_{\odot}$ (e.g., Willott et al. 2003; Fan et al. 2006; Mortlock et al. 2011). Given the short time ($< 800\,$Myr) available for such massive BHs to grow via accretion from their initial ‘seed’ masses, as derived from the most recent cosmological parameters inferred by e.g., the [*Wilkinson Microwave Anisotropy Probe*]{} (Komatsu et al. 2011),[^1] and the suppression of BH growth due to the strong radiative feedback from both stars (e.g., Whalen et al. 2004; Wise & Abel 2007; O’Shea & Norman 2008) and the BHs themselves (e.g., Pelupessy et al. 2007; Alvarez et al. 2009; Milosavljevi[' c]{} et al. 2009; Jeon et al. 2012; Park & Ricotti 2012), it now appears more likely than ever that the seeds of the most massive early BHs must have been quite massive themselves (e.g., $\ga 10^5\,$M$_{\odot}$; see Johnson et al. 2012a; also e.g., Shapiro 2005; Volonteri & Rees 2006; Natarajan & Volonteri 2012). Whereas the majority of the first, Population (Pop) III stars may have had masses of $\sim 20$–$500\,$M$_{\odot}$ (e.g., Abel et al. 2002; Bromm & Larson 2004; Yoshida et al. 2008; Greif et al. 2011), the best candidates for the seeds of SMBHs are thus much more massive (and rare) supermassive primordial stars.
An additional, and independent, reason to consider SMSs in the early universe is that the conditions required for their formation are now thought to be realized much more often than was previously assumed. The most widely discussed avenue for the formation of SMSs is via the direct gravitational collapse of hot ($\simeq10^4\,$K) primordial gas in so-called atomic cooling dark matter (DM) halos at $z\ga 10$ (e.g., Bromm & Loeb 2003; Begelman et al. 2006; Lodato & Natarajan 2006; Spaans & Silk 2006; Regan & Haehnelt 2009; Choi et al. 2013; Latif et al. 2013a,b).[^2] In this scenario, the gas in the protogalaxy remains at the virial temperature of $\sim$ $10^4\,$K because H$_{\rm 2}$ molecules have been photodissociated by the Lyman-Werner (LW) background, leading to the rapid formation of SMSs via the accretion of gas at rates $\sim 10^2$–$10^3$ times higher than in the formation of most Pop III stars from H$_{\rm 2}$-cooled gas. The flux of radiation required to keep the gas H$_{\rm 2}$-free depends on its spectrum, with lower fluxes required if it is produced by metal-enriched stars instead of Pop III stars (e.g., Shang et al. 2010). Recent work by independent groups has shown that Pop II star-forming galaxies in the early universe are able to produce sufficient H$_{\rm 2}$-dissociating radiation to prevent the cooling of primordial gas in a substantial fraction of atomic cooling halos, thereby leading to the seeding of these halos with SMSs that can collapse into BHs (see Dijkstra et al. 2008; Agarwal et al. 2012; Petri et al. 2012; Johnson et al. 2013). Indeed, Agarwal et al. (2012, 2013) find that a large fraction of the SMBHs in the centers of galaxies today may have been seeded by SMSs. Strengthening these conclusions are other recent results which suggest that lower LW fluxes may be required for the formation of SMSs, due to a reduced role of H$_{\rm 2}$ self-shielding (Wolcott-Green et al. 
2011) and the presence of significant turbulence or magnetic fields (Van Borm & Spaans 2013). Complementary studies have been undertaken to understand the growth and evolution of SMSs, as well. Modeling the growth of accreting protostars with masses up to $\simeq 10^3\,$M$_{\odot}$, Hosokawa et al. (2012) have shown that they emit little high energy radiation that could halt their continued accretion, and Inayoshi et al. (2013) have shown that pulsational instabilities are likewise unable to halt their growth. Johnson et al. (2012b) modeled the growth of SMSs to much higher masses and showed that, even if they are able to emit the copious ionizing radiation characteristic of main sequence Pop III stars, which may occur once the accretion rate becomes sufficiently low (Schleicher et al. 2013), radiative feedback is not able to stop their growth up to at least $\sim 10^5\,$M$_{\odot}$. At the highest accretion rates expected for these objects ($\ga 1\,$M$_{\odot}$ yr$^{-1}$; Wise et al. 2008; Shang et al. 2010; Johnson et al. 2011), the masses of primordial SMSs are only limited by the $\la 4\, $Myr that they have to accrete gas before they collapse to BHs (Begelman 2010). Whereas the majority of SMSs are expected to collapse to black holes with little or no associated explosion (e.g., Fryer & Heger 2011; see also Fuller & Shi 1998; Linke et al. 2001), it is possible that some fraction instead explode as extremely energetic supernovae (SNe; e.g., Fuller et al. 1986; Montero et al. 2012; Whalen et al. 2012a, 2013d). In particular, Heger et al. (2013) have found from stellar evolution calculations including post-Newtonian corrections to gravity that SMSs with masses in a narrow range around $\simeq 55,000\,$M$_{\odot}$ end their lives as extraordinarily luminous SNe.[^3] With energies of almost $10^{55}\,$erg, these thermonuclear explosions are among the most energetic in the history of the universe. 
Here we expand on the radiation hydrodynamics simulations presented by Whalen et al. (2012a) and simulate the long-term evolution of a SMS SN in its cosmological environment, in order to show how these gargantuan explosions impact both the formation of the first galaxies and the chemical signature of the first stars. In the next section, we describe the multi-scale simulations that we have carried out to model the evolution of the explosion from its breakout from the surface of the star to the propagation of the blast wave into the intergalactic medium (IGM). In Section 3, we present our results on the energetics and dynamics of the explosion, as well as on metal enrichment and second-generation star formation. In Section 4, we conclude with a brief discussion of our results. Simulation Setup ================ Here we describe the two simulations that we have carried out. The first is a 1-D radiation hydrodynamics calculation using the Los Alamos National Laboratory RAGE code (Gittings et al. 2008) which allows us to track the propagation of the SN blast wave out to several parsecs from the explosion site, deep within the host atomic cooling halo. For the second, we map the results of the first into a 3-D cosmological simulation using the GADGET hydrodynamics code (Springel et al. 2001; Springel & Hernquist 2002). Our use of these two simulation codes for the phases of the SN in which each is best suited to accurately model the explosion, from small (AU) scales to large (kpc) scales, constitutes a significant improvement over previous cosmological simulations of SN feedback. Stellar and Early Supernova Evolution ------------------------------------- For the SMS progenitor of the SN we adopt the $55,000\,$M$_{\odot}$ stellar model described in Whalen et al. (2012a), which was evolved until the onset of explosion, using the [*Kepler*]{} code (Weaver et al. 1978; Woosley et al. 2002).
The explosion was then followed using [*Kepler*]{} and was confirmed (Chen et al. 2013) using the CASTRO code (Almgren et al. 2010). The explosion completely disrupts the star and yields an explosion energy of 7.7 $\times$ 10$^{54}$ erg (Heger et al. 2013). The RAGE code is then used to simulate the SN from the breakout of the shock from the surface of the star until the blast wave has propagated through a circumstellar medium with a density $\ge10^2\,$cm$^{-3}$ (and a density profile $\propto$ $r^{-2}$) out to several parsecs from the explosion site. Up to this point, the stellar evolution calculation and the simulation of the early phases of the SN are the same as described in Whalen et al. (2012a), to which we refer the reader for more detailed discussion of the calculations (see also Frey et al. 2013). The blue curves in Figure 1 show the velocity (left panel) and density (right panel) profiles of the SN ejecta at the end of the RAGE simulation, at which point the shock has propagated out to $6\,$pc from the explosion site. The $55,000\,$M$_{\odot}$ of ejecta, $23,000\,$M$_{\odot}$ of which is heavy elements produced during the evolution and explosion of the progenitor, are traveling at almost $10,000\,$km$\,$s$^{-1}$ outward from the center of the host atomic cooling halo. It is from this point that we map these velocity and density profiles into a self-consistently evolved atomic cooling halo in a much larger cosmological volume. Cosmological Blast Wave ----------------------- To simulate the subsequent evolution of the SN blast wave in the appropriate cosmological environment, we map the velocity and density profiles obtained from the smaller-scale RAGE simulation into the center of a $4 \times 10^7\,$M$_{\odot}$ atomic cooling DM halo, the type of which is expected to host the formation of SMSs in the early universe.
The halo is identified in a $1\,$Mpc$^3$ (comoving) cosmological volume which has been evolved from $z = 100$ down to $z \simeq 15$ under the influence of a uniform, elevated H$_{\rm 2}$-dissociating (Lyman-Werner; LW) radiation field, which prevents the gas from cooling and is assumed to lead to the formation of a single SMS. Further details of the cosmological simulation up to this point are described in Johnson et al. (2011), who considered the impact of the alternative end state of such a SMS, a rapidly accreting BH. To map the output of the (Eulerian) RAGE simulation into the (Lagrangian) smoothed particle hydrodynamics (SPH) GADGET simulation, we assigned the central $460$ SPH particles, constituting the $55,000\,$M$_{\odot}$ of gas within $\simeq 6\,$pc of the densest particle in the halo, outward (radial) velocities and hydrogen number densities so as to match those from the RAGE simulation. The fits that we obtain are shown in Fig. 1, with the inner $55,000\,$M$_{\odot}$ in SPH particles constituting the ejecta denoted by orange triangles and the unperturbed particles residing in the outskirts of the halo denoted by black circles. Whereas we fit the velocity profile very well, due to the mapping from an Eulerian to a Lagrangian code the density profile is somewhat noisier.[^4] Nonetheless, the basic features of the density profile containing the vast majority of the mass are represented, and the overall energy and momentum are also well-matched. From the initial conditions shown in Fig. 1 we restart the cosmological SPH simulation, the results of which we present in the next section. Beyond mapping into it the blast profile of the SMS SN, we have chosen to leave the gas in the host halo otherwise unchanged.
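The particle assignment described above can be illustrated schematically. In the Python sketch below, all profile functions and numbers are placeholders of ours (not the actual RAGE output); the point is simply how 1-D radial profiles translate into per-particle radial velocities and densities.

```python
import numpy as np

def map_profiles_to_particles(pos, center, v_of_r, n_of_r):
    """Assign purely radial (outward) velocities and densities to particles
    from 1-D profiles v_of_r(r) and n_of_r(r)."""
    d = pos - center
    r = np.linalg.norm(d, axis=1)
    rhat = d / np.maximum(r, 1e-30)[:, None]   # unit radial vectors; guard r = 0
    vel = v_of_r(r)[:, None] * rhat            # velocity field is purely radial
    dens = n_of_r(r)
    return vel, dens

# Toy stand-ins for the profiles at the end of the 1-D run (illustrative only).
v_of_r = lambda r: 1.0e4 * np.exp(-r / 6.0)   # km/s, falling off over ~6 pc
n_of_r = lambda r: 1.0e2 * (r / 6.0)**-2      # cm^-3, the r^-2 circumstellar profile
np.random.seed(0)
pos = np.random.uniform(-6.0, 6.0, size=(460, 3))   # 460 particles, as in the text
vel, dens = map_profiles_to_particles(pos, np.zeros(3), v_of_r, n_of_r)
```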
We have made this simplifying choice, in light of the large uncertainties in the radiative output of rapidly accreting SMSs, due to which it is unclear how the radiation emitted during the brief ($\sim 2\,$Myr) lifetime of the star will impact the medium within the host halo. Even if the star emits copious ionizing radiation, as main sequence Pop III stars are expected to do, it may be that the H [ii]{} region created by the star is confined to the innermost regions of the halo (Johnson et al. 2012; see also Hosokawa et al. 2012 on the possibility of even less energetic radiation being emitted during the protostellar phase). It is possible, on the other hand, that ionizing radiation is able to escape out into the halo, if the accretion flow is highly anisotropic (e.g., due to the presence of an accretion disk; e.g., McKee & Tan 2008), or if accretion is intermittent (e.g., Clark et al. 2011; Smith et al. 2011; Vorobyov et al. 2013), in which case radiation could break out during periods of reduced accretion. Whereas the small-scale structure of the interstellar gas in the simulation we present here is subject to the limited resolution of the cosmological simulation into which we place the expanding blast wave, we note that at sub-resolution scales ($\la1\,$pc) it is possible that a substantial amount of energy in the explosion is radiated away (e.g., Kitayama & Yoshida 2005; Whalen et al. 2008; de Souza et al. 2011; Vasiliev et al. 2012). We shall address how such additional radiative losses would affect the dynamics of the blast wave and the metal enrichment of the host halo and IGM in future work. Results ======= Here we present the results of our cosmological hydrodynamics simulation, with particular attention paid to the dynamics of the expanding blast wave and to the enrichment of the IGM by the metal-rich SN ejecta. 
Dynamics and Energetics ----------------------- The injection of almost $10^{55}$ erg at a single explosion site has a dramatic impact on the host halo. Figure 2 shows the properties of the gas in the vicinity of the halo, as a function of the distance from the explosion site at its center. As the left-most panels show, the blast results in the complete evacuation of high-density ($n$ $\ga10\,$cm$^{-3}$) gas within $10\,$Myr, with the material overtaken by the shock being carried out beyond the virial radius of the halo (at $r\simeq10^3\,$pc) at up to $\simeq 10^3\,$km$\,$s$^{-1}$. This material is also shock heated to temperatures up to $\sim 10^8\,$K, resulting in its almost complete ionization, as shown in the right panels. The gas, however, rapidly cools due to inverse Compton scattering of CMB photons, H and He atomic line emission, and bremsstrahlung, as also found (using the same GADGET code) in the less-energetic ($10^{52}\,$erg) pair instability supernova (PSN) explosion in a $2.5\times10^5\,$M$_\odot$ DM halo simulated by Greif et al. (2007), as well as in the 1-D calculation of a very energetic ($10^{54}\,$erg) Pop III SN in a slightly less massive ($10^7\,$M$_{\odot}$) DM halo presented in Kitayama & Yoshida (2005). As shown in Figure 3, most ($\simeq 90\,\%$) of the $7.7 \times 10^{54}\,$erg initially in the blast is radiated away via these processes within $10^4\,$yr. Nevertheless, the momentum of the blast is conserved and the shock continues to propagate into the IGM, sweeping up the majority of the mass after $\simeq 1\, $Myr. By $50\, $Myr, at which time $\simeq 99\,\%$ of the energy has been radiated away, the shock has propagated out to $\simeq 5$–$10\,$kpc and has swept up almost $10^7\,$M$_{\odot}$. Figure 4 shows the properties of the gas in the vicinity of the explosion site within a $400\,$pc (comoving) slice of the cosmological volume, at $1\, $Myr, $10\, $Myr and $50\, $Myr after the explosion of the SMS.
Comparing the radial velocity field (second column from the left) to the cosmological density field (far left column), it is clear that the blast wave propagates most rapidly into the low-density voids while its progress is halted in the direction of the high-density filaments, at the intersection of which lies the host halo. Figs. 2 and 4 also show the same general trend that, at late times, the most strongly shock-heated and highest-velocity material is located behind the shock front several kpc from the explosion site. The cooler material within the ($\simeq 1\, $kpc) virial radius of the host halo is able to begin recollapsing after $50\, $Myr. As we discuss next, this gas is likely to form second-generation stars that are enriched to fairly high metallicities. Metal Enrichment and Second-Generation Star Formation ----------------------------------------------------- As is also expected for less energetic PSNe from Pop III stars with masses of $\sim 200\,$M$_{\odot}$ (e.g., Heger et al. 2003), a large fraction of the ejecta from our SMS SN consists of heavy elements. In particular, $\simeq 23,000 \,$M$_{\odot}$ of newly-synthesized metals are ejected in the explosion. These heavy elements are mixed with the primordial gas, enriching it to relatively high metallicity. Figure 5 shows the average metallicity to which the gas in the vicinity of the explosion site is enriched by the ejecta, after $1\, $Myr, $10\, $Myr and $70\, $Myr. As the blast wave propagates outward into the low-density IGM, gas at greater and greater radii becomes enriched. As metals are carried out of the host halo, metallicities at the smallest radii fall. After $70\, $Myr, however, the average metallicity of the gas out to almost $10\, $kpc (physical) is enriched to the order of $10^{-2}\,$Z$_{\odot}$, and the densest gas which is recollapsing into the host halo is enriched to $\simeq 0.05\, $Z$_{\odot}$.
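These metallicities are consistent with a simple dilution estimate. The back-of-the-envelope sketch below is our own, and assumes a solar metal mass fraction of $Z_{\odot} \simeq 0.02$; it gives the gas mass over which the $\simeq 23,000\,$M$_{\odot}$ of metals must mix to reach a given average metallicity.

```python
# Dilution estimate: M_gas = M_metals / (Z/Zsun * Zsun).
# Zsun = 0.02 by mass is our assumed value, not a number from the simulation.
M_METALS = 2.3e4     # Msun of heavy elements in the ejecta
ZSUN     = 0.02      # assumed solar metal mass fraction

def gas_mass_for(Z_over_Zsun):
    return M_METALS / (Z_over_Zsun * ZSUN)

print(gas_mass_for(0.05))   # ~2.3e7 Msun of gas at 0.05 Zsun (recollapsing gas)
print(gas_mass_for(0.01))   # ~1.2e8 Msun of gas at 0.01 Zsun (enriched-region average)
```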
As shown in Figure 6, by $70\, $Myr the gas at the center of the SN remnant, shown in Figure 7, has recollapsed to densities of $n\sim10^2\,$cm$^{-3}$, significantly more dense than the highest-density ($n$ $\simeq$10$\,$cm$^{-3}$) gas that remained within the virial radius of the host halo $\sim 10\, $Myr after the SN. Not only is the gas recollapsing after $70\, $Myr, but, as shown in the right panel of Fig. 6, a portion of the SN ejecta is entrained in this gas. If this ejecta is well-mixed with the dense primordial gas, then we expect its metallicity to be $\simeq 0.05\, $Z$_{\odot}$, as indicated in Fig. 5. Gas enriched to such a high metallicity is predicted to readily fragment into low-mass stars (e.g., Bromm et al. 2001; Santoro & Shull 2006; Schneider et al. 2006), even in the presence of the elevated LW background radiation field expected in regions of the universe where SMSs form (e.g., Omukai et al. 2008). Therefore, we expect that second-generation stars would form from this SMS SN-enriched gas, and that a large fraction of these would have masses low enough ($\la 0.8 \,$M$_{\odot}$) that they could still be present in the Galaxy today. While the nucleosynthetic signature of very massive Pop III pair-instability SNe (i.e., $140$–$260\, $M$_{\odot}$) has yet to be uncovered in extremely metal-poor stars (Cayrel et al. 2004; Beers & Christlieb 2005; Frebel et al. 2005; Lai et al. 2008; Joggerst et al. 2010; Joggerst & Whalen 2011), it may have been found in high-redshift damped Lyman alpha absorbers (Cooke et al. 2011), and a number of metal-poor stars in a recent extension to the SEGUE survey have now been selected for spectroscopic followup on suspicion that they too may harbor this pattern (Ren et al. 2012). Furthermore, most stars forming in the ashes of very massive primordial SNe were likely enriched to metallicities above those targeted by surveys of metal-poor stars to date (Karlsson et al.
2008), and our simulations predict that second-generation stars formed from gas enriched by SMS SNe will have metallicities above $10^{-2}\,$Z$_{\odot}$, which are also above this threshold. We conclude that, while SMS SNe are almost certainly rare events, their chemical signature might well be found in some of the ancient but not very metal-poor stars inhabiting our Galaxy today. Discussion and Conclusions ========================== We have carried out a multi-scale simulation of the explosion of a $55,000\, $M$_{\odot}$ SMS, which, with an energy approaching $10^{55}\,$erg, is among the biggest explosions in the history of the universe. With our multi-code approach, we have captured self-consistently, and for the first time, the essential radiation hydrodynamic features of the explosion at early times and the interaction of the blast wave with its host protogalaxy and with the IGM at later times. Although the atomic cooling halos expected to host the formation of SMSs are at least two orders of magnitude more massive than the DM halos in which the first primordial stars are expected to form, we have found that these SMS SNe are energetic enough to completely evacuate them of dense gas. The metal-enriched ejecta is dispersed well beyond the $\sim 1\, $kpc virial radius of the host halo, out to $\sim 5$–$10\, $kpc into the low-density IGM, after $\sim 50\, $Myr. By this time, $\sim 99\,\%$ of the kinetic energy in the explosion has been radiated away; nonetheless, the expansion of the shock into the IGM continues even at these late times, as the remaining kinetic energy is still comparable to that expected for a $\sim 200 \,$M$_{\odot}$ Pop III PSN. Because of the deep potential well of the halo and ongoing accretion from filaments, after $70\, $Myr a fraction of the metal-enriched gas in the SN remnant, shown in Fig. 7, has recollapsed to high densities. 
Given the relatively high metallicity of this gas ($\sim 0.05 \,$Z$_{\odot}$),[^5] it is most likely that it will fragment vigorously and form a cluster of second-generation stars. Enriched to this metallicity, any of these stars which survive to the present-day would exhibit metallicities much higher than the most metal-poor stars that have been found in surveys of the Galactic halo. Indeed, although SMS SN are likely rare events, it is possible that their chemical signature could be found in very old, relatively high-metallicity Pop II stars which inhabit the Milky Way today (or in present-day dwarf galaxies; Frebel & Bromm 2012). As we expect SMS SNe to produce very little $^{56}$Ni (Heger et al. 2013), their chemical signature may be distinguishable from those of most very massive (i.e., $140$–$260 \,$M$_{\odot}$) Pop III explosions (PSNe), which can produce iron-group elements (Heger & Woosley 2002). Similar to these PSNe, however, SMS SNe would not make any *s*-process or *r*-process contributions. Finding the chemical signature of SMS SNe in low-mass, long-lived stars (or in the IGM; see e.g., Cooke et al. 2011) would be one way of verifying that such exotic events occurred in the early universe. Another possibility is that these gargantuan explosions could be found in all-sky surveys such as those planned for the [*Wide-field Infrared Survey Telescope*]{} (WFIRST) and the [*Wide-field Imaging Surveyor for High-redshift*]{} (WISH), as shown recently by Whalen et al. (2012a).[^6] Given that SMSs are expected to form in regions subjected to a large flux of LW radiation from nearby (within $\sim 10\, $kpc; Dijkstra et al. 2008; Agarwal et al. 2012) star-forming galaxies, detection of SMS SNe would pinpoint locations on the sky where rapidly-forming first galaxies could be found in follow-up observations by the [*James Webb Space Telescope*]{}. Indeed, these explosions are so large that the SN remnants they leave behind, as shown in Fig. 
7, are likely to be many times larger than, and in fact are likely to envelop, any such neighboring star-forming galaxies. Could later stages of SMS SNe be detected by other means? They might appear in the radio at $21\, $cm. Meiksin & Whalen (2013) recently found that synchrotron emission from hypernovae in relatively dense media will be visible at $21\, $cm to existing radio facilities such as eVLA and eMERLIN in addition to the [*Square Kilometer Array*]{} (SKA). With their much higher energies and similar circumstellar densities, SMS SNe may be much brighter in the radio and detectable in all-sky surveys in spite of their small numbers. We are now calculating the radio signatures of SMS SNe in $z\sim 15$ protogalaxies. Likewise, Oh et al. (2003) and Whalen et al. (2008) examined the potential imprint of Pop III SNe on the cosmic microwave background (CMB) via the Sunyaev-Zel’dovich (SZ) effect. They found that a population of $140$–$260\, $M$_{\odot}$ PSNe might impose excess power on the CMB on small scales, but with two caveats. First, the explosions must occur in large H [ii]{} regions because the SN shock must still be hot by the time it encloses a large volume of CMB photons. Explosions in dense media dissipate too much heat to have an appreciable SZ signature or upscatter many CMB photons. Second, although a population of Pop III SNe might collectively impose features on the CMB, individual remnants achieve radii just below the current resolution of the [*Atacama Cosmology Telescope*]{} or the [*South Pole Telescope*]{}. Our models explode in dense environments and at redshifts at which inverse Compton cooling losses are lower than for the first SNe, but they eventually reach radii that would allow them to be resolved by current instruments. We are currently evaluating the SZ signatures of SMS SNe. Acknowledgements {#acknowledgements .unnumbered} ================ This work was supported by the U.S.
Department of Energy through the LANL/LDRD Program, and JLJ acknowledges the support of an LDRD Director’s Postdoctoral Fellowship at Los Alamos National Laboratory. The RAGE and GADGET simulations were carried out on the LANL Institutional Computing clusters Pinto and Mustang, respectively. DJW acknowledges support from the Baden-Württemberg-Stiftung by contract research via the programme Internationale Spitzenforschung II (grant P-LS-SPII/18). AH and KC were supported by the US DOE Program for Scientific Discovery through Advanced Computing (SciDAC; DE-FC02-09ER41618), by the US Department of Energy under grant DE-FG02-87ER40328, and by the Joint Institute for Nuclear Astrophysics (JINA; NSF grant PHY08-22648 and PHY110-2511). AH acknowledges support by an ARC Future Fellowship (FT120100363) and a Monash University Larkins Fellowship. KC was supported by a KITP/UCSB Graduate Fellowship and by a UMN Stanwood Johnston Fellowship. The authors thank Avery Meiksin for helpful discussion. Work at LANL was done under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. [199]{} Abel, T., Bryan, G. L., Norman, M. L. 2002, Sci, 295, 93 Almgren, A. S., et al. 2010, ApJ, 715, 1221 Agarwal, B., Khochfar, S., Johnson, J. L., Neistein, E., Dalla Vecchia, C., Livio, M. 2012, MNRAS, 425, 2854 Agarwal, B., Davis, A. J., Khochfar, S., Natarajan, P., Dunlop, J. S. 2013, MNRAS, submitted (arXiv:1302.6996) Alvarez, M. A., Wise, J. H., Abel, T. 2009, ApJ, 701, L133 Appenzeller, I., Fricke, K. 1972, A&A, 21, 285 Begelman M. C. 2010, MNRAS, 402, 673 Begelman M. C., Volonteri M., Rees M. J. 2006, MNRAS, 370, 289 Beers, T. C., Christlieb, N. 2005, ARA&A, 43, 531 Bond, J. R., Arnett, W. D., Carr, B. J. 1984, ApJ, 280, 825 Bromm, V., Ferrara, A., Coppi, P. S., Larson, R. B. 2001, MNRAS, 328, 969 Bromm, V., Loeb, A. 2003, ApJ, 596, 34 Bromm V., Larson R. B.
2004, ARA&A, 42, 79 Cayrel, R., et al. 2004, A&A, 416, 1117 Choi, J.-H., Shlosman, I., Begelman, M. C. 2013, ApJ, submitted (arXiv:1304.1369) Clark, P. C., Glover, S. C. O., Smith, R. J., Greif, T. H., Klessen, R. S., Bromm, V. 2011, Sci, 331, 1040 Cooke, R., Pettini, M., Steidel, C. C., Rudie, G. C., Jorgenson, R. A. 2011, MNRAS, 412, 1047 de Souza, R. S., Rodrigues, L. F. S., Ishida, E. E. O., Opher, R. 2011, MNRAS, 415, 2969 de Souza, R. S., Ishida, E. E. O., Johnson, J. L., Whalen, D. J., Mesinger, A. 2013, MNRAS, submitted (arXiv:1306.4984) Dijkstra, M., Haiman, Z., Mesinger, A., Wyithe, J. S. B. 2008, MNRAS, 391, 1961 Fan, X., et al. 2006, AJ, 131, 1203 Fowler, W. A., Hoyle, F. 1964, ApJS, 9, 201 Frebel, A., et al. 2005, Nat, 434, 871 Frebel, A., Bromm, V. 2012, ApJ, 759, 115 Frey, L., Even, W., Whalen, D. J., et al. 2013, ApJS, 204, 16 Fryer, C. L., Heger, A. 2011, AN, 332, 408 Fuller, G. M., Woosley, S. E., Weaver, T. A. 1986, ApJ, 307, 675 Fuller, G. M., Shi, X. 1998, ApJ, 502, L5 Gittings, M., et al. 2008, CS&D, 1, 5005 Greif T. H., Johnson J. L., Bromm V., Klessen R. S. 2007, ApJ, 670, 1 Greif T. H., Glover, S. C. O., Bromm, V., Klessen, R. S. 2010, ApJ, 716, 510 Greif T. H., Springel, V., White, S. D. M., Glover, S. C. O., Clark, P. C., Smith, R. J., Klessen, R. S., Bromm, V. 2011, ApJ, 737, 75 Heger, A., Woosley, S. E. 2002, ApJ, 567, 532 Heger, A., et al. 2013, in prep Hosokawa, T., Omukai, K., Yorke, H. W. 2012, ApJ, submitted (arXiv:1203.2613) Hummel, J. A., Pawlik, A. H., Milosavljevi[' c]{}, M., Bromm, V. 2012, ApJ, 755, 72 Iben, I. 1963, ApJ, 138, 1090 Inayoshi, K., Omukai, K. 2012, MNRAS, 422, 2539 Inayoshi, K., Hosokawa, T., Omukai, K. 2013, MNRAS, accepted (arXiv:1302.6065) Jeon, M., Pawlik, A. H., Greif, T. H., Glover, S. C. O., Bromm, V., Milosavljevi[' c]{}, M., Klessen, R. S. 2012, ApJ, 754, 34 Joggerst, C. C., Whalen, D. J. 2011, ApJ, 728, 129 Joggerst, C. C., Almgren, A., Bell, J., Heger, A., Whalen, D. J., Woosley, S. E.
2010, ApJ, 709, 11 Johnson, J. L., Dalla Vecchia, C., Khochfar, S. 2013, MNRAS, 428, 1857 Johnson, J. L., Khochfar, S., Greif, T. H., Durier, F. 2011, MNRAS, 410, 919 Johnson, J. L., Whalen, D. J., Fryer, C. L., Li, H. 2012b, ApJ, 750, 66 Johnson, J. L., Whalen, D. J., Li, H., Holz, D. E. 2012a, ApJ, submitted (arXiv:1211.0548) Karlsson, T., Johnson, J. L., Bromm, V. 2008, ApJ, 679, 6 Kistler, M. D., Beacom, J. F. 2006, PhRvD, 74, 063007 Kitayama T., Yoshida N. 2005, ApJ, 630, 675 Komatsu, E., et al. 2011, ApJS, 192, 18 Lai, D. K., Bolte, M., Johnson, J. A., Lucatello, S., Heger, A., Woosley, S. E. 2008, ApJ, 681, 1524 Latif, M. A., Schleicher, D. R. G., Schmidt, W., Niemeyer, J. 2013a, MNRAS, 430, 588 Latif, M. A., Schleicher, D. R. G., Schmidt, W., Niemeyer, J. 2013b, MNRAS, submitted (arXiv:1304.0962) Linke, F., Font, J. A., Janka, H.-T., M[ü]{}ller, E. 2001, A&A, 376, 568 Lodato, G., Natarajan, P. 2006, MNRAS, 371, 1813 McKee, C. F., Tan, J. C. 2008, ApJ, 681, 771 Meiksin, A., Whalen, D. J. 2013, MNRAS, 430, 2854 Milosavljevi[' c]{}, M., Bromm, V., Couch, S. M., Oh, S. P. 2009, ApJ, 698, 766 Montero, P. J., Janka, H.-T., M[ü]{}ller, E. 2012, ApJ, 749, 37 Morlino, G., Blasi, P., Amato, E. 2009, Astropart. Phys., 31, 376 Mortlock, D. J., et al. 2011, Nat, 474, 616 Natarajan, P., Volonteri, M. 2012, MNRAS, 422, 2051 Oh, S. P., Cooray, A., Kamionkowski, M. 2003, MNRAS, 342, L20 Omukai, K., Schneider, R., Haiman, Z. 2008, ApJ, 686, 801 O’Shea B. W., Norman M. L. 2008, ApJ, 673, 14 Pan, T., Kasen, D., Loeb, A. 2012, MNRAS, 422, 2701 Park, K., Ricotti, M. 2012, ApJ, 747, 9 Pelupessy, F. I., Di Matteo, T., Ciardi, B. 2007, ApJ, 665, 107 Petri, A., Ferrara, A., Salvaterra, R. 2012, MNRAS, accepted (arXiv:1202.3141) Planck Collaboration 2013, A&A, submitted (arXiv:1303.5076) Regan J. A., Haehnelt M. G. 2009, MNRAS, 396, 343 Ren, J., Christlieb, N., Zhao, G. 2012, RAA, 12, 1637 Ritter, J.
S., Safranek-Shrader, C., Gnat, O., Milosavljevi[' c]{}, M., Bromm, V. 2012, ApJ, 761, 56 Safranek-Shrader, C., Milosavljevi[' c]{}, M., Bromm, V. 2013, MNRAS, submitted (arXiv:1307.1982) Santoro, F., Shull, J. M. 2006, ApJ, 643, 26 Scannapieco, E., Madau, P., Woosley, S., Heger, A., Ferrara, A. 2005, ApJ, 633, 1031 Schleicher, D. R. G., Palla, F., Ferrara, A., Galli, D., Latif, M. 2013, A&A, submitted (arXiv:1305.5923) Schneider, R., Omukai, K., Inoue, A. K., Ferrara, A. 2006, MNRAS, 369, 825 Sethi, S., Haiman, Z., Pandey, K. 2010, ApJ, 721, 615 Shang, C., Bryan, G. L., Haiman, Z. 2010, MNRAS, 402, 1249 Shapiro, S. L. 2005, ApJ, 620, 59 Shapiro, S. L., Teukolsky, S. A. 1979, ApJ, 234, L177 Smith, R. J., Glover, S. C. O., Clark, P. C., Greif, T. H., Klessen, R. S. 2011, MNRAS, 414, 3633 Spaans, M., Silk, J. 2006, ApJ, 652, 902 Springel V., Yoshida N., White S. D. M., 2001, NewA, 6, 79 Springel V., Hernquist, L. 2002, MNRAS, 333, 649 Tanaka, M., Moriya, T. J., Yoshida, N., Nomoto, K. 2012, MNRAS, 422, 2675 Tanaka, M., Moriya, T. J., Yoshida, N. 2013, MNRAS, submitted (arXiv:1306.3743) Van Borm, C., Spaans, M. 2013, A&A, submitted (arXiv:1304.4057) Vasiliev, E. O., Vorobyov, E. I., Matvienko, E. E., Razoumov, A. O., Shchekinov, Y. A. 2012, ARep, 56, 895 Volonteri, M., Rees, M. 2006, ApJ, 650, 669 Volonteri, M. 2012, Sci, 337, 544 Volonteri, M., Begelman, M. C. 2010, MNRAS, 409, 1022 Vorobyov, E. I., DeSouza, A. L., Basu, S. 2013, ApJ, submitted (arXiv:1303.3622) Weaver, T. A., Zimmerman, G. B., Woosley, S. E. 1978, ApJ, 225, 1021 Whalen D., van Veelen B., O’Shea B. W., Norman M. L. 2008, ApJ, 682, 49 Whalen, D. J., Heger, A., Chen, K.-J., Even, W., Fryer, C. L., Stiavelli, M., Xu, H., Joggerst, C. C. 2012a, ApJ, submitted (arXiv:1211.1815) Whalen, D. J., Abel, T., Norman, M. L. 2004, ApJ, 610, 14 Whalen, D. J., et al. 2013a, ApJ, 768, 195 Whalen, D. J., et al. 2012b, ApJ, submitted (arXiv:1211.4979) Whalen, D. J., et al. 2013b, ApJ, 768, 95 Whalen, D. J., et al.
2013c, ApJL, 762, 6 Whalen, D. J., et al. 2013d, ApJ, accepted (arXiv:1305.6966) Willott, C. J., McLure, R. J., Jarvis, M. J., 2003, ApJ, 587, L15 Wise, J. H., Abel, T. 2007, ApJ, 671, 1559 Wise, J. H., Turk, M. J., Abel, T. 2008, ApJ, 682, 745 Wise, J. H., Abel, T. 2008, ApJ, 685, 40 Wise, J. H., Turk, M. J., Norman, M. L., Abel, T. 2012, ApJ, 745, 50 Wolcott-Green, J., Haiman, Z., Bryan, G. L. 2011, MNRAS, 418, 838 Woosley, S. E., Heger, A., Weaver, T. A. 2002, RevMP, 74, 1015 Yamazaki, R., Kohri, K., Katagiri, H. 2009, A&A, 495, 9 Yoshida, N., Omukai, K., Hernquist, L. 2008, Sci, 321, 669 Yuan, Q., Yin, P., Bi, X. 2010, arXiv:1010.1901 [^1]: Adopting the cosmological parameters reported recently by the [*Planck*]{} Collaboration (2013) yields a similar time available for seed growth. [^2]: We note that other formation mechanisms stemming from shocks (Inayoshi et al. 2012) and magnetic fields (Sethi et al. 2010) have also been suggested. [^3]: These SNe would appear much brighter than other types of Pop III SNe (e.g., Scannapieco et al. 2005; Hummel et al. 2012; Pan et al. 2012; Tanaka et al. 2012, 2013; Whalen et al. 2012b, 2013a,b,c; de Souza et al. 2013). [^4]: Note in Fig. 1 that the density spike at the shock front in the RAGE output contains less mass than is contained in a single GADGET SPH particle, and so there are no particles representing this particular parcel of gas in the cosmological simulation. This illustrates the fundamental difficulty in matching output from an Eulerian, adaptive mesh refinement code to an SPH code. [^5]: Previous cosmological simulations of metal enrichment by massive Pop III stars have shown second-generation star-forming gas to have typical metallicities of $\ga 10^{-3} \,$Z$_{\odot}$ (e.g., Wise & Abel 2008; Greif et al. 2010; Ritter et al. 2012; Vasiliev et al. 2012; see also Wise et al. 2012; Safranek-Shrader et al. 2013). Incidentally, we note that Greif et al. 
(2007) arrived at comparable results by tracking metal enrichment using a much smaller number of SPH particles than we have used to track it here. Thus, we expect that our results with regard to metal enrichment should be broadly consistent with what would be found using other approaches. [^6]: We note that neutrino emission from these explosions (produced as discussed in e.g., Kistler & Beacom 2006; Yamazaki 2009; Morlino et al. 2009; Yuan et al. 2010) might also be detectable.
--- abstract: 'Aiming at a better understanding of finite groups as finite dynamical systems, we show that by a version of Fitting’s Lemma for groups, each state space of an endomorphism of a finite group is a graph tensor product of a finite directed $1$-tree whose cycle is a loop with a disjoint union of cycles, generalizing results of Hern[á]{}ndez-Toledo on linear finite dynamical systems, and we fully characterize the possible forms of state spaces of nilpotent endomorphisms via their “ramification behavior”. Finally, as an application, we will count the isomorphism types of state spaces of endomorphisms of finite cyclic groups in general, extending results of Hern[á]{}ndez-Toledo on primary cyclic groups of odd order.' author: - 'Alexander Bors[^1]' title: On the dynamics of endomorphisms of finite groups --- Some background =============== Finite dynamical systems have recently gained a lot of interest not only within mathematics, but also for their practical applications in areas such as cryptography, pseudorandom number generation and reverse engineering. For example, one approach to study gene regulatory networks is to discretize both the data and the time flow and then work in a finite dynamical system of the form $(k^n,f)$, where $k$ is a finite field and $f$ a (polynomial) map $k^n\rightarrow k^n$, see [@JLSS07a]. Such so-called polynomial finite dynamical systems are also objects of current theoretical research, and there are still many open questions. However, there is a well-established theory of so-called *linear finite dynamical systems* (a special case, abbreviated henceforth by LFDSs). These consist of a finite-dimensional vector space $V$ over a finite field together with a linear map $f:V\rightarrow V$. The first results (written in the language of circuit theory) are on the case where $f$ is an automorphism of $V$ and are due to Elspas from 1959, see [@Els59a]. 
Much later, in 2005, Hern[á]{}ndez-Toledo extended these results to LFDSs in general, see [@Her05a]. The results give strong restrictions on the possible forms of state spaces compared to arbitrary finite dynamical systems; for example, all state spaces of LFDSs are graph tensor products of a $1$-tree whose cycle is a loop (representing the nilpotent part of $f$, $\mathrm{nil}(f)$) with a disjoint union of cycles (representing the periodic part of $f$, $\mathrm{per}(f)$), and $V$ decomposes as a direct sum of $\mathrm{nil}(f)$ and $\mathrm{per}(f)$. In this paper, we generalize the results on LFDSs, but in a different direction than usual: What if, instead of replacing $f$ by a more complicated polynomial map, we keep the “nice” property of $f$ being an endomorphism and instead replace the vector space structure on the underlying set by a group structure (note that any vector space endomorphism is in particular an endomorphism of the underlying additive group)? It turns out that the basic results on LFDSs mentioned in the last paragraph can be transferred to this more general situation. A first indication of this fact can be found in the 2012 paper [@Sha12a], where Sha shows that state spaces of endomorphisms of finite cyclic groups are graph tensor products as described above. Also, as we will see, the group laws impose strong restrictions on the form of the $1$-tree representing the nilpotent part. Results on the structure of the state space =========================================== Let us first fix some notation and terminology. We denote by $\mathbb{N}$ the set of natural numbers (including $0$) and by $\mathbb{N}^+$ the set of positive integers. As usual, a *finite dynamical system* (FDS) is a pair $(X,f)$ where $X$ is a finite set (whose elements will be referred to as *points*) and $f$ a so-called *endofunction of $X$*, that is, a function $X\rightarrow X$.
For $n\in\mathbb{N}$, $f^n$ denotes the $n$-th iteration of $f$ (i.e., the $n$-th power of $f$ in the monoid of endofunctions of $X$). Points $x$ such that, for some positive $n$, $f^n(x)=x$ are called *periodic*, and the smallest such $n$ is called the *period of $x$*. Points which are not periodic are called *transient*. For any transient point $y$, there exists a least positive integer $h$ such that $f^h(y)$ is periodic; this $h$ is called the *height of $y$*, denoted $\mathrm{ht}(y)$. Following the terminology in [@LP01a], the *state space* of $(X,f)$, denoted $\Gamma_f$, is the digraph with vertex set $X$ which has a directed edge from $x$ to $y$ if and only if $y=f(x)$. By the general theory of FDSs, $\Gamma_f$ is always a directed $1$-forest. For FDSs $(X,f),(Y,g)$, the FDS $(X\times Y,f\times g)$, where $f\times g$ is the endofunction of $X\times Y$ mapping $(x,y)\mapsto(f(x),g(y))$, is called the *product of $(X,f)$ and $(Y,g)$*. Observe that $\Gamma_{f\times g}=\Gamma_f\times\Gamma_g$, where the $\times$ on the RHS denotes the graph (tensor) product. An *isomorphism between FDSs $(X,f)$ and $(Y,g)$* is a bijection $\alpha:X\rightarrow Y$ such that $\alpha\circ f=g\circ\alpha$. It is easy to see that $\alpha$ is an isomorphism between $(X,f)$ and $(Y,g)$ if and only if $\alpha$ is an isomorphism between the state spaces $\Gamma_f$ and $\Gamma_g$. From now on, we will always consider the situation where $X$ is a finite group $G$ (or, more precisely, its underlying set) and $f$ is a group endomorphism $\varphi$ of $G$; such FDSs will be referred to as *finite dynamical groups* (FDGs). We set $\mathrm{nil}(\varphi):=\{g\in G\mid\exists n\in\mathbb{N}^+:\varphi^n(g)=1\}$ and define $\mathrm{per}(\varphi)$ as the set of periodic points of $\varphi$. As in the case of LFDSs, $\mathrm{nil}(\varphi)$ will be called the *nilpotent part* and $\mathrm{per}(\varphi)$ the *periodic part* of $\varphi$; if $\mathrm{nil}(\varphi)=G$, $\varphi$ is called *nilpotent*.
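These notions are easy to compute for small examples. As an illustration (a sketch in Python; the function and the choice of example are ours, not part of the paper), consider the endomorphism $x\mapsto 4x$ of the additive group $\mathbb{Z}/12\mathbb{Z}$: since the underlying set is finite, applying $\varphi$ at least $|G|$ times stabilizes every point, so $\mathrm{nil}(\varphi)$ and $\mathrm{per}(\varphi)$ can be read off directly.

```python
def nil_and_per(n, a):
    """Nilpotent and periodic parts of the endomorphism x -> a*x (mod n)
    of the additive cyclic group Z/n."""
    phi = lambda x: (a * x) % n

    def iterate(x, k):
        # apply phi k times; k = n iterations suffice on an n-element set
        for _ in range(k):
            x = phi(x)
        return x

    nil = {x for x in range(n) if iterate(x, n) == 0}  # eventually hit 1_G = 0
    per = {iterate(x, n) for x in range(n)}            # image of phi^n = periodic points
    return nil, per

nil, per = nil_and_per(12, 4)
print(sorted(nil), sorted(per))   # [0, 3, 6, 9] [0, 4, 8]
```

Note that $|\mathrm{nil}(\varphi)|\cdot|\mathrm{per}(\varphi)|=4\cdot 3=12=|G|$ in this example, in line with the decomposition for general FDGs established below.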
Note that by definition, $\mathrm{nil}(\varphi)$ is the union of the subsets $\mathrm{ker}^{(m)}(\varphi)$ of $G$ for $m\in\mathbb{N}$, where $\mathrm{ker}^{(m)}(\varphi)$, which we will call the *$m$-th kernel of $\varphi$*, is just the $m$-th preimage of $\{1\}$ under $\varphi$. The result that, in case of an LFDS $(V,f)$, $V$ directly decomposes into $\mathrm{nil}(f)$ and $\mathrm{per}(f)$ generalizes to: \[composTheo\] Let $(G,\varphi)$ be an FDG. Then: \(1) $\mathrm{nil}(\varphi)$ is the largest subgroup of $G$ invariant under $\varphi$ on which the corresponding restriction of $\varphi$ is nilpotent. Also, $\mathrm{nil}(\varphi)$ is normal in $G$. \(2) $\mathrm{per}(\varphi)$ is the largest subgroup of $G$ invariant under $\varphi$ on which the corresponding restriction of $\varphi$ is an automorphism. \(3) $G=\mathrm{nil}(\varphi)\rtimes\mathrm{per}(\varphi)$. \(4) The FDS $(G,\varphi)$ is the product of the FDSs $(\mathrm{nil}(\varphi),\varphi_{|\mathrm{nil}(\varphi)})$ and $(\mathrm{per}(\varphi),\varphi_{|\mathrm{per}(\varphi)})$. In particular, $\Gamma_{\varphi}$ is the product of a $1$-tree whose cycle is a loop with a disjoint union of cycles. For (1) and (2), note that it suffices to show that $\mathrm{nil}(\varphi)$ and $\mathrm{per}(\varphi)$ are subgroups (and $\mathrm{nil}(\varphi)$ normal), which is clear by observing that $\mathrm{nil}(\varphi)$ is the maximum (with respect to inclusion) of the ascending chain of normal subgroups $(\mathrm{ker}^{(m)}(\varphi))_{m\in\mathbb{N}}$ and $\mathrm{per}(\varphi)$ is the minimum of the descending chain of subgroups $(\mathrm{im}(\varphi^n))_{n\in\mathbb{N}}$. From these observations, (3) immediately follows from the group version of Fitting’s Lemma stated and proved as Theorem 4.2 in [@Car13a], and (4) is clear by (3) and the structure of semidirect products. We now turn to the structure of the tree from the nilpotent part. 
First, some terminology: \[rigidProcDef\] Let $\Gamma=(V,E)$ be a finite digraph, $v\in V$. \(1) A vertex $w\in V$ such that $(v,w)\in E$ is called a **successor** or **child** of $v$. \(2) The **procreation behavior of $v$** is the sequence $(a_k)_{k\in\mathbb{N}^+}$ such that for all positive integers $k$, $a_k$ is the number of children $c$ of $v$ such that there exists a directed path $(w_1,\ldots,w_k)$ in $\Gamma$ with $w_1=c$ (we say: $c$ **has (at least) $k-1$ successor generations** and call $a_k$ the **$k$-th procreation number of $v$**). For $n\in\mathbb{N}$, the **procreation behavior of length $n$ of $v$** is the $n$-tuple consisting of the first $n$ procreation numbers of $v$. \(3) We say that $\Gamma$ has **rigid procreation** if and only if for all $v,w\in V$ and all $n\in\mathbb{N}$ such that $v$ and $w$ both have $n$ successor generations, the procreation behaviors of length $n$ of $v$ and $w$ are equal. For any digraph $\Gamma=(V,E)$, the **dual digraph of $\Gamma$**, denoted $\Gamma^{\ast}$, is defined as $(V,E^{-1})$ with $E^{-1}$ the inverse relation of $E$, i.e., the set of all pairs $(y,x)$ such that $(x,y)\in E$. \[rigidProcTheo\] Let $(G,\varphi)$ be an FDG. Then $\Gamma_{\varphi}^{\ast}$ has rigid procreation. First, note that periodic points of $\varphi$ have infinitely many successor generations in $\Gamma_{\varphi}^{\ast}$ and that it suffices to show that any point $v\in G$ which has $n$ successor generations has the same procreation behavior of length $n$ as $1_G$. This is clear for periodic points by the structure of $\Gamma_{\varphi}$ exhibited in Theorem \[composTheo\](4) which implies that the $k$-th procreation coefficient of any periodic point is $1$ plus the number of successors of $1_G$ in $\Gamma_{\varphi_{|\mathrm{nil}(\varphi)}}^{\ast}$ which have at least $k-1$ successor generations, so we can assume that $v$ is transient. 
Fix any $w$ in the $n$-th successor generation of $v$ which does not appear in any earlier generation (in other words, $\mathrm{ht}(w)=\mathrm{ht}(v)+n$). First, we claim that any element in one of the $n$ successor generations of $v$ (including the element $v$ itself) has a unique representation of one of the forms $\varphi^k(w)\cdot x$ with $k\in\{0,\ldots,n\}$ and $x\in\mathrm{ker}^{(n-k)}(\varphi)$. To see this, first take an element $g$ in one of the successor generations, say $\mathrm{ht}(g)=\mathrm{ht}(v)+n-k$ with $k\in\{0,\ldots,n\}$. It then follows that $\varphi^{n-k}(\varphi^k(w^{-1})g)=\varphi^n(w)^{-1}\cdot\varphi^{n-k}(g)=v^{-1}\cdot v=1_G$, so that indeed, $g$ can be written as $\varphi^k(w)\cdot x$ with $x:=\varphi^k(w^{-1})g\in\mathrm{ker}^{(n-k)}(\varphi)$. But also, clearly $\mathrm{ht}(\varphi^k(w)\cdot x)=\mathrm{ht}(v)+n-k$ if $x\in\mathrm{ker}^{(n-k)}(\varphi)$ so that $\varphi^k(w)\cdot x=\varphi^l(w)\cdot y$ first implies $k=l$ and then $x=y$. We are now ready to show that the procreation behaviors of $v$ and $1_G$ of length $n$ coincide. Fix $k\in\{0,\ldots,n-1\}$, and let $x$ be a child of $1_G$ in $\Gamma_{\varphi}^{\ast}$ which has $k$ successor generations (i.e., contributes to the entry $a_{k+1}$ in the procreation behavior $(a_1,\ldots,a_n)$ of $1_G$). Then either $x=1$ or there exists $y\in\mathrm{ker}^{(k+1)}(\varphi)\setminus\mathrm{ker}^{(k)}(\varphi)$ such that $\varphi^k(y)=x$, in which case we readily check that $\varphi^{n-k-1}(w)\cdot y$ is an element in the $k$-th preimage of $\varphi^{n-1}(w)\cdot x$ under $\varphi$; summing up, we have an injection $x\mapsto \varphi^{n-1}(w)\cdot x$ from the set of children of $1_G$ with $k$ successor generations into the set of children of $v$ with $k$ successor generations.
But this is even a bijection, for if $\varphi^{n-1}(w)\cdot x$ is a child of $v$ which has $k$ successor generations and we fix an element $\varphi^{n-k-1}(w)\cdot y$ in the $k$-th preimage of $\varphi^{n-1}(w)\cdot x$ under $\varphi$, then we see immediately that $y$ must be in the $k$-th preimage of $x$ under $\varphi$ so that $x$ is a child of $1_G$ with $k$ successor generations. This proves the theorem. \[isoRem\] Note that the isomorphism type of a state space of an endomorphism $\varphi$ of a finite group $G$ is completely determined by the procreation behavior of $1_G$ in $\Gamma_{\varphi}^{\ast}$ together with the orders $|\mathrm{per}_n(\varphi)|$ of the subgroups of $G$ consisting of periodic points whose period divides $n$ for the various $n\in\mathbb{N}^+$, a fact that we will frequently use without further reference when counting isomorphism types of state spaces of endomorphisms of finite cyclic groups in the next section. The last theorem on the state space structure of FDGs which we want to present here gives further information on the behavior of the procreation numbers of the identity element: \[stateGraphTheo\] Let $(G,\varphi)$ be an FDG and let $(a_k)_{k\in\mathbb{N}^+}$ be the procreation behavior of $1_G$ in $\Gamma_{\varphi}^{\ast}$. Then for all $k\in\mathbb{N}$, $a_1\cdots a_k=|\mathrm{ker}^{(k)}(\varphi)|$ (in particular, $a_k=[\mathrm{ker}^{(k)}(\varphi):\mathrm{ker}^{(k-1)}(\varphi)]$), and for all $n,m\in\mathbb{N}^+$, $n\leq m$ implies $a_m\mid a_n$. The divisibility result is obtained by an application of Lagrange’s theorem after some counting which will yield the first assertion as a “by-product”.
First, observe that in any finite digraph with rigid procreation, the number of endpoints of paths of length $r$ starting from some vertex $v$ with at least $r$ successor generations and procreation behavior of length $r$ equal to $(a_1,\ldots,a_r)$ is precisely $a_1\cdots a_r$, since by induction on $r$, the $a_r$ children of $v$ that have enough successor generations to contribute to this number each give $a_1\cdots a_{r-1}$ endpoints. Applying this to the vertex $1_G$ in $\Gamma_{\varphi}^{\ast}$ yields $|\mathrm{ker}^{(k)}(\varphi)|=a_1\cdots a_k$. Now note that the $n$-th procreation number of $1_G$ in the dual of the state space of the FDG $(\mathrm{im}(\varphi),\varphi_{|\mathrm{im}(\varphi)})$ is the number of children of $1_G$ which have at least $n$ successor generations in $\Gamma_{\varphi}^{\ast}$, so the corresponding procreation behavior is given by the sequence $(a_{n+1})_{n\in\mathbb{N}^+}$, and we obtain $|\mathrm{ker}^{(k)}(\varphi)\cap\mathrm{im}(\varphi)|=a_2\cdots a_{k+1}$. By Lagrange’s Theorem, we now get $a_2\cdots a_{k+1}\mid a_1\cdots a_k$, that is, $a_{k+1}\mid a_1$ for all $k\in\mathbb{N}$, and thus the general result by passing to the procreation behaviors of $1_G$ in the state spaces of the successive images of $\varphi$ with the corresponding restrictions of $\varphi$. Actually, this is the strongest result on the structure of the nilpotent part which we can derive in general, as the following proposition shows. \[strongestProp\] For any finite $1$-tree $\Gamma$ whose cycle is a loop and which has rigid procreation such that the procreation behavior of the one vertex on the loop is $(a_k)_{k\in\mathbb{N}^+}$ with $a_m\mid a_n$ for all $n,m\in\mathbb{N}^+$ with $n\leq m$, there exists an FDG $(G,\varphi)$ such that $G$ is abelian and $\Gamma_{\varphi}^{\ast}\cong\Gamma$.
Consider the finite abelian group $G:=\prod\limits_{i=1}^n{\mathbb{Z}/a_i\mathbb{Z}}=\langle x_1,\ldots,x_n\mid x_ix_j=x_jx_i\hspace{3pt}(i\not=j),x_i^{a_i}=1\hspace{3pt}(i=1,\ldots,n)\rangle$, where $n$ is so large that $a_{n+1}=1$. We specify a nilpotent endomorphism $\varphi$ of $G$ such that the $k$-th kernel of $\varphi$ is the subgroup generated by $x_1,\ldots,x_k$, which is sufficient by Theorem \[stateGraphTheo\]. $\varphi$ can be defined by specification on the generators $x_i$. We set $\varphi(x_1):=1$ and $\varphi(x_{i+1}):=x_i^{a_i/a_{i+1}}$ for $i=1,\ldots,n-1$. This preserves the orders of generators $x_i$ with $i>1$ and hence defines an endomorphism of $G$. It is clear that any of $x_1,\ldots,x_k$ is mapped to $1_G$ after $k$ applications of $\varphi$, while the other generators “survive” $k$ applications of $\varphi$. An application to finite cyclic groups ====================================== Let us now consider the finite cyclic group $\mathbb{Z}/n\mathbb{Z}$. Any endomorphism of this group is a “stretch modulo $n$” by a factor $a\in\{0,\ldots,n-1\}$; we denote the corresponding stretch function by $\lambda_a$. FDSs arising from such maps $\lambda_a$ play an important role in pseudorandom number generation (key word: multiplicative congruential generators), and several papers have already been dedicated to the study of their state spaces: Ahmad [@Ahm69a] in 1969 investigated the cycle structure of automorphisms of finite cyclic groups. In 2008, Hern[á]{}ndez-Toledo [@Her08a] used the structure of the group of units modulo odd prime powers to describe the structure of state spaces of endomorphisms of $\mathbb{Z}/p^k\mathbb{Z}$ for odd primes $p$ as explicitly as possible. He did not treat the case of primary cyclic groups of even order or the general case, though. Sha in his already mentioned paper [@Sha12a] investigated state spaces of endomorphisms of general finite cyclic groups, describing, among other things, their graph automorphism groups. 
Finally, Deng [@Den13a] studied, more generally, the state spaces arising from affine maps of finite cyclic groups and gave a necessary and sufficient number-theoretic criterion for when two such graphs are isomorphic. However, to the author’s best knowledge, so far there exists no published explicit formula for the number of isomorphism types of state spaces of endomorphisms of $\mathbb{Z}/n\mathbb{Z}$, which we will now derive as an application of the abstract theory developed in the previous section. To this end, we will extend Hern[á]{}ndez-Toledo’s idea of using the structure of the group of units to primary cyclic groups of even order, and the group-theoretic Lemma \[coprimeProdLem\] will allow us to easily extend our counting formulas from primary cyclic groups to the general case. To make our text self-contained and since our proof for primary cyclic groups of even order is similar to the one we give for the odd order case, we will prove both cases here. Let us start with the odd order case (note that since $\mathbb{Z}/p^n\mathbb{Z}$ does not decompose as a semidirect product in a nontrivial way, any endomorphism of it is either nilpotent or an automorphism): \[oddPrimeLem\] Let $p$ be an odd prime and $k\in\mathbb{N}^+$. Then the number of isomorphism types of state spaces of endomorphisms of $\mathbb{Z}/p^k\mathbb{Z}$ equals $k\cdot(\tau(p-1)+1)$, where $\tau:\mathbb{N}^+\rightarrow\mathbb{N}^+$ denotes the divisor number function. Of these, $k$ correspond to nilpotent endomorphisms and $k\cdot\tau(p-1)$ to automorphisms. Because of $\lambda_a^m=\lambda_{a^m\hspace{3pt}(\mathrm{mod}\hspace{3pt}p^k)}$, it is easy to see that $\lambda_a:\mathbb{Z}/p^k\mathbb{Z}\rightarrow\mathbb{Z}/p^k\mathbb{Z}$ is nilpotent if and only if $p\mid a$.
Let $v_p^{(k)}(a):=\mathrm{min}\{\nu_p(a),k\}$ denote the *$p$-adic valuation of $a$ modulo $p^k$*; here, $\nu_p(a)$ denotes the usual $p$-adic valuation of $a$, defined as the exponent of the greatest power of $p$ dividing $a$, which is understood to be $\infty$ if $a=0$. If $u\in\mathbb{Z}$ with $p\nmid u$, then for all $c\in\{0,\ldots,p^k-1\}$ and $l\in\{1,\ldots,k\}$, the congruence $up^l\cdot x\equiv c\hspace{3pt}(\mathrm{mod}\hspace{3pt}p^k)$ has the same number of solutions modulo $p^k$ as $p^lx\equiv c\hspace{3pt}(\mathrm{mod}\hspace{3pt}p^k)$ (namely $p^l$ if $v_p^{(k)}(c)\geq l$ and $0$ otherwise) so that the procreation behaviors of the identity element $0$ under $\lambda_{p^l}$ and $\lambda_{up^l}$ are the same and hence their state spaces are isomorphic. So for counting the isomorphism types in the nilpotent case, we only need to consider the endomorphisms $\lambda_{p^l}$ for $l=1,\ldots,k$. But again, by the observations on the solvability modulo $p^k$ of the congruence $p^l\cdot x\equiv c\hspace{3pt}(\mathrm{mod}\hspace{3pt}p^k)$ from above and Theorem \[stateGraphTheo\], it is easy to see that the following holds for the procreation behavior in this case: Write $k=q\cdot l+r$ with $q,r\in\mathbb{N}$ and $0\leq r<l$. Then the procreation behavior of the identity under $\lambda_{p^l}$ is $(p^l,p^l,\ldots,p^l,p^r,1,\ldots)$, where the first $q$ procreation numbers are equal to $p^l$. Hence these $k$ nilpotent endomorphisms indeed yield pairwise non-isomorphic state spaces, and we are done in the nilpotent case. It remains to treat the case $p\nmid a$, where $\lambda_a$ is an automorphism. This is basically the same argumentation as the one of Hern[á]{}ndez-Toledo.
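The procreation behavior claimed for $\lambda_{p^l}$ in the nilpotent case is easy to confirm numerically via the kernel-index formula $a_j=[\mathrm{ker}^{(j)}(\varphi):\mathrm{ker}^{(j-1)}(\varphi)]$ of Theorem \[stateGraphTheo\] (a sketch in Python; the function and the concrete example are ours):

```python
def procreation_of_identity(n, a, kmax):
    """First kmax procreation numbers of the identity 0 in the dual state
    space of x -> a*x (mod n) on Z/n, via a_j = [ker^(j) : ker^(j-1)]."""
    vals = list(range(n))      # vals[x] = a^j * x (mod n), starting with j = 0
    sizes = []                 # sizes[j] = |ker^(j)|
    for _ in range(kmax + 1):
        sizes.append(sum(v == 0 for v in vals))
        vals = [(a * v) % n for v in vals]
    return [sizes[j] // sizes[j - 1] for j in range(1, kmax + 1)]

# lambda_{p^l} on Z/p^k with p = 3, k = 4, l = 3: here k = 1*l + 1,
# so the predicted behavior is (p^3, p^1, 1, 1) = (27, 3, 1, 1)
print(procreation_of_identity(81, 27, 4))   # [27, 3, 1, 1]
```

For $l=1$ the same function returns $(3,3,3,3)$ on $\mathbb{Z}/81\mathbb{Z}$, matching the pattern with $q=4$, $r=0$.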
We make use of the fact that there is a primitive root $g$ modulo $p^k$, and write $a=g^{u\cdot s\cdot p^l}$, where $u\in\{1,\ldots,p^k-1\}$ with $\mathrm{gcd}(u,p(p-1))=1$, $s$ is a product of powers of the prime divisors of $p-1=p_1^{e_1}\cdots p_r^{e_r}$ (all $e_i\geq 1$), and $l\in\{0,\ldots,k-1\}$. Since the multiplicative order of $g$ modulo $p^m$, $m\in\{1,\ldots,k\}$, is $\phi(p^m)=p^{m-1}(p-1)$, by a basic result of group theory (or elementary number theory), the multiplicative order of $a$ modulo $p^m$ is $$p_1^{e_1-v_{p_1}^{(e_1)}(s)}\cdots p_r^{e_r-v_{p_r}^{(e_r)}(s)}p^{\mathrm{max}\{0,m-1-l\}}.$$ This means that the cycle of any generator of $\mathbb{Z}/p^m\mathbb{Z}$ under the stretch by $a$ has this length, and hence so does the cycle of any element of $\mathbb{Z}/p^k\mathbb{Z}$ of order $p^m$, as $\lambda_a$ on $\mathbb{Z}/p^k\mathbb{Z}$ restricts to an automorphism of the subgroup generated by this element, defining a dynamical structure isomorphic to the one of the corresponding stretch on $\mathbb{Z}/p^m\mathbb{Z}$. We can therefore describe the cycle structure of $\lambda_a$ on $\mathbb{Z}/p^k\mathbb{Z}$ as follows: It has, in addition to the one trivial fixed point, $p^{l+1}-1$ points (namely the nontrivial elements of the unique subgroup of order $p^{l+1}$) lying on cycles of length $$p_1^{e_1-v_{p_1}^{(e_1)}(s)}\cdots p_r^{e_r-v_{p_r}^{(e_r)}(s)},$$ that is, $$\frac{p^{l+1}-1}{p_1^{e_1-v_{p_1}^{(e_1)}(s)}\cdots p_r^{e_r-v_{p_r}^{(e_r)}(s)}}$$ cycles of that length. Furthermore, for each $j\in\{l+2,\ldots,k\}$, it has $p^j-p^{j-1}$ points (the elements from the complement of the subgroup with $p^{j-1}$ elements in the subgroup with $p^j$ elements) on cycles of length $$p_1^{e_1-v_{p_1}^{(e_1)}(s)}\cdots p_r^{e_r-v_{p_r}^{(e_r)}(s)}p^{j-l-1},$$ i.e., $$p^l\cdot\frac{p-1}{p_1^{e_1-v_{p_1}^{(e_1)}(s)}\cdots p_r^{e_r-v_{p_r}^{(e_r)}(s)}}$$ cycles of that length.
From this, we obtain a bijective correspondence between the isomorphism types of state spaces of automorphisms of $\mathbb{Z}/p^k\mathbb{Z}$ and the Cartesian product of the set of positive divisors of $p-1$ with the set $\{0,\ldots,k-1\}$, whence there are $k\cdot\tau(p-1)$ such isomorphism types, as we wanted to show. The case $p=2$ goes as follows: \[evenPrimeLem\] Let $k\in\mathbb{N},k\geq 3$. Then the number of isomorphism types of state spaces of endomorphisms of $\mathbb{Z}/2^k\mathbb{Z}$ equals $3k-3$, of which $k$ stem from nilpotent endomorphisms and $2k-3$ from automorphisms. Also, $\mathbb{Z}/2\mathbb{Z}$ has $2$ such isomorphism types, one nilpotent, one periodic, and $\mathbb{Z}/4\mathbb{Z}$ has $4$ isomorphism types, two nilpotent, two periodic. This is very similar in spirit to the proof of Lemma \[oddPrimeLem\] which we just gave; the situation is only a bit different because the structure of $(\mathbb{Z}/2^k\mathbb{Z})^{\ast}$ is more complicated compared to the one of $(\mathbb{Z}/p^k\mathbb{Z})^{\ast}$ for odd $p$. However, we still have everything under control. First of all, let us note that in the nilpotent case, the same argument as for odd $p$ works, so we do not need to discuss it. As for the periodic case, the cases $k=1,2$ are readily checked separately, and for $k\geq 3$, it is a well-known result of elementary number theory that the group of units $(\mathbb{Z}/2^k\mathbb{Z})^{\ast}$ is not cyclic, but decomposes as a direct product of two cyclic subgroups, one of order $2$ and generated by $-1$, the other of order $2^{k-2}$, generated by $5$. So write $a=(-1)^{\epsilon}5^{u\cdot 2^l}$ in $\mathbb{Z}/2^k\mathbb{Z}$ with $\epsilon\in\{0,1\}$, $u\in\{1,\ldots,2^{k-2}-1\}$ odd and $l\in\{0,\ldots,k-2\}$. It follows that the multiplicative order of $a$ modulo $2^m$ with $m\geq 2$ is $2^{\mathrm{max}\{\epsilon,m-2-l\}}$, and modulo $2$ it is just $1$.
So we have two fixed points in any case (the identity and the uniquely determined element of additive order $2$), and additionally, for all $m\in\{2,\ldots,k\}$, we have $\frac{2^{m-1}}{2^{\mathrm{max}\{\epsilon,m-2-l\}}}$ cycles of length $2^{\mathrm{max}\{\epsilon,m-2-l\}}$. Hence different values for $\epsilon$ give non-isomorphic state spaces (because there will be more than two fixed points if and only if $\epsilon=0$), and clearly, for a fixed value of $\epsilon$ and varying values of $l\in\{0,\ldots,k-2\}$, we also get pairwise non-isomorphic state spaces, *except* for $\epsilon=1$ and $l=k-3,k-2$ (which yield isomorphic state spaces), whereas different choices for $u$ never have any influence on the isomorphism type. Hence in this case, there are $2\cdot (k-1)-1=2k-3$ isomorphism types of automorphism state spaces. Now, for counting isomorphism types of state spaces of endomorphisms, it is not difficult to generalize from the primary cyclic case to arbitrary finite cyclic groups by using the following observation: \[coprimeProdLem\] Let $(G_1,\psi_1), (G_1,\psi_1'), (G_2,\psi_2)$ and $(G_2,\psi_2')$ be FDGs such that $\mathrm{gcd}(|G_1|,|G_2|)=1$. If $\Gamma_{\psi_1\times\psi_2}\cong\Gamma_{\psi_1'\times\psi_2'}$, then $\Gamma_{\psi_1}\cong\Gamma_{\psi_1'}$ and $\Gamma_{\psi_2}\cong\Gamma_{\psi_2'}$. It suffices to show that the procreation behavior of the identity in $\Gamma_{\psi_1\times\psi_2}^{\ast}$ and the various orders of periodic point subgroups $\mathrm{per}_n(\psi_1\times\psi_2)$ uniquely determine the corresponding parameters in $\Gamma_{\psi_1}$ and $\Gamma_{\psi_2}$. As for the procreation behavior, let $a_n(\psi)$ for $n\in\mathbb{N}^+$ and $\psi$ an endomorphism of a finite group $G$ denote the $n$-th procreation number of $1_G$, i.e., by Theorem \[stateGraphTheo\], the index $[\mathrm{ker}^{(n)}(\psi):\mathrm{ker}^{(n-1)}(\psi)]$.
It is clear that $\mathrm{ker}^{(n)}(\psi_1\times\psi_2)=\mathrm{ker}^{(n)}(\psi_1)\times\mathrm{ker}^{(n)}(\psi_2)$, so that $a_n(\psi_1\times\psi_2)=a_n(\psi_1)\cdot a_n(\psi_2)$. But since $a_n(\psi_i)\mid |G_i|$, by the coprimality assumption, we can read off the values of $a_n(\psi_1)$ and $a_n(\psi_2)$ from this product. The argument for the orders of periodic point subgroups is similar, using $\mathrm{per}_n(\psi_1\times\psi_2)=\mathrm{per}_n(\psi_1)\times\mathrm{per}_n(\psi_2)$. In view of the Chinese Remainder Theorem and the fact that $\psi_1\times\psi_2$ is an automorphism (resp. nilpotent) if and only if both $\psi_1$ and $\psi_2$ have the corresponding property, this yields: \[cyclicCountTheo\] Let $n=2^k\cdot p_1^{k_1}\cdots p_l^{k_l}$ be a positive natural number with the prime factor decomposition displayed such that $k\geq0$, $l\geq0$ and $k_1,\ldots,k_l\geq1$. Furthermore, let $\tau:\mathbb{N}^+\rightarrow\mathbb{N}^+$ denote the divisor number function and let $$\delta_{[k\leq 2]}:=\left\{\begin{array}{cl}1, & \text{if}\hspace{3pt}k\leq 2, \\ 0, & \text{else}\end{array}\right..$$ Then the number of isomorphism types of state spaces of endomorphisms of $\mathbb{Z}/n\mathbb{Z}$ is precisely $$\mathrm{max}\{2^{k\cdot\delta_{[k\leq 2]}},3k-3\}\cdot\prod\limits_{i=1}^l{k_i(\tau(p_i-1)+1)}.$$ Of these, precisely $$\mathrm{max}\{1,k\}\cdot\prod\limits_{i=1}^l{k_i}$$ are $1$-trees, and precisely $$\mathrm{max}\{1,k,2k-3\}\cdot\prod\limits_{i=1}^l{k_i\tau(p_i-1)}$$ are disjoint unions of cycles.

S. Ahmad, **6**(4):370–374, 1969.
A. Caranti, **16**(5):779–792, 2013.
G. Deng, **2013**:5 pages, 2013.
B. Elspas, **6**(1):39–60, 1959.
A.S. Jarrah, R. Laubenbacher, B. Stigler and M. Stillman, **39**(4):477–489, 2007.
R.A. Hernández-Toledo, **33**:2977–2989, 2005.
R.A. Hernández-Toledo, **42**(4):515–520, 2008.
R. Laubenbacher and B. Pareigis, **26**:237–251, 2001.
M. Sha, **83**:105–120, 2012.
[^1]: The author is supported by the Austrian Science Fund (FWF): Project F5504-N26, which is a part of the Special Research Program “Quasi-Monte Carlo Methods: Theory and Applications”. 2010 *Mathematics Subject Classification*: 05C38, 05C60, 05C76, 05E15, 20D45, 20D60, 37P99. *Key words and phrases:* Finite dynamical system, finite group, group endomorphisms, state space
--- abstract: 'The quality factor ($Q$), mode volume ($V_{\text{eff}}$), and room-temperature lasing threshold of microdisk cavities with embedded quantum dots (QDs) are investigated. Finite element method simulations of standing wave modes within the microdisk reveal that $V_{\text{eff}}$ can be as small as 2$(\lambda/n)^3$ while maintaining radiation-limited $Q$s in excess of 10$^5$. Microdisks of diameter 2 $\mu$m are fabricated in an AlGaAs material containing a single layer of InAs QDs with peak emission at $\lambda=1317$ nm. For devices with $V_{\text{eff}}\sim$2 $(\lambda/n)^3$, $Q$s as high as $1.2{\times}10^5$ are measured passively in the 1.4 $\mu$m band, using an optical fiber taper waveguide. Optical pumping yields laser emission in the $1.3$ $\mu$m band, with room temperature, continuous-wave thresholds as low as $1$ $\mu$W of absorbed pump power. Out-coupling of the laser emission is also shown to be significantly enhanced through the use of optical fiber tapers, with laser differential efficiency as high as $\xi\sim16\%$ and out-coupling efficiency in excess of $28\%$.' address: - 'Department of Applied Physics, California Institute of Technology, Pasadena, CA 91125, USA.' - 'Center for High Technology Materials, University of New Mexico, Albuquerque, NM 87106, USA.' author: - 'Kartik Srinivasan, Matthew Borselli, and Oskar Painter' - Andreas Stintz and Sanjay Krishna title: 'Cavity $Q$, mode volume, and lasing threshold in small diameter AlGaAs microdisks with embedded quantum dots' --- [1]{} P. Michler, A. Kiraz, C. Becher, W. Schoenfeld, P. Petroff, L. Zhang, E. Hu, and A. Imamoglu, “[A Quantum Dot Single-Photon Turnstile Device]{},” Science [**290,**]{} 2282–2285 (2000). C. Santori, M. Pelton, G. Solomon, Y. Dale, and Y. Yamamoto, “[Triggered Single Photons from a Quantum Dot]{},” Phys. Rev. Lett. [**86,**]{} 1502–1505 (2001). E. Moreau, I. Robert, J. Gérard, I. Abram, L. Manin, and V. 
Thierry-Mieg, “[Single-mode solid-state photon source based on isolated quantum dots in pillar microcavities]{},” Appl. Phys. Lett. [**79,**]{} 2865–2867 (2001). J. Reithmaier, G. Sek, A. Loffler, C. Hofmann, S. Kuhn, S. Reitzenstein, L. Keldysh, V. Kulakovskii, T. Reinecke, and A. Forchel, “[Strong coupling in a single quantum dot-semiconductor microcavity system]{},” Nature [**432,**]{} 197–200 (2004). T. Yoshie, A. Scherer, J. Hendrickson, G. Khitrova, H. Gibbs, G. Rupper, C. Ell, O. Shchekin, and D. Deppe, “[Vacuum Rabi splitting with a single quantum dot in a photonic crystal nanocavity]{},” Nature [**432,**]{} 200–203 (2004). E. Peter, P. Senellart, D. Martrou, A. Lemaître, J. Hours, J. Gérard, and J. Bloch, “[Exciton photon strong-coupling regime for a single quantum dot embedded in a microcavity]{},” Phys. Rev. Lett. 95 (2005). H. Cao, J. Xu, W. Xiang, Y. Ma, S.-H. Chang, S. Ho, and G. Solomon, “[Optically pumped InAs quantum dot microdisk lasers]{},” Appl. Phys. Lett. [**76,**]{} 3519–3521 (2000). T. Ide, T. Baba, J. Tatebayashi, S. Iwamoto, T. Nakaoka, and Y. Arakawa, “[Room temperature continuous wave lasing in InAs quantum-dot microdisks with air cladding]{},” Opt. Express [**13,**]{} 1615–1620 (2005). T. Yang, O. Shchekin, J. O’Brien, and D. Deppe, “[Room temperature, continuous-wave lasing near 1300 nm in microdisks with quantum dot active regions]{},” IEE Elec. Lett. 39 (2003). H. J. Kimble, “[Strong Interactions of Single Atoms and Photons in Cavity QED]{},” Physica Scripta [**T76,**]{} 127–137 (1998). J. Cirac, P. Zoller, H. Kimble, and H. Mabuchi, “[Quantum state transfer and entanglement distribution among distant nodes in a quantum network]{},” Phys. Rev. Lett. [**78,**]{} 3221–3224 (1997). E. Knill, R. Laflamme, and G. Milburn, “[A scheme for efficient quantum computation with linear optics]{},” Nature [**409,**]{} 46–52 (2001). A. Kiraz, M. Atature, and A.
Imamoglu, “[Quantum-dot single-photon sources: Prospects for applications in linear optics quantum-information processing]{},” Phys. Rev. A 69 (2004). S. L. McCall, A. F. J. Levi, R. E. Slusher, S. J. Pearton, and R. A. Logan, “[Whispering-gallery mode lasers]{},” Appl. Phys. Lett. [**60,**]{} 289–291 (1992). B. Gayral, J. M. Gérard, A. Lemaître, C. Dupuis, L. Manin, and J. L. Pelouard, “[High-$Q$ wet-etched GaAs microdisks containing InAs quantum boxes]{},” Appl. Phys. Lett. [**75,**]{} 1908–1910 (1999). K. Srinivasan, M. Borselli, T. Johnson, P. Barclay, O. Painter, A. Stintz, and S. Krishna, “[Optical loss and lasing characteristics of high-quality-factor AlGaAs microdisk resonators with embedded quantum dots]{},” Appl. Phys. Lett. [**86,**]{} 151106 (2005). K. Srinivasan, A. Stintz, S. Krishna, and O. Painter, “[Photoluminescence measurements of quantum-dot-containing semiconductor microdisk resonators using optical fiber taper waveguides]{},” Phys. Rev. B [**72,**]{} 205318 (2005). S. M. Spillane, T. J. Kippenberg, K. J. Vahala, K. W. Goh, E. Wilcut, and H. J. Kimble, “[Ultrahigh-$Q$ toroidal microresonators for cavity quantum electrodynamics]{},” Phys. Rev. A [**71,**]{} 013817 (2005). M. Borselli, T. Johnson, and O. Painter, (2005), manuscript in preparation. L. Andreani, G. Panzarini, and J.-M. Gérard, “[Strong-coupling regime for quantum boxes in pillar microcavities: Theory]{},” Phys. Rev. B [**60,**]{} 13276–13279 (1999). D. S. Weiss, V. Sandoghdar, J. Hare, V. Lefèvre-Seguin, J.-M. Raimond, and S. Haroche, “[Splitting of high-$Q$ Mie modes induced by light backscattering in silica microspheres]{},” Opt. Lett. [**20,**]{} 1835–1837 (1995). T. Kippenberg, S. Spillane, and K. Vahala, “[Modal coupling in traveling-wave resonators]{},” Opt. Lett. [**27,**]{} 1669–1671 (2002). M. Borselli, T. Johnson, and O. Painter, “Beyond the Rayleigh scattering limit in high-Q silicon microdisks: theory and experiment,” Opt. Express [**13,**]{} 1515–1530 (2005). M.
Bayer and A. Forchel, “Temperature dependence of the exciton homogeneous linewidth in In$_{0.60}$Ga$_{0.40}$As/GaAs self-assembled quantum dots,” Phys. Rev. B [**65,**]{} 041308(R) (2002). K. Srinivasan and O. Painter, “[Momentum space design of high-Q photonic crystal optical cavities]{},” Opt. Express [**10,**]{} 670–684 (2002). H.-Y. Ryu, M. Notomi, and Y.-H. Lee, “[High-quality-factor and small-mode-volume hexapole modes in photonic-crystal-slab nanocavities]{},” Appl. Phys. Lett. [**83,**]{} 4294–4296 (2003). B.-S. Song, S. Noda, T. Asano, and Y. Akahane, “[Ultra-high-Q photonic double-heterostructure nanocavity]{},” Nature Materials [**4,**]{} 207–210 (2005). E. Kuramochi, M. Notomi, S. Mitsugi, A. Shinya, T. Tanabe, and T. Watanabe, “[Photonic crystal nanocavity formed by local width modulation of line-defect with $Q$ of one million]{},” In [*LEOS 2005, Post-Deadline Session PD 1.1*]{}, (IEEE Lasers and Electro-Optics Society, 2005). Z. Zhang and M. Qiu, “Small-volume waveguide-section high $Q$ microcavities in 2D photonic crystal slabs,” Optics Express [**12,**]{} 3988–3995 (2004). D. Englund, I. Fushman, and J. Vučković, “General recipe for designing photonic crystal cavities,” Optics Express [**13,**]{} 5961–5975 (2005). A. Stintz, G. Liu, H. Li, L. Lester, and K. Malloy, “Low-Threshold Current Density 1.3-$\mu$m InAs Quantum-Dot Lasers with the Dots-in-a-Well (DWELL) structure,” IEEE Photonics Tech. Lett. [**12,**]{} 591–593 (2000). A. Loffler, J. Reithmaier, G. Sek, C. Hofmann, S. Reitzenstein, M. Kamp, and A. Forchel, “[Semiconductor quantum dot microcavity pillars with high-quality factors and enlarged dot dimensions]{},” Appl. Phys. Lett. [**86,**]{} 111105 (2005). H. Pask, H. Summer, and P. Blood, “[Localized Recombination and Gain in Quantum Dots]{},” In [*Tech. Dig. Conf. on Lasers and Electro-Optics, CThH3*]{}, (Optical Society of America, Baltimore, MD, 2005). G. P. Agrawal and N. K. 
Dutta, [*[Semiconductor Lasers]{}*]{} (Van Nostrand Reinhold, New York, NY, 1993). J. Vučković, O. Painter, Y. Xu, A. Yariv, and A. Scherer, “[FDTD Calculation of the Spontaneous Emission Coupling Factor in Optical Microcavities]{},” IEEE J. Quan. Elec. [**35,**]{} 1168–1175 (1999). L. A. Coldren and S. W. Corzine, [*[Diode Lasers and Photonic Integrated Circuits]{}*]{} (John Wiley & Sons, Inc., New York, NY, 1995). T. Sosnowski, T. Norris, H. Jiang, J. Singh, K. Kamath, and P. Bhattacharya, “[Rapid carrier relaxation in In$_{0.40}$Ga$_{0.60}$As/GaAs quantum dots characterized by differential transmission spectroscopy]{},” Phys. Rev. B [ **57,**]{} R9423–R9426 (1998). D. Yarotski, R. Averitt, N. Negre, S. Crooker, A. Taylor, G. Donati, A. Stintz, L. Lester, and K. Malloy, “[Ultrafast carrier-relaxation dynamics in self-assembled InAs/GaAs quantum dots]{},” J. Opt. Soc. Am. B [**19,**]{} 1480–1484 (2002). T. Ide, T. Baba, J. Tatebayashi, S. Iwamoto, T. Nakaoka, and Y. Arakawa, “[Lasing characteristics of InAs quantum-dot microdisk from 3K to room temperature]{},” Appl. Phys. Lett. [**85,**]{} 1326–1328 (2004). Introduction {#sec:intro} ============ Optical microcavities with embedded quantum dots (QDs) have become a very active area of research, with applications to triggered single photon sources[@ref:Michler; @ref:Santori; @ref:Moreau], strongly coupled light-matter systems for quantum networking[@ref:Reithmaier; @ref:Yoshie3; @ref:Peter], and low threshold microcavity lasers[@ref:Cao; @ref:Ide2; @ref:Yang_T2]. For these applications some of the most important microcavity parameters are the quality factor ($Q$), mode volume ($V_{\text{eff}}$), and the efficiency of light collection from the microcavity ($\eta_{0}$).
$Q$ and $V_{\text{eff}}$ describe the decay rate ($\kappa$) and peak electric field strength within the cavity, respectively, which along with the oscillator strength and dephasing rate of the QD exciton determine if the coupled QD-photon system is in the regime of reversible energy exchange (strong coupling) or in a perturbative regime (weak coupling) characterized by a modification of the QD exciton radiative lifetime (the Purcell effect)[@ref:Kimble2]. The collection efficiency $\eta_{0}$ is of great importance for quantum networking[@ref:Cirac] and linear optics quantum computing applications[@ref:Knill; @ref:Kiraz], where near-unity photon pulse collection values are required. Microdisks supporting high-$Q$ whispering-gallery resonances were first studied in the context of semiconductor microlasers in the early 1990s[@ref:McCall2]. Since that time there has been extensive work on incorporating self-assembled InAs QD active regions within semiconductor microdisks for studying quantum interactions of light and matter[@ref:Michler; @ref:Peter; @ref:Cao; @ref:Ide2; @ref:Yang_T2; @ref:Gayral]. With respect to lasers, the relatively small modal gain available from a single layer of QDs has typically resulted in device operation at reduced temperatures[@ref:Cao], or the use of multiple QD layers to achieve room temperature (RT) operation[@ref:Ide2; @ref:Yang_T2]. More recently, improvements in the cavity $Q$ have resulted in RT operation in devices containing a single layer of QDs[@ref:Srinivasan9]. Furthermore, it has been shown that the collection efficiency of emitted power can be significantly increased by using optical fiber tapers to evanescently couple light from the microdisk[@ref:Srinivasan11]. As discussed above, these improvements in $Q$ and $\eta_{0}$ are not only important for lasers, but for future experiments in cavity quantum electrodynamics (cQED). 
In this article, we continue our study of taper-coupled microdisk-QD structures by considering device performance as the disks are scaled down in size. In Section [\[sec:sims\]]{}, we use finite element simulations to examine the behavior of $Q$ and $V_{\text{eff}}$ as a function of disk diameter. We relate these parameters to those used in cQED, and from this, determine that disks of $1.5-2$ $\mu$m in diameter are optimal for use in future experiments with InAs QDs. Section [\[sec:setup\]]{} briefly outlines the methods used to fabricate and test devices consisting of a 2 $\mu$m diameter disk created in an AlGaAs heterostructure with a single layer of self-assembled InAs QDs. In Section [\[sec:results\]]{}, we present experimental measurements of the fabricated devices. Through passive characterization, cavity $Q$s as high as $1.2{\times}10^5$ are demonstrated for devices with a predicted $V_{\text{eff}}\sim2.2(\lambda/n)^3$. In addition, photoluminescence measurements show that the devices operate as lasers with RT, continuous-wave thresholds of $\sim$1 $\mu$W of absorbed pump power. Finally, the optical fiber taper is used to increase the efficiency of out-coupling by nearly two orders of magnitude, so that an overall fiber-coupled laser differential efficiency of $\xi\sim16\%$ is achieved. We conclude by presenting some estimates of the number of QDs contributing to lasing and the spontaneous emission coupling factor ($\beta$) of the devices. Simulations {#sec:sims} =========== To study $Q$ and $V_{\text{eff}}$ of the microdisk cavities, finite-element eigenfrequency simulations[@ref:Spillane3; @ref:Borselli3] are performed using the Comsol FEMLAB commercial software. By assuming azimuthal symmetry of the disk structures, only a two-dimensional cross-section of the disk is simulated, albeit using a full-vectorial model.
The cavity mode effective volume is calculated according to the formula[@ref:Andreani]: $$\label{eq:mode_volume} \begin{split} V_{\text{eff}}=\frac{\int_{V} \epsilon({\mathbf{r}})|{\mathbf{E({\mathbf{r}})}}|^2d^{3}{\mathbf{r}}}{\max[\epsilon({\mathbf{r}})|{\mathbf{E({\mathbf{r}})}}|^2]}\\ \end{split}$$ where $\epsilon({\mathbf{r}})$ is the dielectric constant, $|E({\mathbf{r}})|$ is the electric field strength, and $V$ is a quantization volume encompassing the resonator and with a boundary in the radiation zone of the cavity mode under study. The resonance wavelength $\lambda_{0}$ and radiation limited quality factor $Q_{\text{rad}}$ are determined from the complex eigenvalue (wavenumber) of the resonant cavity mode, $k$, obtained by the finite-element solver, with $\lambda_{0}=2\pi/{\frak{Re}}(k)$ and $Q_{\text{rad}}={\frak{Re}}(k)/(2{\frak{Im}}(k))$. Figure \[fig:SEM\](a) shows a scanning electron microscope (SEM) image of a fabricated microdisk. The devices are formed from a GaAs/AlGaAs waveguide layer that is $255$ nm thick, and due to an emphasis of sidewall smoothness over verticality during fabrication[@ref:Srinivasan9], the etched sidewall angle is approximately $26^{\circ}$ from vertical. These parameters are included in the simulations as shown in Figure \[fig:SEM\](b). Here, we will focus on resonant modes in the 1200 nm wavelength band, corresponding to the low temperature (T=4 K) ground-state exciton transition of the QDs, relevant for future cQED experiments. We confine our attention to the more localized transverse electric (TE) polarized modes of the microdisk, and only consider the first order radial modes. In what follows we use the notation TE$_{p,m}$ to label whispering-gallery-modes (WGMs) with electric field polarization dominantly in the plane of the microdisk, radial order $p$, and azimuthal mode number $m$.
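The extraction of $\lambda_0$ and $Q_{\text{rad}}$ from the complex eigenvalue is a one-line post-processing step. As a sketch (the eigenvalue below is illustrative, not actual simulation output):

```python
from math import pi

def mode_params(k_complex):
    """Resonance wavelength and radiation-limited Q from a complex
    eigen-wavenumber k: lambda0 = 2*pi/Re(k), Q_rad = Re(k)/(2*Im(k))."""
    lam0 = 2 * pi / k_complex.real
    q_rad = k_complex.real / (2 * k_complex.imag)
    return lam0, q_rad

# Illustrative value chosen so that lambda0 = 1.2 um and Q_rad = 3.7e5:
re_k = 2 * pi / 1.2e-6
lam0, q_rad = mode_params(complex(re_k, re_k / (2 * 3.7e5)))
assert abs(lam0 - 1.2e-6) < 1e-12 and abs(q_rad - 3.7e5) < 1.0
```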
The refractive index of the microdisk waveguide is taken as $n=3.36$ in the simulations, corresponding to the average of the refractive indices of the GaAs and AlGaAs layers at $\lambda=1200$ nm. In addition, the modes that we study are *standing wave* modes that are superpositions of the standard clockwise (CW) and counterclockwise (CCW) *traveling wave* modes typically studied in microdisks. These standing wave modes form when surface scattering couples and splits the initially degenerate CW and CCW traveling wave modes[@ref:Weiss; @ref:Kippenberg; @ref:Borselli2]; this process is contingent upon the loss rates within the disk being small enough for coherent coupling between the traveling wave modes to occur. For the AlGaAs microdisk devices we have studied thus far[@ref:Srinivasan9; @ref:Srinivasan11], this has indeed been the case. The effective mode volume for a standing wave mode, as defined in Equation \[eq:mode\_volume\], is roughly half that of a traveling wave mode[@ref:Borselli2]. This is of particular relevance to cQED experiments involving such microdisks as the coherent coupling rate of light and matter scales as $g\sim 1/\sqrt{V_{\text{eff}}}$. A QD positioned at an anti-node of the standing wave will have an exciton-photon coupling rate which is $\sqrt{2}$ times larger than for the traveling wave mode. Figures \[fig:SEM\](b) and \[fig:sim\_results\](a) show the results of the finite element simulations. We see that $V_{\text{eff}}$ for these standing wave modes can be as small as $2(\lambda/n)^3$ while maintaining $Q_{\text{rad}}>10^5$. Indeed, for microdisk average diameters $D>2$ $\mu$m[^1], radiation losses are not expected to be the dominant loss mechanism as $Q_{\text{rad}}$ quickly exceeds $10^7$, and other sources of field decay such as material absorption or surface scattering are likely to dominate.
To translate these results into the standard parameters studied in cQED, we calculate the cavity decay rate $\kappa/2\pi=\omega/(4{\pi}Q)$ (assuming $Q=Q_{\text{rad}}$) and the coherent coupling rate $g$ between the cavity mode and a single QD exciton. In this calculation, a spontaneous emission lifetime $\tau_{sp}=1$ ns is assumed for the QD exciton, and $g={\mathbf{d}}\cdot{\mathbf{E}}/\hbar$ is the vacuum coherent coupling rate between cavity mode and QD exciton, given by[@ref:Kimble2; @ref:Andreani]: $$\label{eq:coupling_rate} \begin{split} g/2\pi=\frac{1}{2\tau_{sp}}\sqrt{\frac{3c\lambda_{0}^2\tau_{sp}}{2\pi{n^3}V_{\text{eff}}}}, \\ \end{split}$$ where $c$ is the speed of light and $n$ is the refractive index at the location of the QD. This formula assumes that the QD is optimally positioned within the cavity field, so that the calculated $g$ is the maximum possible coupling rate. The resulting values for $g$ and $\kappa$ are displayed in Figure \[fig:sim\_results\](b), and show that $g/2\pi$ can exceed $\kappa/2\pi$ by over an order of magnitude for a range of disk diameters. In addition, for all but the smallest-sized microdisks, $\kappa/2\pi<1$ GHz. A decay rate of $1$ GHz is chosen as a benchmark value as it corresponds to a linewidth of a few ${\mu}$eV at these wavelengths, on par with the narrowest self-assembled InAs QD exciton linewidths that have been measured at cryogenic temperatures[@ref:Bayer]. Indeed, because dissipation in a strongly-coupled QD-photon system can either be due to cavity decay or quantum dot dephasing, in Figure \[fig:sim\_results\_2\] we examine the ratio of $g$ to the maximum decay rate in the system assuming a fixed QD dephasing rate $\gamma/2\pi$=1 GHz[^2]. This ratio is roughly representative of the number of coherent exchanges of energy (Rabi oscillations) that can take place between QD and photon. We see that it peaks at a value of about 18 for a disk diameter $D\sim1.5$ $\mu$m. 
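The correspondence between $Q$ and $\kappa/2\pi=\omega/(4{\pi}Q)$ is easy to evaluate numerically; the sketch below (ours, not from the paper) checks the 1 GHz benchmark at $\lambda=1200$ nm and the decay rates implied by the $Q$s measured later in the paper:

```python
from math import pi

C = 299792458.0  # speed of light in vacuum (m/s)

def kappa_over_2pi(lam, Q):
    """Cavity field decay rate kappa/2pi = omega/(4*pi*Q), in Hz."""
    omega = 2 * pi * C / lam
    return omega / (4 * pi * Q)

# At lambda = 1200 nm, kappa/2pi = 1 GHz corresponds to Q ~ 1.25e5:
assert abs(kappa_over_2pi(1.2e-6, 1.25e5) - 1e9) < 2e7

# The measured Qs of ~0.9-1.3e5 near lambda = 1440 nm give kappa/2pi in
# the quoted ~0.8-1.3 GHz range:
assert 0.8e9 < kappa_over_2pi(1.44e-6, 1.2e5) < 1.3e9
```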
For diameters smaller than this, loss is dominated by cavity decay due to radiation, while for larger diameters, the dominant loss mechanism is due to dephasing of the QD. For other types of atomic-like media besides the self-assembled InAs QDs considered here, one need not assume a limit of $\gamma/2\pi=1$ GHz, and we note that due to the exponential dependence of $Q_{\text{rad}}$ and approximately linear dependence of $V_{\text{eff}}$ on microdisk diameter, $Q_{\text{rad}}/V_{\text{eff}}$ rapidly rises above $10^7$ for microdisks of diameter only $D=2.5$ $\mu$m. These values of $Q_{\text{rad}}$ and $V_{\text{eff}}$ are comparable to those found in recent high-$Q$ photonic crystal microcavity designs[@ref:Srinivasan1; @ref:Ryu5; @ref:Song; @ref:Kuramochi; @ref:Zhang_Z]. In fact, a similar scaling for high-$Q$ planar photonic crystal microcavities, in which one may trade off a linear increase in $V_{\text{eff}}$ for an exponential increase in $Q$, has recently been described by Englund et al. in Ref. [@ref:Englund]. For now, however, we take the ratio $g/\text{max}(\gamma,\kappa)$ with $\gamma/2\pi=1$ GHz as our metric, and as such focus on $1.5$-$2$ $\mu$m diameter microdisks. Growth, Fabrication, and Test Set-up {#sec:setup} ==================================== The samples used were grown by molecular beam epitaxy, and consist of a single layer of InAs QDs embedded in an In$_{0.15}$Ga$_{0.85}$As quantum well, which is in turn sandwiched between layers of Al$_{0.30}$Ga$_{0.70}$As and GaAs to form a 255 nm thick waveguide layer. This dot-in-a-well (DWELL) structure is grown on top of a 1.5 $\mu$m thick Al$_{0.70}$Ga$_{0.30}$As buffer layer that is later undercut to form the disk pedestal. Growth parameters were adjusted[@ref:Stintz] to put the material’s RT ground state emission peak at $\lambda=1317$ nm.
Fabrication of the microdisk cavities begins with the deposition of a $180$ nm thick Si$_{x}$N$_{y}$ etch mask layer through plasma-enhanced chemical vapor deposition. This is followed by electron-beam lithography to define a linear array of disks, a post-development reflow of the resist, and an SF$_{6}$/C$_{4}$F$_{8}$ inductively coupled plasma reactive ion etch (ICP-RIE) of the nitride mask. The DWELL material is then etched using Ar/Cl$_{2}$ ICP-RIE, and the array of disks is isolated onto a mesa stripe through standard photolithography and an ICP-RIE etch. Finally, the disks are undercut using a dilute solution of hydrofluoric acid (20:1 H$_2$O:HF); an image of a fabricated device is shown in Figure \[fig:SEM\]. During the fabrication, two goals were given special consideration. As discussed within the context of silicon microdisks in Ref. [@ref:Borselli2], elimination of radial variations in the disk geometry is important for reducing scattering loss; this is accomplished through optimization of the electron-beam lithography, and in particular, use of the post-development resist reflow technique of Ref. [@ref:Borselli2]. In addition, a premium was placed on sidewall smoothness, even at the expense of sidewall verticality. This required an optimization of the ICP-RIE processes used to etch both the Si$_{x}$N$_{y}$ mask and GaAs/AlGaAs waveguide layer. In particular, a low bias voltage, C$_{4}$F$_{8}$-rich plasma is used to etch the Si$_{x}$N$_{y}$, and a low Cl$_{2}$ percentage is used in the Ar/Cl$_{2}$ etch of the waveguide to eliminate any sidewall pitting due to excessive chemical etching. The microdisks are studied in a photoluminescence (PL) measurement setup that provides normal incidence pumping and free-space collection from the samples.
The pump laser is an $830$ nm laser diode that is operated continuous-wave, and the pump beam is shaped into a Gaussian-like profile by sending it through a section of single mode optical fiber, after which it is then focused onto the sample with an ultra-long working distance objective lens (NA $= 0.4$). The free-space photoluminescence is first collected at normal incidence from the sample surface using the same objective lens for pump focusing, and is then coupled into a multi-mode fiber (MMF) using an objective lens with NA $= 0.14$. The luminescence collected by this MMF is wavelength resolved by a Hewlett Packard 70452B optical spectrum analyzer (OSA). The PL setup has been modified[@ref:Srinivasan11] to allow for devices to be probed by optical fiber tapers. The fiber taper is formed by heating (with a hydrogen torch) and adiabatically stretching a single mode fiber until its minimum diameter is approximately $1$ $\mu$m. It is mounted onto an acrylic mount that is attached to a motorized Z-axis stage (50 nm encoded resolution), so that the fiber taper can be precisely aligned to the microdisk, which is in turn mounted on a motorized XY stage. When doing passive measurements of cavity $Q$, the taper input is connected to a scanning tunable laser (5 MHz linewidth) with a tuning range between $1420$-$1480$ nm, and the taper output is connected to a photodetector to monitor the transmitted power. Alternatively, when collecting emission from the microdisk through the fiber taper, the taper input is left unconnected and the output is sent into the OSA. Experimental Results {#sec:results} ==================== We begin our measurements by using the fiber taper to passively probe the $Q$ of the microdisks. Based on the simulations presented in Section [\[sec:sims\]]{}, we have focused on $2$ $\mu$m diameter microdisks.
Due to the small diameter of these microdisks, the finite-element-calculated free-spectral range of resonant modes is relatively large, with resonances occurring at $1265$, $1346$, and $1438$ nm for the TE$_{p=1}$ WGMs with azimuthal mode numbers $m=11$,$10$, and $9$, respectively. The simulations presented in Section [\[sec:sims\]]{} were done for the TE$_{1,11}$ mode in the $\lambda=1200$ nm band due to the applicability of that wavelength region for future low temperature cQED experiments. However, for the current room-temperature measurements, the absorption due to the QD layer at those wavelengths is significant, so we probe the devices within the $\lambda=1400$ nm band ($\sim$100 nm red-detuned from the peak ground-state manifold QD emission). At these longer wavelengths the radiation-limited $Q_{\text{rad}}$ for a given disk diameter will be smaller than its value in the shorter $\lambda=1200$ nm band. Table \[table:FEMLAB\_results\] summarizes the properties of the TE$_{p=1}$ WGMs within the $1200$-$1400$ nm wavelength band for a $D=2$ $\mu$m microdisk with shape as shown in Fig. \[fig:SEM\].

Mode label & $\lambda_{0}$ & $Q_{\text{rad}}$ & $V_{\text{eff}}$ & application\
TE$_{1,9}$ & 1438 nm & $3.7{\times}10^5$ & 2.2 $(\lambda/n)^3$ & passive RT testing\
TE$_{1,10}$ & 1346 nm & $1.9{\times}10^6$ & 2.5 $(\lambda/n)^3$ & RT lasers\
TE$_{1,11}$ & 1265 nm & $9.8{\times}10^6$ & 2.8 $(\lambda/n)^3$ & low-T cQED\

Figure \[fig:Q\_plus\_fs\_coll\](a) shows a wavelength scan of the transmitted signal when a fiber taper is positioned a few hundred nanometers away from the disk edge. The doublet resonance appearing at $\lambda\sim1440$ nm in the spectrum is the signature of the standing wave modes described earlier[@ref:Weiss]. The measured linewidths correspond to $Q$ factors of $1.2{\times}10^5$, and in general $Q$s of $0.9$-$1.3{\times}10^5$ have been measured for these $2$ $\mu$m diameter microdisks.
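The quoted $Q$s translate into resonance linewidths via the standard relation $Q=\lambda_0/\delta\lambda$ (a relation we supply here for illustration; the text quotes $Q$ directly):

```python
def linewidth_pm(lam_nm, Q):
    """Resonance linewidth delta_lambda = lambda0 / Q, in picometers."""
    return lam_nm / Q * 1e3

# A Q of 1.2e5 at 1440 nm corresponds to a linewidth of ~12 pm:
assert abs(linewidth_pm(1440, 1.2e5) - 12.0) < 0.1
```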
The $Q$s of these modes are approaching the radiation-limited value of $3.7{\times}10^5$, and are some of the highest measured values for near-IR wavelength-scale microcavities in AlGaAs[@ref:Yoshie3; @ref:Gayral; @ref:Srinivasan9; @ref:Loffler]. The corresponding cavity decay rates are $\kappa/2\pi\sim0.8-1.3$ GHz, over an order of magnitude smaller than the predicted coupling rate $g$ for an optimally placed QD. In addition, these $Q$s, if replicated within the QD emission band at $\lambda=1300$ nm, are high enough to ensure that room-temperature lasing should be achievable from the single layer of QDs in these devices[@ref:Stintz]. From calculations of the intrinsic radiation loss, the shorter $1300$ nm wavelength modes should in fact have a significantly increased $Q_{\text{rad}}$ of $1.9{\times}10^6$, although surface scattering may also slightly increase due to its approximate cubic dependence on wavelength[@ref:Borselli2]. The emission properties of the QD-containing microdisks are tested at room temperature by continuous-wave optical pumping through a high-NA objective lens at normal incidence and, initially, collecting the normal incidence emitted light through the same lens. A light-in versus light-out (L-L) curve for one of the $D\sim2$ $\mu$m microdisks with a resonant emission peak at $\lambda\sim1345$ nm is shown in Figure \[fig:Q\_plus\_fs\_coll\](b), and displays a lasing threshold kink at approximately $1.0$ $\mu$W of absorbed pump power. The laser mode wavelength corresponds well with the TE$_{p=1,m=10}$ mode from finite-element simulations (see Table \[table:FEMLAB\_results\]). The absorbed pump power is estimated to be $11\%$ of the incident pump power on the microdisk, and was determined assuming an absorption coefficient of $10^4$ cm$^{-1}$ for the GaAs layers and quantum well layer. 
This threshold level is approximately two orders of magnitude smaller than those in recent demonstrations of RT, continuous-wave microdisk QD lasers[@ref:Ide2; @ref:Yang_T2], although the active regions in those devices contain five stacked layers of QDs while the devices presented here contain only a single layer of QDs. The low lasing threshold of the device presented in Figure \[fig:Q\_plus\_fs\_coll\](b) was consistently reproduced across the set of devices on this sample (approximately $20$ devices). In Figure \[fig:fs\_coll\_vs\_fiber\_coll\](a) we show another L-L curve, this time for a device that has a TE$_{p=1,m=10}$ WGM emission peak at $\lambda=1330$ nm and has a threshold absorbed pump power of $1.1$ $\mu$W. As demonstrated in Ref. [@ref:Srinivasan11], the same fiber taper used to measure the cavity $Q$ can efficiently out-couple light from the lasing mode. We do this by maintaining the free-space pumping used above while contacting a fiber taper to the side of the microdisk as shown in the inset of Figure \[fig:fs\_coll\_vs\_fiber\_coll\](b). From the corresponding L-L curve (Fig. \[fig:fs\_coll\_vs\_fiber\_coll\](b)) we see that the laser threshold under fiber taper loading has increased from $1.1$ $\mu$W to $1.6$ $\mu$W, but in addition the differential laser efficiency $\xi$ is now $4\%$ compared to $0.1\%$ when employing free-space collection (Fig. \[fig:fs\_coll\_vs\_fiber\_coll\](a)-(b)). Furthermore, because the microdisk modes are standing waves they radiate into both the forward and backward channels of the fiber. With collection from both the forward and backward channels the differential efficiency was measured to be twice that of the single forward channel. Collecting from both channels and adjusting for all fiber losses in the system (roughly $50\%$ due to fiber splices and taper loss), the total differential laser efficiency with fiber taper collection is $16\%$. 
Due to the difference in photon energy of the pump laser and microdisk emission, this laser differential efficiency corresponds to a conversion efficiency of $28\%$ from pump photons to fiber-collected microdisk laser photons. $28\%$ is thus a *lower* bound on the fiber-taper collection efficiency and/or quantum efficiency of the QD active region. In addition to the improved laser differential efficiency of the TE$_{p=1,m=10}$ laser mode when using the fiber taper to out-couple the laser light, we also see in the below-threshold spectrum of Figure \[fig:fs\_coll\_vs\_fiber\_coll\](b) that two additional resonances appear at $\lambda=1310$ nm and $\lambda=1306$ nm. The long wavelength mode is identified as TM$_{p=1,m=8}$ and the short wavelength mode as TE$_{p=2,m=7}$ from finite-element simulations. These modes are not discernible in the free-space collected spectrum due to their low radiation-limited $Q$ factors ($800$ and $5000$ for the TE$_{2,7}$ and TM$_{1,8}$, respectively), but show up in the taper coupled spectrum due to their alignment with the QD ground-state exciton emission peak and the heightened sensitivity of the taper coupling method. The single-mode lasing and limited number of WGM resonances ($6$ when including the degeneracy of the WGMs) in the emission spectrum of these $D=2$ $\mu$m microdisks are a result of the large $80$-$100$ nm free-spectral-range of modes in the $1300$-$1500$ nm wavelength band. As a result, one would expect the spontaneous emission factor ($\beta$) of these microdisk lasers to be relatively high. A log-log plot of the fiber taper coupled laser emission of Figure \[fig:fs\_coll\_vs\_fiber\_coll\](b) is shown in Figure \[fig:log\_log\](a) along with a rate-equation model fit to the data. Of particular note is the well defined sub-threshold linear slope of the log-log plot. 
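The power-to-photon conversion quoted above is a one-line rescaling: each $830$ nm pump photon carries more energy than a $\sim1330$ nm emitted photon by the wavelength ratio, so the photon-number conversion efficiency is the power differential efficiency scaled by $\lambda_{\text{laser}}/\lambda_{\text{pump}}$. A sketch (the small difference from the quoted $28\%$ presumably reflects rounding of the intermediate efficiencies):

```python
def photon_conversion_efficiency(xi_power, lam_pump, lam_laser):
    """Convert a differential power efficiency into a photon-number
    conversion efficiency: a pump photon carries more energy than an
    emitted photon by the factor lam_laser/lam_pump."""
    return xi_power * lam_laser / lam_pump

xi_total = 0.16  # fiber-collected differential efficiency (both channels, loss-corrected)
eta = photon_conversion_efficiency(xi_total, lam_pump=830e-9, lam_laser=1330e-9)
print(f"photon conversion efficiency ~ {eta:.0%}")
```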
In this case the sensitivity of the fiber taper collection allows for the sub-threshold slope to be accurately estimated at $m=1.67$, corresponding to a near quadratic dependence of spontaneous emission intensity on pump power (Figure \[fig:log\_log\](b), inset) and indicating that there is likely significant non-radiative recombination. Assuming that radiative recombination occurs as a bi-particle process[^3], the larger than unity power law dependence of sub-threshold emission on pump power is indicative of single-particle non-radiative recombination processes such as surface recombination[@ref:Agrawal]. Given the close proximity of the WGM laser mode to the periphery of the microdisk and the above-band pumping, the presence of significant surface recombination is not surprising. Unfortunately, due to this large non-radiative component one can only provide a *weak lower bound* $\beta^{\prime}$ for the $\beta$-factor directly from the L-L curve. From Figure \[fig:log\_log\] we estimate $\beta\ge\beta^{\prime}\sim3\%$. A rate-equation model incorporating bi-particle spontaneous emission proportional to $N^2$ and surface recombination with a $N^{1.22}$ carrier dependence (the ratio of the power law dependences is set equal to the measured sub-threshold slope of $m=1.67$) is fit to the data and shown as a solid curve in Figure \[fig:log\_log\]. In this model the measured fiber taper collection efficiency was used, along with the previously measured and estimated QD density, maximum gain, and quantum efficiency from stripe lasers[@ref:Stintz]. An estimate for the actual radiative $\beta$-factor of $15.5\%$ was used, corresponding closely with the partitioning of spontaneous emission amongst the $6$ localized and high-$Q$ WGM resonances within the QD ground-state manifold emission band[^4]. The reference spontaneous emission lifetime of the ground-state QD exciton in bulk was taken as $\tau_{sp}=1$ ns. 
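The sub-threshold exponent $m$ discussed above is simply the slope of a straight-line fit to the L-L data in log-log coordinates. A minimal sketch on synthetic data with a known power law (the pump values are hypothetical, not the measured ones):

```python
import math

def loglog_slope(pump, emission):
    """Least-squares slope of log(emission) vs log(pump): the exponent m
    in emission ~ pump**m for sub-threshold data."""
    lx = [math.log(p) for p in pump]
    ly = [math.log(e) for e in emission]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    num = sum((x - mx) * (y - my) for x, y in zip(lx, ly))
    den = sum((x - mx) ** 2 for x in lx)
    return num / den

# synthetic sub-threshold data obeying a known power law m = 1.67
pump = [0.1 * (i + 1) for i in range(10)]   # uW, hypothetical values
emission = [p ** 1.67 for p in pump]
print(f"fitted slope m = {loglog_slope(pump, emission):.2f}")  # -> 1.67
```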
The data were fit by varying *only* the effective surface recombination velocity. As seen in Figure \[fig:log\_log\], the fit is quite good over the entire sub-threshold and threshold regions of the laser data. The inferred surface recombination velocity from the fit is $v_{s}\sim75$ cm/s, extremely slow for the AlGaAs material system[@ref:Coldren] but perhaps indicative of the fast capture rate of carriers, and consequent localization, into QDs[@ref:Sosnowski; @ref:Yarotski]. Due to the large perimeter-to-area ratio in these small $D=2$ $\mu$m microdisks, even with this low velocity the model predicts that the laser threshold pump power is dominated by surface recombination, with an effective lifetime $\tau_s\sim300$ ps. Such a surface recombination lifetime has also been estimated by Baba et al. in their recent work on QD-microdisk lasers[@ref:Ide]. The number of QDs contributing to lasing in these small microdisks can also be estimated. From the finite-element simulations the area of the standing wave WGM lasing mode in the plane of the QD layer is approximately $1$ $\mu$m$^2$, and the predicted QD density for this sample is $300$ $\mu$m$^{-2}$, so that $\sim300$ QDs are spatially aligned with the cavity mode. Assuming a RT homogeneous linewidth on the order of a few meV[@ref:Bayer], compared to a measured inhomogeneous Gaussian broadening of $35$ meV, and considering the location of the lasing mode in the tail of the Gaussian distribution, we estimate $<10\%$ of these dots are spectrally aligned with the cavity mode. By this estimate, on the order of $25$ QDs contribute to lasing. Conclusions =========== We have demonstrated fiber-coupled $2$ $\mu$m diameter quantum-dot-containing microdisks that have a quality factor $Q$ in excess of 10$^5$ for a predicted mode volume $V_{\text{eff}}$ as small as $\sim2.2(\lambda/n)^3$. 
Such devices are predicted to be suitable for future single quantum dot, single photon cavity QED experiments, where these $Q$ and $V_{\text{eff}}$ values can enable strong coupling at GHz-scale speeds. An initial application of this work is in continuous wave, optically pumped microcavity lasers. Here, the high $Q$ ensures that lasing can be achieved with the modest gain provided by a single layer of quantum dots, and combined with the ultra-small $V_{\text{eff}}$, results in thresholds as low as 1.0 $\mu$W of absorbed power. In addition, the fiber taper coupling is shown to be an efficient method to collect the laser emission, with a measured $28\%$ lower bound on out-coupling efficiency. This work was partly supported by the Charles Lee Powell Foundation. The authors thank Christopher P. Michael, Paul E. Barclay, and Thomas J. Johnson for helpful discussions. KS thanks the Hertz Foundation and MB thanks the Moore Foundation, NPSC, and HRL Laboratories for their graduate fellowship support. [^1]: The average diameter is taken at the center of the slab, or equivalently, is the average of the top and bottom diameters. [^2]: Note that $\gamma \equiv \gamma_{\perp}$ is in general greater than half the total excitonic decay rate ($\gamma_{||}/2$) or radiative decay rate ($1/2\tau_{sp}$) for QD excitons, due to near-elastic scattering or dephasing events with, for example, acoustic phonons of the lattice. [^3]: As has been discussed recently in Ref. [@ref:Pask] this may not be an accurate model for QD state-filling, but for our simple analysis here it will suffice. [^4]: This estimate was based upon considering Purcell enhancement at RT for QDs spatially and spectrally aligned with the WGMs ($F_{P}\sim6$), and suppression of spontaneous emission for QDs spatially and spectrally misaligned from the WGMs ($F_{P}\sim0.4$). 
This simple estimate is consistent with accurate finite-difference time-domain calculations of similar sized microdisks[@ref:Vuckovic1].
--- author: - | , and Weonjong Lee\ Lattice Gauge Theory Research Center, CTP, and FPRD,\ Department of Physics and Astronomy,\ Seoul National University, Seoul, 151-747, South Korea\ E-mail: - | Sunghoon Kim, and Seok-Ho Myung\ Sejong Science High School, Seoul, 152-881, South Korea\ title: Performance of SSE and AVX Instruction Sets ---
--- abstract: 'The pion parton distribution function, $u^{\pi}(x)$, is reexamined by a universal reparametrization function, $w_\tau(x)$, in the light-front holographic QCD (LFHQCD) approach. We show that, owing to the flexibility of $w_\tau(x)$, the large-$x$ behavior $u^{\pi}(x)\sim (1-x)^{2}$ can be contained within the LFHQCD formalism. From this fact, augmented by perturbative QCD and recent lattice QCD results, we state that such behavior cannot be excluded.' author: - Lei Chang - Khépani Raya - Xiaobin Wang bibliography: - 'bibliography.bib' title: 'Pion Parton Distribution Function in Light-Front Holographic QCD' --- *Motivation*— During the rise of parton models, around the 1970s, a connection between the proton electromagnetic form factors (obtained via exclusive processes) and its structure functions (inferred from deep inelastic scattering) was realized by Drell-Yan [@Drell:1969km] and West [@West:1970av]. Their findings yielded the so-called Drell-Yan-West relation (DYW), which entails that, when the momentum transfer ($-t=Q^2$) becomes asymptotically large, the proton electromagnetic form factor (EFF) falls as $$\label{eq:EFFDYW} F_{1 }^{p}(t)\sim \frac{1}{(-t)^{\tau - 1}}\;,$$ while the corresponding parton distribution function (PDF) behaves, at large-$x$ (*i.e.*, $x\to1$), as $$\label{eq:PDFDYW} u^{p}(x)\sim (1-x)^{2\tau - 3}\;.$$ Here, $x$ is the longitudinal momentum fraction carried by the parton - or Bjorken-$x$ [@Bjorken:1968dy] - and $\tau$, called *twist*, denotes the number of constituents of a given Fock state of the hadron. In a subsequent work by Ezawa [@Ezawa:1974wm], it was shown that the pion violates the DYW relation. This can be attributed to the different number of constituents and spin. It is seen that, while the EFF exhibits the same asymptotic profile for both hadrons, Eq. 
, the pion parton distribution function adopts the large-$x$ form $$\label{eq:pionPDF1} u^\pi(x)\sim (1-x)^{2\tau - 2}\;.$$ The leading-twist ($\tau=3$ for proton, $\tau=2$ for pion) entails the well-known $1/(-t)^2$ and $1/(-t)$ falloffs of the proton and pion EFFs [@Lepage:1980fj], respectively, and the $x\to1$ behavior of the PDFs is driven by $$\begin{aligned} \label{eq:largexgood0} u^p(x)&\sim&(1-x)^3\;,\\ \label{eq:largexgood} u^\pi(x)&\sim&(1-x)^2\;. \end{aligned}$$ Those patterns are further supported by perturbative Quantum Chromodynamics (pQCD) [@Farrar:1975yb; @Berger:1979du; @Lepage:1980fj]. In fact, assuming a theory in which the quarks interact via the exchange of a vector-boson, asymptotically damped as $(1/k^2)^\beta$, Eq.  generalizes as [@Holt:2010vj]: $$u^\pi(x)\sim (1-x)^{2\beta}\;.$$ Hence, the large-$x$ behavior of the valence-quark PDF is a direct measure of the momentum-dependence of the underlying interaction [@Farrar:1975yb; @Berger:1979du; @Holt:2010vj; @Hecht:2000xa]. In the novel approach of light-front holographic QCD (LFHQCD) [@Brodsky:2014yha; @Zou:2018eam], it is suggested that the DYW relation is preserved for both the proton and pion [@deTeramond:2018ecg]. Thereby, it predicts a valence pion PDF that, from the leading-twist-2 term, falls as $$\label{eq:largexbad} u^\pi(x)\sim (1-x)^1\;,$$ feeding the controversy provoked by the E615-Experiment leading order (LO) analysis [@Conway:1989fs], which favors a large-$x$ exponent of “1”, in apparent contradiction with the parton models and pQCD. Many theoretical and phenomenological approaches have been participants in this debate, *e.g.* [@Holt:2010vj; @Hecht:2000xa; @Wijesooriya:2005ir; @Aicher:2010cb; @Chang:2014lva; @Chen:2016sno; @deTeramond:2018ecg; @Ding:2019lwe; @Ding:2019qlr; @Sufian:2020vzb; @Joo:2019bzr; @Sufian:2019bol; @Oehm:2018jvm; @Brommel:2006zz; @Detmold:2003tm]. 
Playing a key role in this controversy, the analysis of Aicher *et al.* [@Aicher:2010cb] shows that, if a next-to-leading order (NLO) treatment of the data is performed and soft-gluon resummation is considered, it is possible to recover the pQCD prediction. On different grounds, the $x\to1$ profile of Eq.  is also favored by a recent lattice QCD (lQCD) result [@Sufian:2020vzb], in which a novel “Cross Section” (CS) technique [@Sufian:2019bol; @Sufian:2020vzb] is employed to obtain the pointwise shape of the pion PDF. Furthermore, it is important to unravel the proton and pion properties together. Consider, for example, the origin and difference of their masses: if we accept QCD as the fundamental, underlying theory of the strong interactions (and we do), it is necessary to simultaneously explain the *masslessness* of the pion and the much larger size of the proton mass [@Horn:2016rip; @Roberts:2016vyn; @Roberts:2019ngp]. Similarly, it is vital to obtain a clear picture of the proton and pion parton distributions in the same approach. QCD predicts the profiles of Eqs. -; thus we need to explain how those behaviors can (or cannot) take place. In this letter, we revisit Ref. [@deTeramond:2018ecg]. There, the authors present an appealing way to parametrize the PDFs and generalized parton distributions (GPDs), from an integral representation of the EFFs, but they claim that the falloff of the pion PDF at $x\to1$ is an unresolved issue. Our aim is to show that the large-$x$ behavior of Eq.  can be perfectly accommodated within the same LFHQCD formalism, while also maintaining the correct counting rules for the proton. *Counting rules in LFHQCD*— Following Ref. 
[@deTeramond:2018ecg], the form factor is expressed in an integral representation as $$\begin{aligned} \label{eq:EFFdef} F_{\tau}(t)&=&\frac{1}{N_{\tau}}\int_{0}^{1} dy (1-y)^{\tau-2} y^{-t/4\lambda-\frac{1}{2}}\\ &=& \frac{1}{N_{\tau}}B(\tau-1,\frac{1}{2}-\frac{t}{4\lambda})\;,\end{aligned}$$ where $N_\tau=\sqrt{\pi}\;\Gamma(\tau-1)/\Gamma(\tau-\tfrac{1}{2})$, which enforces the normalization $F_\tau(0)=1$, and $\sqrt{\lambda}=0.548$ GeV; $B(u,v)$ corresponds to the Euler Beta Function. The universal scale, $\lambda$, is fixed by the $\rho$ meson mass [@Brodsky:2014yha; @Brodsky:2016yod]. Under the change of variable $y=w_{\tau}(x)$ one can write, more generally: $$\label{eq:EFFdef2} F_{\tau}(t)=\frac{1}{N_{\tau}}\int_{0}^{1} dx (1-w_{\tau}(x))^{\tau-2} w_{\tau}(x)^{-t/4\lambda-\frac{1}{2}}\frac{\partial w_{\tau}(x)}{\partial x}\, ,$$ where the reparametrization function, $w_\tau(x)$, is constrained by the conditions: $$\begin{aligned} \label{eq:const} w_{\tau}(0)=0;\,w_{\tau}(1)=1;\,\frac{\partial w_{\tau}(x)}{\partial x}\ge 0\, .\end{aligned}$$ Notice that we have introduced a $\tau$-dependence in $w_\tau(x)$. This is a key difference, with respect to [@deTeramond:2018ecg], that we will exploit later. At zero skewness, the valence-quark GPD is conveniently expressed as $$H(x,t)=q_{\tau}(x)\text{e}^{t f_{\tau}(x)}$$ where we identify the PDF and profile function, $q_\tau(x)$ and $f_{\tau}(x)$ respectively, as $$\begin{aligned} q_{\tau}(x)&=&\frac{1}{N_{\tau}}(1-w_{\tau}(x))^{\tau-2} w_{\tau}(x)^{-\frac{1}{2}}\frac{\partial w_{\tau}(x)}{\partial x}\, ,\\ f_{\tau}(x)&=&\frac{1}{4\lambda}\text{log}\left(\frac{1}{w_{\tau}(x)}\right)\,.\end{aligned}$$ Then, a simple form for $w_{\tau}(x)$ is suggested: $$\label{eq:wexpl} w_{\tau}(x)=x^{(1-x)^{g(\tau)}}\text{e}^{-a_\tau(1-x)^{g(\tau)}}\;,$$ with $g(\tau),a_\tau > 0$. The adopted profile of $w_\tau(x)$ preserves the desired Regge behavior at small-$x$ [@Zou:2018eam; @deTeramond:2018ecg], while also satisfying the constraints of Eqs. . 
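The Beta-function representation above can be checked numerically; note that $F_\tau(0)=1$ requires $N_\tau=\sqrt{\pi}\,\Gamma(\tau-1)/\Gamma(\tau-\tfrac{1}{2})$, which is what the sketch below uses, together with $\sqrt{\lambda}=0.548$ GeV from the text:

```python
import math

def beta(u, v):
    """Euler Beta function via log-gamma (avoids overflow at large arguments)."""
    return math.exp(math.lgamma(u) + math.lgamma(v) - math.lgamma(u + v))

LAM = 0.548 ** 2  # lambda, with sqrt(lambda) = 0.548 GeV

def F(tau, t):
    """F_tau(t) = B(tau-1, 1/2 - t/(4 lambda)) / N_tau, normalized so F_tau(0) = 1."""
    n_tau = math.sqrt(math.pi) * math.gamma(tau - 1) / math.gamma(tau - 0.5)
    return beta(tau - 1, 0.5 - t / (4 * LAM)) / n_tau

for tau in (2, 3, 4):
    assert abs(F(tau, 0.0) - 1.0) < 1e-12          # charge normalization
    # F ~ (1/-t)**(tau-1) at large -t, so doubling -t rescales F by 2**(tau-1)
    ratio = F(tau, -1e4) / F(tau, -2e4)
    print(f"tau={tau}: large-(-t) ratio ~ {ratio:.3f} (expect {2 ** (tau - 1)})")
```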
Thus, owing to the reparametrization invariance of the Euler Beta Function, $F_\tau(t)$ exhibits the large-$t$ falloff: $$\label{eq:EFFlarget} F_{\tau}(t)\sim \left(\frac{1}{-t}\right)^{\tau-1}\;,$$ which implies that the correct asymptotic behavior of the form factor [@Lepage:1980fj; @Farrar:1975yb; @Ezawa:1974wm] is faithfully reproduced. On the other hand, the $x\to1$ leading power of $q_{\tau}(x)$ will exhibit the $\tau$-dependence as follows: $$\label{eq:PDFdef} q_{\tau}(x)\sim (1-x)^{h(\tau)}\;,$$ with $h(\tau)=(\tau-1)g(\tau)-1$. Due to the arbitrariness in the choice of $g(\tau)$, LFHQCD cannot predict its precise form, and hence the exact counting rules. However, it is this flexibility that allows us to recover the corresponding counting rules for both pion and proton, Eqs. -. Given the simplicity of Eq. , we propose the following for the PDFs: $$\begin{aligned} \label{eq:Rules} \text{Rule-I}&:& (1-x)^{2\tau-3} \,,\; \text{with} \, g(\tau)=2\;. \\ \text{Rule-II}&:& (1-x)^{2\tau-2} \,,\; \text{with} \, g(\tau)=2+\frac{1}{\tau-1} \;.\end{aligned}$$ Thus, it follows from  that the spin$-\frac{1}{2}$ relation  can be satisfied if Rule-I is chosen, while the spin$-0$ counterpart  holds if Rule-II is selected instead. Focusing on the pion, we will perform a numerical test to contrast the above rules against the phenomenology and analyze under which circumstances these rules are feasible. *Pion valence-quark PDF*— Consider the twist-4 pion valence-quark PDF as $$\label{eq:PDFtwist4} u^{\pi}(x;\zeta)=(1-\gamma)q_{\tau=2}(x;\zeta)+\gamma q_{\tau=4}(x;\zeta)\;,$$ with normalization $\int_0^1 dx \;u_\pi(x;\zeta)=1$ and $\gamma=0.125$. The latter, twist-$4$ component, represents the meson cloud contribution determined in  [@Brodsky:2014yha]. The PDF is defined at an intrinsic scale $\zeta=\zeta_1$, which is set as $\zeta_1=1.1\pm 0.2$ GeV to keep in line with previous works [@deTeramond:2018ecg; @Deur:2016opc]. 
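The claimed endpoint behavior can be verified directly. The sketch below (with an illustrative value of $a_\tau$, which the formalism leaves free) checks the constraints on $w_\tau$ and recovers $h(\tau)=(\tau-1)g(\tau)-1$ as the log-log slope of $q_\tau$ near $x\to1$, here for the pion under Rule-II ($\tau=2$, $g=3$, so $h=2$):

```python
import math

def w(x, g, a):
    """Reparametrization function w_tau(x) = x**((1-x)**g) * exp(-a*(1-x)**g)."""
    if x == 0.0:
        return 0.0
    return x ** ((1 - x) ** g) * math.exp(-a * (1 - x) ** g)

def q_unnorm(x, tau, g, a, eps=1e-6):
    """Unnormalized PDF (1 - w)**(tau-2) * w**(-1/2) * dw/dx (central difference)."""
    dw = (w(x + eps, g, a) - w(x - eps, g, a)) / (2 * eps)
    return (1 - w(x, g, a)) ** (tau - 2) * w(x, g, a) ** -0.5 * dw

tau, a = 2, 0.5                 # leading-twist pion; a_tau here is illustrative
g = 2 + 1 / (tau - 1)           # Rule-II: g(tau) = 2 + 1/(tau-1), so g = 3

# endpoint constraints w(0) = 0 and w(1) = 1
assert w(0.0, g, a) == 0.0 and abs(w(1.0, g, a) - 1.0) < 1e-12

# log-log slope of q_tau near x -> 1, to compare with h(tau) = (tau-1)*g(tau) - 1 = 2
x1, x2 = 1 - 1e-2, 1 - 1e-3
slope = (math.log(q_unnorm(x2, tau, g, a)) - math.log(q_unnorm(x1, tau, g, a))) / (
    math.log(1 - x2) - math.log(1 - x1))
print(f"endpoint exponent: numerical ~ {slope:.2f}, predicted h(tau) = {(tau - 1) * g - 1:.0f}")
```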
Then, continuum analyses [@Ding:2019lwe; @Ding:2019qlr] are employed as benchmarks to estimate the value $$\begin{aligned} \langle x \rangle_{\zeta_1}=\int_0^1 dx\;x u^\pi(x;\zeta_1)\approx 0.26\;,\end{aligned}$$ such that the $a_2$ coefficient in Eq.  can be determined. This is additionally cross-checked from the value $\langle x \rangle_{\zeta_2}\approx 0.24$, obtained at $\zeta_2:=2$ GeV after NLO evolution, as compared to the lQCD estimates from Refs. [@Joo:2019bzr; @Detmold:2003tm; @Oehm:2018jvm; @Brommel:2006zz]. To account for the impact of the twist-4 term, we also vary the ratio $a_4/a_2$ from $0.1$ to $1$. Only mild effects at intermediate values of $x$ are observed. Figure \[fig:PDF\] displays the valence-quark PDFs, evolved to $\zeta_5:=5.2$ GeV, and their comparison with experimental and lattice data [@Conway:1989fs; @Aicher:2010cb; @Sufian:2020vzb]. For contrast, we have also included a recent Dyson-Schwinger equations (DSEs) result [@Ding:2019lwe; @Ding:2019qlr]. The $t$-dependence of the valence-quark GPD, for Rule-II, is presented in Figure \[fig:GPD\]. It is clear that Rule-I produces a PDF that is closer to the original experimental data [@Conway:1989fs], while the analogous for Rule-II matches the rescaled data from Ref. [@Aicher:2010cb]. Either rule will give the correct large-$t$ falloff of the EFF in Eq. , but only in the second case does one obtain the $x\to 1$ behavior predicted by pQCD. This is readily achieved in the DSE formalism [@Ding:2019lwe; @Ding:2019qlr; @Chen:2016sno]: its direct connection with QCD ensures that perturbation theory is recovered, and so the connection of the asymptotic behavior of the gluon with the large-$x$ behavior of the valence-quark PDF [@Holt:2010vj]. Moreover, state-of-the-art lQCD results [@Sufian:2020vzb] also establish that the asymptotic form of Eq.  is preferred. 
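The moment-matching step above can be sketched numerically. As a simplification (ours, not the full procedure of the text), take the twist-2 piece alone, neglecting the twist-4 admixture and working at the model scale before evolution. Since $N_2=2$, one has $q_2\,dx = d(\sqrt{w})$, so integration by parts gives $\langle x\rangle = 1-\int_0^1\sqrt{w(x)}\,dx$, and a bisection fixes the $a_2$ that reproduces $\langle x\rangle \approx 0.26$:

```python
import math

def w(x, g, a):
    """Reparametrization function w_tau(x) = x**((1-x)**g) * exp(-a*(1-x)**g)."""
    if x == 0.0:
        return 0.0
    return x ** ((1 - x) ** g) * math.exp(-a * (1 - x) ** g)

def mean_x(a, g=3.0, n=20000):
    """<x> for the twist-2 PDF q_2 = w**(-1/2) w' / 2.  Since q_2 dx = d(sqrt(w)),
    integration by parts gives <x> = 1 - integral_0^1 sqrt(w(x)) dx (midpoint rule)."""
    s = sum(math.sqrt(w((i + 0.5) / n, g, a)) for i in range(n)) / n
    return 1.0 - s

# bisect a_2 so that <x> = 0.26 at the model scale (Rule-II for the pion: g(2) = 3);
# mean_x is monotone increasing in a, so a simple bisection suffices
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_x(mid) < 0.26 else (lo, mid)
a2 = 0.5 * (lo + hi)
print(f"a_2 ~ {a2:.3f} gives <x> = {mean_x(a2):.3f}")
```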
It is noteworthy that, even though the pion PDF obtained from Rule-I differs from that of [@deTeramond:2018ecg], the evolved results are compatible. This is unsurprising since the corresponding reparametrization function is not dramatically different from its counterpart in [@deTeramond:2018ecg]. Thus, although it is not included in the present letter, we expect Rule-I to produce a congruent picture for the valence-quark PDF of the proton. These observations encourage us to select Rule-I for the case of the proton and Rule-II when studying pions, for an internally consistent description based on the LFHQCD formalism. ![*Valence-quark pion PDF.* Obtained NLO results at $\zeta_5=5.2$ GeV, from the rules in . The corresponding (blue and red) error bands account for the uncertainty in the initial scale, $\zeta_1=1.1\pm0.2$ GeV and the variation of $a_4/a_2 = 0.1$ to $1$. The broadest, gray band, corresponds to the novel lQCD “CS” result from [@Sufian:2020vzb] and the dashed-line depicts the DSE result [@Ding:2019lwe; @Ding:2019qlr]. **Data points:** (triangles) LO extraction “E615-Original” [@Conway:1989fs] and (circles) the NLO analysis “E615-Rescaled” of Ref. [@Aicher:2010cb]. []{data-label="fig:PDF"}](pionPDF2.pdf){width="0.95\columnwidth"} ![*Valence-quark pion GPD.* $t$-dependence of the valence-quark pion GPD at zero skewness. The plot above corresponds to Rule-II in Eq. , at the initial scale $\zeta_1$. []{data-label="fig:GPD"}](pionGPD.pdf){width="0.95\columnwidth"} *Summary and conclusions*— We have reanalyzed the LFHQCD approach of Ref. [@deTeramond:2018ecg] to study the valence-quark PDF of the pion. It has been proven that, given the flexibility of the universal reparametrization function, $w_\tau(x)$, it is in fact possible to accommodate a large-$x$ behavior of $u^\pi(x)\sim(1-x)^{2\tau-2}$ within this framework. 
Besides the agreement with the rescaled experimental data [@Aicher:2010cb], this makes it compatible with the Ezawa findings [@Ezawa:1974wm] and the predictions from pQCD [@Farrar:1975yb; @Berger:1979du; @Lepage:1980fj]. Recent continuum [@Ding:2019lwe; @Ding:2019qlr] and sophisticated lQCD studies [@Sufian:2020vzb] also favor this endpoint form. Due to this confluence of vastly different approaches, and given our observations, we state that the $u^\pi(x)\sim(1-x)^2$ profile can not only be contained within the LFHQCD formalism, but also cannot be excluded. Besides, we sketched how a simultaneous description of the proton and pion distribution functions, that agrees with pQCD, can be achieved if the counting rules are chosen accordingly: we encourage the use of Rule-I for proton and Rule-II for pion. We acknowledge helpful conversations with Yuan Sun. This work is supported by: the Chinese Government Thousand Talents Plan for Young Professionals.
--- abstract: | We define a new notion of total curvature, called [*net total curvature,*]{} for finite graphs embedded in ${{\mathbb R}}^n$, and investigate its properties. Two guiding principles are given by Milnor’s way of measuring the local crookedness of a Jordan curve via a Crofton-type formula, and by considering the double cover of a given graph as an Eulerian circuit. The strength of combining these ideas in defining the curvature functional is (1) it allows us to interpret the singular/non-Euclidean behavior at the vertices of the graph as a superposition of vertices of a $1$-dimensional manifold, and thus (2) one can compute the total curvature for a wide range of graphs by contrasting local and global properties of the graph utilizing the integral geometric representation of the curvature. A collection of results on upper/lower bounds of the total curvature on isotopy/homeomorphism classes of embeddings is presented, which in turn demonstrates the effectiveness of net total curvature as a new functional measuring complexity of spatial graphs in differential-geometric terms. author: - '**Robert Gulliver and Sumio Yamada**' date: 'December 31, 2010' title: '**Total Curvature of Graphs after Milnor and Euler**' --- [^1] [^2] INTRODUCTION: CURVATURE OF A GRAPH {#intro} ================================== The celebrated Fáry-Milnor theorem states that a curve in ${{\mathbb R}}^n$ of total curvature at most $4\pi$ is unknotted. As a key step in his 1950 proof, John Milnor showed that for a smooth Jordan curve $\Gamma$ in ${{\mathbb R}}^3$, the total curvature equals half the integral over $e \in S^2$ of the number $\mu(e)$ of local maxima of the linear “height" function $\langle e,\cdot \rangle$ along $\Gamma$ [@M]. This equality can be regarded as a Crofton-type representation formula of total curvature where the order of integrations over the curve and the unit tangent sphere (the space of directions) are reversed. 
The Fáry-Milnor theorem follows, since total curvature less than $4\pi$ implies there is a unit vector $e_0 \in S^2$ so that $\langle e_0,\cdot \rangle$ has a unique local maximum, and therefore that this linear function is increasing on an interval of $\Gamma$ and decreasing on the complement. Without changing the pointwise value of this “height" function, $\Gamma$ can be topologically untwisted to a standard embedding of $S^1$ into ${{\mathbb R}}^3$. The Fenchel theorem, that any curve in ${{\mathbb R}}^3$ has total curvature at least $2\pi$, also follows from Milnor’s key step, since for all $e\in S^2$, the linear function $\langle e,\cdot \rangle$ assumes its maximum somewhere along $\Gamma$, implying $\mu(e) \geq 1$. Milnor’s proof is independent of the proof of Istvan Fáry, published earlier, which takes a different approach [@Fa]. We would like to extend the methods of Milnor’s seminal paper, replacing the simple closed curve by a finite [*graph*]{} $\Gamma$ in ${{\mathbb R}}^3$. $\Gamma$ consists of a finite number of points, the [*vertices*]{}, and a finite number of simple arcs, the [*edges*]{}, each of which has as its endpoints one or two of the vertices. We shall assume $\Gamma$ is connected. The [*degree*]{} of a vertex $q$ is the number $d(q)$ of edges which have $q$ as an endpoint. (Another word for degree is “valence".) We remark that it is technically not needed that the dimension $n$ of the ambient space equals three. All the arguments can be generalized to higher dimensions, although in higher dimensions $(n\geq 4)$ there are no nontrivial knots. Moreover, any two homeomorphic graphs are isotopic. The key idea in generalizing total curvature for knots to total curvature for graphs is to consider the Euler circuits of the given graph, namely, parameterizations by $S^1$, of the [*double*]{} cover of the graph. We note that given a graph of even degree, there can be several Euler circuits, or ways to “trace it without lifting the pen." 
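Since doubling every edge makes all vertex degrees even, an Euler circuit of the doubled graph always exists and can be produced, for example, by Hierholzer's algorithm. A small computational sketch (the theta-graph example and vertex names are ours):

```python
from collections import defaultdict

def euler_circuit(edges):
    """Hierholzer's algorithm: an Euler circuit of a connected multigraph in
    which every vertex has even degree (e.g. the edge-doubled graph)."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()                    # discard edges already traversed from the other side
        if adj[v]:
            nxt, i = adj[v].pop()
            used[i] = True
            stack.append(nxt)
        else:
            circuit.append(stack.pop())     # vertex exhausted: emit it
    return circuit[::-1]

# The double of the theta graph: two vertices joined by three edges, each doubled,
# so both vertices get (even) degree 6 and an Euler circuit exists.
theta = [("p", "q"), ("p", "q"), ("p", "q")]
tour = euler_circuit(theta + theta)
print(tour)  # a closed walk traversing each of the 6 doubled edges exactly once
```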
A topological vertex of a graph of degree $d$ is a singularity, in that the graph is not locally Euclidean. However by considering an Euler circuit of the double of the graph, the vertex becomes locally the intersection point of $d$ paths. We will show (Corollary \[cor2\]) that at the vertex, each path through it has a (signed) measure-valued curvature, and the absolute value of the sum of those measures is well-defined, independent of the choice of the Euler circuit of the double cover. We define (Definition \[defnet\]) the [*net total curvature*]{} (NTC) of a piecewise $C^2$ graph to be the sum of the total curvature of the smooth arcs and the contributions from the vertices as described. This notion of net total curvature is substantially different from the total curvature, denoted TC, as defined by Taniyama [@T]. (Taniyama writes $\tau$ for TC.) See section \[deftc\] below. This is consistent with known results for the vertices of degree $d =2$; with vertices of degree three or more, this definition helps facilitate a new Crofton-type representation formula (Theorem \[muthm\]) for total curvature of graphs, where the total curvature is represented as an integral over the unit sphere. Recall that the vertex is now seen as $d$ distinct points on an Euler circuit. The way we pick up the contribution of the total curvature at the vertices identifies the $d$ distinct points, and thus the $2d$ unit tangent spheres on a circuit. As Crofton’s formula in effect reverses the order of integrations — one over the circuit, the other over the space of tangent directions — the sum of the $d$ exterior angles at the vertex is incorporated in the integral over the unit sphere. On the other hand the integrand of the integral over the unit sphere counts the number of net local maxima of the height function along an axis, where net local maximum means the number of local maxima minus the number of local minima at these $d$ points of the Euler circuit. 
This establishes a correspondence between the differential geometric quantity (net total curvature) and the differential topological quantity (average number of maxima) of the graph, as stated in Theorem \[muthm\] below. In section \[deftc\], we compare several definitions for total curvature of graphs which have appeared in the recent literature. In section \[comb\], we introduce the main tool (Lemma  \[combin\]) which in a sense reduces the computation of NTC to counting intersections with planes. Milnor’s treatment [@M] of total curvature also contained an important topological extension. Namely, in order to define total curvature, the knot needs only to be [*continuous*]{}. This makes the total curvature a geometric quantity defined on any homeomorphic image of $S^1$. In this article, we first define net total curvature (Definition \[defnet\]) on piecewise $C^2$ graphs, and then extend the definition to continuous graphs (Definition \[gendefnet\].) In analogy to Milnor, we approximate a given continuous graph by a sequence of polygonal graphs. In showing the monotonicity of the total curvature (Proposition \[monotmu\]) under the refining process of approximating graphs we use our representation formula (Theorem \[muthm\]) applied to the polygonal graphs. Consequently the Crofton-type representation formula is also extended (Theorem \[muthm2\]) to cover continuous graphs. Additionally, we are able to show that continuous graphs with finite total curvature (NTC or TC) are tame. We say that a graph is [*tame*]{} when it is isotopic to an embedded polyhedral graph. In sections \[three/four\] through \[FaryMilnor\], we characterize NTC with respect to the geometry and the topology of the graph. Proposition \[subadditivity\] shows the subadditivity of NTC under the union of graphs which meet in a finite set. 
In section  \[deg3\], the concept of bridge number is extended from knots to graphs, in terms of which the minimum of NTC can be explicitly computed, provided the graph has at most one vertex of degree $>3$. In section \[lowbds\], Theorem \[incrdecr\] gives a lower bound for NTC in terms of the width of an isotopy class. The infimum of NTC is computed for specific graph types: the two-vertex graphs $\theta_m$, the “ladder" $L_m$, the “wheel" $W_m$, the complete graph $K_m$ on $m$ vertices and the complete bipartite graph $K_{m,n}$. Finally we prove a result (Theorem \[thetathm\]) which gives a Fenchel type lower bound $(\geq 3 \pi)$ for total curvature of a theta graph (an image of the graph consisting of a circle with an arc connecting a pair of antipodal points), and a Fáry-Milnor type upper bound $(< 4 \pi)$ to imply the theta graph is isotopic to the standard embedding. A similar result was given by Taniyama [@T], referring to TC. In contrast, for graphs of the type of $K_m \ (m\geq 4)$, the infimum of NTC in the isotopy class of a polygon on $m$ vertices is also the infimum for a sequence of distinct isotopy classes. Many of the results in our earlier preprint [@GY2] have been incorporated into the present paper. We thank Yuya Koda for his comments regarding Proposition \[net3\], and Jaigyoung Choe and Rob Kusner for their comments about Theorem \[thetathm\], especially about the sharp case ${{\rm NTC}}(\Gamma) = 3\pi$ of the lower bound estimate. DEFINITIONS OF TOTAL CURVATURE {#deftc} ============================== The first difficulty, in extending the results of Milnor’s classic paper, is to understand the contribution to total curvature at a vertex of degree $d(q)\geq 3$. We first consider the well-known case: **[Definition of Total Curvature for Knots]{}** For a smooth closed curve $\Gamma$, the total curvature is $${\mathcal C}(\Gamma) = \int_\Gamma |\vec{k}| \, ds,$$ where $s$ denotes arc length along $\Gamma$ and $\vec{k}$ is the curvature vector. 
If $x(s)\in {{\mathbb R}}^3$ denotes the position of the point measured at arc length $s$ along the curve, then $\vec{k} = \frac {d^2x}{ds^2}$. For a piecewise smooth curve, that is, a graph with vertices $q_1, \dots, q_N$, each having degree $d(q_i)=2$, the total curvature is readily generalized to $$\label{gencurv} {\mathcal C}(\Gamma) = \sum_{i=1}^N {\rm c}(q_i) + \int_{\Gamma_{\rm reg}} |\vec{k}| \, ds,$$ where the integral is taken over the separate $C^2$ edges of $\Gamma$ without their endpoints; and where ${\rm c}(q_i) \in [0,\pi]$ is the exterior angle formed by the two edges of $\Gamma$ which meet at $q_i$. That is, $\cos({\rm c}(q_i)) = \langle T_1, -T_2\rangle,$ where $T_1= \frac{dx}{ds}(q_i^+)$ and $T_2= -\frac{dx}{ds}(q_i^-)$ are the unit tangent vectors at $q_i$ pointing into the two edges which meet at $q_i$. The exterior angle ${\rm c}(q_i)$ is the correct contribution to total curvature, since any sequence of smooth curves converging to $\Gamma$ in $C^0$, with $C^1$ convergence on compact subsets of each open edge, includes a small arc near $q_i$ along which the tangent vector changes from near $\frac{dx}{ds}(q_i^-)$ to near $\frac{dx}{ds}(q_i^+)$. The greatest lower bound of the contribution to total curvature of this disappearing arc along the smooth approximating curves equals ${\rm c}(q_i)$. Note that ${\mathcal C}(\Gamma)$ is well defined for an [*immersed*]{} knot $\Gamma$. **[Definitions of Total Curvature for Graphs]{}** When we turn our attention to a [*graph*]{} $\Gamma$, we find the above definition for curves (degree $d(q)=2$) does not generalize in any obvious way to higher degree (see [@G]). The ambiguity of the general formula is resolved if we specify the replacement for ${\rm c}(0)$ when $\Gamma$ is the cone over a finite set $\{T_1, \dots, T_d\}$ in the unit sphere $S^2$. 
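Before turning to the candidate definitions, the degree-$2$ case just reviewed is elementary to compute. The following minimal sketch (ours, not part of the paper; the function name is our choice) evaluates ${\rm c}(q)$ from the two unit tangent vectors pointing into the edges:

```python
import math

def exterior_angle(T1, T2):
    """Exterior angle c(q) at a degree-2 vertex.

    T1, T2 are the unit tangent vectors at q pointing *into* the two
    edges, so that cos(c(q)) = <T1, -T2>."""
    dot = -sum(a * b for a, b in zip(T1, T2))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against roundoff

# a straight vertex (T2 = -T1) contributes no curvature
c_straight = exterior_angle((1.0, 0.0, 0.0), (-1.0, 0.0, 0.0))
# a right-angle corner contributes pi/2
c_corner = exterior_angle((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

A straight vertex gives ${\rm c}(q)=0$ and a right-angle corner gives $\pi/2$, matching $\cos({\rm c}(q)) = \langle T_1, -T_2\rangle$.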
The earliest notion of total curvature of a graph appears in the context of the first variation of length of a graph, which we call [**variational total curvature**]{}, and is called the [*mean curvature*]{} of the graph in [@AA]: we shall write VTC. The contribution to VTC at a vertex $q$ of degree $2$, with unit tangent vectors $T_1$ and $T_2$, is ${\rm vtc}(q)=|T_1 + T_2| = 2\sin(c(q)/2)$. At a non-straight vertex $q$ of degree $2$, ${\rm vtc}(q)$ is less than the exterior angle ${\rm c}(q)$. For a vertex of degree $d$, the contribution is ${\rm vtc}(q)=|T_1 +\dots +T_d|$. A rather natural definition of total curvature of graphs was given by Taniyama in [@T]. We have called this [**maximal total curvature**]{} ${{\rm TC}}(\Gamma)$ in [@G]. The contribution to total curvature at a vertex $q$ of degree $d$ is $${\rm tc}(q):= \sum_{1\leq i<j\leq d}\arccos\langle T_i,-T_j\rangle.$$ In the case $d(q) = 2$, the sum above has only one term, the exterior angle ${\rm c}(q)$ at $q$. Since the length of the Gauss image of a curve in $S^2$ is the total curvature of the curve, ${\rm tc}(q)$ may be interpreted as adding to the Gauss image in ${{\mathbb R}}P^2$ of the edges, a complete great-circle graph on $T_1(q),\dots,T_d(q)$, for each vertex $q$ of degree $d$. Note that the edge between two vertices does not measure the distance in ${{\mathbb R}}P^2$ but its supplement. In our earlier paper [@GY1] on the density of an area-minimizing two-dimensional rectifiable set $\Sigma$ spanning $\Gamma$, we found that it was very useful to apply the Gauss-Bonnet formula to the cone over $\Gamma$ with a point $p$ of $\Sigma$ as vertex. 
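The two vertex contributions defined above can be evaluated side by side. The sketch below (our code; names are our choice) computes ${\rm vtc}(q)=|T_1+\dots+T_d|$ and ${\rm tc}(q)=\sum_{i<j}\arccos\langle T_i,-T_j\rangle$, first for a right-angle vertex of degree $2$, then for three coplanar tangents at mutual angles of $120^\circ$, where the variational contribution vanishes while Taniyama’s does not:

```python
import math
from itertools import combinations

def vtc(tangents):
    """Variational contribution |T1 + ... + Td| at a vertex."""
    s = [sum(t[i] for t in tangents) for i in range(3)]
    return math.sqrt(sum(x * x for x in s))

def tc(tangents):
    """Taniyama's contribution: sum of pairwise exterior angles arccos<Ti,-Tj>."""
    total = 0.0
    for Ti, Tj in combinations(tangents, 2):
        dot = -sum(a * b for a, b in zip(Ti, Tj))
        total += math.acos(max(-1.0, min(1.0, dot)))
    return total

right_angle = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]       # degree 2, c(q) = pi/2
symmetric3 = [(1.0, 0.0, 0.0),
              (-0.5, math.sqrt(3) / 2, 0.0),
              (-0.5, -math.sqrt(3) / 2, 0.0)]          # three coplanar rays at 120 degrees
```

For the degree-$2$ vertex one checks ${\rm vtc}(q)=2\sin({\rm c}(q)/2)<{\rm c}(q)={\rm tc}(q)$, while for the symmetric triple the tangents sum to zero, so ${\rm vtc}(q)=0$ although ${\rm tc}(q)=\pi$.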
The relevant notion of total curvature in that context is [**cone total curvature**]{} ${{\rm CTC}}(\Gamma)$, defined using ${\rm ctc}(q)$ as the replacement for ${\rm c}(q)$ in equation (\[gencurv\]): $$\label{defconetc} {\rm ctc}(q) := \sup_{e \in S^2} \left\{ \sum_{i=1}^d\left(\frac{\pi}{2}-\arccos\langle T_i, e\rangle \right) \right\}.$$ Note that in the case $d(q) = 2$, the supremum above is assumed at vectors $e$ lying in the smaller angle between the tangent vectors $T_1$ and $T_2$ to $\Gamma$, so that ${\rm ctc}(q)$ is then the exterior angle ${\rm c}(q)$ at $q$. The main result of [@GY1] is that $2\pi$ times the area density of $\Sigma$ at any of its points is at most equal to ${{\rm CTC}}(\Gamma)$. The same result had been proven by Ekholm, White and Wienholtz for the case of a simple closed curve [@EWW]. Taking $\Sigma$ to be the branched immersion of the disk given by Douglas [@D1] and Radó [@R], it follows that if ${\mathcal C}(\Gamma) \leq 4\pi$, then $\Sigma$ is embedded, and therefore $\Gamma$ is unknotted. Thus [@EWW] provided an independent proof of the Fáry-Milnor theorem. However, ${{\rm CTC}}(\Gamma)$ may be small for graphs which are far from the simplest isotopy types of a graph $\Gamma$. In this paper, we introduce the notion of [**net total curvature**]{} ${{\rm NTC}}(\Gamma)$, which is the appropriate definition for generalizing — [*to graphs*]{} — Milnor’s approach to isotopy and total curvature of [*curves*]{}. For each unit tangent vector $T_i$ at $q$, $1 \leq i \leq d=d(q)$, let $\chi_i:S^2 \rightarrow \{-1, +1\}$ be equal to $-1$ on the hemisphere with center at $T_i$, and $+1$ on the opposite hemisphere (modulo sets of zero Lebesgue measure). 
We then define $$\label{defnc} {\rm ntc}(q):= \frac{1}{4}\int_{S^2}\left[\sum_{i=1}^d\chi_i(e)\right]^+\,dA_{S^2}(e).$$ We note that the function $\sum_{i=1}^d \chi_i(e)$ is odd; hence the quantity above can equivalently be written as $${\rm ntc}(q)= \frac{1}{8}\int_{S^2}\left|\sum_{i=1}^d\chi_i(e)\right|\,dA_{S^2}(e).$$ In the case $d(q)=2$, the integrand of (\[defnc\]) is positive (and equals $2$) only on the set of unit vectors $e$ which have negative inner products with both $T_1$ and $T_2$, ignoring $e$ in sets of measure zero. This set is bounded by semi-great circles orthogonal to $T_1$ and to $T_2$, and has spherical area equal to twice the exterior angle. So in this case, ${\rm ntc}(q)$ is the exterior angle. Thus, in the special case where $\Gamma$ is a piecewise smooth curve, the following quantity ${{\rm NTC}}(\Gamma)$ coincides with total curvature, as well as with ${{\rm TC}}(\Gamma)$ and ${{\rm CTC}}(\Gamma)$: \[defnet\] We define the [*net total curvature*]{} of a piecewise $C^2$ graph $\Gamma$ with vertices $\{q_1, \dots, q_N\}$ as $${{{\rm NTC}}}(\Gamma):= \sum_{i=1}^N {\rm ntc}(q_i)+\int_{\Gamma_{\rm reg}} |\vec{k}| \, ds.$$ For the sake of simplicity, elsewhere in this paper, we consider the ambient space to be ${{{\mathbb R}}}^3$. However, the definition of the net total curvature can be generalized for a graph in ${{{\mathbb R}}}^n$ by defining the vertex contribution in terms of an average over $S^{n-1}$: $${\rm ntc}(q) := \pi \Big(\fint_{S^{n-1}}\left[\sum_{i=1}^d \chi_i(e)\right]^+ \,dA_{S^{n-1}}(e) \Big),$$ which is consistent with the definition (\[defnc\]) of $\rm ntc$ when $n=3$. Recall that Milnor [@M] defines the total curvature of a continuous simple closed curve $C$ as the supremum of the total curvature of all polygons inscribed in $C$. By analogy, we define net total curvature of a [*continuous*]{} graph $\Gamma$ to be the supremum of the net total curvature of all polygonal graphs $P$ suitably inscribed in $\Gamma$ as follows. 
\[gamapprox\] For a given continuous graph $\Gamma$, we say a polygonal graph $P \subset {{\mathbb R}}^3$ is [*$\Gamma$-approximating*]{}, provided that its topological vertices (those of degree $\neq 2$) are exactly the topological vertices of $\Gamma$, with the same degrees; and that the arcs of $P$ between two topological vertices correspond one-to-one to the edges of $\Gamma$ between those two vertices. Note that if $P$ is a $\Gamma$-approximating polygonal graph, then $P$ is homeomorphic to $\Gamma$. According to the statement of Proposition \[monotmu\], whose proof will be given in section \[nonsmooth\] below, if $P$ and $\widetilde{P}$ are $\Gamma$-approximating polygonal graphs, and $\widetilde{P}$ is a refinement of $P$, then ${{\rm NTC}}(\widetilde{P}) \geq {{\rm NTC}}(P)$. Here $\widetilde{P}$ is said to be a refinement of $P$ provided the set of vertices of $P$ is a subset of the vertices of $\widetilde{P}$. Assuming Proposition \[monotmu\] for the moment, we can generalize the definition of the total curvature to non-smooth graphs. \[gendefnet\] Define the [*net total curvature*]{} of a continuous graph $\Gamma$ by $${{\rm NTC}}(\Gamma) := \sup_{P} {{\rm NTC}}(P)$$ where the supremum is taken over all $\Gamma$-approximating polygonal graphs $P$. For a polygonal graph $P$, applying Definition \[defnet\], $${{{\rm NTC}}}(P):= \sum_{i=1}^N {\rm ntc}(q_i),$$ where $q_1, \dots, q_N$ are the vertices of $P$. Definition \[gendefnet\] is consistent with Definition \[defnet\] in the case of a piecewise $C^2$ graph $\Gamma$. Namely, as Milnor showed, the total curvature ${\mathcal C}(\Gamma_0)$ of a smooth curve $\Gamma_0$ is the supremum of the total curvature of inscribed polygons ([@M], p. 251), which gives the required supremum for each edge. 
At a vertex $q$ of the piecewise-$C^2$ graph $\Gamma$, as a sequence $P_k$ of $\Gamma$-approximating polygons becomes arbitrarily fine, the unit tangent vectors of $P_k$ at $q$ converge in $S^2$ to the unit tangent vectors to $\Gamma$ at $q$. It follows that for $1\leq i\leq d(q)$, $\chi_i^{P_k} \to \chi_i^{\Gamma}$ in measure on $S^2$, and therefore ${\rm ntc}_{P_k}(q) \to {\rm ntc}_\Gamma(q)$. CROFTON-TYPE REPRESENTATION FORMULA FOR TOTAL CURVATURE {#comb} ======================================================= We would like to explain how the net total curvature ${{\rm NTC}}(\Gamma)$ of a graph is related to more familiar notions of total curvature. Recall that, by a theorem of Euler, a connected graph $\Gamma$ has an Euler circuit if and only if all of its vertices have even degree. An Euler circuit is a closed, connected path which traverses each edge of $\Gamma$ exactly once. Of course, we do not have the hypothesis of even degree. We can attain that hypothesis by passing to the [*double*]{} $\widetilde{\Gamma}$ of $\Gamma$: $\widetilde{\Gamma}$ is the graph with the same vertices as $\Gamma$, but with two copies of each edge of $\Gamma$. Then at each vertex $q$, the degree as a vertex of $\widetilde{\Gamma}$ is $\widetilde{d}(q) = 2\,d(q)$, which is even. By Euler’s theorem, there is an Euler circuit $\Gamma'$ of $\widetilde{\Gamma}$, which may be thought of as a closed path which traverses each edge of $\Gamma$ exactly [*twice*]{}. Now at each of the points $\{q_1, \dots, q_d\}$ along $\Gamma'$ which are mapped to $q \in \Gamma$, we may consider the exterior angle ${\rm c}(q_i)$. The sum of these exterior angles, however, depends on the choice of the Euler circuit $\Gamma'$. 
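This dependence can be made concrete in a few lines (our sketch, not part of the paper; the two “pairings” below play the role of two different Euler circuits of the double). At a crossing $q$ of two orthogonal lines, the eight edge-ends of the double may be traversed in four straight passes, or in four right-angle passes, with exterior-angle sums $0$ and $2\pi$ respectively:

```python
import math

# unit tangents pointing from the crossing q into the four edges
dirs = {"x+": (1, 0, 0), "x-": (-1, 0, 0), "y+": (0, 1, 0), "y-": (0, -1, 0)}

def pass_angle(u, w):
    """Exterior angle of a pass entering q along the edge with tangent u
    and leaving along the edge with tangent w (both point away from q)."""
    dot = -sum(a * b for a, b in zip(u, w))   # <u, -w> = <-u, w>
    return math.acos(max(-1.0, min(1.0, dot)))

# the double has two copies of each edge, hence eight ends at q;
# a parameterization pairs them into four passes through q
pairing_straight = [("x+", "x-"), ("x+", "x-"), ("y+", "y-"), ("y+", "y-")]
pairing_corners = [("x+", "y+"), ("y+", "x-"), ("x-", "y-"), ("y-", "x+")]

total_straight = sum(pass_angle(dirs[u], dirs[w]) for u, w in pairing_straight)
total_corners = sum(pass_angle(dirs[u], dirs[w]) for u, w in pairing_corners)
```

By contrast, ${\rm ntc}(q)=0$ for either choice: for almost every $e$, two of the four tangent directions lie on each side of the plane orthogonal to $e$, so the contributions cancel.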
For example, if $\Gamma$ is the union of the $x$-axis and the $y$-axis in Euclidean space ${{\mathbb R}}^3$, then one might choose $\Gamma'$ to have four right angles, or to have four straight angles, or something in between, with completely different values of total curvature. In order to form a version of total curvature at a vertex $q$ which only depends on the original graph $\Gamma$ and not on the choice of Euler circuit $\Gamma'$, it is necessary to consider some of the exterior angles as partially balancing others. In the example just considered, where $\Gamma$ is the union of two orthogonal lines, two opposing right angles will be considered to balance each other completely, so that ${\rm ntc}(q)=0$, regardless of the choice of Euler circuit of the double. It will become apparent that the connected character of an Euler circuit of $\widetilde\Gamma$ is not required for what follows. Instead, we shall refer to a [*parameterization*]{} $\Gamma'$ of the double $\widetilde\Gamma$, which is a mapping from a $1$-dimensional manifold without boundary, not necessarily connected; the mapping is assumed to cover each edge of $\widetilde\Gamma$ once. The nature of ${\rm ntc}(q)$ is clearer when it is localized on $S^2$, analogously to [@M]. In the case $d(q)=2$, Milnor observed that the exterior angle at the vertex $q$ equals half the area of those $e \in S^2$ such that the linear function $\langle e, \cdot \rangle$, restricted to $\Gamma$, has a local maximum at $q$. In our context, we may describe ${\rm ntc}(q)$ as one-half the integral over the sphere of the number of [*net local maxima*]{}, which is half the difference of local maxima and local minima. Along the parameterization $\Gamma'$ of the double of $\Gamma$, the linear function $\langle e, \cdot \rangle$ may have a local maximum at some of the vertices $q_1, \dots, q_d$ over $q$, and may have a local minimum at others. In our construction, each local minimum balances against one local maximum. 
If there are more local minima than local maxima, the number ${\rm nlm}(e,q)$, the net number of local maxima, will be negative; however, our definition uses only the positive part $[{\rm nlm}(e,q)]^+$. We need to show that $$\int_{S^2} [{\rm nlm}(e,q)]^+ \,dA_{S^2}(e)$$ is independent of the choice of parameterization, and in fact is equal to $2 \, {\rm ntc}(q)$; this will follow from another way of computing ${\rm nlm}(e,q)$ (see Corollary \[cor2\] below). \[defnlm\] Let a parameterization $\Gamma'$ of the double of $\Gamma$ be given. Then a vertex $q$ of $\Gamma$ corresponds to a number of vertices $q_1, \dots, q_d$ of $\Gamma'$, where $d$ is the degree $d(q)$ of $q$ as a vertex of $\Gamma$. Choose $e \in S^2$. If $q \in \Gamma$ is a local extremum of $\langle e, \cdot \rangle$, then we consider $q$ as a vertex of degree $d(q) = 2$. Let ${\rm lmax}(e,q)$ be the number of local maxima of $\langle e, \cdot \rangle$ along $\Gamma'$ at the points $q_1, \dots, q_d$ over $q$, and similarly let ${\rm lmin}(e,q)$ be the number of local minima. We define the number of [*net local maxima*]{} of $\langle e, \cdot \rangle$ at $q$ to be $${\rm nlm}(e,q) = \frac12[{\rm lmax}(e,q) - {\rm lmin}(e,q)].$$ The definition of ${\rm nlm}(e,q)$ appears to depend not only on $\Gamma$ but on a choice of the parameterization $\Gamma'$ of the double of $\,\Gamma$: ${\rm lmax}(e,q)$ and ${\rm lmin}(e,q)$ may depend on the choice of $\Gamma'$. However, we shall see in Corollary \[cor1\] below that the number of [**net**]{} local maxima ${\rm nlm}(e,q)$ is in fact independent of $\,\Gamma'$. We have included the factor $\frac12$ in the definition of ${\rm nlm}(e,q)$ in order to agree with the difference of the numbers of local maxima and minima along a parameterization of $\,\Gamma$ itself, if $d(q)$ is even. 
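The spherical integrals above can also be approximated directly. The following sketch (ours, not from the paper) evaluates ${\rm ntc}(q)$ of (\[defnc\]) by a midpoint rule on a latitude-longitude grid; it recovers the exterior angle for a degree-$2$ vertex, and gives exactly $0$ for the orthogonal crossing, since for almost every $e$ two of the four tangent vectors lie in each hemisphere:

```python
import numpy as np

def ntc(tangents, n_phi=400):
    """Approximate ntc(q) = (1/4) * int_{S^2} [sum_i chi_i(e)]^+ dA
    by a midpoint rule on a latitude-longitude grid of S^2."""
    T = np.asarray(tangents, dtype=float)               # rows: unit tangents T_i
    phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi      # colatitude midpoints
    theta = (np.arange(2 * n_phi) + 0.5) * np.pi / n_phi
    P, TH = np.meshgrid(phi, theta, indexing="ij")
    E = np.stack([np.sin(P) * np.cos(TH),
                  np.sin(P) * np.sin(TH),
                  np.cos(P)], axis=-1)                  # grid of directions e
    chi = np.where(E @ T.T > 0, -1.0, 1.0)              # chi_i(e): -1 on hemisphere at T_i
    integrand = np.maximum(chi.sum(axis=-1), 0.0)       # [sum_i chi_i(e)]^+
    dA = np.sin(P) * (np.pi / n_phi) ** 2               # cell area sin(phi) dphi dtheta
    return 0.25 * float((integrand * dA).sum())

# degree 2, exterior angle pi/3: ntc(q) should be the exterior angle
two_edges = [(1.0, 0.0, 0.0), (-0.5, np.sqrt(3) / 2, 0.0)]
# two orthogonal straight lines crossing at q: contributions cancel
cross = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
```

The quadrature error for the lune integrand is of order $1/n_{\rm phi}$, so the degree-$2$ value is only approximate, while the cancellation for the cross is exact on every grid point.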
We shall [**assume**]{} for the rest of this section that a unit vector $e$ has been chosen, and that the linear “height" function $\langle e, \cdot \rangle$ has only a finite number of critical points along $\Gamma$; this excludes $e$ belonging to a subset of $S^2$ of measure zero. We shall also assume that the graph $\Gamma$ is subdivided to include among the vertices all critical points of the linear function $\langle e, \cdot \rangle$, with degree $d(q) = 2$ if $q$ is an interior point of one of the topological edges of $\Gamma$. \[updown\] Choose a unit vector $e$. At a point $q \in \Gamma$ of degree $d = d(q)$, let the [*up-degree*]{} $d^+ = d^+(e,q)$ be the number of edges of $\Gamma$ with endpoint $q$ on which $\langle e, \cdot \rangle$ is greater (“higher") than $\langle e, q \rangle$, the “height" of $q$. Similarly, let the [*down-degree*]{} $d^-(e,q)$ be the number of edges along which $\langle e, \cdot \rangle$ is less than its value at $q$. Note that $d(q) = d^+(e,q) + d^-(e,q)$, for almost all $e$ in $S^2$. \[combin\] [**(Combinatorial Lemma)**]{} For all $q \in \Gamma$ and for a.a. $e\in S^2$, ${\rm nlm}(e,q) = \frac12[d^-(e,q) - d^+(e,q)]$. [[***Proof.*** ]{}]{}Let a parameterization $\Gamma'$ of the double of $\Gamma$ be chosen, with respect to which ${\rm lmax}(e,q)$ and ${\rm lmin}(e,q)$ are defined. Recall the assumption above, that $\Gamma$ has been subdivided so that along each edge, the linear function $\langle e, \cdot \rangle$ is strictly monotone. Consider a vertex $q$ of $\Gamma$, of degree $d=d(q)$. Then $\Gamma'$ has $2d$ edges with an endpoint among the points $q_1, \dots, q_d$ which are mapped to $q \in \Gamma$. On $2d^+$, resp. $2d^-$ of these edges, $\langle e, \cdot \rangle$ is greater resp. less than $\langle e, q \rangle$. But for each $1\leq i\leq d$, the parameterization $\Gamma'$ has exactly two edges which meet at $q_i$. 
Depending on the up/down character of the two edges of $\Gamma'$ which meet at $q_i$, $1\leq i\leq d$, we can count:\ (+) If $\langle e, \cdot \rangle$ is greater than $\langle e, q \rangle$ on both edges, then $q_i$ is a local minimum point; there are ${\rm lmin}(e,q)$ of these among $q_1, \dots, q_d$.\ (-) If $\langle e, \cdot \rangle$ is less than $\langle e, q \rangle$ on both edges, then $q_i$ is a local maximum point; there are ${\rm lmax}(e,q)$ of these.\ (0) In all remaining cases, the linear function $\langle e, \cdot \rangle$ is greater than $\langle e, q \rangle$ along one edge and less along the other, in which case $q_i$ is counted neither in computing ${\rm lmax}(e,q)$ nor in computing ${\rm lmin}(e,q)$; there are $d(q)-{\rm lmax}(e,q)-{\rm lmin}(e,q)$ of these. Now count the individual edges of $\Gamma'$:\ (+) There are ${\rm lmin}(e,q)$ pairs of edges, each of which is part of a local minimum, both of which are counted among the $2 d^+(e,q)$ edges of $\Gamma'$ with $\langle e, \cdot \rangle$ greater than $\langle e, q \rangle$.\ (-) There are ${\rm lmax}(e,q)$ pairs of edges, each of which is part of a local maximum; these are counted among the number $2d^-(e,q)$ of edges of $\Gamma'$ with $\langle e, \cdot \rangle$ less than $\langle e, q \rangle$. Finally,\ (0) there are $d(q)-{\rm lmax}(e,q)-{\rm lmin}(e,q)$ edges of $\Gamma'$ which are not part of a local maximum or minimum, with $\langle e, \cdot \rangle$ greater than $\langle e, q \rangle$; and an equal number of edges with $\langle e, \cdot \rangle$ less than $\langle e, q \rangle$. 
Thus, the total number of these edges of $\Gamma'$ with $\langle e, \cdot \rangle$ greater than $\langle e, q \rangle$ is $$2d^+= 2{\ \rm lmin}+(d-{\rm lmax}-{\rm lmin})=d+{\rm lmin}-{\rm lmax}.$$ Similarly, $$2d^-= 2{\ \rm lmax}+(d-{\rm lmax}-{\rm lmin})=d+{\rm lmax}-{\rm lmin}.$$ Subtracting gives the conclusion: $${\rm nlm}(e,q):= \frac{{\rm lmax}(e,q)-{\rm lmin}(e,q)}{2}= \frac{d^-(e,q)-d^+(e,q)}{2}.$$ [width0pt ]{} \[cor1\] The number of net local maxima ${\rm nlm}(e,q)$ is independent of the choice of parameterization $\Gamma'$ of the double of $\Gamma$. [[***Proof.*** ]{}]{}Given a direction $e\in S^2$, the up-degree and down-degree $d^\pm(e,q)$ at a vertex $q\in \Gamma$ are defined independently of the choice of $\Gamma'$. [width0pt ]{} \[cor2\] For any $q \in \Gamma$, we have ${\rm ntc}(q) = \frac12\int_{S^2} \Big[{\rm nlm}(e,q)\Big]^+ \,dA_{S^2}.$ [[***Proof.*** ]{}]{}Consider $e \in S^2$. In the definition of ${\rm ntc}(q),$ $\chi_i(e) = \pm 1$ whenever $\pm \langle e, T_i \rangle < 0$. But the number of $1\leq i \leq d$ with $\pm \langle e, T_i \rangle < 0$ equals $d^{\mp}(e,q)$, so that $$\sum_{i=1}^d\chi_i(e)=d^-(e,q)-d^+(e,q)=2\,{\rm nlm}(e,q)$$ by Lemma \[combin\], for almost all $e \in S^2$. [width0pt ]{} \[defmu\] For a graph $\Gamma$ in ${{\mathbb R}}^3$ and $e \in S^2$, define the [*multiplicity at $e$*]{} as $$\mu(e) = \mu_\Gamma(e) = \sum\{{\rm nlm}^+(e,q): q {\rm \ a\ vertex\ of\ } \Gamma {\rm \ or\ a\ critical\ point\ of\ } \langle e,\cdot \rangle\}.$$ Note that $\mu(e)$ is a half-integer. Note also that in the case when $\Gamma$ is a knot, or equivalently, when $d(q) \equiv 2$, $\mu(e)$ is exactly the integer $\mu(\Gamma, e)$, the number of local maxima of $\langle e, \cdot \rangle$ along $\Gamma$ as defined in [@M], p. 252. 
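Lemma \[combin\] and Definition \[defmu\] are easy to check on an explicit example. The sketch below (our own construction, not from the paper; the coordinates are arbitrary) builds a polygonal theta graph with height-monotone strands, computes ${\rm nlm}$ from the up- and down-degrees, and counts a generic fiber of the height function, anticipating Corollary \[fibcard\] below:

```python
from collections import defaultdict

# polygonal theta graph: topological vertices a (bottom) and b (top),
# joined by three strands, each monotone in height
a, b = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
strands = [
    [a, b],
    [a, (0.5, 0.0, 0.5), b],
    [a, (-0.5, 0.3, 0.5), b],
]
e = (0.0, 0.0, 1.0)                      # direction of the height function

nbrs = defaultdict(list)                 # adjacency along the strands
for s in strands:
    for p, q in zip(s, s[1:]):
        nbrs[p].append(q)
        nbrs[q].append(p)

def h(p):                                # height <e, p>
    return sum(ei * pi for ei, pi in zip(e, p))

def nlm(q):                              # Lemma [combin]: (d^- - d^+)/2
    d_minus = sum(1 for r in nbrs[q] if h(r) < h(q))
    d_plus = sum(1 for r in nbrs[q] if h(r) > h(q))
    return 0.5 * (d_minus - d_plus)

mu = sum(max(nlm(q), 0.0) for q in nbrs)     # multiplicity, Definition [defmu]
s0 = 0.6                                     # a regular value of the height
fiber = sum(1 for s in strands for p, q in zip(s, s[1:])
            if (h(p) - s0) * (h(q) - s0) < 0)
```

Here ${\rm nlm}$ is $-\frac32$ at the bottom vertex, $+\frac32$ at the top, and $0$ at the intermediate points, so $\mu(e) = \frac32$, illustrating that the multiplicity is in general only a half-integer; the fiber above $s_0$ has cardinality $3 = 2\sum_{\langle e,q\rangle > s_0} {\rm nlm}(e,q)$.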
\[mucompare\] For almost all $e\in S^2$ and for any parameterization $\Gamma'$ of the double of $\Gamma$, $\mu_\Gamma(e) \leq \frac12\mu_{\Gamma'}(e).$ [[***Proof.*** ]{}]{}We have $\mu_\Gamma(e) = \frac12\sum_q[{\rm lmax}_{\Gamma'}(e,q)-{\rm lmin}_{\Gamma'}(e,q)] \leq \frac12\sum_q {\rm lmax}_{\Gamma'}(e,q) =\frac12\mu_{\Gamma'}(e).$ [width0pt ]{}\ If, in place of the positive part, we sum ${\rm nlm}(e,q)$ itself over $q$ located above a plane orthogonal to $e$, we find a useful quantity: \[fibcard\] For almost all $s_0\in {{\mathbb R}}$ and almost all $e \in S^2$, $$2 \sum\{{\rm nlm}(e,q): \langle e,q \rangle > s_0\} = \#(e,s_0),$$ the cardinality of the fiber $\{p\in \Gamma: \langle e,p \rangle = s_0\}$. [[***Proof.*** ]{}]{} If $s_0 > \max_{p\in \Gamma} \langle e,p \rangle$, then $\#(e,s_0)=0$. Now proceed downward by induction, using Lemma \[combin\]. [width0pt ]{}\ Note that the fiber cardinality of Corollary \[fibcard\] is also the value obtained for knots, where the more general ${\rm nlm}$ may be replaced by the number of local maxima [@M]. \[hidim\] In analogy with Corollary \[fibcard\], we expect that an appropriate generalization of ${{\rm NTC}}$ to curved polyhedral complexes of dimension $\geq 2$ will in the future allow computation of the homology of level sets and sub-level sets of a (generalized) Morse function in terms of a generalization of ${\rm nlm}(e,q)$. \[absnlm\] The multiplicity of a graph in direction $e\in S^2$ may also be computed as $ \mu(e) = \frac12\sum_{q\in\Gamma}|{\rm nlm}(e,q)|$. [[***Proof.*** ]{}]{} It follows from Corollary \[fibcard\] with $s_0<\min_{\Gamma}\langle e,\cdot \rangle$ that $\sum_{q\in\Gamma} {\rm nlm}(e,q)=0$, which is the difference of positive and negative parts. The sum of these parts is $\sum_{q\in\Gamma}|{\rm nlm}(e,q)|=2\mu(e).$ [width0pt ]{} It was shown in Theorem 3.1 of [@M] that, in the case of knots, ${\mathcal C}(\Gamma)=\frac12 \int_{S^2}\mu(e)\, dA_{S^2}$, where Milnor refers to Crofton’s formula. 
We may now extend this result to [*graphs*]{}: \[muthm\] For a (piecewise $C^2$) graph $\Gamma$ mapped into ${{\mathbb R}}^3,$ the net total curvature has the following representation: $${{\rm NTC}}(\Gamma) = \frac12 \int_{S^2} \mu(e) \,dA_{S^2}(e).$$ [[***Proof.*** ]{}]{}We have $ {{\rm NTC}}(\Gamma) = \sum_{j=1}^N {\rm ntc}(q_j) + \int_{\Gamma_{\rm reg}} |\vec{k}| \, ds,$ where $q_1, \dots, q_N$ are the vertices of $\Gamma$, including local extrema as vertices of degree $d(q_j) = 2$, and where $\rm{ntc}(q):= \frac14\int_{S^2}\left[\sum_{i=1}^d\chi_i(e)\right]^+\,dA_{S^2}(e)$ by the definition (\[defnc\]) of ${\rm ntc}(q)$. Applying Milnor’s result to each $C^2$ edge, we have ${\mathcal C}(\Gamma_{\rm reg}) = \frac12 \int_{S^2} \mu_{\Gamma_{\rm reg}}(e) \, dA_{S^2}$. But $\mu_{\Gamma}(e) = \mu_{\Gamma_{\rm reg}}(e) + \sum_{j=1}^N \rm{nlm}^+(e,q_j)$, and the theorem follows. [width0pt ]{}\ \[notemb\] If $f:\Gamma \to {{\mathbb R}}^3$ is piecewise $C^2$ but is not an embedding, then the net total curvature ${{\rm NTC}}(\Gamma)$ is well defined, using the right-hand side of the conclusion of Theorem \[muthm\]. Moreover, ${{\rm NTC}}(\Gamma)$ has the same value when points of self-intersection of $\Gamma$ are redefined as vertices. For $e \in S^2$, we shall use the notation $p_e:{{\mathbb R}}^3 \to e{{\mathbb R}}$ for the orthogonal projection $\langle e,\cdot \rangle$. We shall sometimes identify ${{\mathbb R}}$ with the one-dimensional subspace $e{{\mathbb R}}$ of ${{\mathbb R}}^3$. \[1dsuff\] For any homeomorphism type $\{\Gamma\}$ of graphs, the infimum ${{\rm NTC}}(\{\Gamma\})$ of net total curvature among mappings $f:\Gamma \to {{\mathbb R}}^n$ is assumed by a mapping $f_0:\Gamma \to {{\mathbb R}}$. For any isotopy class $[\Gamma]$ of embeddings $f:\Gamma \to {{\mathbb R}}^3$, the infimum ${{\rm NTC}}([\Gamma])$ of net total curvature is assumed by a mapping $f_0:\Gamma \to {{\mathbb R}}$ in the closure of the given isotopy class. 
Conversely, if $f_0:\Gamma \to {{\mathbb R}}$ is in the closure of a given isotopy class $[\Gamma]$ of embeddings into ${{\mathbb R}}^3$, then for all $\delta >0$ there is an embedding $f:\Gamma \to {{\mathbb R}}^3$ in that isotopy class with ${{\rm NTC}}(f)\leq{{\rm NTC}}(f_0)+\delta$. [[***Proof.*** ]{}]{}Let $f:\Gamma \to {{\mathbb R}}^3$ be any piecewise smooth mapping. By Corollary \[notemb\] and Corollary \[fibcard\], the net total curvature of the projection $p_e\circ f:\Gamma \to {{\mathbb R}}$ of $f$ onto the line in the direction of almost any $e\in S^2$ is given by $2\pi \mu(e)=\pi(\mu(e)+\mu(-e)).$ It follows from Theorem \[muthm\] that ${{\rm NTC}}(\Gamma)$ is the average of $2\pi\mu(e)$ over $e$ in $S^2$. But the half-integer-valued function $\mu(e)$ is lower semi-continuous almost everywhere, as may be seen using Definition \[defnlm\]. Let $e_0 \in S^2$ be a point where $\mu$ attains its essential infimum. Then ${{\rm NTC}}(\Gamma) \geq 2\pi\mu(e_0)={{\rm NTC}}(p_{e_0}\circ f).$ But $(p_{e_0}\circ f)e_0$ is the limit as $\varepsilon \to 0$ of the map $f_\varepsilon$ whose projection in the direction $e_0$ is the same as that of $f$ and which is multiplied by $\varepsilon$ in all orthogonal directions. Since $f_\varepsilon$ is isotopic to $f$, $(p_{e_0}\circ f)e_0$ is in the closure of the isotopy class of $f$. Conversely, given $f_0:\Gamma \to {{\mathbb R}}$ in the closure of a given isotopy class, let $f$ be an embedding in that isotopy class uniformly close to $f_0\, e_0$; $f_\varepsilon$ as constructed above converges uniformly to $f_0$ as $\varepsilon \to 0$, and ${{\rm NTC}}(f_\varepsilon)\to {{\rm NTC}}(f_0)$. [width0pt ]{} \[defflat\] We call a mapping $f:\Gamma\to{{\mathbb R}}^n$ [*flat*]{} (or ${{\rm NTC}}$-flat) if ${{\rm NTC}}(f)= {{\rm NTC}}(\{\Gamma\})$, the minimum value for the topological type of $\Gamma$, among all ambient dimensions $n$. 
In particular, Corollary \[1dsuff\] above shows that for any $\Gamma$, there is a flat mapping $f:\Gamma\to{{\mathbb R}}$. \[minmonot\] Consider a piecewise $C^2$ mapping $f_1:\Gamma \to {{\mathbb R}}$. There is a mapping $f_0:\Gamma \to {{\mathbb R}}$ which is monotonic along the topological edges of $\Gamma$, has values at topological vertices of $\, \Gamma$ arbitrarily close to those of $f_1$, and has ${{\rm NTC}}(f_0) \leq {{\rm NTC}}(f_1).$ [[***Proof.*** ]{}]{}Any piecewise $C^2$ mapping $f_1:\Gamma \to {{\mathbb R}}$ may be approximated uniformly by mappings with a finite set of local extreme points, using the compactness of $\Gamma$. Thus, we may assume without loss of generality that $f_1$ has only finitely many local extreme points. Note that for a mapping $f:\Gamma \to {{\mathbb R}}={{\mathbb R}}e$, ${\rm NTC}(f)=2\pi \mu(e)$: hence, we only need to compare $\mu_{f_0}(e)$ with $\mu_{f_1}(e)$. If $f_1$ is not monotonic on a topological edge $E$, then it has a local extremum at a point $z$ in the interior of $E$. For concreteness, we shall assume $z$ is a local maximum point; the case of a local minimum is similar. Write $v, w$ for the endpoints of $E$. Let $v_1$ be the closest local minimum point to $z$ on the interval of $E$ from $z$ to $v$ (or $v_1=v$ if there is no local minimum point between), and let $w_1$ be the closest local minimum point to $z$ on the interval from $z$ to $w$ (or $w_1=w$). Let $E_1\subset E$ denote the interval between $v_1$ and $w_1$. Then $E_1$ is an interval of a topological edge of $\Gamma$, having end points $v_1$ and $w_1$ and containing an interior point $z$, such that $f_1$ is monotone increasing on the interval from $v_1$ to $z$, and monotone decreasing on the interval from $z$ to $w_1$. By switching $v_1$ and $w_1$ if needed, we may assume that $f_1(v_1) < f_1(w_1) < f_1(z)$. 
Let $f_0$ be equal to $f_1$ except on the interior of the interval $E_1$, and map $E_1$ monotonically to the interval of ${{\mathbb R}}$ between $f_1(v_1)$ and $f_1(w_1)$. Then for $f_1(w_1) < s < f_1(z)$, the cardinality $\#(e,s)_{f_0} = \#(e,s)_{f_1} -2$. For $s$ in all other intervals of ${{\mathbb R}}$, this cardinality is unchanged. Therefore, ${\rm nlm}_{f_1}(w_1) = {\rm nlm}_{f_0}(w_1)-1$, by Lemma \[combin\]. This implies that ${\rm nlm}^+_{f_1}(w_1) \geq {\rm nlm}^+_{f_0}(w_1)-1$. Meanwhile, ${\rm nlm}_{f_1}(z)=1$, a term which does not appear in the formula for $\mu_{f_0}$ (see Definition \[defmu\]). Thus $\mu_{f_0} \leq \mu_{f_1},$ and ${{\rm NTC}}(f_0)\leq{{\rm NTC}}(f_1)$. Proceeding inductively, we remove each local extremum in the interior of any edge of $\Gamma$, without increasing ${{\rm NTC}}$. [width0pt ]{} REPRESENTATION FORMULA FOR NOWHERE-SMOOTH GRAPHS {#nonsmooth} ================================================ Recall that, while defining the total curvature for continuous graphs in section \[deftc\] above, we needed the monotonicity of ${{\rm NTC}}(P)$ under refinement of [*polygonal*]{} graphs $P$. We are now ready to prove this. \[monotmu\] Let $P$ and $\widetilde{P}$ be polygonal graphs in ${{\mathbb R}}^3$, having the same topological vertices, and homeomorphic to each other. Suppose that every vertex of $P$ is also a vertex of $\widetilde{P}$: $\widetilde{P}$ is a [*refinement*]{} of $P$. Then for almost all $e \in S^2$, the multiplicity $\mu_{\widetilde{P}}(e) \geq \mu_P(e).$ As a consequence, ${{\rm NTC}}(\widetilde{P}) \geq {{\rm NTC}}(P)$. [[***Proof.*** ]{}]{}We may assume, as an induction step, that $\widetilde{P}$ is obtained from $P$ by replacing the edge having endpoints $q_0$, $q_2$ with two edges, one having endpoints $q_0$, $q_1$ and the other having endpoints $q_1$, $q_2$. Choose $e \in S^2$. 
We consider various cases: If the new vertex $q_1$ satisfies $\langle e, q_0\rangle < \langle e, q_1\rangle < \langle e, q_2\rangle$, then ${\rm nlm}_{\widetilde{P}}(e,q_i)={\rm nlm}_P(e,q_i)$ for $i = 0,2$ and ${\rm nlm}_{\widetilde{P}}(e,q_1)=0$, hence $\mu_{\widetilde{P}}(e) = \mu_P(e)$. If $\langle e, q_0\rangle < \langle e, q_2\rangle < \langle e, q_1\rangle$, then ${\rm nlm}_{\widetilde{P}}(e,q_0)={\rm nlm}_P(e,q_0)$ and ${\rm nlm}_{\widetilde{P}}(e,q_1)=1$. The vertex $q_2$ requires more careful counting: the up- and down-degree $d_{\widetilde{P}}^\pm(e,q_2)=d_P^\pm(e,q_2) \pm 1$, so that by Lemma \[combin\], ${\rm nlm}_{\widetilde{P}}(e,q_2)={\rm nlm}_P(e,q_2)-1$. Meanwhile, for each of the polygonal graphs, $\mu(e)$ is the sum over $q$ of ${\rm nlm}^+(e,q)$, so the change from $\mu_P(e)$ to $\mu_{\widetilde{P}}(e)$ depends on the value of ${\rm nlm}_P(e,q_2)$:\ (a) if ${\rm nlm}_P(e,q_2)\leq 0$, then ${\rm nlm}_{\widetilde{P}}^+(e,q_2)={\rm nlm}_P^+(e,q_2)=0$;\ (b) if ${\rm nlm}_P(e,q_2) = \frac12,$ then ${\rm nlm}_{\widetilde{P}}^+(e,q_2)= {\rm nlm}_P^+(e,q_2)-\frac12$;\ (c) if ${\rm nlm}_P(e,q_2)\geq 1$, then ${\rm nlm}_{\widetilde{P}}^+(e,q_2)= {\rm nlm}_P^+(e,q_2)-1$.\ Since the new vertex $q_1$ does not appear in $P$, recalling that ${\rm nlm}_{\widetilde{P}}(e,q_1)=1$, we have $\mu_{\widetilde{P}}(e) - \mu_P(e) = +1, +\frac12$ or $0$ in the respective cases (a), (b) or (c). In any case, $\mu_{\widetilde{P}}(e) \geq \mu_P(e)$. The reverse inequality $\langle e, q_1\rangle < \langle e, q_2\rangle < \langle e, q_0\rangle$ may be reduced to the case just above by replacing $e \in S^2$ with $-e$, since $\mu_P(-e)=\mu_P(e)$ for any polygonal graph $P$. Then, depending on whether ${\rm nlm}_P(e,q_2)$ is $\leq -1$, $=-\frac12$ or $\geq 0$, we find that $\mu_{\widetilde{P}}(e)-\mu_P(e)= {\rm nlm}^+_{\widetilde{P}}(e, q_2)-{\rm nlm}^+_P(e,q_2) = 0$, $\frac12$, or $1$. In any case, $\mu_{\widetilde{P}}(e)\geq \mu_P(e)$. 
These arguments are unchanged if $q_0$ is switched with $q_2$. This covers all cases except those in which equality occurs between $\langle e,q_i \rangle$ and $\langle e,q_j \rangle$ ($i\neq j$). The set of such unit vectors $e$ forms a set of measure zero in $S^2$. The conclusion ${{\rm NTC}}(\widetilde{P}) \geq {{\rm NTC}}(P)$ now follows from Theorem \[muthm\]. [width0pt ]{}\ We remark here that this step of proving the monotonicity for the nowhere-smooth case differs from Milnor’s argument for the knot total curvature, where it was shown by two applications of the triangle inequality for spherical triangles. Milnor extended his results for piecewise smooth knots to continuous knots in [@M]; we shall carry out an analogous extension to continuous graphs. \[critpt\] We say a point $q\in \Gamma$ is [*critical*]{} relative to $e \in S^2$ when $q$ is a topological vertex of $\Gamma$ or when $\langle e, \cdot\rangle$ is not monotone in any open interval of $\Gamma$ containing $q$. Note that at some points of a differentiable curve, $\langle e, \cdot\rangle$ may have derivative zero but still not be considered a critical point relative to $e$ by our definition. This is appropriate to the $C^0$ category. For a continuous graph $\Gamma$, when ${{\rm NTC}}(\Gamma)$ is finite, we shall show that the number of critical points is finite for almost all $e$ in $S^2$ (see Lemma \[fincrit\] below). \[monotcvge\] Let $\Gamma$ be a continuous, finite graph in ${{\mathbb R}}^3$, and choose a sequence $\widehat{P_k}$ of $\Gamma$-approximating polygonal graphs with ${{\rm NTC}}(\Gamma)= \lim_{k \rightarrow \infty} {{\rm NTC}}(\widehat{P_k}).$ Then for each $e \in S^2$, there is a refinement $P_k$ of $\widehat{P_k}$ such that $\lim_{k \rightarrow \infty}\mu_{P_k}(e)$ exists in $[0,\infty]$. [[***Proof.*** ]{}]{} First, for each $k$ in sequence, we refine $\widehat{P_k}$ to include all vertices of $\widehat{P_{k-1}}$.
Then for all $e\in S^2$, $\mu_{\widehat{P_k}}(e) \geq \mu_{\widehat{P_{k-1}}}(e)$, by Proposition \[monotmu\]. Second, we refine $\widehat{P_k}$ so that the arc of $\Gamma$ corresponding to each edge of $\widehat{P_k}$ has diameter $\leq 1/k$. Third, given a particular $e \in S^2$, for each edge $\widehat{E_k}$ of $\widehat{P_k}$, we add $0,1$ or $2$ points from $\Gamma$ as vertices of $\widehat{P_k}$ so that $\max_{\widehat{E_k}}\langle e,\cdot\rangle= \max_E\langle e,\cdot\rangle$ where $E$ is the closed arc of $\Gamma$ corresponding to $\widehat{E_k}$; and similarly so that $\min_{\widehat{E_k}} \langle e,\cdot\rangle= \min_E\langle e,\cdot\rangle$. Write $P_k$ for the result of this three-step refinement. Note that all vertices of $P_{k-1}$ appear among the vertices of $P_k$. Then by Proposition \[monotmu\], $${{\rm NTC}}(\widehat{P_k}) \leq {{\rm NTC}}(P_k) \leq {{\rm NTC}}(\Gamma),$$ so we still have ${{\rm NTC}}(\Gamma)= \lim_{k \rightarrow \infty} {{\rm NTC}}(P_k).$ Now compare the values of $\mu_{P_k}(e)=\sum_{q\in P_k} {\rm nlm_{P_k}}^+(e,q)$ with the same sum for $P_{k-1}$. Since $P_k$ is a refinement of $P_{k-1}$, we have $\mu_{P_k}(e) \geq \mu_{P_{k-1}}(e)$ by Proposition \[monotmu\]. Therefore the values $\mu_{P_k}(e)$ are non-decreasing in $k$, which implies they are either convergent or properly divergent; in the latter case we write $\lim_{k \rightarrow \infty}\mu_{P_k}(e)= \infty$. [width0pt ]{} For a continuous graph $\Gamma$, define the [*multiplicity*]{} at $e\in S^2$ as $\mu_\Gamma(e):= \lim_{k \rightarrow \infty}\mu_{P_k}(e) \in [0,\infty]$, where $P_k$ is a sequence of $\Gamma$-approximating polygonal graphs, refined with respect to $e$, as given in Lemma \[monotcvge\]. Note that any two $\Gamma$-approximating polygonal graphs have a common refinement. Hence, from the proof of Lemma \[monotcvge\], any two choices of sequences $\{\widehat{P_k}\}$ of $\Gamma$-approximating polygonal graphs lead to the same value $\mu_\Gamma(e)$. 
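As an illustration of this definition (an aside added here, not needed for the argument), one may compute $\mu_\Gamma$ for a round circle; the answer recovers the classical total curvature of a convex closed plane curve.

```latex
% Aside: the multiplicity of a round circle (illustration only).
% For the unit circle $\Gamma \subset {\mathbb R}^2 \subset {\mathbb R}^3$ and
% any direction $e \in S^2$ not orthogonal to the plane of $\Gamma$, the height
% function $\langle e,\cdot\rangle$ has exactly one local maximum along
% $\Gamma$, with ${\rm nlm} = 1$ there, and one local minimum, with
% ${\rm nlm} = -1$, hence ${\rm nlm}^+ = 0$; the same holds for each inscribed
% convex polygon $P_k$. Therefore
\[
   \mu_\Gamma(e) \;=\; \lim_{k\to\infty} \mu_{P_k}(e) \;=\; 1
   \qquad \text{for almost all } e \in S^2,
\]
% so that, using ${\rm NTC}(P_k) = \frac12 \int_{S^2} \mu_{P_k}\, dA_{S^2}$
% for polygonal graphs,
\[
   {\rm NTC}(\Gamma) \;=\; \lim_{k\to\infty} {\rm NTC}(P_k)
   \;=\; \frac12 \int_{S^2} 1 \, dA_{S^2}(e) \;=\; 2\pi,
\]
% the classical total curvature of a convex closed plane curve.
```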
\[a.a.e\] Let $\Gamma$ be a continuous, finite graph in ${{\mathbb R}}^3$. Then $\mu_\Gamma: S^2 \to [0,\infty]$ takes its values in the half-integers, or $+ \infty$. Now assume ${{\rm NTC}}(\Gamma) < \infty$. Then $\mu_\Gamma$ is integrable, hence finite almost everywhere on $S^2$, and $$\label{nc=int} {{\rm NTC}}(\Gamma) = \frac12 \int_{S^2} \mu_\Gamma(e) \, dA_{S^2}(e).$$ For almost all $e \in S^2$, a sequence $P_k$ of $\Gamma$-approximating polygonal graphs, converging uniformly to $\Gamma$, may be chosen (depending on $e$) so that each local extreme point $q$ of $\langle e,\cdot\rangle$ along $\Gamma$ occurs as a vertex of $P_k$ for sufficiently large $k$. [[***Proof.*** ]{}]{}Given $e\in S^2$, let $\{P_k\}$ be the sequence of $\Gamma$-approximating polygonal graphs from Lemma \[monotcvge\]. If $\mu_\Gamma(e)$ is finite, then $\mu_{P_k}(e)=\mu_\Gamma(e)$ for $k$ sufficiently large, a half-integer. Suppose ${{\rm NTC}}(\Gamma) < \infty$. Then the half-integer-valued functions $\mu_{P_k}$ are non-negative, integrable on $S^2$ with bounded integrals since ${{\rm NTC}}(P_k) \leq {{\rm NTC}}(\Gamma) < \infty$, and monotone increasing in $k$. Thus for almost all $e \in S^2$, $\mu_{P_k}(e) = \mu_\Gamma(e)$ for $k$ sufficiently large. Since the functions $\mu_{P_k}$ are non-negative and pointwise non-decreasing almost everywhere on $S^2$, it now follows from the Monotone Convergence Theorem that $$\int_{S^2} \mu_\Gamma(e)\, dA_{S^2}(e) = \lim_{k\to \infty}\int_{S^2} \mu_{P_k}(e)\, dA_{S^2}(e)= 2 {{\rm NTC}}(\Gamma).$$ Finally, the polygonal graphs $P_k$ have maximum edge length $\to 0$. For almost all $e \in S^2$, $\langle e,\cdot\rangle$ is not constant along any open arc of $\Gamma$, and $\mu_\Gamma(e)$ is finite. Given such an $e$, choose $\ell = \ell(e)$ sufficiently large that $\mu_{P_k}(e) = \mu_\Gamma(e)$ and $\mu_{P_k}(-e) = \mu_\Gamma(-e)$ for all $k \geq \ell$.
Then for $k \geq \ell$, along any edge $E_k$ of $P_k$ with corresponding arc $E$ of $\Gamma$, the maximum and minimum values of $\langle e,\cdot\rangle$ along $E$ occur at the endpoints, which are also the endpoints of $E_k$. Otherwise, as $P_k$ is further refined, new interior local maximum resp. local minimum points of $E$ would contribute a new, positive value to $\mu_{P_k}(e)$ resp. to $\mu_{P_k}(-e)$ as $k$ increases. Since the diameter of the corresponding arc $E$ of $\Gamma$ tends to zero as $k \to \infty$, any local maximum or local minimum of $\langle e,\cdot\rangle$ must become an endpoint of some edge of $P_k$ for $k$ sufficiently large, and for $k\geq \ell$ in particular. [width0pt ]{}\ Our next lemma establishes the regularity of a graph $\Gamma$, originally only assumed continuous, provided it has finite net total curvature (or finite total curvature in any other sense which includes the total curvature of the edges). \[fincrit\] Let $\Gamma$ be a continuous, finite graph in ${{\mathbb R}}^3$, with ${{\rm NTC}}(\Gamma)<\infty$. Then $\Gamma$ has continuous one-sided unit tangent vectors $T_1(p)$ and $T_2(p)$ at each point $p$, not a topological vertex. If $p$ is a vertex of degree $d$, then each of the $d$ edges which meet at $p$ have well-defined unit tangent vectors at $p$: $T_1(p),\dots,T_d(p)$. For almost all $e \in S^2$, $$\label{mu=sum} \mu_\Gamma(e) = \sum_q\{{\rm nlm}(e,q)\}^+,$$ where the sum is over the [*finite*]{} number of topological vertices of $\Gamma$ and critical points $q$ of $\langle e, \cdot \rangle$ along $\Gamma$. Further, for each $q$, ${\rm nlm}(e,q)= \frac12[d^-(e,q) - d^+(e,q)]$. All of these critical points which are not topological vertices are local extrema of $\langle e,\cdot\rangle$ along $\Gamma$.
[[***Proof.*** ]{}]{}We have seen in the proof of Lemma \[a.a.e\] that for almost all $e \in S^2$, the linear function $\langle e,\cdot\rangle$ is not constant along any open arc of $\Gamma$, and by Lemma \[monotcvge\] there is a sequence $\{P_k\}$ of $\Gamma$-approximating polygonal graphs with $\mu_\Gamma(e) = \mu_{P_k}(e)$ for $k$ sufficiently large. We have further shown that each local maximum point of $\langle e,\cdot\rangle$ is a vertex of $P_k$, possibly of degree two, for $k$ large enough. Recall that $\mu_{P_k}(e) = \sum_q{\rm nlm}_{P_k}^+(e,q)$. Thus, each local maximum point $q$ for $\langle e, \cdot \rangle$ along $\Gamma$ provides a non-negative term ${\rm nlm}_{P_k}^+(e,q)$ in the sum for $\mu_{P_k}(e)$. Fix such an integer $k$. Consider a point $q\in \Gamma$ which is not a topological vertex of $\Gamma$ but is a critical point of $\langle e,\cdot\rangle$. We shall show, by an argument similar to one used by van Rooij in [@vR], that $q$ must be a local extreme point. As a first step, we show that $\langle e,\cdot\rangle$ is monotone on a sufficiently small interval on either side of $q$. Choose an ordering of the closed edge $E$ of $\Gamma$ containing $q$, and consider the interval $E_+$ of points $\geq q$ with respect to this ordering. Suppose that $\langle e,\cdot\rangle$ is not monotone on any subinterval of $E_+$ with $q$ as endpoint. Then in any interval $(q,r_1)$ there are points $p_2 > q_2 > r_2$ so that the numbers $\langle e,p_2 \rangle,\langle e,q_2\rangle,\langle e, r_2\rangle$ are not monotone. It follows by an induction argument that there exist decreasing sequences $p_n \to q$, $q_n \to q$, and $r_n \to q$ of points of $E_+$ such that for each $n$, $r_{n-1} > p_n > q_n > r_n > q$, but the value $\langle e,q_n\rangle$ lies outside of the closed interval between $\langle e,p_n\rangle$ and $\langle e,r_n\rangle$. As a consequence, there is a local extremum $s_n \in (r_n, p_n)$. 
Since $r_{n-1} > p_n$, the $s_n$ are all distinct, $1\leq n < \infty$. But by Lemma \[a.a.e\], all local extreme points, specifically $s_n$, of $\langle e, \cdot \rangle$ along $\Gamma$ occur among the [*finite*]{} number of vertices of $P_k$, a contradiction. This shows that $\langle e, \cdot \rangle$ is monotone on an interval to the right of $q$. An analogous argument shows that $\langle e, \cdot \rangle$ is monotone on an interval to the left of $q$. Recall that for a [*critical point*]{} $q$ relative to $e$, $\langle e,\cdot\rangle$ is not monotone on any neighborhood of $q$. Since $\langle e,\cdot\rangle$ is monotone on an interval on either side, the sense of monotonicity must be opposite on the two sides of $q$. Therefore every critical point $q$ along $\Gamma$ for $\langle e, \cdot \rangle$, which is not a topological vertex, is a local extremum. We have chosen $k$ large enough that $\mu_\Gamma(e) = \mu_{P_k}(e)$. Then for any edge $E_k$ of $P_k$, the function $\langle e, \cdot \rangle$ is monotone along the corresponding arc $E$ of $\Gamma$, as well as along $E_k$. Also, $E$ and $E_k$ have common end points. It follows that for each $t \in {{\mathbb R}}$, the cardinality $\#(e,t)$ of the fiber $\{q\in \Gamma: \langle e,q \rangle =t \}$ is the same for $P_k$ as for $\Gamma$. We may see from Lemma \[combin\] applied to $P_k$ that for each vertex or critical point $q$, ${\rm nlm}_{P_k}(e,q) = \frac12[d_{P_k}^-(e,q) - d_{P_k}^+(e,q)]$; but ${\rm nlm}(e,q)$ and $d^\pm(e,q)$ have the [*same*]{} values for $\Gamma$ as for $P_k$. The formula $\mu_\Gamma(e) = \sum_q\{{\rm nlm}_\Gamma(e,q)\}^+$ now follows from the corresponding formula for $P_k$, for almost all $e \in S^2$. Consider an open interval $E$ of $\Gamma$ with endpoint $q$. We have just shown that for a.a. $e\in S^2$, $\langle e, \cdot \rangle$ is monotone on a subinterval with endpoint $q$. Choose a sequence $p_\ell$ from $E$, $p_\ell\to q$, and write $T_\ell := \frac{p_\ell-q}{|p_\ell-q|} \in S^2$. 
Then $\lim_{\ell\to\infty}T_\ell$ exists. Otherwise, since $S^2$ is compact, there are subsequences $\{T_{m_n}\}$ and $\{T_{k_n}\}$ with $T_{m_n} \to T'$ and $T_{k_n} \to T'' \neq T'$. But for an open set of $e\in S^2$, $\langle e, T' \rangle < 0 < \langle e, T'' \rangle$. For such $e$, $\langle e, p_{m_n}\rangle<\langle e, q\rangle<\langle e, p_{k_n} \rangle$ for $n \gg 1$. That is, as $p\to q$, $p\in E$, $\langle e, p \rangle$ assumes values above and below $\langle e, q\rangle$ infinitely often, contradicting monotonicity on an interval starting at $q$ for a.a. $e\in S^2$. This shows that $\Gamma$ has one-sided tangent vectors $T_1(q), \dots, T_d(q)$ at each point $q\in \Gamma$ of degree $d=d(q)$ ($d=2$ if $q$ is not a topological vertex). Further, as $k\to \infty,$ $T_i^{P_k}(q) \to T_i^\Gamma(q)$, $1\leq i\leq d(q)$, since edges of $P_k$ have diameter $\leq \frac{1}{k}$. The remaining conclusions follow readily. [width0pt ]{} \[ctsnc\] Let $\Gamma$ be a continuous, finite graph in ${{\mathbb R}}^3$, with ${{\rm NTC}}(\Gamma)<\infty$. Then for each point $q$ of $\Gamma$, the contribution at $q$ to net total curvature is given by equation , where for $e \in S^2$, $\chi_i(e)=$ the sign of $\langle -T_i(q), e \rangle$, $1\leq i \leq d(q)$. (Here, if $q$ is not a topological vertex, we understand $d=2$.) [[***Proof.*** ]{}]{}According to Lemma \[fincrit\], for $1\leq i \leq d(q)$, $T_i(q)$ is defined and tangent to an edge $E_i$ of $\Gamma$, which is continuously differentiable at its end point $q$. If $P_n$ is a sequence of $\Gamma$-approximating polygonal graphs with maximum edge length tending to $0$, then the corresponding unit tangent vectors $T^{P_n}_i(q) \to T^{\Gamma}_i(q)$ as $n \to \infty$. For each $P_n$, we have $${\rm ntc}^{P_n}(q) = \frac{1}{4}\int_{S^2} \left[\sum_{i=1}^d{\chi_i}^{P_n}(e)\right]^+\,dA_{S^2}(e),$$ and ${\chi_i}^{P_n} \to {\chi_i}^\Gamma$ in measure on $S^2$.
Hence, the integrals for $P_n$ converge to those for $\Gamma$, which is equation . [width0pt ]{}\ We are ready to state the formula for net total curvature, by localization on $S^2$, a generalization of Theorem \[muthm\]: \[muthm2\] For a continuous graph $\Gamma,$ the net total curvature ${{\rm NTC}}(\Gamma) \in (0,\infty]$ has the following representation: $${{\rm NTC}}(\Gamma) = \frac12 \int_{S^2} \mu(e) \,dA_{S^2}(e),$$ where, for almost all $e \in S^2$, the multiplicity $\mu(e)$ is a positive half-integer or $+\infty$, given as the finite sum . [[***Proof.*** ]{}]{}If ${{\rm NTC}}(\Gamma)$ is finite, then the theorem follows from Lemma \[a.a.e\] and Lemma \[fincrit\]. Suppose ${{\rm NTC}}(\Gamma) = \sup {{\rm NTC}}(P_k)$ is infinite, where $P_k$ is a refined sequence of polygonal graphs as in Lemma \[monotcvge\]. Then $\mu_\Gamma(e)$ is the non-decreasing limit of $\mu_{P_k}(e)$ for all $e \in S^2$. Thus $\mu_\Gamma(e) \geq \mu_{P_k}(e)$ for all $e$ and $k$, and, whenever $\mu_\Gamma(e)$ is finite, $\mu_\Gamma(e) = \mu_{P_k}(e)$ for $k\geq\ell(e)$. This implies that $\mu_\Gamma(e)$ is a positive half-integer or $\infty$. Since ${{\rm NTC}}(\Gamma) = \infty$, the integral $${{\rm NTC}}(P_k) = \frac12\int_{S^2}\mu_{P_k}(e) \,dA_{S^2}(e)$$ is arbitrarily large as $k \to \infty$, but for each $k$ is less than or equal to $$\frac12 \int_{S^2} \mu_\Gamma(e) \,dA_{S^2}(e).$$ Therefore this latter integral equals $\infty$, and thus equals ${{\rm NTC}}(\Gamma).$ [width0pt ]{}\ We turn our attention next to the tameness of graphs of finite total curvature. \[untangle\] Let $n$ be a positive integer, and write $Z$ for the set of $n$-th roots of unity in ${{\mathbb C}}= {{\mathbb R}}^2$. Given a continuous one-parameter family $S_t$, $ 0 \leq t < 1$, of sets of $n$ points in ${{\mathbb R}}^2$, there exists a continuous one-parameter family $\Phi_t:{{\mathbb R}}^2 \to {{\mathbb R}}^2$ of homeomorphisms with compact support such that $\Phi_t(S_t) = Z$, $0 \leq t < 1$.
[[***Proof.*** ]{}]{}It is well known that there is a homeomorphism $\Phi_0:{{\mathbb R}}^2 \to {{\mathbb R}}^2$, isotopic to the identity, such that $\Phi_0(S_0) = Z$ and $\Phi_0 =$ id outside of a compact set. This completes the case $ t_0 = 0$ of the following continuous induction argument. Suppose that $[0,t_0] \subset [0,1)$ is a subinterval such that there exists a continuous one-parameter family $\Phi_t:{{\mathbb R}}^2 \to {{\mathbb R}}^2$ of homeomorphisms with compact support, with $\Phi_t(S_t) = Z$ for all $0 \leq t \leq t_0$. We shall extend this property to an interval $[0,t_0+\delta]$. Write $B_\varepsilon (Z)$ for the union of balls $B_\varepsilon (\zeta_i)$ centered at the $n$ roots of unity $\zeta_1, \dots, \zeta_n$. For $\varepsilon < \sin{\frac{\pi}{n}},$ these balls are disjoint. We may choose $0 < \delta < 1-t_0$ such that $\Phi_{t_0}(S_t) \subset B_\varepsilon(Z)$ for all $t_0 \leq t \leq t_0 + \delta.$ Write the points of $S_t$ as $x_i(t), \ 1 \leq i \leq n,$ where $\Phi_{t_0}(x_i(t)) \in B_\varepsilon(\zeta_i)$. For each $t \in [t_0, t_0 + \delta],$ each of the balls $B_\varepsilon(\zeta_i)$ may be mapped onto itself by a homeomorphism $\psi_t$, varying continuously with $t$, such that $\psi_{t_0}$ is the identity, $\psi_t$ is the identity near the boundary of $B_\varepsilon(\zeta_i)$ for all $t \in [t_0, t_0 + \delta]$, and $\psi_t(\Phi_{t_0}(x_i(t))) = \zeta_i$ for all such $t$. For example, we may construct $\psi_t$ so that for each $y \in B_\varepsilon(\zeta_i)$, $y-\psi_t(y)$ is parallel to $\Phi_{t_0}(x_i(t)) - \zeta_i$. We now define $\Phi_t = \psi_t \circ \Phi_{t_0}$ for each $t \in [t_0, t_0 + \delta].$ As a consequence, we see that there is no maximal interval $[0,t_0] \subset [0, 1)$ such that there is a continuous one-parameter family $\Phi_t:{{\mathbb R}}^2 \to {{\mathbb R}}^2$ of homeomorphisms with compact support with $\Phi_t(S_t) = Z$, for all $0 \leq t \leq t_0$. Thus, this property holds for the entire interval $0 \leq t<1$.
[width0pt ]{}\ In the following theorem, the total curvature of a graph may be understood in terms of any definition which includes the total curvature of edges and which is continuous as a function of the unit tangent vectors at each vertex. This includes net total curvature, TC of [@T] and CTC of [@GY1]. \[tame\] Suppose $\Gamma \subset {{\mathbb R}}^3$ is a continuous graph with finite total curvature. Then for any $\varepsilon > 0$, $\Gamma$ is isotopic to a $\Gamma$-approximating polygonal graph $P$ with edges of length at most $\varepsilon$, whose total curvature is less than or equal to that of $\Gamma$. [[***Proof.*** ]{}]{}Since $\Gamma$ has finite total curvature, by Lemma \[fincrit\], at each topological vertex of degree $d$ the edges have well-defined unit tangent vectors $T_1, \dots, T_d$, which are each the limit of the unit tangent vectors to the corresponding edges. If at each vertex the unit tangent vectors $T_1, \dots, T_d$ are distinct, then any sufficiently fine $\Gamma$-approximating polygonal graph will be isotopic to $\Gamma$, which proves this easier case. We consider therefore $n$ edges $E_1, \dots, E_n$ which end at a vertex $q$ with common unit tangent vectors $T_1 = \dots = T_n$. Choose orthogonal coordinates $(x,y,z)$ for ${{\mathbb R}}^3$ so that this common tangent vector $T_1=\dots=T_n=(0,0,-1)$ and $q = (0,0,1)$. For some $\varepsilon>0,$ in the slab $1-\varepsilon\leq z\leq 1,$ the edges $E_1, \dots, E_n$ project one-to-one onto the $z$-axis. After rescaling about $q$ by a factor $\geq \frac{1}{\varepsilon}$, $E_1, \dots, E_n$ form a braid $B$ of $n$ strands in the slab $0 \leq z < 1$ of ${{\mathbb R}}^3$, plus the point $q=(0,0,1)$. Each strand $E_i$ has $q$ as an endpoint, and the coordinate $z$ is strictly monotone along $E_i$, $1\leq i \leq n$. Write $S_t = B \cap \{ z = t\}$. Then $S_t$ is a set of $n$ distinct points in the plane $\{ z = t\}$ for each $0\leq t<1$.
According to Proposition \[untangle\], there are homeomorphisms $\Phi_t$ of the plane $\{ z = t\}$ for each $0\leq t<1$, isotopic to the identity in that plane, continuous as a function of $t$, such that $\Phi_t(S_t) = Z \times \{t\},$ where $Z$ is the set of $n$th roots of unity in the $(x,y)$-plane, and $\Phi_t$ is the identity outside of a compact set of the plane $\{ z = t\}$. We may suppose that $S_t$ lies in the open disk of radius $a(1-t)$ of the plane $\{ z = t\}$, for some (arbitrarily small) constant $a>0$. We modify $\Phi_t$, first replacing its values with $(1-t) \Phi_t$ inside the disk of radius $a(1-t)$. We then modify $\Phi_t$ outside the disk of radius $a(1-t)$, such that $\Phi_t$ is the identity outside the disk of radius $2a(1-t)$. Having thus modified the homeomorphisms $\Phi_t$ of the planes $\{ z = t\}$, we may now define an isotopy $\Phi$ of ${{\mathbb R}}^3$ by mapping each plane $\{ z = t\}$ to itself by the homeomorphism $\Phi_0^{-1} \circ \Phi_t$, $0\leq t<1$; and extend to the remaining planes $\{ z = t\}$, $t\geq 1$ and $t<0$, by the identity. Then the closure of the image of the braid $B$ is the union of line segments from $q =(0,0,1)$ to the $n$ points of $S_0$ in the plane $\{ z = 0\}$. Since each $\Phi_t$ is isotopic to the identity in the plane $\{ z = t\}$, $\Phi$ is isotopic to the identity of ${{\mathbb R}}^3$. This procedure may be carried out in disjoint sets of ${{\mathbb R}}^3$ surrounding each unit vector which occurs as tangent vector to more than one edge at a vertex of $\Gamma$. Outside these sets, we inscribe a polygonal arc in each edge of $\Gamma$ to obtain a $\Gamma$-approximating polygonal graph $P$. By Definition \[gendefnet\], $P$ has total curvature less than or equal to the total curvature of $\Gamma$. 
[width0pt ]{} Artin and Fox [@AF] introduced the notion of [*tame*]{} and [*wild*]{} knots in ${{\mathbb R}}^3$; the extension to graphs is the following: We say that a graph in ${{\mathbb R}}^3$ is [*tame*]{} if it is isotopic to a polyhedral graph; otherwise, it is [*wild*]{}. Milnor proved in [@M] that knots of finite total curvature are tame. More generally, we have A continuous graph $\Gamma \subset {{\mathbb R}}^3$ of finite total curvature is tame. [[***Proof.*** ]{}]{}This is an immediate consequence of Theorem \[tame\], since the $\Gamma$-approximating polygonal graph $P$ is isotopic to $\Gamma$. [width0pt ]{} Tameness does not imply finite total curvature. For a well-known example, consider $\Gamma\subset{{\mathbb R}}^2$ to be the continuous curve $\{(x,h(x)):x\in [-1,1]\}$ where the function $$h(x)=-\frac{x}{\pi}\sin\frac{\pi}{x},$$ $h(0)=0$, has a sequence of zeroes $\pm\frac{1}{n}\to 0$ as $n\to\infty$. Then the total curvature of $\Gamma$ between $(\frac{1}{n},0)$ and $(\frac{1}{n+1},0)$ converges to $\pi$ as $n \to\infty$. Thus ${\mathcal C}(\Gamma) = \infty$. On the other hand, $h(x)$ is continuous on $[-1,1]$, from which it readily follows that $\Gamma$ is tame.

ON VERTICES OF SMALL DEGREE {#three/four}
===========================

We are now in a position to illustrate some properties of net total curvature ${{\rm NTC}}(\Gamma)$ in a few relatively simple cases, and to make some observations regarding ${{\rm NTC}}(\{\Gamma\})$, the minimum net total curvature for the homeomorphism type of a graph $\Gamma \subset {{\mathbb R}}^n$ (see Definition \[defflat\] above).

Minimum curvature for given degree
----------------------------------

\[val3\] If a vertex $q$ has [**odd**]{} degree, then $\rm{ntc}(q) \geq \pi/2$. If $d(q)=3$, then equality holds if and only if the three tangent vectors $T_1, T_2, T_3$ at $q$ are coplanar but do not lie in any open half-plane. If $q$ has [**even**]{} degree $2m$, then the minimum value of $\rm{ntc}(q)$ is $0$.
Moreover, the equality $\rm{ntc}(q)=0$ only occurs when $T_1(q), \dots, T_{2m}(q)$ form $m$ opposite pairs. [[***Proof.*** ]{}]{} Let $q$ have odd degree $d(q)=2m+1$. Then from Lemma \[combin\], for any $e\in S^2$, we see that ${\rm nlm}(e,q)$ is a half-integer $\pm \frac12, \dots, \pm \frac{2m+1}{2}$. In particular, $|{\rm nlm}(e,q)|\geq \frac12$. Corollary \[cor2\] and the proof of Corollary \[absnlm\] show that $${\rm ntc}(q) = \frac{1}{4}\int_{S^2} \Big|{\rm nlm}(e,q)\Big|\,dA_{S^2}.$$ Since $|{\rm nlm}(e,q)|\geq \frac12$ and the area of $S^2$ is $4\pi$, it follows that ${\rm ntc}(q) \geq \frac14\cdot\frac12\cdot 4\pi = \pi/2$. If the degree $d(q)=3$, then $|{\rm nlm}(e,q)|= \frac12$ if and only if both $d^+(q)$ and $d^-(q)$ are nonzero, that is, $q$ is not a local extremum for $\langle e,\cdot \rangle$. If $\rm{ntc}(q) = \pi/2$, then this must be true for almost every direction $e\in S^2$. Thus, the three tangent vectors must be coplanar, and may not lie in an open half-plane. If $d(q)=2m$ is even and equality $\rm{ntc}(q)=0$ holds, then the formula above for ${\rm ntc}(q)$ in terms of $|{\rm nlm}(e,q)|$ would require ${\rm nlm}(e,q)\equiv 0$, and hence $d^+(e,q)=d^-(e,q)=m$ for almost all $e\in S^2$: whenever $e$ rotates so that the plane orthogonal to $e$ passes through $T_i$, another tangent vector $T_j$ must cross the plane in the opposite direction, for a.a. $e$, which implies $T_j=-T_i$. [width0pt ]{} \[odd&gt;3\] If a vertex $q$ of odd degree $d(q)=2p+1$ has the minimum value $\rm{ntc}(q)=\pi/2$, and a hyperplane $P\subset {{\mathbb R}}^n$ contains an even number of the tangent vectors at $q$, and no others, then these tangent vectors form opposite pairs. The proof is seen by fixing any $(n-2)$-dimensional subspace $L$ of $P$ and rotating $P$ by a small positive or negative angle $\delta$ to a hyperplane $P_\delta$ containing $L$.
Since $P_\delta$ must have $k$ of the vectors $T_1, \dots, T_{2p+1}$ on one side and $k+1$ on the other side, for some $0\leq k \leq p$, by comparing $\delta>0$ with $\delta<0$ it follows that exactly half of the tangent vectors in $P$ lie nonstrictly on each side of $L$. The proof may be continued as in the last paragraph of the proof of Proposition \[val3\]. In particular, any two independent tangent vectors $T_i$ and $T_j$ share the $2$-plane they span with a third, the three vectors not lying in any open half-plane: in fact, the third vector must lie in every hyperplane containing $T_i$ and $T_j$. For example, a flat $K_{5,1}$ in ${{\mathbb R}}^3$ must have five straight segments, two of them opposite, and the remaining three coplanar but not lying in any open half-plane. This includes the case of four coplanar line segments, since the four must be in opposite pairs, and either opposing pair may be considered as coplanar with the fifth segment.

Non-monotonicity of ${{\rm NTC}}$ for subgraphs
-----------------------------------------------

\[notmonotone\] If $\Gamma_0$ is a subgraph of a graph $\Gamma$, then ${{\rm NTC}}(\Gamma_0)$ might [**not**]{} be $\leq {{\rm NTC}}(\Gamma).$ For a simple polyhedral example, we may consider the “butterfly" graph $\Gamma$ in the plane with six vertices: $q_0^\pm = (0,\pm 1), q_1^\pm = (1,\pm 3),$ and $q_2^\pm = (-1,\pm 3)$. $\Gamma$ has seven edges: three vertical edges $L_0, L_1$ and $L_2$ are the line segments $L_i$ joining $q_i^-$ to $q_i^+$. Four additional edges are the line segments from $q_0^\pm$ to $q_1^\pm$ and from $q_0^\pm$ to $q_2^\pm$, which form the smaller angle $2 \alpha$ at $q_0^\pm$, where $\tan \alpha = 1/2$, so that $\alpha < \pi/4.$ The subgraph $\Gamma_0$ will be $\Gamma$ minus the interior of $L_0$.
Then ${{\rm NTC}}(\Gamma_0) = {\mathcal C}(\Gamma_0)= 6 \pi - 8 \alpha.$ However, ${{\rm NTC}}(\Gamma) = 4(\pi - \alpha) + 2(\pi/2) = 5 \pi - 4 \alpha,$ which is $<{{\rm NTC}}(\Gamma_0)$, since $5 \pi - 4 \alpha < 6 \pi - 8 \alpha$ is equivalent to $4\alpha < \pi$, which holds because $\alpha < \pi/4$. [width0pt ]{}\ The monotonicity property, which is shown in Observation \[notmonotone\] to fail for ${{\rm NTC}}(\Gamma)$, is a virtue of Taniyama’s total curvature ${{\rm TC}}(\Gamma)$.

Net total curvature $\neq$ cone total curvature $\neq$ Taniyama’s total curvature
---------------------------------------------------------------------------------

It is not difficult to construct three unit vectors $T_1, T_2, T_3$ in ${{\mathbb R}}^3$ such that the values of ${\rm ntc}(q)$, ${\rm ctc}(q)$ and ${\rm tc}(q)$, with these vectors as the $d(q) = 3$ tangent vectors to a graph at a vertex $q$, have different values. For example, we may take $T_1, T_2$ and $T_3$ to be three unit vectors in a plane, making equal angles $2\pi/3$. According to Proposition \[val3\], we have the contribution to net total curvature ${\rm ntc}(q) = \pi/2$. But the contribution to cone total curvature is ${\rm ctc}(q) = 0$. Namely, ${\rm ctc}(q) := \sup_{e \in S^2} \sum_{i=1}^3\left(\frac{\pi}{2}-\arccos\langle T_i, e\rangle \right).$ In this supremum, we may choose $e$ to be normal to the plane of $T_1, T_2$ and $T_3$, and ${\rm ctc}(q) = 0$ follows. Meanwhile, ${\rm tc}(q)$ is the sum of the exterior angles formed by the three pairs of vectors, each equal to $\pi/3$, so that ${\rm tc}(q) = \pi$. A similar computation for degree $d$ and coplanar vectors making equal angles gives ${\rm ctc}(q) = 0$, and ${\rm tc}(q) = \frac{\pi}{2}\Big[\frac{(d-1)^2}{2}\Big]$ (brackets denoting integer part), while ${\rm ntc}(q) = \pi/2$ for $d$ odd, ${\rm ntc}(q) = 0$ for $d$ even. This example indicates that ${\rm tc}(q)$ may be significantly larger than ${\rm ntc}(q)$. In fact, we have \[tc&gt;&gt;ntc\] If a vertex $q$ of a graph $\Gamma$ has degree $d=d(q)\geq 2$, then ${\rm tc}(q) \geq (d-1) {\rm ntc}(q)$.
This follows from the definition of ${\rm ntc}(q)$. Let $T_1, \dots, T_d$ be the unit tangent vectors at $q$. The exterior angle between $T_i$ and $T_j$ is $$\arccos\langle -T_i,T_j \rangle = \frac{1}{4} \int_{S^2} (\chi_i + \chi_j)^+ \, dA_{S^2}.$$ The contribution ${\rm tc}(q)$ at $q$ to total curvature ${{\rm TC}}(\Gamma)$ equals the sum of these integrals over all $1 \leq i < j \leq d$. The sum of the integrands is $$\sum_{1\leq i<j\leq d}(\chi_i + \chi_j)^+\geq \Bigg[\sum_{1\leq i<j\leq d}(\chi_i + \chi_j)\Bigg]^+ = (d-1)\Big[\sum_{i=1}^d \chi_i\Big]^+,$$ since each $\chi_i$ appears in exactly $d-1$ of the pairs. Integrating over $S^2$ and dividing by $4$, we have ${\rm tc}(q) \geq (d-1) {\rm ntc}(q)$. [width0pt ]{}

Conditional additivity of net total curvature under taking union
----------------------------------------------------------------

Observation \[notmonotone\] shows the failure of monotonicity of ${{\rm NTC}}$ for subgraphs due to the cancellation phenomena at each vertex. The following subadditivity statement specifies the necessary and sufficient condition for the additivity of net total curvature under taking union of graphs. \[subadditivity\] Given two graphs $\Gamma_1$ and $\Gamma_2\subset {{\mathbb R}}^n$ with $\Gamma_1 \cap \Gamma_2 = \{p_1, \dots, p_N\}$, the net total curvature of $\,\Gamma = \Gamma_1 \cup \Gamma_2$ obeys the sub-additivity law $$\begin{aligned} \label{subadd} {{\rm NTC}}(\Gamma) & = & {{\rm NTC}}(\Gamma_1) + {{\rm NTC}}(\Gamma_2)+\nonumber\\ &+&\frac12\sum_{j=1}^N\int_{S^2}[{\rm nlm}_{\Gamma}^+(e, p_j)- {\rm nlm}_{\Gamma_1}^+(e, p_j)- {\rm nlm}_{\Gamma_2}^+(e, p_j)]\, dA_{S^2} \\ & \leq & {{\rm NTC}}(\Gamma_1) + {{\rm NTC}}(\Gamma_2).\nonumber\end{aligned}$$ In particular, additivity holds if and only if $${\rm nlm}_{\Gamma_1}(e, p_j)\, {\rm nlm}_{\Gamma_2}(e, p_j) \geq 0$$ for all points $p_j$ of $\,\Gamma_1 \cap \Gamma_2$ and almost all $e \in S^2$.
[[***Proof.*** ]{}]{}The edges of $\Gamma$ and vertices other than $p_1, \dots, p_N$ are edges and vertices of $\Gamma_1$ or of $\Gamma_2$, so we only need to consider the contribution at the vertices $p_1, \dots, p_N$ to $\mu(e)$ for $e\in S^2$ (see Definition \[defmu\]). The sub-additivity follows from the general inequality $(a+b)^+ \leq a^+ + b^+$ for any real numbers $a$ and $b$. Namely, let $a:= {\rm nlm}_{\Gamma_1}(e, p_j)$ and $b:= {\rm nlm}_{\Gamma_2}(e, p_j)$, so that ${\rm nlm}_{\Gamma}(e, p_j) = a+b$, as follows from Lemma \[combin\]. Now integrate both sides of the inequality over $S^2$, sum over $j=1, \dots, N$ and apply Theorem \[muthm\]. As for the equality case, suppose that $ab\geq 0$. We then note that either $a > 0$ and $b > 0$, or $a < 0$ and $b < 0$, or $a = 0$, or $b = 0$. In all four cases, we have $a^+ + b^+ = (a+b)^+$. Applied with $a= {\rm nlm}_{\Gamma_1}(e, p_j)$ and $b= {\rm nlm}_{\Gamma_2}(e, p_j)$, assuming that ${\rm nlm}_{\Gamma_1}(e, p_j){\rm nlm}_{\Gamma_2}(e,p_j)\geq 0$ holds for all $j = 1, \dots, N$ and almost all $e \in S^2$, this implies that ${{\rm NTC}}(\Gamma_1\cup\Gamma_2)={{\rm NTC}}(\Gamma_1)+{{\rm NTC}}(\Gamma_2).$ To show that the equality ${{\rm NTC}}(\Gamma_1 \cup\Gamma_2)={{\rm NTC}}(\Gamma_1)+{{\rm NTC}}(\Gamma_2)$ implies the inequality ${\rm nlm}_{\Gamma_1}(e, p_j){\rm nlm}_{\Gamma_2}(e, p_j)\geq 0$ for all $j=1, \dots, N$ and for almost all $e \in S^2$, we suppose, to the contrary, that there is a set $U$ of positive measure in $S^2$, such that for some vertex $p_j$ in $\Gamma_1 \cap \Gamma_2$, whenever $e$ is in $U$, the inequality $ab<0$ is satisfied, where $a={\rm nlm}_{\Gamma_1}(e, p_j)$ and $b={\rm nlm}_{\Gamma_2}(e, p_j)$. Then for $e$ in $U$, $a$ and $b$ are of opposite signs. Let $U_1$ be the part of $U$ where $a< 0<b$ holds: we may assume $U_1$ has positive measure, otherwise exchange $\Gamma_1$ with $\Gamma_2$.
On $U_1$, we have $$(a+b)^+ < b^+= a^+ + b^+.$$ Recall that $a+b={\rm nlm}_{\Gamma}(e,p_j).$ Hence the inequality between half-integers $${\rm nlm}_\Gamma^+(e,p_j)< {\rm nlm}_{\Gamma_1}^+(e,p_j)+{\rm nlm}_{\Gamma_2}^+(e, p_j)$$ is valid on the set of positive measure $U_1$, which in turn implies that ${{\rm NTC}}(\Gamma_1 \cup \Gamma_2) < {{\rm NTC}}(\Gamma_1) + {{\rm NTC}}(\Gamma_2)$, contradicting the assumption of equality. [width0pt ]{} One-point union of graphs ------------------------- \[1ptunion\] If the graph $\Gamma$ is the one-point union of graphs $\Gamma_1$ and $\Gamma_2$, where the points $p_1$ chosen in $\Gamma_1$ and $p_2$ chosen in $\Gamma_2$ are not topological vertices, then the minimum ${{\rm NTC}}$ among all mappings is subadditive, and the minimum ${{\rm NTC}}$ minus $2\pi$ is superadditive: $${{\rm NTC}}(\{\Gamma_1\})+{{\rm NTC}}(\{\Gamma_2\})-2\pi\leq {{\rm NTC}}(\{\Gamma\})\leq {{\rm NTC}}(\{\Gamma_1\})+{{\rm NTC}}(\{\Gamma_2\}).$$ Further, if the points $p_1 \in\Gamma_1$ and $p_2 \in\Gamma_2$ may appear as extreme points on mappings of minimum ${{\rm NTC}}$, then the minimum net total curvature among all mappings, minus $2\pi$, is additive: $${{\rm NTC}}(\{\Gamma\})= {{\rm NTC}}(\{\Gamma_1\})+{{\rm NTC}}(\{\Gamma_2\})-2\pi.$$ [[***Proof.*** ]{}]{}Write $p \in \Gamma$ for the identified points $p_1=p_2=p$. Choose flat mappings $f_1:\Gamma_1\to{{\mathbb R}}$ and $f_2:\Gamma_2\to{{\mathbb R}}$, adding constants so that the chosen points $p_1 \in \Gamma_1$ and $p_2 \in \Gamma_2$ have $f_1(p_1)=f_2(p_2)=0.$ Further, by Proposition \[minmonot\], we may assume that $f_1$ and $f_2$ are strictly monotone on the edges of $\Gamma_1$ resp. $\Gamma_2$ containing $p_1$ resp. $p_2$. Let $f:\Gamma\to{{\mathbb R}}$ be defined as $f_1$ on $\Gamma_1$ and as $f_2$ on $\Gamma_2$. Then at the common point of $\Gamma_1$ and $\Gamma_2$, $f(p)=0$, and $f$ is continuous. 
But since $f_1$ and $f_2$ are monotone on the edges containing $p_1$ and $p_2$, ${\rm nlm}_{\Gamma_1}(p_1)=0={\rm nlm}_{\Gamma_2}(p_2)$, so we have ${{\rm NTC}}(\{\Gamma\})\leq {{\rm NTC}}(f)= {{\rm NTC}}(f_1)+ {{\rm NTC}}(f_2)= {{\rm NTC}}(\{\Gamma_1\})+{{\rm NTC}}(\{\Gamma_2\})$ by Proposition \[subadditivity\]. Next, for all $g:\Gamma \to {{\mathbb R}}$, we shall show that ${\rm NTC}(g) \geq {\rm NTC}(\{\Gamma_1\})+{\rm NTC}(\{\Gamma_2\})-2\pi.$ Given $g$, write $g_1$ resp. $g_2$ for the restriction of $g$ to $\Gamma_1$ resp. $\Gamma_2$. Then $\mu_g(e)=\mu_{g_1}(e)-{\rm nlm}^+_{g_1}(p_1) +\mu_{g_2}(e) -{\rm nlm}^+_{g_2}(p_2) +{\rm nlm}^+_g(p).$ Now for any real numbers $a$ and $b$, the difference $(a+b)^+ - (a^++b^+)$ is equal to $\pm a$, $\pm b$ or $0$, depending on the various signs. Let $a={\rm nlm}_{g_1}(p_1)$ and $b={\rm nlm}_{g_2}(p_2)$. Then since $p_1$ and $p_2$ are not topological vertices of $\Gamma_1$ resp. $\Gamma_2$, $a,b \in \{-1,0,+1\}$ and $a+b = {\rm nlm}_{g}(p)$ by Lemma \[combin\]. In any case, we have $${\rm nlm}_{g}^+(p)-{\rm nlm}_{g_1}^+(p_1)- {\rm nlm}_{g_2}^+(p_2) \geq -1.$$ Thus, $\mu_g(e)\geq \mu_{g_1}(e)+\mu_{g_2}(e) -1$, and multiplying by $2\pi$, ${\rm NTC}(g) \geq {{\rm NTC}}(g_1)+{{\rm NTC}}(g_2)-2\pi \geq {\rm NTC}(\{\Gamma_1\})+{\rm NTC}(\{\Gamma_2\})-2\pi.$ Finally, assume $p_1$ and $p_2$ are extreme points for flat mappings $f_1:\Gamma_1 \to {{\mathbb R}}$ resp. $f_2:\Gamma_2 \to {{\mathbb R}}$. We may assume that $f_1(p_1)=0=\min f_1(\Gamma_1)$ and $f_2(p_2)=0=\max f_2(\Gamma_2)$. Then ${\rm nlm}_{f_2}(p_2)=1$ and ${\rm nlm}_{f_1}(p_1)=-1$, and hence using Lemma \[combin\], ${\rm nlm}_{f}(p)=0$. 
So $\mu_f(e)=\mu_{f_1}(e)-{\rm nlm}^+_{f_1}(p_1) +\mu_{f_2}(e)-{\rm nlm}^+_{f_2}(p_2)+{\rm nlm}^+_f(p)= \mu_{f_1}(e)+\mu_{f_2}(e)-1.$ Multiplying by $2\pi$, we have ${\rm NTC}(\{\Gamma\})\leq {\rm NTC}(f)= {\rm NTC}(\{\Gamma_1\})+{\rm NTC}(\{\Gamma_2\})-2\pi.$ [width0pt ]{}\ NET TOTAL CURVATURE FOR DEGREE $3$ {#deg3} ================================== Simple description of net total curvature {#simple} ----------------------------------------- \[net3\] For any graph $\Gamma$ and any parameterization $\,\Gamma'$ of its double, ${{\rm NTC}}(\Gamma) \leq \frac12 {\mathcal C}(\Gamma')$. If $\, \Gamma$ is a [*trivalent*]{} graph, that is, having vertices of degree at most three, then ${{\rm NTC}}(\Gamma) = \frac12 {\mathcal C}(\Gamma')$ for any parameterization $\Gamma'$ which does not immediately repeat any edge of $\Gamma$. [[***Proof.*** ]{}]{}The first conclusion follows from Corollary \[mucompare\]. Now consider a trivalent graph $\Gamma$. Observe that $\Gamma'$ would be forced to immediately repeat any edge which ends in a vertex of degree $1$; thus, we may assume that $\, \Gamma$ has only vertices of degree $2$ or $3$. Since $\Gamma'$ covers each edge of $\Gamma$ twice, we need only show, for every vertex $q$ of $\Gamma$, having degree $d = d(q) \in \{2,3\}$, that $$\label{*} 2\, {\rm ntc}_\Gamma(q)= \sum_{i=1}^d {\rm c}_{\Gamma'}(q_i),$$ where $q_1, \dots, q_d$ are the vertices of $\Gamma'$ over $q$. If $d=2$, since $\Gamma'$ does not immediately repeat any edge of $\Gamma,$ we have ${\rm ntc}_\Gamma(q)={\rm c}_{\Gamma'}(q_1)= {\rm c}_{\Gamma'}(q_2)$, so equation \[\*\] clearly holds. For $d=3$, write both sides of equation \[\*\] as integrals over $S^2$, using the definition of ${\rm ntc}_\Gamma(q)$. Since $\Gamma'$ does not immediately repeat any edge, the three pairs of tangent vectors $\{T_1^{\Gamma'}(q_j),T_2^{\Gamma'}(q_j)\}$, $1\leq j\leq 3$, comprise all three pairs taken from the triple $\{T_1^\Gamma(q),T_2^\Gamma(q),T_3^\Gamma(q)\}$.
We need to show that $$\begin{aligned} 2\int_{S^2}\left[\chi_1+\chi_2+\chi_3\right]^+\,dA_{S^2}&=& \int_{S^2}\left[\chi_1+\chi_2\right]^+\,dA_{S^2}+ \\ + \int_{S^2}\left[\chi_2+\chi_3\right]^+\,dA_{S^2}&+& \int_{S^2}\left[\chi_3+\chi_1\right]^+\,dA_{S^2},\end{aligned}$$ where at each direction $e\in S^2$, $\chi_j(e) = \pm 1$ is the sign of $\langle -e, T_j^\Gamma(q)\rangle$. But the integrands are equal at almost every point $e$ of $S^2$: $$2\left[\chi_1+\chi_2+\chi_3\right]^+ = \left[\chi_1+\chi_2\right]^+ + \left[\chi_2+\chi_3\right]^+ + \left[\chi_3+\chi_1\right]^+,$$ as may be confirmed by cases: $6=6$ if $\chi_1=\chi_2=\chi_3=+1$; $2=2$ if exactly one of the $\chi_i$ equals $-1$, and $0=0$ in the remaining cases. [width0pt ]{} Simple description of net total curvature fails, $d \geq 4$ ----------------------------------------------------------- \[notinf\] We have seen in Proposition \[net3\] that for graphs with vertices of degree $\leq 3$, if a parameterization $\Gamma'$ of the double $\widetilde\Gamma$ of $\Gamma$ does not immediately repeat any edge of $\Gamma$, then ${{\rm NTC}}(\Gamma) = \frac12 {\mathcal C}(\Gamma')$, the total curvature in the usual sense of the link $\Gamma'$. A natural suggestion would be that for general graphs $\Gamma$, ${{\rm NTC}}(\Gamma)$ might be half the infimum of total curvature of all such parameterizations $\Gamma'$ of the double. However, in some cases, we have the [**strict inequality**]{} ${{\rm NTC}}(\Gamma) < \inf_{\Gamma'}\frac12 {\mathcal C}(\Gamma')$. In light of Proposition \[net3\], we choose an example of a vertex $q$ of degree four, and consider the local contributions to ${{\rm NTC}}$ for $\Gamma = K_{1,4}$ and for $\Gamma'$, which is the union of four arcs. Suppose that for a small positive angle $\alpha$ ($\alpha \leq 1$ radian would suffice), the four unit tangent vectors at $q$ are $T_1 = (1,0,0)$; $T_2 = (0,1,0)$; $T_3=(-\cos\alpha,0,\sin\alpha)$; and $T_4 = (0,-\cos\alpha,-\sin\alpha)$.
Write the exterior angles as $\theta_{ij} = \pi - \arccos \langle T_i, T_j \rangle.$ Then $\inf_{\Gamma'}\frac12 {\mathcal C}(\Gamma')= \theta_{13}+\theta_{24} = 2\alpha.$ However, ${\rm ntc}(q)$ is strictly less than $2\alpha$. This may be seen by writing ${\rm ntc}(q)$ as an integral over $S^2$, according to the definition of ${\rm ntc}(q)$, and noting that cancellation occurs between two of the four lune-shaped sectors. [width0pt ]{} Minimum NTC for trivalent graphs -------------------------------- Using the relation ${{\rm NTC}}(\Gamma) = \frac12 {\mathcal C}(\Gamma')$ between the net total curvature of a given trivalent graph $\Gamma$ and the total curvature for a non-reversing double cover $\Gamma'$ of the graph, we can determine the minimum net total curvature of a trivalent graph embedded in ${{\mathbb R}}^n$, whose value is then related to the Euler characteristic of the graph $\chi(\Gamma)=-k/2$. First we introduce the following definition. \[bridge\] For a given graph $\Gamma$ and a mapping $f:\Gamma \to {{\mathbb R}}$, let the [*extended bridge number*]{} $B(f)$ be one-half the number of local extrema. Write $B(\{\Gamma\})$ for the minimum of $B(f)$ among all mappings $f:\Gamma \to {{\mathbb R}}$. For a given isotopy type $[\Gamma]$ of embeddings into ${{\mathbb R}}^3$, let $B([\Gamma])$ be one-half the minimum number of local extrema for a mapping $f:\Gamma\to{{\mathbb R}}$ in the closure of the isotopy class $[\Gamma]$. For an integer $m\geq 3$, let $\theta_m$ be the graph with two vertices $q^+, q^-$ and $m$ edges, each of which has $q^+$ and $q^-$ as its two endpoints. Then $\theta = \theta_3$ has the form of the lower-case Greek letter $\theta$. For a knot, the number of local maxima equals the number of local minima. The minimum number of local maxima is called the [*bridge number*]{}, and equals the minimum number of local minima. This is consistent with our Definition \[bridge\] of the extended bridge number.
Of course, for knots, the minimum bridge number among all isotopy classes $B(\{S^1\})=1$, and only $B([S^1])$ is of interest for a specific isotopy class $[S^1]$. For certain graphs, the minimum numbers of local maxima and local minima may not occur at the same time for any mapping: see the example of Observation \[B>1\] below. For isotopy classes of $\theta$-graphs, Goda [@Go] has given a definition of an integer-valued bridge index which is similar in spirit to the definition above. \[trivalent\] If $\, \Gamma$ is a trivalent graph, and if $f_0:\Gamma\to{{\mathbb R}}$ is monotone on topological edges and has the minimum number $2B(\{\Gamma\})$ of local extrema, then ${{\rm NTC}}(f_0) = {{\rm NTC}}(\{\Gamma\})= \pi\Big(2B(\{\Gamma\}) + \frac{k}{2}\Big)$, where $k$ is the number of topological vertices of $\, \Gamma.$ For a given isotopy class $[\Gamma]$, ${{\rm NTC}}([\Gamma])= \pi\Big(2B([\Gamma]) + \frac{k}{2}\Big)$. [[***Proof.*** ]{}]{}Recall that ${{\rm NTC}}(\{\Gamma\})$ denotes the infimum of ${{\rm NTC}}(f)$ among $f:\Gamma \to {{\mathbb R}}^3$ or among $f:\Gamma \to {{\mathbb R}}$, as may be seen from Corollary \[1dsuff\]. We first consider a mapping $f_1:\Gamma \to {{\mathbb R}}$ with the property that any local maximum or local minimum points of $f_1$ are interior points of topological edges. Then all topological vertices $v$, since they have degree $d(v)=3$ and $d^\pm(v)\neq 0$, have ${\rm nlm}(v)=\pm 1/2$, by Proposition \[val3\]. Let $\Lambda$ be the number of local maximum points of $f_1$, $V$ the number of local minimum points, $\lambda$ the number of vertices with ${\rm nlm}=+1/2$, and ${\tt y}$ the number of vertices with ${\rm nlm}=-1/2$. Then $\lambda + {\tt y} = k$, the total number of vertices, and $\Lambda + V \geq 2B(\{\Gamma\})$. Hence applying Corollary \[absnlm\], $$\label{exacttri} \mu = \frac12\sum_v|{\rm nlm}(v)|= \frac12[\Lambda+V+\frac{\lambda+{\tt y}}{2}]\geq B(\{\Gamma\})+k/4,$$ with equality iff $\Lambda + V = 2B(\{\Gamma\})$.
We next consider any mapping $f_0:\Gamma \to {{\mathbb R}}$ in general position: in particular, the critical values of $f_0$ are isolated. In a similar fashion to the proof of Proposition \[minmonot\], we shall replace $f_0$ with a mapping whose local extrema are not topological vertices. Specifically, if $f_0$ assumes a local maximum at any topological vertex $v$, then, since $d(v)=3$, ${\rm nlm}_{f_0}(v)=3/2$. $f_0$ may be isotoped in a small neighborhood of $v$ to $f_1:\Gamma \to {{\mathbb R}}$ so that near $v$, the local maximum occurs at an interior point $q$ of one of the three edges with endpoint $v$, and thus ${\rm nlm}_{f_1}(q)=1$; while the up-degree $d_{f_1}^+(v)=1$ and the down-degree $d_{f_1}^-(v)=2$, so that ${\rm nlm}_{f_1}(v)$ is now $\frac12$. Thus, $\mu_{f_1}(e)=\mu_{f_0}(e)$. Similarly, if $f_0$ assumes a local minimum at a topological vertex $w$, then $f_0$ may be isotoped in a neighborhood of $w$ to $f_1:\Gamma \to {{\mathbb R}}$ so that the local minimum of $f_1$ near $w$ occurs at an interior point of any of the three edges with endpoint $w$, and $\mu_{f_1}(e)=\mu_{f_0}(e)$. Then any local extreme points of $f_1$ are interior points of topological edges. Thus, we have shown that $\mu_{f_0}(e) \geq B(\{\Gamma\}) + k/4$, with equality if $f_1$ has exactly $2B(\{\Gamma\})$ as its number of local extrema, which holds iff $f_0$ has the minimum number $2B(\{\Gamma\})$ of local extrema. Thus ${{\rm NTC}}(\{\Gamma\})=2\pi\mu_{f_0}(e)= 2\pi\Big(B(\{\Gamma\})+k/4\Big)= \pi\Big(2B(\{\Gamma\})+k/2\Big).$ Similarly, for a given isotopy class $[\Gamma]$ of embeddings into ${{\mathbb R}}^3$, we may choose $f_0:\Gamma\to{{\mathbb R}}$ in the closure of the isotopy class, deform $f_0$ to a mapping $f_1$ in the closure of $[\Gamma]$ having no topological vertices as local extrema and count $\mu_{f_0}(e)=\mu_{f_1}(e)\geq B([\Gamma]) + k/4$, with equality if $f_0$ has the minimum number $2B([\Gamma])$ of local extrema.
This shows that ${{\rm NTC}}([\Gamma])=\pi\Big(2B([\Gamma])+k/2\Big).$ [width0pt ]{} An example geometrically illustrating the lower bound is given by the dual graph $\Gamma^*$ of the one-skeleton $\Gamma$ of a triangulation of $S^2$, with the point $\infty$ not coinciding with any of the vertices of $\Gamma^*$. The Koebe-Andreev-Thurston theorem says that there is a circle packing which realizes the vertex set of $\Gamma^*$ as the set of centers of the circles (see [@S]). The so realized $\Gamma^*$, stereographically projected to ${{\mathbb R}}^2 \subset {{\mathbb R}}^3$, attains the lower bound of Theorem \[trivalent\] with $B(\{\Gamma^*\}) = 1$, namely ${{\rm NTC}}([\Gamma^*])=\pi(2+\frac{k}{2})=\pi( 2-\chi(\Gamma^*))$, where $k$ is the number of vertices. \[muformula\] If $\Gamma$ is a trivalent graph with $k$ topological vertices, and $f_0:\Gamma \to {{\mathbb R}}$ is a mapping in general position, having $\Lambda$ local maximum points and $V$ local minimum points, then $$\mu_{f_0}(e)=\frac12(\Lambda + V) +\frac{k}{4}\geq B(\{\Gamma\})+\frac{k}{4}.$$ [[***Proof.*** ]{}]{}Follows immediately from the proof of Theorem \[trivalent\]: $f_0$ and $f_1$ have the same number of local maximum or minimum points. [width0pt ]{}\ An interesting trivalent graph is $L_m$, the “ladder of $m$ rungs” obtained from two unit circles in parallel planes by adding $m$ line segments (“rungs”) perpendicular to the planes, each joining one vertex on the first circle to another vertex on the second circle. For example, $L_4$ is the $1$-skeleton of the cube in ${{\mathbb R}}^3$. Note that $L_m$ may be embedded in ${{\mathbb R}}^2$, and that the bridge number $B(\{L_m\})=1$. Since $L_m$ has $2m$ trivalent vertices, we may apply Theorem \[trivalent\] to compute the minimum ${{\rm NTC}}$ for the type of $L_m$: The minimum net total curvature ${{\rm NTC}}(\{L_m\})$ for graphs of the type of $L_m$ equals $\pi(2+m)$.
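The value ${{\rm NTC}}(\{L_3\})=5\pi$ can be cross-checked by brute force: for a map monotone on edges and determined by distinct vertex heights, $\mu=\frac12\sum_v|{\rm nlm}(v)|$ with ${\rm nlm}(v)=\frac12\big(d^-(v)-d^+(v)\big)$, and one minimizes over all orderings of the vertex heights. The following Python sketch (illustrative only; the graph encoding is ad hoc and not part of the paper) recovers $\mu_{\min}=\frac52$, that is, ${{\rm NTC}}=5\pi=\pi(2+3)$:

```python
from itertools import permutations
from fractions import Fraction

def min_mu(vertices, edges):
    """Minimize mu = (1/2) * sum_v |nlm(v)| over all orderings of distinct
    vertex heights, for maps monotone on each edge (nlm(v) = (d^- - d^+)/2)."""
    best = None
    for heights in permutations(range(len(vertices))):
        h = dict(zip(vertices, heights))
        total = Fraction(0)
        for v in vertices:
            down = sum(1 for a, b in edges
                       if (a == v and h[b] < h[v]) or (b == v and h[a] < h[v]))
            up = sum(1 for a, b in edges
                     if (a == v and h[b] > h[v]) or (b == v and h[a] > h[v]))
            total += Fraction(abs(down - up), 2)
        mu = total / 2
        best = mu if best is None else min(best, mu)
    return best

# Ladder L_3: two 3-cycles t0 t1 t2 and b0 b1 b2, joined by rungs ti--bi.
verts = ["t0", "t1", "t2", "b0", "b1", "b2"]
edges = [("t0", "t1"), ("t1", "t2"), ("t2", "t0"),
         ("b0", "b1"), ("b1", "b2"), ("b2", "b0"),
         ("t0", "b0"), ("t1", "b1"), ("t2", "b2")]

mu_min = min_mu(verts, edges)
print("min mu =", mu_min)  # prints: min mu = 5/2, so NTC = 2*pi*mu = 5*pi
```

The minimum is attained, for instance, by the planar height ordering $b_0<b_1<b_2<t_2<t_1<t_0$, which has one vertex of $|{\rm nlm}|=\frac32$ at each extreme and $|{\rm nlm}|=\frac12$ at the remaining four vertices.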
\[B>1\] For certain connected trivalent graphs $\Gamma$ containing cut points, the minimum extended bridge number $B(\{\Gamma\})$ may be greater than $1$. [*Example:*]{} Let $\Gamma$ be the union of three disjoint circles $C_1, C_2, C_3$ with three edges $E_i$ connecting a point $p_i \in C_i$ with a fourth vertex $p_0$, which is not in any of the $C_i$, and which is a [*cut point*]{} of $\Gamma$: the number of connected components of $\Gamma\backslash p_0$ is greater than for $\Gamma$. Given $f:\Gamma \to {{\mathbb R}}$, after a permutation of $\{1,2,3\}$, we may assume there is a minimum point $q_1\in C_1\cup E_1$ and a maximum point $q_3\in C_3\cup E_3$. If $q_1$ and $q_3$ are both in $C_1\cup E_1$, we may choose $C_2$ arbitrarily in what follows. Restricted to the closed set $C_2 \cup E_2$, $f$ assumes either a maximum or a minimum at a point $q_2 \neq p_0$. Since $q_2 \neq p_0$, $q_2$ is also a local maximum or a local minimum for $f$ on $\Gamma$. That is, $q_1, q_2, q_3$ are all local extrema. In the notation of the proof of Theorem \[trivalent\], we have the number of local extrema $V + \Lambda \geq 3$. Therefore $B(\{\Gamma\}) \geq \frac{3}{2}$, and ${\rm NTC}(\{\Gamma\})\geq \pi(3+k/2)=5\pi.$ The reader will be able to construct similar trivalent examples with $B(\{\Gamma\})$ arbitrarily large. [width0pt ]{} In contrast to the results of Theorem \[trivalent\] and of Theorem \[allbut1tri\] below, which treat trivalent or nearly trivalent graphs, the minimum of ${{\rm NTC}}$ for a given graph type cannot in general be computed merely by counting vertices, but depends in a more subtle way on the topology of the graph: \[samedegree\] When $\Gamma$ is not trivalent, the minimum ${{\rm NTC}}(\{\Gamma\})$ of net total curvature for a connected graph $\Gamma$ with $B(\{\Gamma\})=1$ is not determined by the number of vertices and their degrees. [*Example:*]{} We shall construct two planar graphs $S_m$ and $R_m$ having the same number of vertices, all of degree $4$.
Choose an integer $m\geq 3$ and take the image of the embedding $f_\varepsilon$ of the “sine wave” $S_m$ to be the union of the polar-coordinate graphs $C_\pm\subset{{\mathbb R}}^2$ of two functions: $r=1 \pm \varepsilon\sin(m\theta)$. $S_m$ has $4m$ edges and $2m$ vertices, all of degree $4$, at $r=1$ and $\theta = \pi/m, 2\pi/m, \dots , 2\pi$. For $0<\varepsilon <1$, $f_\varepsilon(S_m)=C_+\cup C_-$ is the union of two smooth cycles. For small positive $\varepsilon$, $C_+$ and $C_-$ are convex. The $2m$ vertices all have ${\rm nlm}(q)=0$, so ${{\rm NTC}}(f_\varepsilon)={{\rm NTC}}(C_+)+{{\rm NTC}}(C_-)=2\pi+2\pi$. Therefore ${{\rm NTC}}(\{S_m\}) \leq {{\rm NTC}}(f_\varepsilon) = 4\pi$. For the other graph type, let the “ring graph” $R_m\subset{{\mathbb R}}^2$ be constructed by adding $m$ disjoint small circles $C_i$, each crossing one large circle $C$ at two points $v_{2i-1}, v_{2i}$, $1\leq i\leq m$. Then $R_m$ has $4m$ edges. We construct $R_m$ so that the $2m$ vertices $v_1,v_2,\dots,v_{2m}$ appear in cyclic order around $C$. Then $R_m$ has the same number $2m$ of vertices as does $S_m$, all of degree $4$. At each vertex $v_j$, we have ${\rm nlm}(v_j)=0$, so in this embedding, ${{\rm NTC}}(R_m) = 2\pi(m+1)$. We shall show that ${{\rm NTC}}(f_1) \geq 2\pi m$ for any $f_1:R_m \to {{\mathbb R}}^3$. According to Corollary \[1dsuff\], it is enough to show for every $f:R_m \to {{\mathbb R}}$ that $\mu_{f}\geq m$. We may assume $f$ is monotone on each topological edge, according to Proposition \[minmonot\]. Depending on the order of $f(v_{2i-2}), f(v_{2i-1})$ and $f(v_{2i})$, ${\rm nlm}(v_{2i-1})$ might equal $\pm 1$ or $\pm 2$, but cannot be $0$, as follows from Lemma \[combin\], since the unordered pair $\{d^-(v_{2i-1}),d^+(v_{2i-1})\}$ may only be $\{1,3\}$ or $\{0,4\}$. Similarly, $v_{2i}$ is connected by three edges to $v_{2i-1}$ and by one edge to $v_{2i+1}$. For the same reasons, ${\rm nlm}(v_{2i})$ might equal $\pm 1$ or $\pm 2$, and cannot equal $0$.
So $|{\rm nlm}(v_j)| \geq 1$, $1 \leq j \leq 2m$, and thus by Corollary \[absnlm\], $\mu = \frac12\sum_j |{\rm nlm}(v_j)|\geq m$. Therefore the minimum of net total curvature satisfies ${{\rm NTC}}(\{R_m\})\geq 2m\pi$, which, since $m\geq 3$, is greater than $4\pi\geq{{\rm NTC}}(\{S_m\})$. (A more detailed analysis shows that ${{\rm NTC}}(\{S_m\})= 4\pi$ and ${{\rm NTC}}(\{R_m\})= 2\pi(m+1)$.) [width0pt ]{} Finally, we may extend the methods of proof for Theorem \[trivalent\] to allow [**one**]{} vertex of higher degree: \[allbut1tri\] If $\, \Gamma$ is a graph with one vertex $w$ of degree $d(w)=m \geq 3$, all other vertices being trivalent, and if $w$ shares edges with $m$ distinct trivalent vertices, then ${{\rm NTC}}(\{\Gamma\})= \pi\Big(2B(\{\Gamma\}) + \frac{k}{2}\Big)$, where $k$ is the number of vertices of $\, \Gamma$ having odd degree. For a given isotopy class $[\Gamma]$, ${{\rm NTC}}([\Gamma])\geq \pi\Big(2B([\Gamma]) + \frac{k}{2}\Big)$. [[***Proof.*** ]{}]{}Consider any mapping $g:\Gamma \to {{\mathbb R}}$ in general position. If $m$ is even, then $|{\rm nlm}_g(w)|\geq 0$; if $m$ is odd, then $|{\rm nlm}_g(w)|\geq \frac{1}{2}$, by Proposition \[val3\]. If some topological vertex is a local extreme point, then as in the proof of Theorem \[trivalent\], $g$ may be modified without changing ${{\rm NTC}}(g)$ so that all $\Lambda+V$ local extreme points are interior points of edges, with ${\rm nlm}=\pm 1$. By Corollary \[absnlm\], we have $\mu_g(e)=\frac12\sum|{\rm nlm}(v)|\geq \frac12\Big(\Lambda+V+\frac{k}{2}\Big)\geq B(\{\Gamma\})+\frac{k}{4}$. This shows that $${{\rm NTC}}(\{\Gamma\})\geq \pi\Big(2B(\{\Gamma\})+\frac{k}{2}\Big).$$ Now let $f_0:\Gamma \to {{\mathbb R}}$ be monotone on topological edges and have the minimum number $2B(\{\Gamma\})$ of local extreme points (see Proposition \[minmonot\]).
As in the proof of Theorem \[trivalent\], $f_0$ may be modified without changing ${{\rm NTC}}(f_0)$ so that all $2B(\{\Gamma\})$ local extreme points are interior points of edges. $f_0$ may be further modified so that the distinct vertices $v_1, \dots, v_m$ which share edges with $w$ are balanced: $f(v_j)<f(w)$ for $m/2$ of the indices $j=1,\dots,m$ if $m$ is even, or for $(m+1)/2$ of them if $m$ is odd. Having chosen $f(v_j)$, we define $f$ along the (unique) edge from $w$ to $v_j$ to be monotone, for $j=1,\dots,m$. Therefore if $m$ is even, then ${\rm nlm}_f(w)= 0$; and if $m$ is odd, then ${\rm nlm}_f(w)= \frac{1}{2}$, by Lemma \[combin\]. We compute $\mu_f(e)=\frac12\sum|{\rm nlm}(v)|= \frac12(\Lambda+V+\frac{k}{2})= B(\{\Gamma\})+\frac{k}{4}$. We conclude that ${{\rm NTC}}(\{\Gamma\})= \pi\Big(2B(\{\Gamma\})+\frac{k}{2}\Big)$. For a given isotopy class $[\Gamma]$, the proof is analogous to the above. Choose a mapping $g:\Gamma \to {{\mathbb R}}$ in the closure of $[\Gamma]$, and modify $g$ without leaving the closure of the isotopy class. Choose $f:\Gamma \to {{\mathbb R}}$ which has the minimum number $2B([\Gamma])$ of local extreme points, and modify it so that topological vertices are not local extreme points. In contrast to the proof of Theorem \[trivalent\], a balanced arrangement of vertices may not be possible in the given isotopy class. In any case, if $m$ is even, then $|{\rm nlm}_f(w)|\geq 0$; and if $m$ is odd, $|{\rm nlm}_f(w)|\geq \frac{1}{2}$, by Proposition \[val3\]. Thus applying Corollary \[absnlm\], we find ${{\rm NTC}}([\Gamma])\geq\pi\Big(2B([\Gamma])+\frac{k}{2}\Big)$. [width0pt ]{} When all vertices of $\Gamma$ are trivalent except $w$, $d(w)\geq 4$, and when $w$ shares more than one edge with another vertex of $\Gamma$, then in certain cases, ${{\rm NTC}}(\{\Gamma\})>\pi\Big(2B(\{\Gamma\})+\frac{k}{2}\Big)$, where $k$ is the number of vertices of odd degree.
[*Example:*]{} Choose $\Gamma$ to be the one-point union of $\Gamma_1$, $\Gamma_2$ and $\Gamma_3,$ where $\Gamma_i= \theta=\theta_3$, $i=1,2,3$, and the point $w_i$ chosen from $\Gamma_i$ is one of its two vertices $v_i, w_i$. Then the identified point $w=w_1=w_2=w_3$ of $\Gamma$ has $d(w)=9$, and each of the other three vertices $v_1, v_2, v_3$ has degree $3$. Choose a flat map $f:\Gamma\to{{\mathbb R}}$. We may assume that $f$ is monotone on each edge, applying Proposition \[minmonot\]. If $f(v_1)<f(v_2)<f(w)<f(v_3)$, then $d^+(w)=3$, $d^-(w)=6$, so ${\rm nlm}(w)=\frac32$, while $v_i$ is a local extreme point, so ${\rm nlm}(v_i)=\pm\frac32$, $i=1,2,3$. This gives $\mu = 3$. The case where $f(v_1)<f(w)<f(v_2)<f(v_3)$ is similar. If $w$ is an extreme point of $f$, then ${\rm nlm}(w)=\pm\frac92$ and $\mu\geq\frac92>3$, contradicting flatness of $f$. This shows that ${{\rm NTC}}(\{\Gamma\})={{\rm NTC}}(f)=6\pi$. On the other hand, we may show as in Observation \[B>1\] that $B(\{\Gamma\})=\frac32$. All four vertices have odd degree, so $k=4$, and $\pi\Big(2B(\{\Gamma\})+\frac{k}{2}\Big) = 5\pi$. [width0pt ]{} Let $W_m$ denote the “wheel” of $m$ spokes, consisting of a cycle $C$ containing $m$ vertices $v_1,\dots,v_m$ (the “rim”), a central vertex $w$ (the “hub”) not on $C$, and edges $E_i$ (the “spokes”) connecting $w$ to $v_i$, $1\leq i \leq m$. The minimum net total curvature ${{\rm NTC}}(\{W_m\})$ for graphs in ${{\mathbb R}}^3$ homeomorphic to $W_m$ equals $\pi(2+\lceil\frac{m}{2}\rceil)$. [[***Proof.*** ]{}]{}We have one “hub” vertex $w$ with $d(w)=m$, and all other vertices have degree $3$. Observe that the bridge number $B(\{W_m\})=1$. According to Theorem \[allbut1tri\], we have ${{\rm NTC}}(\{W_m\})= \pi\Big(2B(\{W_m\}) + \frac{k}{2}\Big)$, where $k$ is the number of vertices of odd degree: $k=m$ if $m$ is even, or $k=m+1$ if $m$ is odd; in either case $k=2\lceil\frac{m}{2}\rceil$. Thus ${{\rm NTC}}(\{W_m\})= \pi\Big(2+\lceil\frac{m}{2}\rceil\Big)$.
[width0pt ]{}\ LOWER BOUNDS OF NET TOTAL CURVATURE {#lowbds} =================================== The [*width*]{} of an isotopy class $[\Gamma]$ of embeddings of a graph $\Gamma$ into ${{\mathbb R}}^3$ is the minimum, among representatives of the class, of the maximum number of points in which the graph meets a plane from a family of parallel planes. More precisely, we write ${\rm width}([\Gamma]):= \min_{f:\Gamma \to {{\mathbb R}}^3\vert f\in[\Gamma]} \min_{e\in S^2} \max_{s\in{{\mathbb R}}} \#(e,s).$ For any homeomorphism type $\{\Gamma\}$ define ${\rm width}(\{\Gamma\})$ to be the minimum over isotopy types. \[incrdecr\] Let $\Gamma$ be a graph, and consider an isotopy class $[\Gamma]$ of embeddings $f:\Gamma \to {{\mathbb R}}^3$. Then $${{\rm NTC}}([\Gamma]) \geq \pi \ {\rm width}([\Gamma]).$$ As a consequence, ${{\rm NTC}}(\{\Gamma\}) \geq \pi \ {\rm width}(\{\Gamma\}).$ Moreover, if for some $e\in S^2$, an embedding $f:\Gamma \to {{\mathbb R}}^3$ and $s_0\in {{\mathbb R}}$, the integers $\#(e,s)$ are increasing in $s$ for $s<s_0$ and decreasing for $s>s_0$, then ${{\rm NTC}}([\Gamma]) = \#(e,s_0)\,\pi.$ [[***Proof.*** ]{}]{}Choose an embedding $g:\Gamma \to {{\mathbb R}}^3$ in the given isotopy class, with $\max_{s\in{{\mathbb R}}} \#(e,s)={\rm width}([\Gamma])$. There exist $e\in S^2$ and $s_0\in{{\mathbb R}}$ with $\#(e,s_0) = \max_{s\in{{\mathbb R}}} \#(e,s) = {\rm width}([\Gamma])$. Replace $e$ if necessary by a nearby point in $S^2$ so that the heights $\langle g(v_i), e\rangle$, $i=1,\dots,m$, are distinct. Next do cylindrical shrinking: without changing $\#(e,s)$ for $s\in {{\mathbb R}}$, shrink the image of $g$ in directions orthogonal to $e$ by a factor $\delta>0$ to obtain a family $\{g_\delta\}$ from the same isotopy class $[\Gamma]$, with ${{\rm NTC}}(g_\delta)\to {{\rm NTC}}(g_0)$, where we may identify $g_0: \Gamma \to {{\mathbb R}}e \subset{{\mathbb R}}^3$ with $p_e\circ g=p_e\circ g_\delta:\Gamma \to {{\mathbb R}}$.
But $${{\rm NTC}}(p_e\circ g)= \frac12\int_{S^2}\mu(u)\,dA_{S^2}(u)=2\,\pi\,\mu(e),$$ since for $p_u\circ p_e\circ g$, the local maximum and minimum points are the same as for $p_e\circ g$ if $\langle e,u\rangle >0$ and reversed if $\langle e,u\rangle <0$ (recall that $\mu(-e)=\mu(e)$). We write the topological vertices and the local extrema of $g_0$ as $v_1,\dots,v_m$. Let the indexing be chosen so that $g_0(v_i)<g_0(v_{i+1})$, $i=1,\dots, m-1$. Now estimate $\mu(e)$ from below: using Lemma \[combin\], $$\label{mueqno} \mu(e)= \sum_{i=1}^m {\rm nlm}^+_{g_0}(v_i)\geq \sum_{i=k+1}^m {\rm nlm}_{g_0}(v_i) = \frac12 \#(e,s)$$ for any $s$, $g_0(v_k)<s<g_0(v_{k+1})$. This shows that $\mu(e)\geq \frac12 {\rm width}([\Gamma])$, and therefore $${{\rm NTC}}(g)\geq{{\rm NTC}}(g_0)=2\pi\,\mu(e)\geq\,\pi\,{\rm width}([\Gamma]).$$ Now suppose that the integers $\#(e,s)$ are increasing in $s$ for $s<s_0$ and decreasing for $s>s_0$. Then for $g_0(v_i)>s_0$, we have ${\rm nlm}_{g_0}(v_i)\geq 0$ by Lemma \[combin\], and the inequality becomes equality at $s=s_0$. [width0pt ]{} \[widthK\_m\] For an integer $\ell$, the minimum width of the complete graph $K_{2\ell}$ on $2\ell$ vertices is ${\rm width}(\{K_{2\ell}\})=\ell^2$; for $2\ell + 1$ vertices, ${\rm width}(\{K_{2\ell+1}\})= \ell(\ell+1).$ [[***Proof.*** ]{}]{} Write $E_{ij}$ for the edge of $K_m$ joining $v_i$ to $v_j$, $1\leq i < j \leq m$, and suppose $g:K_m\to {{\mathbb R}}$ has distinct values at the vertices: $g(v_1)<g(v_2)<\cdots <g(v_m)$. Then for any $g(v_k) <s< g(v_{k+1})$, there are $k(m-k)$ edges $E_{ij}$ with $i\leq k <j$; each of these edges has at least one interior point mapping to $s$, which shows that $\#(e,s)\geq k(m-k).$ If $m$ is even: $m=2\ell$, these lower bounds have the maximum value $\ell^2$ when $k=\ell$. If $m$ is odd: $m=2\ell+1,$ these lower bounds have the maximum value $\ell(\ell+1)$ when $k=\ell$ or $k=\ell+1$.
This shows that the width of $K_{2\ell} \geq\ell^2$ and the width of $K_{2\ell+1}\geq \ell(\ell+1).$ On the other hand, equality holds for the piecewise linear embedding of $K_m$ into ${{\mathbb R}}^3$ with vertices in general position and straight edges $E_{ij}$, which shows that ${\rm width}(\{K_{2\ell}\}) = \ell^2$ and ${\rm width}(\{K_{2\ell+1}\}) = \ell(\ell+1).$ [width0pt ]{} \[NTCK\_m\] For all $g:K_m\to {{\mathbb R}}$, ${{\rm NTC}}(g) \geq \pi\,\ell^2$ if $m=2\ell$ is even; and ${{\rm NTC}}(g) \geq \pi\,\ell(\ell+1)$ if $m=2\ell+1$ is odd. Equality holds for a mapping of $K_m$ into ${{\mathbb R}}$ with vertices in general position and monotone on each edge; therefore ${{\rm NTC}}(\{K_{2\ell}\})=\pi\,\ell^2$, and ${{\rm NTC}}(\{K_{2\ell+1}\})=\pi\,\ell(\ell+1)$. [[***Proof.*** ]{}]{}The lower bound on ${{\rm NTC}}(\{K_m\})$ follows from Theorem \[incrdecr\] and Lemma \[widthK\_m\]. Now suppose $g:K_m\to {{\mathbb R}}$ is monotone on each edge, and number the vertices of $K_m$ so that for all $i$, $g(v_i)< g(v_{i+1})$. Then as in the proof of Lemma \[widthK\_m\], $\#(e,s)=k(m-k)$ for $g(v_k)<s<g(v_{k+1})$. These cardinalities are increasing for $0\leq k \leq\ell$ and decreasing for $\ell+1\leq k\leq m$. Thus, if $g(v_\ell)<s_0<g(v_{\ell+1})$, then by Theorem \[incrdecr\], ${{\rm NTC}}([\Gamma]) = \#(e,s_0)\,\pi= \ell(m-\ell)\,\pi,$ as claimed. [width0pt ]{}\ Let $K_{m,n}$ be the complete bipartite graph with $m+n$ vertices divided into two sets: $v_i, 1\leq i \leq m$ and $w_j, 1\leq j\leq n$, having one edge $E_{ij}$ joining $v_i$ to $w_j$, for each $1\leq i \leq m$ and $1\leq j\leq n$. \[bipartite\] ${{\rm NTC}}(\{K_{m,n}\})=\lceil\frac{mn}{2}\rceil\,\pi$. [[***Proof.*** ]{}]{}$K_{m,n}$ has vertices $v_1,\dots, v_m$ of degree $d(v_i)=n$ and vertices $w_1,\dots, w_n$ of degree $d(w_j)=m$. Consider a mapping $g:K_{m,n} \to {{\mathbb R}}$ in general position, so that the $m+n$ vertices of $K_{m,n}$ have distinct images.
We wish to show $\mu(e)=\mu_g(e)\geq\frac{mn}{4},$ if $m$ or $n$ is even, or $\frac{mn+1}{4},$ if both $m$ and $n$ are odd. For this purpose, according to Proposition \[minmonot\], we may first reduce $\mu(e)$ or leave it unchanged by replacing $g$ with a mapping (also called $g$) which is monotone on each edge $E_{ij}$ of $K_{m,n}$. The values of ${\rm nlm}(w_j)$ and of ${\rm nlm}(v_i)$ are now determined by the order of the vertex images $g(v_1),\dots,g(v_m),g(w_1),\dots,g(w_n)$. Since $K_{m,n}$ is symmetric under permutations of $\{v_1,\dots,v_m\}$ and permutations of $\{w_1,\dots,w_n\}$, we shall assume that $g(v_i)<g(v_{i+1})$, $i=1,\dots,m-1$ and $g(w_j)<g(w_{j+1})$, $j=1,\dots,n-1$. For $i=1,\dots, m$ we write $k_i$ for the largest index $j$ such that $g(w_j)<g(v_i).$ Then $0\leq k_1\leq \dots\leq k_m \leq n$, and these integers determine $\mu(e)$. According to Lemma \[combin\], ${\rm nlm}(v_i)=k_i-\frac{n}{2}, i=1,\dots,m$. For $j\leq k_1$ and for $j\geq k_m+1$, we have ${\rm nlm}(w_j)=\pm\frac{m}{2}$; for $k_1<j\leq k_2$ and for $k_{m-1}<j\leq k_m$, we find ${\rm nlm}(w_j)=\pm\Big(\frac{m}{2}-1\Big)$; and so on until we find ${\rm nlm}(w_j)=0$ on the middle interval $k_p<j\leq k_{p+1}$, if $m=2p$ is even; or, if $m=2p+1$ is odd, ${\rm nlm}(w_j)=-\frac12$ for $k_p<j\leq k_{p+1}$ and ${\rm nlm}(w_j)=+\frac12$ for the other middle interval $k_{p+1}<j\leq k_{p+2}$. 
Thus according to Lemma \[combin\] and Corollary \[absnlm\], if $m=2p$ is [**even**]{}, $$\begin{aligned} \label{muformeven} 2\mu(e)&=&\sum_{i=1}^m|{\rm nlm}(v_i)|+\sum_{j=1}^n|{\rm nlm}(w_j)|= \sum_{i=1}^m|k_i-\frac{n}{2}|+(k_1+n-k_m)\frac{m}{2}\nonumber\\ &+& (k_2-k_1+k_m-k_{m-1})\Big[\frac{m}{2}-1\Big]+ \dots \nonumber\\ &+& (k_p-k_{p-1}+k_{p+2}-k_{p+1}) \Big[\frac{m}{2}-(p-1)\Big] + (k_{p+1}-k_p)\Big[0\Big]\\ &=& \sum_{i=1}^m\Big|k_i-\frac{n}{2}\Big|+\frac{mn}{2}+\sum_{i=1}^p k_i- \sum_{i=p+1}^m k_i \nonumber\\ &=& \frac{mn}{2}+\sum_{i=1}^p\Big[|k_i-\frac{n}{2}|+(k_i-\frac{n}{2})\Big]+ \sum_{i=p+1}^m\Big[|k_i-\frac{n}{2}|-(k_i-\frac{n}{2})\Big].\nonumber \end{aligned}$$ Note that formula \[muformeven\] assumes its minimum value $2\mu(e)=\frac{mn}{2}$ when $k_1\leq\dots\leq k_p\leq\frac{n}{2}\leq k_{p+1}\leq\dots\leq k_m.$ If $m=2p+1$ is [**odd**]{}, then $$\begin{aligned} \label{muformodd} 2\mu(e)&=& \sum_{i=1}^m|k_i-\frac{n}{2}|+(k_1+n-k_m)\frac{m}{2}+ (k_2-k_1+k_m-k_{m-1})\Big[\frac{m}{2}-1\Big]+ \dots \nonumber\\ &+&(k_{p+3}-k_{p+2})\Big[\frac{m}{2}-(p-1)\Big]+ (k_{p+2}-k_p)\Big[\frac12\Big] \nonumber \\ &=&\sum_{i=1}^m|k_i-\frac{n}{2}|+\frac{mn}{2}+ \sum_{i=1}^p k_i- \sum_{i=p+2}^m k_i\\ &=& \frac{mn}{2}+\sum_{i=1}^p\Big[|k_i-\frac{n}{2}|+(k_i-\frac{n}{2})\Big]+ \sum_{i=p+2}^m\Big[|k_i-\frac{n}{2}|-(k_i-\frac{n}{2})\Big]+ |k_{p+1}-\frac{n}{2}|.\nonumber\end{aligned}$$ Observe that formula \[muformodd\] has the minimum value $2\mu(e)=\frac{mn}{2}$ when $n$ is even and $k_1\leq\dots\leq k_p\leq\frac{n}{2}= k_{p+1}\leq\dots\leq k_m.$ If $n$ as well as $m$ is odd, then the last term $|k_{p+1}-\frac{n}{2}|\geq \frac12$, and the minimum value of $2\mu(e)$ is $\frac{mn+1}{2}$, attained iff $k_1\leq\dots\leq k_p\leq\frac{n}{2}\leq k_{p+2}\leq\dots\leq k_m.$ This shows that for either parity of $m$ or of $n$, $\mu(e)\geq\frac{mn}{4}$. If $n$ and $m$ are both odd, we have the stronger inequality $\mu(e)\geq\frac{mn+1}{4}$.
We may summarize these conclusions as $2\mu(e)\geq\lceil\frac{mn}{2}\rceil$, and therefore as in the proof of Corollary \[1dsuff\], ${{\rm NTC}}(\{K_{m,n}\})\geq\lceil\frac{mn}{2}\rceil\,\pi,$ as we wished to show. By abuse of notation, write formula \[muformeven\] or \[muformodd\] as $\mu(k_1,\dots,k_m)$. To show the inequality in the opposite direction, we need to find a mapping $f:K_{m,n}\to {{\mathbb R}}$ with ${{\rm NTC}}(f)=\frac{mn\,\pi}{2}$ ($m$ or $n$ even) or ${{\rm NTC}}(f)=\frac{(mn+1)\,\pi}{2}$ ($m$ and $n$ odd). The above computation suggests choosing $f$ with the images $f(v_1), \dots, f(v_m)$ in the middle of the images of the $w_j$, split evenly around the median value. Write $n=2\ell$ if $n$ is even, or $n=2\ell+1$ if $n$ is odd. If $n$ is even, choose values $f(w_1)<\dots<f(w_\ell)<f(v_1)<\dots<f(v_m)<f(w_{\ell+1})<\dots<f(w_n)$; if $n$ is odd, choose instead $f(w_1)<\dots<f(w_\ell)<f(v_1)<\dots<f(v_{\lceil m/2\rceil})<f(w_{\ell+1})<f(v_{\lceil m/2\rceil+1})<\dots<f(v_m)<f(w_{\ell+2})<\dots<f(w_n)$, so that the images of the $v_i$ are split evenly around $f(w_{\ell+1})$. Extend $f$ monotonically to each of the $mn$ edges $E_{ij}$. Substituting the corresponding values $k_i$ into formulas \[muformeven\] and \[muformodd\], we have $\mu_f(e)=\mu(\ell,\dots,\ell)=\frac{mn}{4}$ if $n$ is even, while for $n$ odd $\mu_f(e)=\mu(\ell,\dots,\ell,\ell+1,\dots,\ell+1)$ equals $\frac{mn}{4}$ if $m$ is even, or $\frac{mn+1}{4}$ if $m$ and $n$ are both odd. Recall that $\theta_m$ is the graph with two vertices $q^+, q^-$ and $m$ edges joining them. \[theta\_m\] ${{\rm NTC}}(\{\theta_m\})=m\,\pi.$ ***Proof.*** $\theta_m$ is homeomorphic to the complete bipartite graph $K_{m,2}$, so Proposition \[bipartite\] gives ${{\rm NTC}}(\{\theta_m\})={{\rm NTC}}(\{K_{m,2}\})=\lceil\frac{2m}{2}\rceil\,\pi=m\,\pi$.

FÁRY-MILNOR TYPE ISOTOPY CLASSIFICATION {#FaryMilnor}
=======================================

Recall the Fáry-Milnor theorem, which states that if the total curvature of a Jordan curve $\Gamma$ in ${{\mathbb R}}^3$ is less than or equal to $4 \pi$, then $\Gamma$ is unknotted. As we have demonstrated above, there is a collection of graphs whose minimum net total curvature is known.
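The combinatorial minimizations behind these values are finite computations, so they can be spot-checked by brute force. The sketch below (plain Python, illustrative only, not part of the proofs) encodes the cardinalities $\#(e,s)=k(m-k)$ for $K_m$ and formulas \[muformeven\]/\[muformodd\] for $K_{m,n}$, and verifies the stated minima for small $m,n$:

```python
from itertools import combinations_with_replacement
from math import ceil

# Width of K_m: the maximum of #(e, s) = k(m - k) over k (Lemma [widthK_m]).
def width_K(m):
    return max(k * (m - k) for k in range(m + 1))

# 2*mu(k_1,...,k_m) for K_{m,n}, following formulas [muformeven]/[muformodd].
def two_mu(ks, n):
    m = len(ks)
    p = m // 2
    total = m * n / 2
    if m % 2 == 0:
        lo, hi = ks[:p], ks[p:]
    else:
        lo, hi = ks[:p], ks[p + 1:]
        total += abs(ks[p] - n / 2)   # extra middle term when m is odd
    total += sum(abs(k - n / 2) + (k - n / 2) for k in lo)
    total += sum(abs(k - n / 2) - (k - n / 2) for k in hi)
    return total

# width(K_{2l}) = l^2 and width(K_{2l+1}) = l(l+1).
for ell in range(1, 20):
    assert width_K(2 * ell) == ell ** 2
    assert width_K(2 * ell + 1) == ell * (ell + 1)

# Minimizing over all nondecreasing 0 <= k_1 <= ... <= k_m <= n
# recovers 2*mu >= ceil(mn/2), the bound of Proposition [bipartite].
for m in range(1, 6):
    for n in range(1, 6):
        best = min(two_mu(ks, n)
                   for ks in combinations_with_replacement(range(n + 1), m))
        assert best == ceil(m * n / 2)
```

The second loop confirms that the minimum of $2\mu$ over admissible nondecreasing $(k_1,\dots,k_m)$ is exactly $\lceil mn/2\rceil$, matching Proposition \[bipartite\].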
It is natural to hope that when the net total curvature is small, in the sense of lying in a specific interval to the right of the minimal value, the isotopy type of the graph is restricted, as is the case for knots ($\Gamma=S^1$). The following proposition and corollaries, however, tell us that results of the Fáry-Milnor type [**cannot**]{} be expected to hold for more general graphs. \[Gamma\_q\] If $\,\Gamma$ is a graph in ${{\mathbb R}}^3$ and if $C \subset \Gamma$ is a cycle, such that for some $e \in S^2$, $p_e \circ C$ has at least two local maximum points, then for each positive integer $q$, there is a nonisotopic embedding $\widetilde\Gamma_q$ of $\Gamma$ in which $C$ is replaced by a knot not isotopic to $C$, with ${{\rm NTC}}(\widetilde\Gamma_q)$ as close as desired to ${{\rm NTC}}(p_e\circ\Gamma).$ ***Proof.*** It follows from Corollary \[1dsuff\] that the one-dimensional graph $p_e \circ \Gamma$ may be replaced by an embedding $\widehat\Gamma$ into a small neighborhood of the line ${{\mathbb R}}e$ in ${{\mathbb R}}^3$, with arbitrarily small change in its net total curvature. Since $p_e \circ C$ has at least two local maximum points, there is an interval of ${{\mathbb R}}$ which is the image under $p_e\circ C$ of each of four oriented intervals $J_1,J_2,J_3,J_4$, appearing in that cyclic order around the oriented cycle $C$. Consider a plane presentation of $\Gamma$ by orthogonal projection into a generic plane containing the line ${{\mathbb R}}e$. Choose an integer $q\in{{\mathbb Z}}$, $|q|\geq 3.$ We modify $\widehat\Gamma$ by wrapping its interval $J_1$ $q$ times around $J_3$ and returning, passing over any other edges of $\Gamma$, including $J_2$ and $J_4$, which it encounters along the way. The new graph in ${{\mathbb R}}^3$ is called $\widetilde{\Gamma_q}$. Then, if $C$ was the unknot, the cycle $\widetilde C_q$ which has replaced it is a $(2,q)$-torus knot (see [@L]).
In any case, $\widetilde C_q$ is not isotopic to $C$, and therefore $\widetilde\Gamma_q$ is not isotopic to $\Gamma$. As in the proof of Theorem \[incrdecr\], let $g_\delta:{{\mathbb R}}^3 \to {{\mathbb R}}^3$ be defined by cylindrical shrinking, so that $g_1$ is the identity and $g_0=p_e$. Then $p_e\circ \widetilde{\Gamma_q}=g_0(\widetilde{\Gamma_q})$, and for $\delta>0$, $g_\delta(\widetilde{\Gamma_q})$ is isotopic to $\widetilde{\Gamma_q}$. But ${{\rm NTC}}(g_\delta)\to {{\rm NTC}}(g_0)$ as $\delta \to 0$. \[2cycle\] If $e=e_0\in S^{n-1}$ minimizes ${{\rm NTC}}(p_e\circ \Gamma)$, and there is a cycle $C \subset \Gamma$ so that $p_{e_0}\circ C$ has two (or more) local maximum points, then there is a sequence of nonisotopic embeddings $\widetilde\Gamma_q$ of $\Gamma$ with ${{\rm NTC}}(\widetilde\Gamma_q)$ less than, or as close as desired to, ${{\rm NTC}}(\Gamma)$, in which $C$ is replaced by a $(2,q)$-torus knot. \[K\_m2cycle\] If $\,\Gamma$ is an embedding of $K_m$ into ${{\mathbb R}}^3$, linear on each topological edge of $K_m$, $m\geq 4$, then there is a sequence of nonisotopic embeddings $\widetilde\Gamma_q$ of $\,\Gamma$ with ${{\rm NTC}}([\widetilde\Gamma_q])$ as close as desired to ${{\rm NTC}}([\Gamma])$, in which an unknotted cycle $C$ of $\,\Gamma$ is replaced by a $(2,q)$-torus knot. ***Proof.*** According to Corollary \[2cycle\], we only need to construct an embedding of $K_m$ attaining the minimum value of ${{\rm NTC}}$, such that there is a cycle $C$ so that $p_e\circ C$ has two local maximum points, where $\mu(e)$ is a minimum among $e \in S^2$. Choose $g:K_m\to{{\mathbb R}}$ which is monotone on each edge of $K_m$, and has distinct values at vertices. Then according to Proposition \[NTCK\_m\], we have ${{\rm NTC}}(g)={{\rm NTC}}(\{K_m\})$. Number the vertices $v_1,\dots,v_m$ so that $g(v_1)<g(v_2)<\dots<g(v_m)$. Write $E_{ji}$ for the edge $E_{ij}$ with the reverse orientation, $i\neq j$.
Then the cycle $C$ formed in sequence from $E_{13},E_{32},E_{24}$ and $E_{41}$ has local maximum points at $v_3$ and $v_4$, and covers the interval $\Big(g(v_2),g(v_3)\Big)\subset{{\mathbb R}}$ four times. Since $C$ is formed out of four straight edges, it is unknotted. The procedure of Corollary \[2cycle\] replaces $C$ with a $(2,q)$-torus knot, with an arbitrarily small increase in NTC.\
Note that Corollary \[2cycle\] gives a set of conditions for those graph types where a Fáry-Milnor type isotopy classification might hold. In particular, we consider one of the simpler homeomorphism types of graphs, the [**theta graph**]{}, $\theta=\theta_3=K_{3,2}$ (cf. description following Definition \[bridge\]). The [**standard theta graph**]{} is the isotopy class in ${{\mathbb R}}^3$ of a plane circle plus a diameter. We have seen in Corollary \[theta\_m\] that the minimum of net total curvature for a theta graph is $3\pi$. On the other hand, note that in the range $3\pi\leq{{\rm NTC}}(\Gamma) < 4\pi$, for $e$ in a set of positive measure in $S^2$, $p_e\circ C$ cannot have two local maximum points for any cycle $C\subset\Gamma$. In Theorem \[thetathm\] below, we shall show that a theta graph $\Gamma$ with ${{\rm NTC}}(\Gamma)< 4\pi$ is isotopically standard. We may observe that there are nonstandard theta graphs in ${{\mathbb R}}^3$. For example, the union of two edges might form a knot. Moreover, as S. Kinoshita has shown, there are $\theta$-graphs in ${{\mathbb R}}^3$, not isotopic to a planar graph, such that each of the three cycles formed by deleting one edge is unknotted [@Ki]. We begin with a well-known property of knots, whose proof we give for the sake of completeness. \[jordan\] Let $C \subset {{\mathbb R}}^3$ be homeomorphic to $S^1$, and [**not**]{} a convex planar curve. Then there is a nonempty open set of planes $P\subset {{\mathbb R}}^3$ which each meet $C$ in at least four points.
***Proof.*** For $e\in S^2$ and $t\in{{\mathbb R}}$ write the plane $P_t^e=\{x\in{{\mathbb R}}^3:\langle e,x\rangle=t\}$. If $C$ is not planar, then there exist four non-coplanar points $p_1, p_2, p_3, p_4$, numbered in order around $C$. Note that no three of the points can be collinear. Let an oriented plane $P_0$ be chosen to contain $p_1$ and $p_3$, and rotated until both $p_2$ and $p_4$ are strictly above $P_0$. Write $e_1$ for the unit normal vector to $P_0$ on the side where $p_2$ and $p_4$ lie, so that $P_0=P_0^{e_1}$. Then the set $P_t \cap C$ contains at least four points, for $0<t<\delta_1$, with some $\delta_1>0$, since each plane $P_t=P_t^{e_1}$ meets each of the four open arcs between the points $p_1, p_2, p_3, p_4$. This conclusion remains true, for some $0<\delta < \delta_1$, when the normal vector $e_1$ to $P_0$ is replaced by any nearby $e \in S^2$, and $t$ is replaced by any $0<t<\delta$. If $C$ is planar but nonconvex, then there exists a plane $P_0=P_0^{e_1}$, transverse to the plane containing $C$, which supports $C$ and touches $C$ at two distinct points, but does not include the arc of $C$ between these two points. Consider disjoint open arcs of $C$ on either side of these two points and including points not in $P_0$. Then for $0 < t < \delta \ll 1$, the set $P_t \cap C$ contains at least four points, since the planes $P_t=P_t^{e_1}$ meet each of the four disjoint arcs. Here once again $e_1$ may be replaced by any nearby unit vector $e$, and the plane $P_t^e$ will meet $C$ in at least four points, for $t$ in a nonempty open interval $t_1<t<t_1+\delta$.\
Using the notion of net total curvature, we may extend the theorems of Fenchel [@Fen] as well as of Fáry-Milnor ([@Fa],[@M]), for curves homeomorphic to $S^1$, to graphs homeomorphic to the theta graph.
An analogous result is given by Taniyama in [@T], who showed that the minimum of ${{\rm TC}}$ for polygonal $\theta$-graphs is $4\pi$, and that any $\theta$-graph $\Gamma$ with ${{\rm TC}}(\Gamma)<5\pi$ is isotopically standard. \[thetathm\] Suppose $f:\theta \to{{\mathbb R}}^3$ is a continuous embedding, $\Gamma=f(\theta)$. Then ${{\rm NTC}}(\Gamma) \geq 3\pi$. If ${{\rm NTC}}(\Gamma) < 4\pi$, then $\Gamma$ is isotopic in ${{\mathbb R}}^3$ to the planar theta graph. Moreover, ${{\rm NTC}}(\Gamma) = 3\pi$ iff the graph is a planar convex curve plus a straight chord. ***Proof.*** We consider first the case when $f:\theta\to{{\mathbb R}}^3$ is piecewise $C^2$. [**(1)**]{} We have shown the [**lower bound**]{} $3\pi$ for ${{\rm NTC}}(f)$, where $f:\theta\to {{\mathbb R}}^n$ is any piecewise $C^2$ mapping, since $\theta=\theta_3$ is one case of Corollary \[theta\_m\], with $m=3$. [**(2)**]{} We show next that if there is a cycle $C$ in a graph $\Gamma$ (a subgraph homeomorphic to $S^1$) which satisfies the conclusion of Lemma \[jordan\], then $\mu(e) \geq 2$ for $e$ in a nonempty open set of $S^2$. Namely, for $t_0<t<t_0+\delta$, a family of planes $P_t^e$ meets $C$, and therefore meets $\Gamma$, in at least four points. This is equivalent to saying that the cardinality $\#(e,t)\geq 4$. This implies, by Corollary \[fibcard\], that $\sum\{{\rm nlm}(e,q): p_e(q) > t_0\}\geq 2$. Thus, since ${\rm nlm}^+(e,q)\geq {\rm nlm}(e,q)$, using Definition \[defmu\], we have $\mu(e) \geq 2$. Now consider the [**equality**]{} case of a theta graph $\Gamma$ with ${{\rm NTC}}(\Gamma) = 3\pi$. As we have seen in the proof of Proposition \[bipartite\] with $m=3$ and $n=2$, the multiplicity $\mu(e) \geq \frac32=\frac{mn}{4}$ for a.a. $e \in S^2$, while the integral of $\mu(e)$ over $S^2$ equals $ 2\,{{\rm NTC}}(\Gamma) = 6\pi$ by Theorem \[muthm\], implying $\mu(e) = 3/2$ a.e. on $S^2$. Thus, the conclusion of Lemma \[jordan\] is impossible for any cycle $C$ in $\Gamma$.
By Lemma \[jordan\], all cycles $C$ of $\Gamma$ must be planar and convex. Now $\Gamma$ consists of three arcs $a_1$, $a_2$ and $a_3$, with common endpoints $q^+$ and $q^-$. As we have just shown, the three Jordan curves $\Gamma_1:=a_2\cup a_3$, $\Gamma_2:=a_3\cup a_1$ and $\Gamma_3:=a_1\cup a_2$ are each planar and convex. It follows that $\Gamma_1,\,\Gamma_2$ and $\Gamma_3$ lie in a common plane. In terms of the topology of this plane, one of the three arcs $a_1$, $a_2$ and $a_3$ lies in the middle between the other two. But the middle arc, say $a_2$, must be a line segment, as it needs to be a shared piece of two curves $\Gamma_1$ and $\Gamma_3$ bounding disjoint convex open sets in the plane. The conclusion is that $\Gamma$ is a planar, convex Jordan curve $\Gamma_2$, plus a straight chord $a_2$, whenever ${{\rm NTC}}(\Gamma)= 3\pi.$ [**(3)**]{} We next turn our attention to the [**upper bound**]{} of ${{\rm NTC}}$, to imply that a $\theta$-graph is isotopically standard: we shall assume that $g:\theta \to {{\mathbb R}}^3$ is an embedding in general position with ${{\rm NTC}}(g) < 4\pi$, and write $\Gamma=g(\theta)$. By Theorem \[muthm\], since $S^2$ has area $4\pi$, the average of $\mu(e)$ over $S^2$ is less than $2$, and it follows that there exists a set of positive measure of $e_0\in S^2$ with $\mu(e_0) < 2$. Since $\mu(e_0)$ is a half-integer, and since $\mu(e) \geq 3/2$, as we have shown in part [**(1)**]{} of this proof, we have $\mu(e_0) = 3/2$ exactly. From Corollary \[muformula\] applied to $p_{e_0}\circ g:\theta \to {{\mathbb R}}$, we find $\mu_g(e_0)=\frac12(\Lambda+V)+\frac{k}{4}$, where $\Lambda$ is the number of local maximum points, $V$ is the number of local minimum points and $k=2$ is the number of vertices, both of degree $3$. Thus, $\frac{3}{2}=\frac12(\Lambda+V)+\frac12$, so that $\Lambda+V=2$. 
This implies that the local maximum/minimum points are unique, and must be the unique global maximum/minimum points $p_{\rm max}$ and $p_{\rm min}$ (each of which may be one of the two vertices $q^\pm$). Then $p_{e_0}\circ g$ is monotone along edges except at the points $p_{\rm max}$, $p_{\rm min}$ and $q^\pm$. Introduce Euclidean coordinates $(x,y,z)$ for ${{\mathbb R}}^3$ so that $e_0$ is in the increasing $z$-direction. Write $t_{\rm max}=p_{e_0}\circ g(p_{\rm max})= \langle e_0, p_{\rm max}\rangle$ and $t_{\rm min}=\langle e_0, p_{\rm min}\rangle$ for the maximum and minimum values of $z$ along $g(\theta)$. Write $t^\pm$ for the value of $z$ at $g(q^\pm)$, where we may assume $t_{\rm min} \leq t^- < t^+ \leq t_{\rm max}$. We construct a “model" standard $\theta$-curve $\widehat\Gamma$ in the $(x,z)$-plane, as follows. $\widehat\Gamma$ will consist of a circle $C$ plus the straight chord of $C$, joining $\widehat{q}^-$ to $\widehat{q}^+$ (points to be chosen). Choose $C$ so that the maximum and minimum values of $z$ on $C$ equal $t_{\rm max}$ and $t_{\rm min}$. Write $\widehat{p}_{\rm max}$ resp. $\widehat{p}_{\rm min}$ for the maximum and minimum points of $z$ along $C$. Choose $\widehat{q}^+$ as a point on $C$ where $z=t^+$. There may be two nonequivalent choices for $\widehat{q}^-$ as a point on $C$ where $z=t^-$: we choose $\widehat{q}^-$ so that $\widehat{p}_{\rm max}$ and $\widehat{p}_{\rm min}$ lie in the same topological edge of $\widehat\Gamma$ exactly when $p_{\rm max}$ and $p_{\rm min}$ lie in the same topological edge of $\Gamma$. Note that there is a homeomorphism from $g(\theta)$ to $\widehat\Gamma$ which preserves $z$. We now proceed to extend this homeomorphism to an isotopy. For $t\in{{\mathbb R}}$, write $P_t$ for the plane $\{z=t\}$.
As in the proof of Proposition \[untangle\], there is a continuous $1$-parameter family of homeomorphisms $\Phi_t:P_t\to P_t$ such that $\Phi_t(\Gamma\cap P_t)=\widehat\Gamma\cap P_t$; $\Phi_t$ is the identity outside a compact subset of $P_t$; and $\Phi_t$ is isotopic to the identity of $P_t$, uniformly with respect to $t$. Defining $\Phi:{{\mathbb R}}^3\to{{\mathbb R}}^3$ by $\Phi(x,y,z):=\Phi_z(x,y)$, we have an isotopy of $\Gamma$ with the model graph $\widehat\Gamma$. [**(4)**]{} Finally, consider an embedding $g:\theta\to {{\mathbb R}}^3$ which is only [**continuous**]{}, and write $\Gamma=g(\theta)$. It follows from Theorem \[tame\] that for any $\theta$-graph $\Gamma$ of finite net total curvature, there is a $\Gamma$-approximating polygonal $\theta$-graph $P$ isotopic to $\Gamma$, with ${{\rm NTC}}(P) \leq {{\rm NTC}}(\Gamma)$ and as close as desired to ${{\rm NTC}}(\Gamma)$. If a $\theta$-graph $\Gamma$ had ${{\rm NTC}}(\Gamma) < 3\pi$, then the $\Gamma$-approximating polygonal graph $P$ would also have ${{\rm NTC}}(P) < 3\pi$, in contradiction to what we have shown for piecewise $C^2$ theta graphs in part [**(1)**]{} above. This shows that ${{\rm NTC}}(\Gamma) \geq 3\pi$. If equality ${{\rm NTC}}(\Gamma) = 3\pi$ holds, then ${{\rm NTC}}(P) \leq {{\rm NTC}}(\Gamma) = 3\pi$, so that by the equality case in part [**(2)**]{} above, ${{\rm NTC}}(P)$ must equal $3\pi$, and $P$ must be a convex planar curve plus a chord. But this holds for [*all*]{} $\Gamma$-approximating polygonal graphs $P$, implying that $\Gamma$ itself must be a convex planar curve plus a chord. Finally, if ${{\rm NTC}}(\Gamma) < 4\pi$, then ${{\rm NTC}}(P) < 4\pi$, implying by part [**(3)**]{} above that $P$ is isotopic to the standard $\theta$-graph. But $\Gamma$ is isotopic to $P$, and hence is isotopically standard.\
W. Allard and F. Almgren, [*The structure of stationary one dimensional varifolds with positive density*]{}, Invent. Math. [**34**]{} (1976), 83–97.
E. Artin and R. H. Fox, [*Some wild cells and spheres in three-dimensional space*]{}, Annals of Math. [**49**]{} (1948), 979–990. J. Douglas, [*Solution of the problem of Plateau*]{}, Trans. Amer. Math. Soc. [**33**]{} (1931), 263–321. T. Ekholm, B. White, and D. Wienholtz, [*Embeddedness of minimal surfaces with total boundary curvature at most $4\pi$*]{}, Annals of Math. [**155**]{} (2002), 209–234. I. Fáry, [*Sur la courbure totale d’une courbe gauche faisant un noeud*]{}, Bull. Soc. Math. France [**77**]{} (1949), 128–138. W. Fenchel, [*Über Krümmung und Windung geschlossener Raumkurven*]{}, Math. Ann. [**101**]{} (1929), 238–252. R. Gulliver, [*Total curvature of graphs in space*]{}, Pure and Applied Mathematics Quarterly [**3**]{} (2007), 773–783. R. Gulliver and S. Yamada, [*Area density and regularity for soap film-like surfaces spanning graphs*]{}, Math. Z. [**253**]{} (2006), 315–331. R. Gulliver and S. Yamada, [*Total curvature and isotopy of graphs in $R^3$*]{}, arXiv:0806.0406. H. Goda, [*Bridge index for theta curves in the 3-sphere*]{}, Topology Appl. [**79**]{} (1997), 177–196. S. Kinoshita, [*On elementary ideals of polyhedra in the $3$-sphere*]{}, Pacific J. Math. [**42**]{} (1972), 89–98. W. B. R. Lickorish, [*An Introduction to Knot Theory*]{}, Graduate Texts in Mathematics [**175**]{}, Springer, 1997. J. Milnor, [*On the total curvature of knots*]{}, Annals of Math. [**52**]{} (1950), 248–257. T. Radó, [*On the Problem of Plateau*]{}, Springer, 1971. A. C. M. van Rooij, [*The total curvature of curves*]{}, Duke Math. J. [**32**]{} (1965), 313–324. K. Stephenson, [*Circle packing: a mathematical tale*]{}, Notices Amer. Math. Soc. [**50**]{} (2003), no. 11, 1376–1388. K. Taniyama, [*Total curvature of graphs in Euclidean spaces*]{}, Differential Geom. Appl. [**8**]{} (1998), 135–155.
Robert Gulliver\
School of Mathematics\
University of Minnesota\
Minneapolis MN 55414\
[gulliver@math.umn.edu]{}\
[www.math.umn.edu/~gulliver]{}

Sumio Yamada\
Mathematical Institute\
Tohoku University\
Aoba, Sendai, Japan 980-8578

[^1]: Supported in part by JSPS Grant-in-aid for Scientific Research No.17740030

[^2]: Thanks to the Korea Institute for Advanced Study for invitations.
--- abstract: 'Our aim is to study invariant hypersurfaces immersed in the Euclidean space $\mathbb{R}^{n+1}$, whose mean curvature is given as a linear function on the unit sphere $\mathbb{S}^n$ of its Gauss map. These hypersurfaces are closely related to the theory of manifolds with density, since their weighted mean curvature in the sense of Gromov is constant. In this paper we obtain explicit parametrizations of constant curvature hypersurfaces, and also give a classification of rotationally invariant hypersurfaces.' --- [**Invariant hypersurfaces with linear prescribed mean curvature [^1]**]{}\
\
$^\dagger$Departamento de Geometría y Topología, Universidad de Granada, E-18071 Granada, Spain.\
*E-mail address:* jabueno@ugr.es\
$^\ddagger$Departamento de Ciencias e Informática, Centro Universitario de la Defensa de San Javier, E-30729 Santiago de la Ribera, Spain.\
*E-mail address:* irene.ortiz@cud.upct.es

Contents

1. Introduction
2. Constant curvature $\H_\lambda$-hypersurfaces
3. The phase plane of rotational $\H_\lambda$-hypersurfaces
4. Classification of rotational $\H_\lambda$-hypersurfaces
5. References

Introduction {#intro}
============

Let us consider an oriented hypersurface $\sig$ immersed into $\rnn$ whose mean curvature is denoted by $H_\sig$ and its Gauss map by $\eta:\sig\rightarrow\S^n\subset\rnn$. Following [@BGM1], given a function $\H\in C^1(\S^n)$, $\sig$ is said to be a hypersurface of *prescribed mean curvature* $\H$ if $$\label{prescribedMC} H_\sig(p)=\H(\eta_p),$$ for every point $p\in\sig$. Observe that when the prescribed function $\H$ is constant, $\sig$ is a hypersurface of constant mean curvature (CMC).
It is a classical problem in Differential Geometry to study hypersurfaces which are defined by means of a prescribed curvature function in terms of the Gauss map, notable instances being the Minkowski and Christoffel problems for ovaloids ([@Min; @Chr]). In particular, when such prescribed function is the mean curvature, the hypersurfaces arising are the ones governed by Equation \[prescribedMC\]. For them, the existence and uniqueness of ovaloids was studied, among others, by Alexandrov and Pogorelov in the ’50s, [@Ale; @Pog], and more recently by Guan and Guan in [@GuGu]. Nevertheless, the global geometry of complete, non-compact hypersurfaces of prescribed mean curvature in $\rnn$ had remained unexplored for general choices of $\H$ until recently. In this framework, the first author jointly with Gálvez and Mira has started to develop the *global theory of hypersurfaces with prescribed mean curvature* in [@BGM1], taking as a starting point the well-studied global theory of CMC hypersurfaces in $\rnn$. The same authors have also studied rotational hypersurfaces in $\rnn$, getting a Delaunay-type classification result and several examples of rotational hypersurfaces with further symmetries and topological properties (see [@BGM2]). For prescribed mean curvature surfaces in $\r3$, see [@Bue1] for the resolution of the Björling problem and [@Bue2] for half-space theorems for properly immersed surfaces. Our objective in this paper is to further investigate the geometry of complete hypersurfaces of prescribed mean curvature for a relevant choice of the prescribed function. In particular, let us consider $\H\in C^1(\S^n)$ a linear function, that is, $$\H(x)=a\langle x,v\rangle+\lambda$$ for every $x\in\S^n$, where $a,\lambda\in\R$ and $v$ is a unit vector called the *density vector*. Note that if $a=0$ we are studying hypersurfaces with constant mean curvature equal to $\lambda$.
Moreover, if $\lambda=0$, we are studying self-translating solitons of the mean curvature flow, a case which is widely studied in the literature (see e.g. [@CSS; @HuSi; @Ilm; @MSHS; @SpXi] and references therein). Therefore, we will assume that $a$ and $\lambda$ are nonzero in order to avoid the trivial cases. Furthermore, after a homothety of factor $1/a$ in $\rnn$, we may assume $a=1$ without loss of generality. Bearing in mind these considerations, we focus on the following class of hypersurfaces. An immersed, oriented hypersurface $\sig$ in $\rnn$ is an $\H_\lambda$-hypersurface if its mean curvature function $H_\sig$ is given by $$\label{defilambdasup} H_\sig(p)=\H_\lambda(\eta_p)=\langle\eta_p,v\rangle+\lambda, \quad \forall p\in\sig.$$ Note that if $\sig$ is an $\lH$ with Gauss map $\eta$, then $\sig$ with the opposite orientation $-\eta$ is trivially a $\H_{-\lambda}$-hypersurface. Thus, up to a change of the orientation, we assume $\lambda>0$. The relevance of the class of $\lHn$ lies in the fact that they satisfy some characterizations which are closely related to the theory of manifolds with density. Firstly, following Gromov [@Gro], for an oriented hypersurface $\sig$ in $\R^{n+1}$ with respect to the density $e^\phi\in C^1(\R^{n+1})$, the *weighted mean curvature* $H_\phi$ of $\sig$ is defined by $$\label{weightedMC} H_\phi:=H_\sig-\langle\eta,\nabla\phi\rangle,$$ where $\nabla$ is the gradient operator in $\R^{n+1}$. Note that when the density is $\phi_v(x)=\langle x,v\rangle$, by using \[defilambdasup\] and \[weightedMC\] it follows that $\sig$ is an $\lH$ if and only if $H_{\phi_v}=\lambda$. In particular, as pointed out by Ilmanen [@Ilm], self-translating solitons are *weighted minimal*, i.e. $H_{\phi_v}=0$. On the other hand, although hypersurfaces of prescribed mean curvature do not in general arise from a variational problem, the $\lHn$ do.
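For the linear density $\phi_v$ this equivalence is a one-line computation, which we spell out using only \[defilambdasup\] and \[weightedMC\]: $$\nabla\phi_v=\nabla\langle x,v\rangle=v, \qquad\text{hence}\qquad H_{\phi_v}=H_\sig-\langle\eta,v\rangle,$$ so that $H_{\phi_v}=\lambda$ holds at every point precisely when $H_\sig=\langle\eta,v\rangle+\lambda$, that is, precisely when $\sig$ satisfies \[defilambdasup\].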
To be more specific, consider any measurable set $\Omega\subset\rnn$ having as boundary $\sig=\partial\Omega$ and inward unit normal $\eta$ along $\sig$. Then, the *weighted area and volume* of $\Omega$ with respect to the density $\phi_v$ are given respectively by $$A_{\phi_v}(\sig):=\int_\sig e^{\phi_v} d\sig,\hspace{.5cm} V_{\phi_v}(\Omega):=\int_\Omega e^{\phi_v} dV,$$ where $d\sig$ and $dV$ are the usual area and volume elements in $\rnn$. In [@BCMR] it is proved that $\sig$ has constant weighted mean curvature equal to $\lambda$ if and only if $\sig$ is a critical point under compactly supported variations of the functional $J_{\phi_v}$, where $$J_{\phi_v}:=A_{\phi_v}-\lambda V_{\phi_v}.$$ Finally, observe that if $f:\sig\rightarrow\R^{n+1}$ is an $\lH$, the family of translations of $f$ in the $v$ direction given by $F(p,t)=f(p)+tv$ is the solution of the geometric flow $$\label{geoflo} \left(\frac{\parc F}{\parc t}\right)^{\bot}=(H_\sig-\lambda)\eta,$$ which corresponds to the mean curvature flow with a constant forcing term; that is, $f$ is a self-translating soliton of the geometric flow \[geoflo\]. This flow already appeared in the context of studying the *volume preserving mean curvature flow*, introduced by Huisken [@Hui]. Throughout this work we focus our attention on $\lHn$ which are *invariant* under the flow of an $(n-1)$-parameter group of translations or under the isometric $SO(n)$-action of rotations that pointwise fixes a straight line. The first group of isometries generates *cylindrical flat hypersurfaces*, while the second one corresponds to *rotational hypersurfaces*. These isometries and the symmetries they induce on the invariant $\lHn$ are inherited by Equation \[defilambdasup\], easing the treatment of its solutions. We must emphasize that, although the authors already defined the class of immersed $\lHn$ in [@BGM1], the classification of neither cylindrical nor rotational $\lHn$ was covered in [@BGM2]. We next detail the organization of the paper.
In Section \[constantcurv\] we study complete $\lHn$ that have constant curvature. By classical theorems of Liebmann, Hilbert and Hartman-Nirenberg, any such $\lH$ must be flat, hence invariant by an $(n-1)$-parameter group of translations and described as the Riemannian product $\alpha\times\R^{n-1}$, where $\alpha$ is a plane curve called the *base curve*. This product structure allows us to relate the condition of being an $\lH$ with the geometry of $\alpha$. Indeed, the curvature $\kappa_\alpha$ is, essentially, the mean curvature of $\alpha\times\R^{n-1}$. In Theorem \[clasificacioncurvaturacte\] we classify such $\H_\lambda$-hypersurfaces by giving explicit parametrizations of the base curve. Later, in Section \[properties\] we introduce the phase plane for the study of rotational $\lHn$. In particular, we treat the ODE that the profile curve of a rotational $\lH$ satisfies as a non-linear autonomous system, since the qualitative study of the solutions of this system will be carried out by a phase plane analysis, as the first author did jointly with Gálvez and Mira in [@BGM2]. Finally, in Section \[rot\] we give a complete classification of rotational $\lHn$ intersecting the axis of rotation in Theorem \[Classification1\], and not intersecting it in Theorem \[Classification2\]. To get these results we develop along this section a discussion depending on the value of $\lambda$, namely $\lambda>1,\ \lambda=1$ and $\lambda<1$.

Constant curvature $\H_\lambda$-hypersurfaces {#constantcurv}
=============================================

The aim of this section is to obtain a classification result for complete $\lHn$ with constant curvature.
By classical theorems of Liebmann, Hilbert, and Hartman-Nirenberg, any such $\lH$ must be flat, hence invariant by an $(n-1)$-parameter group of translations $\mathcal{G}_{a_1,...,a_{n-1}}=\{F_{t_1,...,t_{n-1}};\ t_i\in\R\}$ where $a_i\in\rnn$ with $i=1,...,n-1$, are linearly independent and $F_{t_1,...,t_{n-1}}(p)=p+\sum_{i=1}^{n-1}t_i a_i$, for every $p\in\rnn$. Any $\lH$ invariant by such a group is called a *cylindrical flat $\H_\lambda$-hypersurface*, and the directions $a_1,...,a_{n-1}$ are known as *ruling directions*. For cylindrical flat hypersurfaces having as rulings $a_1,...,a_{n-1}$, it is known that a global parametrization is given by $$\psi(s,t_1,...,t_{n-1})=\alpha(s)+\sum_{i=1}^{n-1}t_i a_i,$$ where $\alpha$ is a curve, called the *base curve*, contained in a 2-dimensional plane $\Pi$ orthogonal to the vector space $\mathrm{Lin}\langle a_1,...,a_{n-1}\rangle$. Henceforth, we will denote a cylindrical flat $\lH$ by $\sig_\alpha:=\alpha\times\R^{n-1}$, where $\R^{n-1}$ stands for the orthogonal complement of $\Pi$. From this parametrization we obtain that $\sig_\alpha$ has, at most, two different principal curvatures: one given by the curvature of $\alpha$, $\kappa_\alpha$, the remaining $n-1$ being identically zero. Since the mean curvature $H_{\sig_\alpha}$ of $\sig_\alpha$ is given as the mean of its principal curvatures, it follows from Equation \[defilambdasup\] that $\kappa_\alpha$ satisfies $$\label{curvaturaalpha} \kappa_\alpha=n H_{\sig_\alpha}=n(\langle\textbf{n}_\alpha,v\rangle+\lambda),$$ where $\textbf{n}_\alpha:= J\alpha'$ is the positively oriented, unit normal of $\alpha$ in $\Pi$. We must emphasize that there is no a priori relation between the density vector $v$ and the ruling directions $a_i$. It is immediate that if $\Pi^\bot$ and $v$ are parallel, then Equation \[curvaturaalpha\] implies that $\kappa_\alpha=\lambda n$ is constant, and thus $\alpha$ is a straight line or a circle in $\Pi$ of radius $1/(\lambda n)$.
Hence, *hyperplanes and right circular cylinders are the only $\H_\lambda$-hypersurfaces whose rulings are parallel to the density vector*. Another particular but important case appears when $\lambda=0$, that is, for translating solitons. It is known that if $v$ and $\Pi^\bot$ are orthogonal, the cylindrical translating solitons are hyperplanes generated by $\Pi^\bot$ and $v$, and *grim reaper* cylinders. After a change of Euclidean coordinates, we suppose that the plane $\Pi$ is the one generated by the vectors $e_1$ and $e\n1$, and after a rotation around $e_1$ we suppose that the density vector $v$ has coordinates $v=(0,v_2,...,v\n1)$. Moreover, we can assume that $v\n1\neq 0$; otherwise $v$ and the ruling directions $\Pi^\bot$ are parallel and $\alpha$ is a straight line or a circle in $\Pi$. Assume that $\alpha(s)=(x(s),0,...,0,z(s))$ is arc-length parametrized, that is, $x'(s)=\cos\theta(s),\ z'(s)=\sin\theta(s)$, where the function $\theta(s)$ is the angle between $\alpha'(s)$ and the $e_1$-direction. Since the curvature $\kappa_\alpha(s)$ is given by $\theta'(s)$, Equation is equivalent to the following $$\label{sistemadiferencial} \left\lbrace\begin{array}{l} \vspace{.25cm} x'(s)=\cos\theta(s)\\ \vspace{.25cm} z'(s)=\sin\theta(s)\\ \theta'(s)=n\big(v\n1\cos\theta(s)+\lambda\big). \end{array} \right.$$ We point out that for certain values of $\lambda$, system has trivial solutions. Indeed, suppose that $\lambda\in [-v\n1,v\n1]$ and let $\theta_0$ be such that $\cos\theta_0=-\lambda/v\n1$. Then, the straight line parametrized by $x(s)=(\cos\theta_0) s,\ z(s)=(\sin\theta_0) s,\ \theta(s)=\theta_0$ solves . Thus, by uniqueness of the ODE , *if $\alpha$ has curvature vanishing at some point, it is a straight line*. Now we solve in the case where $v\n1\neq 0$ and $\theta'(s)\neq 0$.
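Although the system is integrated explicitly below, a direct numerical integration is a useful cross-check. The following sketch (ours, not part of the original analysis; the function name and all parameter values are illustrative) integrates the system for $x$, $z$ and $\theta$ with a fixed-step fourth-order Runge-Kutta scheme:

```python
import math

def integrate_base_curve(n, lam, v, theta0=0.0, s_max=1.0, steps=1000):
    """Integrate x' = cos(theta), z' = sin(theta),
    theta' = n*(v*cos(theta) + lam) by fixed-step RK4.
    Here v plays the role of the last coordinate of the
    density vector; parameter values are illustrative."""
    def f(state):
        _, _, th = state
        return (math.cos(th), math.sin(th), n * (v * math.cos(th) + lam))
    h = s_max / steps
    state = (0.0, 0.0, theta0)
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(u + 0.5 * h * k for u, k in zip(state, k1)))
        k3 = f(tuple(u + 0.5 * h * k for u, k in zip(state, k2)))
        k4 = f(tuple(u + h * k for u, k in zip(state, k3)))
        state = tuple(u + h / 6.0 * (a + 2 * b + 2 * c + d)
                      for u, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state  # (x(s_max), z(s_max), theta(s_max))
```

For instance, choosing $\lambda\in[-v\n1,v\n1]$ and $\theta_0$ with $\cos\theta_0=-\lambda/v\n1$, the integrated angle stays constant, recovering the straight-line solutions described above.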
Integrating the last equation we obtain the explicit expression of the function $\theta(s)$, depending on $\lambda$ and $v\n1$: $$\theta(s)=\left\lbrace\begin{array}{ll} \vspace{.25cm} 2\arctan\left(\sqrt{\frac{\lambda+v\n1}{\lambda-v\n1}}\tan\left(\frac{n}{2}\sqrt{\lambda^2-v\n1^2}s\right)\right) & \hspace{.5cm}\mathrm{if}\ \lambda>v\n1\\ \vspace{.25cm} 2\arctan(nv\n1s)& \hspace{.5cm}\mathrm{if}\ \lambda=v\n1\\ \vspace{.25cm} 2\arctan\left(\sqrt{\frac{v\n1+\lambda}{v\n1-\lambda}}\tanh\left(\frac{n}{2}\sqrt{v\n1^2-\lambda^2}s\right)\right) & \hspace{.5cm}\mathrm{if}\ \lambda<v\n1,\ \mathrm{and}\ \theta(0)=0\\ \vspace{.25cm} 2\mathrm{arccotg}\left(\sqrt{\frac{v\n1-\lambda}{v\n1+\lambda}}\tanh\left(\frac{n}{2}\sqrt{v\n1^2-\lambda^2}s\right)\right) & \hspace{.5cm}\mathrm{if}\ \lambda<v\n1,\ \mathrm{and}\ \theta(0)=\pi. \end{array}\right.$$ Since $x'(s)=\cos\theta(s)$ and $z'(s)=\sin\theta(s)$, explicit integration yields the following classification result: \[clasificacioncurvaturacte\] Up to vertical translations, the coordinates of the base curve of a cylindrical flat $\H_\lambda$-hypersurface $\sig_\alpha$ are classified as follows: - Case $\lambda>v\n1$. The explicit coordinates of $\alpha(s)$ are: $$\begin{array}{l} \vspace{.25cm}x(s)=-\lambda s+\frac{2}{n}\arctan\left(\sqrt{\frac{\lambda+v\n1}{\lambda-v\n1}}\tan\left(\frac{n}{2}\sqrt{\lambda^2-v\n1^2}s\right)\right),\\ z(s)=\frac{1}{n}\log\left(\lambda-\cos\left(n\sqrt{\lambda^2-v\n1^2}s\right)\right). \end{array}$$ The angle function $\theta(s)$ is periodic, the $x(s)$-coordinate is unbounded and the $z(s)$-coordinate is also periodic. The curve $\alpha(s)$ self-intersects infinitely many times. ![The profile curve for the case $\lambda>v\n1$. Here, $n=2$, $v\n1=1$ and $\lambda=2$.[]{data-label="lmayor1"}](lmayor1.pdf){width=".5\textwidth"} - Case $\lambda=v\n1$. 
- Either $\alpha(s)$ is a horizontal straight line parametrized by $x(s)=-s,\ z(s)=c_0,\ c_0\in\R,\ \theta(s)=\pi$, or - its explicit coordinates are $$\begin{array}{l} \vspace{.25cm}x(s)=-s+\frac{2}{n}\arctan(ns),\\ z(s)=\frac{1}{n}\log(1+n^2s^2). \end{array}$$ The image of the angle function $\theta(s)$ in the circle $\S^1$ is $\S^1-\{(0,-1)\}$. The $z(s)$-coordinate decreases until reaching a minimum and then increases, and $\alpha(s)$ has a self-intersection. ![The profile curves for the case $\lambda=v\n1$. Here, $n=2$, $v\n1=1$ and $\lambda=1$.[]{data-label="ligual1"}](ligual1.pdf){width=".5\textwidth"} - Case $\lambda<v\n1$. - Either $\alpha(s)$ is a straight line parametrized by $x(s)=(\cos\theta_0)s,\ z(s)=\pm (\sin\theta_0)s,\ \theta(s)=\theta_0$, where $\theta_0$ is such that $\lambda+v\n1\cos\theta_0=0$, or - if $\theta(0)=0$, its explicit coordinates are $$\begin{array}{l} \vspace{.25cm}x(s)=-\lambda s+\frac{2}{nv\n1}\arctan\left(\sqrt{\frac{v\n1+\lambda}{v\n1-\lambda}}\tanh\left(\frac{n}{2}\sqrt{v\n1^2-\lambda^2}s\right)\right),\\ z(s)=\frac{1}{nv\n1}\log\left(-\lambda+\cosh\left(n\sqrt{v\n1^2-\lambda^2}s\right)\right). \end{array}$$ In this case, $\alpha(s)$ has a self-intersection. - if $\theta(0)=\pi$, its explicit coordinates are $$\begin{array}{l} \vspace{.25cm}x(s)=-\lambda s-\frac{2}{nv\n1}\arctan\left(\sqrt{\frac{v\n1+\lambda}{v\n1-\lambda}}\tanh\left(\frac{n}{2}\sqrt{v\n1^2-\lambda^2}s\right)\right),\\ z(s)=\frac{1}{nv\n1}\log\left(\lambda+\cosh\left(n\sqrt{v\n1^2-\lambda^2}s\right)\right). \end{array}$$ In this case, $\alpha(s)$ is a graph, hence it is embedded. In the latter two cases, the image of the angle function $\theta(s)$ of each curve is a connected arc in $\S^1$ whose endpoints are $(\cos\theta_0,\pm\sin\theta_0)$. ![Left: the profile curves for the case $\lambda<v\n1$. In blue, the case $\theta(0)=0$; in orange, the case $\theta(0)=\pi$. Here, $n=2$, $v\n1=1$ and $\lambda=1/2$.
Right: the values of $\theta(s)$ in $\S^1$ of each curve.[]{data-label="lmenor1"}](lmenor1.pdf){width=".6\textwidth"}

The phase plane of rotational $\H_\lambda$-hypersurfaces {#properties}
========================================================

This section is devoted to compiling the main features of the phase plane for the study of rotational $\lHn$. To do so we follow [@BGM2], where the phase plane was used to study rotational hypersurfaces of prescribed mean curvature given by Equation . Let us fix the notation. Firstly, observe that in contrast with cylindrical $\lHn$, where there was no a priori relation between the density vector and the ruling directions, for a rotational $\lH$ the density vector and the rotation axis must be parallel [@Lop Proposition 4.3]. Thus, after a change of Euclidean coordinates, we suppose that the density vector $v$ in Equation is $e\n1$. Then, we consider the rotational $\lH$ $\sig$ generated as the orbit of an arc-length parametrized curve $$\alpha(s)=(x(s),0,...,0,z(s)),\hspace{.5cm} s\in I\subset\R,$$ contained in the plane $[e_1,e\n1]$ generated by the vectors $e_1$ and $e\n1$, under the isometric $SO(n)$-action of rotations that leave the $x\n1$-axis pointwise fixed. From now on, we will denote the coordinates of $\alpha(s)$ simply by $(x(s),z(s))$ and omit the dependence on the variable $s$, unless necessary.
Note that the unit normal of $\alpha$ in $[e_1,e\n1]$, given by $\textbf{n}_\alpha=J\alpha'=(-z',x')$, induces a unit normal to $\sig$ by just rotating $\textbf{n}_\alpha$ around the $x\n1$-axis, and the principal curvatures of $\sig$ with respect to this unit normal are given by $$\kappa_1=\kappa_\alpha=x'z''-x''z',\hspace{.5cm} \kappa_2=\cdots=\kappa_n=\frac{z'}{x}.$$ Consequently, the mean curvature $H_\sig$ of $\sig$, which satisfies , is related to $x$ and $z$ by $$\label{odemedia} nH_\sig=n(x'+\lambda)=x'z''-x''z'+(n-1)\frac{z'}{x}.$$ As $\alpha$ is arc-length parametrized, it follows that $x$ is a solution of the second order autonomous ODE: $$\label{odex} x''=(n-1)\frac{1-x'^2}{x}-n\varepsilon(x'+\lambda)\sqrt{1-x'^2}, \hspace{1cm} \varepsilon ={\rm sign}(z'),$$ on every subinterval $J\subset I$ where $z'(s)\neq 0$ for all $s\in J$. Here, the value of $\varepsilon$ indicates whether the height of $\alpha$ is increasing (when $\varepsilon=1$) or decreasing (when $\varepsilon=-1$). After the change $x'=y$, transforms into the first order autonomous system $$\label{1ordersys} \left(\begin{array}{c} x\\ y \end{array}\right)'=\left(\begin{array}{c} y\\ (n-1)\frac{\displaystyle{1-y^2}}{\displaystyle{x}}-n\varepsilon(y+\lambda)\sqrt{1-y^2} \end{array}\right).$$ The *phase plane* is defined as the half-strip $\Theta_\varepsilon:=(0,\infty)\times(-1,1)$, with coordinates $(x,y)$ denoting, respectively, the distance to the axis of rotation and the *angle function* of $\sig$. The *orbits* are the solutions $\gamma(s)=(x(s),y(s))$ of system . Both the local and global behavior of an orbit in $\Theta_\varepsilon$ are strongly influenced by the underlying geometric properties of Equation . For example, since the profile curve $\alpha$ of a rotational $\lH$ can only intersect the axis of rotation orthogonally, see e.g. [@BGM2 Theorem 4.1, pp. 13-14], an orbit in $\Theta_\varepsilon$ cannot converge to a point $(x_0,y_0)$ with $x_0=0,\ y_0\in (-1,1)$.
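To make the discussion concrete, the vector field of the system can be coded directly. The sketch below is ours (function names and parameter values are illustrative): it evaluates the right-hand side on the half-strip and traces an orbit with a fixed-step RK4 scheme.

```python
import math

def field(x, y, n, lam, eps):
    """Right-hand side of the first-order system in the half-strip
    Theta_eps = (0, infty) x (-1, 1); eps is +1 or -1."""
    return (y,
            (n - 1) * (1.0 - y * y) / x
            - n * eps * (y + lam) * math.sqrt(1.0 - y * y))

def orbit(x0, y0, n, lam, eps, s_max, steps):
    """Trace the orbit through (x0, y0) with a fixed-step RK4."""
    h = s_max / steps
    x, y = x0, y0
    for _ in range(steps):
        k1 = field(x, y, n, lam, eps)
        k2 = field(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1], n, lam, eps)
        k3 = field(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1], n, lam, eps)
        k4 = field(x + h * k3[0], y + h * k3[1], n, lam, eps)
        x += h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y
```

For $\varepsilon=1$, the point $e_0=\big(\frac{n-1}{\lambda n},0\big)$ annihilates this field, and a numerically traced orbit started nearby stays in the strip and is attracted to $e_0$, in accordance with the analysis below.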
Next, we highlight some consequences of the study of the phase plane carried out in Section 2 of [@BGM2], adapted to our particular case. \[resumenfases\] For each $\lambda>0$: - There is a unique equilibrium of in $\Theta_1$ given by $e_0:=\left(\frac{n-1}{\lambda n},0\right)$. This equilibrium generates the constant mean curvature, flat cylinder of radius $\frac{n-1}{\lambda n}$ and vertical rulings. - The Cauchy problem associated to system for the initial condition $(x_0,y_0)\in\Theta_\varepsilon$ has local existence and uniqueness. Consequently, the orbits provide a foliation of $\Theta_\varepsilon-\{e_0\}$ by regular, proper $C^1$ curves, and two distinct orbits cannot intersect in $\Theta_\varepsilon$. Moreover, by uniqueness of the Cauchy problem , if an orbit $\gamma(s)$ converges to $e_0$, the value of the parameter $s$ goes to $\pm\infty$. - The points of $\alpha$ with $\kappa_\alpha=0$ are the ones where $y'=0$. They are located in $\Gamma_\varepsilon:=\Theta_\varepsilon\cap\{x=\Gamma_\varepsilon(y)\}$, where $$\label{curvagamma} \Gamma_{\varepsilon}(y)=\frac{(n-1)\sqrt{1-y^2}}{n \varepsilon(y+\lambda)},$$ and $\varepsilon(y+\lambda)>0$. - The axis $y=0$ and $\Gamma_\varepsilon$ divide $\Theta_\varepsilon$ into connected components where the coordinate functions of an orbit $(x(s),y(s))$ are monotone. Thus, at each of these monotonicity regions, the motion of an orbit is uniquely determined. - If an orbit $(x(s),y(s))$ intersects $\Gamma_\varepsilon$, the function $y(s)$ has a local extremum; if an orbit intersects the axis $y=0$, it does so orthogonally. Finally, recall that system has a singularity for the values $x_0=0,\ y_0=\pm 1$, hence we cannot ensure the existence of a rotational $\lH$ intersecting the axis of rotation orthogonally by solving the Cauchy problem with this initial data. However, we can guarantee the existence of such a rotational $\lH$ by solving the Dirichlet problem over a small-enough domain, see [@Mar Corollary 1].
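As a quick numerical check of the third item (a sketch of ours; the function names are not from [@BGM2], and the parameter values are illustrative), one can verify that the $y$-component of the field vanishes exactly along $\Gamma_\varepsilon$, and that the horizontal graph $\Gamma_1$ attains its maximum at $y_0=-1/\lambda$, a fact that is used later on:

```python
import math

def gamma_curve(y, n, lam, eps):
    """x = Gamma_eps(y), the locus where y' = 0 in Theta_eps
    (only defined when eps*(y + lam) > 0)."""
    return (n - 1) * math.sqrt(1.0 - y * y) / (n * eps * (y + lam))

def y_prime(x, y, n, lam, eps):
    """Second component of the phase-plane system."""
    return ((n - 1) * (1.0 - y * y) / x
            - n * eps * (y + lam) * math.sqrt(1.0 - y * y))
```

Plugging $x=\Gamma_\varepsilon(y)$ into `y_prime` gives zero up to rounding, and sampling $\Gamma_1$ near $y_0=-1/\lambda$ confirms the maximum there.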
Now, Corollary 2.4 in [@BGM2] has the following implication for our phase plane study: \[existorbitaext\] Let $\varepsilon,\delta\in\{-1,1\}$ be such that $\varepsilon(\delta+\lambda)>0$. Then, there exists a unique orbit in $\Theta_\varepsilon$ that has $(0,\delta)\in\overline{\Theta_\varepsilon}$ as an endpoint. There is no such orbit in $\Theta_{-\varepsilon}$.

Classification of rotational $\H_\lambda$-hypersurfaces {#rot}
=======================================================

Throughout this section we classify rotational $\lHn$ depending on the value of $\lambda$. As a first step towards this classification, we mention a technical result, useful later on, which establishes that no closed examples exist in the class of immersed $\lHn$. In particular, the case $n=2$ was originally treated in López [@Lop], and its proof can be easily extended to any dimension. \[noclosed\] There do not exist closed $\lHn$. At this point, we are going to study the aforementioned classification by analyzing the qualitative properties of system , most of them already studied in the previous section. To this end, it is useful to study its *linearized* system at the unique equilibrium $e_0=\big(\frac{n-1}{\lambda n},0\big)$. In particular, the linearization of at $e_0$ is given by $$\left(\begin{matrix} 0&1\\ \displaystyle{-\frac{n^2\lambda^2}{n-1}}& -n \end{matrix}\right),$$ whose eigenvalues are $$\mu_1=\frac{-n+ n\sqrt{1-\displaystyle{\frac{4\lambda^2}{n-1}}}}{2},\hspace{1cm}\text{and} \hspace{1cm} \mu_2=\frac{-n- n\sqrt{1-\displaystyle{\frac{4\lambda^2}{n-1}}}}{2}.$$ Standard theory of non-linear autonomous systems enables us to summarize the possible behaviors of a solution around the equilibrium $e_0$: - if $\lambda>\sqrt{n-1}/2$, then $\mu_1$ and $\mu_2$ are complex conjugate with negative real part. Thus, $e_0$ has an *inward spiral* structure, and every orbit close enough to $e_0$ converges asymptotically to it spiraling around infinitely many times.
- if $\lambda=\sqrt{n-1}/2$, then $\mu_1=\mu_2$ and they are real and negative, with only one eigenvector. Thus, $e_0$ is an asymptotically stable improper node, and every orbit close enough to $e_0$ converges asymptotically to it, maybe spiraling around a finite number of times. - if $\lambda\in (0,\sqrt{n-1}/2)$, then $\mu_1$ and $\mu_2$ are different, real and negative. Thus, $e_0$ is an asymptotically stable node and has a *sink* structure, and every orbit close enough to $e_0$ converges asymptotically to it *directly*, i.e. without spiraling around. ![The linearization of system for the different values of $\lambda>0$ and the behavior of its orbits.[]{data-label="linearizados"}](linearizados.pdf){width=".7\textwidth"} We now analyze the rotational $\lHn$ in $\rnn$ by distinguishing three possibilities for $\lambda$: $\lambda>1$, $\lambda=1$ and $\lambda<1$. These three cases will deeply influence the global behavior of the orbits in each phase plane. Additionally, in our discussion we take into account whether such hypersurfaces intersect the axis of rotation orthogonally or not. [****]{} Let us assume $\lambda>1$. On the one hand, for $\varepsilon=1$, the curve $\Gamma_1$ given by Equation is a compact, connected arc in $\Theta_1$ joining the points $(0,1)$ and $(0,-1)$. In order to study the monotonicity regions in $\Theta_1$, let us consider an arc-length parametrized curve $\alpha(s)=(x(s),z(s))$ satisfying and $\gamma(s)$ the corresponding orbit that solves . Combining items $\textit{3}$ and $\textit{4}$ in Lemma \[resumenfases\] we can ensure that in $\Theta_1$ there are four monotonicity regions, which will be called $\Lambda_1,...,\Lambda_4$, respectively (see Figure \[lmayor1\], left).
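Before continuing with the monotonicity analysis, note that the eigenvalue trichotomy for the linearization at $e_0$ described above can be checked directly from the closed formula for $\mu_1,\mu_2$. The snippet below is a sketch of ours ($n$ and $\lambda$ values are illustrative):

```python
import cmath

def linearization_eigenvalues(n, lam):
    """Eigenvalues of [[0, 1], [-n^2 lam^2/(n-1), -n]],
    the linearization of the phase-plane system at e_0."""
    disc = cmath.sqrt(1.0 - 4.0 * lam * lam / (n - 1))
    return n * (-1 + disc) / 2.0, n * (-1 - disc) / 2.0
```

For $n=2$ this reproduces the three regimes: a complex-conjugate pair with negative real part for $\lambda>1/2$ (spiral), a double negative root at $\lambda=1/2$ (improper node), and two distinct negative real roots for $\lambda\in(0,1/2)$ (sink).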
Moreover, if the orbit $\gamma$ is contained in $\Lambda_1\cup\Lambda_2$, it corresponds to points of $\alpha$ with positive geodesic curvature, whereas if, on the contrary, $\gamma$ is contained in $\Lambda_3\cup\Lambda_4$, it corresponds to points of $\alpha$ with negative geodesic curvature. On the other hand, for $\varepsilon=-1$, the curve $\Gamma_{-1}$ does not exist in $\Theta_{-1}$, and so there are only two monotonicity regions in $\Theta_{-1}$ called $\Lambda_+$ and $\Lambda_-$ (see Figure \[lmayor1\], right). In this case both regions correspond to points of $\alpha$ with positive geodesic curvature. ![The phase planes $\Theta_\varepsilon,\ \varepsilon=\pm1$ for $\lambda>1$, their monotonicity regions and two orbits following the motion at each monotonicity region.[]{data-label="lmayor1"}](planofaseslmayor1.pdf){width=".9\textwidth"} Our first goal is to describe the rotational $\lHn$ that intersect the axis of rotation orthogonally. By Lemma \[existorbitaext\] there is an orbit $\gamma_+(s)$ in $\Theta_1$ having $(0,1)$ as endpoint, and after a translation in $s$ we can suppose that $\gamma_+(0)=(0,1)$. This orbit generates an arc-length parametrized curve $\alpha_+(s)=(x_+(s),z_+(s))$ that intersects the axis of rotation orthogonally at the instant $s=0$. Since $\lambda>1$, by ODE we see that $z''_+(0)>0$ and so $z_+(s)$ has a minimum at $s=0$. As a matter of fact, for $s>0$ close enough to $s=0$ we have $z_+'(s)>0$, which implies that $x_+''(s)<0$. In particular, the geodesic curvature $\kappa_{\alpha_+}(s)$ of $\alpha_+$ is positive and so the orbit $\gamma_+(s)$ is strictly contained in the region $\Lambda_1$ for $s>0$ close enough to $s=0$. See Figure \[saleneje\], where the orbit $\gamma_+$ and the curve $\alpha_+$ are plotted in red. Once again, by Lemma \[existorbitaext\] there is an orbit $\gamma_-(s)$ in $\Theta_1$ with $(0,-1)$ as endpoint.
Such an orbit also generates an arc-length parametrized curve $\alpha_-(s)=(x_-(s),z_-(s))$ that intersects the axis of rotation orthogonally at $s=0$. A similar discussion as above yields that $z_-''(0)<0$ and so $z_-(s)$ has a maximum at $s=0$. Thus, for $s<0$ we have $z_-'(s)>0$, which implies again that $x_-''(s)<0$. This time, $\gamma_-(s)$ is strictly contained in the region $\Lambda_2$ for $s<0$ close enough to $s=0$. See Figure \[saleneje\], where the orbit $\gamma_-$ and the curve $\alpha_-$ are plotted in orange. ![Left: the phase plane $\Theta_1$ and the orbits $\gamma_+$ and $\gamma_-$. Right: the corresponding arc-length parametrized curves $\alpha_+$ and $\alpha_-$.[]{data-label="saleneje"}](saleneje.pdf){width=".9\textwidth"} Let us study in more detail the behavior of both orbits $\gamma_+$ and $\gamma_-$ in $\Theta_1$. \[contradicecomparacioncurvaturamedia\] Let us consider the orbits $\gamma_+$ and $\gamma_-$ in the phase plane $\Theta_1$ as above. Then: - The orbit $\gamma_+(s)$ cannot stay forever in $\Lambda_1$. Moreover, it converges orthogonally to a point $(x_+,0)$ with $x_+\geq\frac{n-1}{\lambda n}$, which can be either the equilibrium $e_0$, with the parameter $s\rightarrow\infty$, or a finite point, reaching it at some finite instant $s_+>0$. - The orbit $\gamma_-(s)$ cannot stay forever in $\Lambda_2$. Moreover, it intersects orthogonally the axis $y=0$ at a point $(x_-,0)$ with $x_->\frac{n-1}{\lambda n}$, reaching it at some finite instant $s_-<0$. - The points $(x_+,0)$ and $(x_-,0)$ are different. In fact, $x_+<x_-$. *1.* Arguing by contradiction, suppose that $\gamma_+(s)\subset\Lambda_1,\ \forall s>0$. Recall that $\gamma_+(0)=(0,1)$ and $\gamma_+(s)\subset\Lambda_1$ for $s>0$ small enough, hence the monotonicity properties of $\Lambda_1$ ensure that $\gamma_+$ can be expressed as a graph $y=f(x)$ with $f(x)$ satisfying $f(0)=1$ and $f'(x)<0$, for $x>0$ small enough.
Consequently, since the orbits are proper curves in $\Theta_1$, $\gamma_+$ would be globally defined by the graph of $f(x)$ satisfying $f'(x)<0\, \forall x>0$ and $\lim_{x\rightarrow\infty}f(x)=c_0\geq 0$. Thus, the curve $\alpha_+(s)=(x_+(s),z_+(s))$ generated by $\gamma_+$ has positive geodesic curvature with $x_+'(s)>0,\ \forall s>0$ (since $\gamma_+$ lies over the axis $x'=y=0$). In this way, the $\lH$ $\sig_+$ generated by rotating $\alpha_+$ around the $x\n1$-axis is a strictly convex, entire graph over $\R^n$, whose mean curvature function is $H_{\sig_+}(p)=\langle\eta_p,e\n1\rangle+\lambda$ at each $p\in\sig_+$. Since $\lambda>1$, there exists a positive constant $H_0\in\R$ such that $H_{\sig_+}> H_0>0$. From here, as we can find a tangent point of intersection between the sphere $\S^n(1/H_0)$ of constant mean curvature equal to $H_0$ and $\sig_+$ in such a way that their unit normals agree and $\S^n(1/H_0)$ lies above $\sig_+$, the mean curvature comparison principle leads to a contradiction. *2.* The same argument for the orbit $\gamma_-(s)$ carries over verbatim, that is, $\gamma_-$ cannot stay forever in $\Lambda_2$ and it converges to a point $(x_-,0)$ with $x_-\geq\frac{n-1}{\lambda n}$, being either $e_0$ with $s\rightarrow-\infty$, or a finite point reaching it at some finite instant $s_-<0$. Now, it remains to prove that $(x_-,0)$ cannot be the equilibrium point $e_0=\big(\frac{n-1}{\lambda n},0\big)$. To this end, note that $\gamma_-$ cannot intersect the curve $\Gamma_1$ because of the monotonicity properties of $\Lambda_2$, and the horizontal graph $\Gamma_1(y)$ given by achieves a global maximum at $y_0=-1/\lambda$, and so $\Gamma_1(y_0)>\frac{n-1}{\lambda n}=\Gamma_1(0)$. Thus, when $\gamma_-$ leaves the maximum of $\Gamma_1$ on its left-hand side, $\gamma_-$ cannot go backwards and converge to $e_0$, since it would contradict the monotonicity of $\Lambda_2$. See Figure \[contradiccionybien\] left, the dotted plot of the orbit $\gamma_-$.
*3.* First we prove that $x_+\neq x_-$. Arguing by contradiction, suppose that $x_+=x_-:=\hat{x}$. Note that $(\hat{x},0)\neq e_0$ since we discussed in item *2* that $(x_-,0)\neq e_0$. In this situation the orbits $\gamma_+$ and $\gamma_-$ meet each other orthogonally at $(\hat{x},0)$ (see Figure \[contradiccionybien\] left, the continuous plot of $\gamma_+$ and $\gamma_-$). By uniqueness of the Cauchy problem they can be smoothly glued together to form a larger orbit $\gamma_0$ satisfying the following: $\gamma_0$ is a compact arc joining the points $(0,1)$ and $(0,-1)$, strictly contained in $\Lambda_1\cup\Lambda_2\cup\{(\hat{x},0)\}$. Hence, the rotational $\lH$ generated by this orbit would be a simply connected, closed hypersurface, i.e. a rotational sphere, but this fact contradicts Lemma \[noclosed\]. To finish, we check that $x_+<x_-$ by another contradiction argument. Indeed, suppose that $x_+>x_-$ and let us focus on the orbit $\gamma_-$. We will keep track of $\gamma_-(s)$ by moving along it as the parameter $s$ decreases; recall that $\gamma_-(s)$ tends to $(0,-1)$ as the parameter $s$ increases. In this setting, the orbit $\gamma_-$ would be on the left-hand side of the orbit $\gamma_+$ when they intersect the axis $y=0$. As $\gamma_+$ and $\gamma_-$ cannot intersect each other and by properness of the orbits in $\Theta_1$, the only possibility is that $\gamma_-$ enters the region $\Lambda_2$ and later $\Lambda_4$ at some finite instant. By properness, monotonicity and since $\gamma_-$ cannot converge to the segment $\{(0,y),\ |y|<1\}$, as mentioned in Section $3$, $\gamma_-$ cannot do anything but enter the region $\Lambda_3$. As $\gamma_-$ cannot self-intersect, it follows that $\gamma_-$ ends up converging asymptotically to $e_0$ (Figure \[contradiccionybien\] left, the dashed plot of the orbit $\gamma_-$).
But this is a contradiction with the fact that $e_0$ is asymptotically stable and with the motion of the orbit $\gamma_-$, since it tends to *escape* from $e_0$ as $s$ increases. So, the only possibility is that $\gamma_+$ is on the left-hand side of $\gamma_-$ when they converge to the axis $y=0$, either converging to $e_0$ (Figure \[contradiccionybien\] right, dashed plot) or intersecting the axis $y=0$ at a finite point $(x_+,0)$ (Figure \[contradiccionybien\] right, continuous plot). ![Left: the configurations that cannot happen in $\Theta_1$ for $\gamma_+$ and $\gamma_-$. Right: the configuration of the orbits $\gamma_+$ and $\gamma_-$ in $\Theta_1$ when reaching the axis $y=0$.[]{data-label="contradiccionybien"}](contradiccionybien.pdf){width=".9\textwidth"} As seen on the right-hand side of Figure \[contradiccionybien\], we get a first idea of how to properly represent the orbits $\gamma_+$ and $\gamma_-$ when they intersect the axis $y=0$. However, we must carry on analyzing the global behavior of $\gamma_+$ and $\gamma_-$ and their corresponding generated curves $\alpha_+$ and $\alpha_-$. On the one hand, if $\gamma_+$ intersects the axis $y=0$ at a finite point $(x_+,0)$ different from the equilibrium $e_0$, then $\gamma_+$ enters the region $\Lambda_2$ but cannot intersect $\gamma_-$, and so $\gamma_+$ has to enter the region $\Lambda_3$. By monotonicity, properness and since $\gamma_+$ cannot converge to the segment $\{(0,y),\ |y|<1\}\subset\Theta_1$, the only possibility is that $\gamma_+$ has to enter the region $\Lambda_4$. As $\gamma_+$ cannot self-intersect, we see that $\gamma_+$ ends up converging asymptotically to $e_0$ (see Figure \[lmayor1faseseje\], left).
In any case, this orbit generates a complete, arc-length parametrized curve $\alpha_+(s)=(x_+(s),z_+(s))$ with the following properties: the $x_+(s)$-coordinate is bounded and converges to the value $\frac{n-1}{\lambda n}$, that is, $\alpha_+(s)$ converges to the straight line $x=\frac{n-1}{\lambda n}$ for $s\rightarrow\infty$; and the $z_+(s)$-coordinate is strictly increasing since $\gamma_+\subset\Theta_1$ and so $z_+'(s)>0$, which implies that $\alpha_+(s)$ has no self-intersections, i.e. is an embedded curve. Hence, the hypersurface $\sig_+$ generated after rotating $\alpha_+$ around the $x\n1$-axis is a properly embedded, simply connected $\lH$ that converges to the CMC cylinder $C(\frac{n-1}{\lambda n})$ of radius $\frac{n-1}{\lambda n}$. To be more specific: - if $\lambda>\sqrt{n-1}/2$, then $\gamma_+$ converges to $e_0$ spiraling around it infinitely many times. This implies that $\alpha_+$ intersects the line $x=\frac{n-1}{\lambda n}$ infinitely many times, and so does $\sig_+$ with $C(\frac{n-1}{\lambda n})$. See Figure \[lmayor1faseseje\] left and right, the continuous plot. - if $\lambda<\sqrt{n-1}/2$, then $\gamma_+$ converges to $e_0$ *directly*, that is, without spiraling around it. As a consequence, $\alpha_+'$ is never vertical and thus $\sig_+$ is a strictly convex graph that converges to $C\left(\frac{n-1}{\lambda n}\right)$. See Figure \[lmayor1faseseje\] left and right, the dashed plot. - if $\lambda=\sqrt{n-1}/2$, then $\gamma_+$ converges to $e_0$ after spiraling around it a finite number of times, and so $\sig_+$ is a graph outside a compact set. ![Left: the phase plane $\Theta_1$ and the possible orbits $\gamma_+$. Right: the corresponding arc-length parametrized curves $\alpha_+$.[]{data-label="lmayor1faseseje"}](lmayor1faseyperfil1.pdf){width=".9\textwidth"} On the other hand, recall that $\gamma_-$ intersects the axis $y=0$ at some finite point $\gamma_-(s_-)=(x_-,0),\ s_-<0$, lying on the right-hand side of $e_0$.
Decreasing $s<s_-$ we get that $\gamma_-$ enters the region $\Lambda_1$. By monotonicity, properness and since $\gamma_+$ and $\gamma_-$ cannot intersect in $\Theta_1$, the only possibility for $\gamma_-$ is to have as endpoint some $\gamma_-(s_1)=(x_1,1)$ with $x_1>0$ and $s_1<s_-$ (see Figure \[lmayor1faseseje2\], top left). At this instant we have $x_-(s_1)=x_1$ and $x'_-(s_1)=1$, and ODE ensures that $z''_-(s_1)>0$, that is, the height of $\alpha_-$ reaches a minimum. As a consequence, for $s<s_1$ close enough to $s_1$ the height function $z_-(s)$ is decreasing, i.e. $z'_-(s)<0$, and thus $\alpha_-(s)$ generates an orbit which is contained in $\Theta_{-1}$; now, $\varepsilon=-1$, which agrees with the sign of $z'_-(s)$. For the sake of clarity, we will keep writing $\gamma_-$ for this orbit in $\Theta_{-1}$. In this situation, $\gamma_-\subset\Theta_{-1}$ is an orbit with $\gamma_-(s_1)=(x_1,1)$ as endpoint and lying in the region $\Lambda_+$. Again, by monotonicity and properness the orbit $\gamma_-$ has to intersect the axis $y=0$ in an orthogonal way, and then enter the region $\Lambda_-$. Lastly, Proposition \[contradicecomparacioncurvaturamedia\] ensures that $\gamma_-$ cannot stay contained in $\Lambda_-$ with the $x_-(s)$-coordinate tending to infinity, hence $\gamma_-$ intersects the line $y=-1$ at some $\gamma_-(s_2)=(x_2,-1),\ s_2<s_1$ (see Figure \[lmayor1faseseje2\], bottom left). Again, by virtue of Equation , at the instant $s=s_2$ the height function $z_-(s)$ of $\alpha_-$ satisfies $z''_-(s_2)<0$, and so $z_-(s)$ achieves a maximum at $s=s_2$; thus $z_-(s)$ is an increasing function for $s<s_2$ close enough to $s_2$, and so $\alpha_-(s)$ for $s<s_2$ close enough to $s_2$ generates an orbit in $\Theta_1$, which will still be denoted $\gamma_-$. Now, $\gamma_-$ starts at the point $(x_2,-1)$ and by monotonicity and properness it has to go from $\Lambda_2$ to $\Lambda_1$ as $s<s_2$ decreases.
Since $\gamma_-$ cannot self-intersect, we get that $\gamma_-$ has to reach the line $y=1$ again at some point $(x_3,1)$, with $x_3>x_1$ (see again Figure \[lmayor1faseseje2\], top left). This process is repeated and we get a complete, arc-length parametrized curve $\alpha_-(s)$ with self-intersections and whose height function increases and decreases until reaching the $x\n1$-axis orthogonally (see Figure \[lmayor1faseseje2\], right). Therefore, the $\lH$ obtained by rotating $\alpha_-$ is properly immersed (with self-intersections) and simply connected. Our second goal concerns the classification of complete $\lHn$ not intersecting the axis of rotation. For that, let us take $r_0>0$ and $\gamma(s)$ the orbit in $\Theta_1$ passing through the point $(r_0,0)$ at the instant $s=0$. Then, $\gamma$ is an arc having one endpoint of the form $(r_1,1),\ r_1>0$[^2], and either converges to $e_0$ as $s\rightarrow\infty$ or has another endpoint of the form $(r_2,-1)$. In the second case, the orbit $\gamma$ continues in $\Theta_{-1}$ as a compact arc and then enters $\Theta_1$ again. By properness, after a finite number of iterations, the orbit $\gamma$ eventually converges to $e_0$ (see Figure \[lmayor1fueraeje\], left). This configuration ensures that the $\lHn$ associated to $\gamma$ is properly immersed and diffeomorphic to $\S^{n-1}\times\R$, with one end converging to $C\left(\frac{n-1}{\lambda n}\right)$ and the other end having unbounded distance to the axis of rotation, looping and self-intersecting infinitely many times (see Figure \[lmayor1fueraeje\], right). ![Left: the phase planes $\Theta_1$ and $\Theta_{-1}$ and the orbit $\gamma$. Right: the corresponding arc-length parametrized curve $\alpha$.[]{data-label="lmayor1fueraeje"}](lmayor1fueraeje.pdf){width=".7\textwidth"} [****]{} Now we suppose that $\lambda=1$.
In this situation, the curve $\Gamma_1$ given by Equation for $\varepsilon=1$ is a connected arc in $\Theta_1$ having the point $(0,1)$ as endpoint, and the line $y=-1$ as an asymptote. Thus, $\Theta_1$ has four monotonicity regions, $\Lambda_1,...,\Lambda_4$ (see Figure \[fasesligual1\], left). The region $\Lambda_1\cup\Lambda_2$ corresponds to points with positive geodesic curvature, while the region $\Lambda_3\cup\Lambda_4$ corresponds to points with negative geodesic curvature. For $\varepsilon=-1$, the curve $\Gamma_{-1}$ in $\Theta_{-1}$ is empty, and there are only two monotonicity regions $\Lambda_+$ and $\Lambda_-$ (see Figure \[fasesligual1\], right). ![The phase planes $\Theta_\varepsilon,\ \varepsilon=\pm1$ for $\lambda=1$, their monotonicity regions and two orbits following the motion at each monotonicity region.[]{data-label="fasesligual1"}](fasesligual1.pdf){width=".9\textwidth"} We first study the rotational $\H_1$-hypersurfaces intersecting the axis of rotation. For this purpose, we must begin by pointing out that a horizontal hyperplane $\Pi=\{x\n1=c_0,\ c_0\in\R\}\subset\R^{n+1}$ oriented with unit normal $\eta=-e\n1$ is precisely an example of such an $\H_1$-hypersurface. Indeed, the mean curvature of $\Pi$ is identically zero, and Equation for the density vector $v=e\n1$ reads $$H_\Pi=\langle\eta,e\n1\rangle+\lambda=\langle-e\n1,e\n1\rangle+1=0.$$ This fact, along with the uniqueness of the Cauchy problem associated to , implies that any orbit $\gamma$ in $\Theta_\varepsilon$ cannot have a limit point in the line $y=-1$, since these points correspond to orbits that generate horizontal hyperplanes with downwards orientation. Now, with the aim of looking for the remaining $\H_1$-hypersurfaces intersecting the axis of rotation, we follow the same procedure as the one used for the case $\lambda>1$.
Note that by Lemma \[existorbitaext\] there exists a unique orbit $\gamma_+(s)$ in $\Theta_1$ with $\gamma_+(0)=(0,1)$ generating an arc-length parametrized curve $\alpha_+$ intersecting the axis of rotation at the instant $s=0$ and with $\kappa_{\alpha_+}(s)>0$ for $s>0$ small enough. Again, item *1.* in Proposition \[contradicecomparacioncurvaturamedia\] ensures that: either $\gamma_+$ converges directly to $e_0$ as $s\rightarrow\infty$; or $\gamma_+$ intersects the axis $y=0$ at a point $(x_+,0)$ with $x_+>\frac{n-1}{n}$ at some finite instant. In this latter case, $\gamma_+$ enters the region $\Lambda_2$ and, by monotonicity and properness, $\gamma_+$ intersects the curve $\Gamma_1$ and then enters the region $\Lambda_3$. Since $\gamma_+$ cannot converge to a point $(0,y),\ |y|<1$, $\gamma_+$ has to enter the region $\Lambda_4$, and lastly $\gamma_+$ intersects the curve $\Gamma_1$ entering again the region $\Lambda_1$. Finally, since $\gamma_+$ cannot self-intersect, we see that $\gamma_+$ has to converge asymptotically to $e_0$. Specifically: - if $n=2,3,4$, then $1>\sqrt{n-1}/2$ and $\gamma_+$ spirals around $e_0$ an infinite number of times. - if $n=5$, then $1=\sqrt{n-1}/2$ and $\gamma_+$ converges to $e_0$ after spiraling around it a finite number of times. - if $n\geq 6$, then $1<\sqrt{n-1}/2$ and $\gamma_+$ converges directly to $e_0$, without looping around it. Hence, in any case, the $\H_1$-hypersurface $\sig_+$ obtained by rotating $\alpha_+$ around the $x\n1$-axis is a complete, properly embedded and simply connected hypersurface that converges to the CMC cylinder $C(\frac{n-1}{n})$ (see the right-hand side of Figure \[lmayor1faseseje\], since it is a similar case). Secondly, we describe rotational $\H_1$-hypersurfaces not intersecting the axis of rotation. To do so, we first analyze the behavior of the orbits in $\Theta_1$. Let us fix $\hat{x}>0$, and consider the orbit $\gamma(s)$ in $\Theta_1$ such that $\gamma(0)=(\hat{x},0)$.
Moreover, we can suppose that $\gamma\neq\gamma_+$. For $s>0$, the monotonicity properties of $\Theta_1$ ensure that $\gamma(s)$ converges asymptotically to $e_0$; but $\gamma$ and $\gamma_+$ cannot intersect each other, and so $\gamma(s)$ unwraps from $e_0$ a finite number of times for $s<0$. Consequently, $\gamma(s)$ intersects the axis $y=0$ a finite number of times for $s<0$, and so we can denote by $(x_0,0)$ the last intersection of $\gamma$ with $y=0$. Now, we claim that $(x_0,0)$ is on the right-hand side of $e_0$. Arguing by contradiction, suppose that $(x_0,0)$ is on the left-hand side of $e_0$ (see Figure \[ligual1fases\], top left, blue orbit, to clarify this proof). Then, the orbit $\gamma(s)$ cannot intersect the curve $\Gamma_1$; otherwise, $\gamma$ would intersect $y=0$ again by monotonicity of $\Lambda_2$. So, by properness, and since $\gamma$ cannot have an endpoint at $y=-1$, the only possibility for $\gamma(s)$ is to converge to the line $y=-1$. As a consequence, $\gamma$ can be locally expressed as a graph $(x,h(x))$ with $h(x_0)=0,\ h'(x)<0,\ \forall x>x_0$ and $h(x)\rightarrow -1$ as $x\rightarrow\infty$. To get the contradiction, we compare the orbits of the systems associated to rotational hypersurfaces of two different prescribed mean curvature functions. First, we recall that $\H_\lambda$-hypersurfaces arise as the particular case of Equation obtained by prescribing the function $\H_\lambda(z)=\langle z,e\n1\rangle+\lambda,\ \forall z\in\S^n$. Now, consider the function $\f(z)=1/2\cos(\pi/2\langle z,e\n1\rangle),\ \forall z\in\sn$, which is a non-negative, even function on $\S^n$ such that $\f(\pm e\n1)=0$; as detailed in [@BGM2], we can also study the rotational $\f$-hypersurfaces by simply substituting the prescribed function $\f(y)=1/2\cos(\pi/2y)$ into system instead of $y+\lambda$.
The study made in Sections 2 and 4 of [@BGM2] ensures that the orbits for the prescribed function $\f$ are closed curves, symmetric with respect to the axis $y=0$, which never intersect the lines $y=\pm 1$. For this prescribed function we view its orbits $\sigma_\f(t)=(x_\f(t),y_\f(t))$ in the phase plane $\Theta_1$ of system . Suppose that there are instants $s_0,t_0$ such that $\sigma_\f(t_0)=\gamma(s_0)$. Then, since $\f(y)\leq 1+y$, with equality if and only if $y=-1$, a standard comparison of ODEs yields $y'(s_0)<y'_\f(t_0)$. At this point, we take $0<x_0^*<x_0$ and $\sigma_\f$ such that $\sigma_\f(0)=(x_0^*,0)$. This orbit $\sigma_\f$ can also be expressed as a graph $(x,f(x))$ such that $f(x_0^*)=0$, $f(x)$ decreases until reaching a minimum, and then $f$ increases until intersecting the axis $y=0$ again. By continuity, there exists some $x_*>x_0$ such that $f(x_*)=h(x_*)$. Therefore, there exist $s_*,t_*<0$ such that $\gamma(s_*)=\sigma_\f(t_*)$, where the second coordinates satisfy $y'(s_*)>y'_\f(t_*)$ (see Figure \[ligual1fases\], top left), arriving at the desired contradiction. Since $(x_0,0)$ is on the right-hand side of $e_0$, $\gamma(s)$ has to intersect $\Gamma_1$ at some instant $s_0<0$ and enter the region $\Lambda_2$. Now, monotonicity and properness allow us to ensure that $\gamma(s)$ reaches the line $y=1$ at some finite point $\gamma(s_1)=(x_1,1),\ s_1<0$, with $x_1>0$ (see Figure \[ligual1fases\], top right). Consequently, the arc-length parametrized curve $\alpha(s)=(x(s),z(s))$ associated to this orbit $\gamma$ satisfies $x(s_1)=x_1,\ x'(s_1)=1$, and for $s>s_1$ the $x(s)$-coordinate ends up converging to the value $\frac{n-1}{n}$, that is, $\alpha(s)$ converges to the line $x=\frac{n-1}{n}$ as $s\rightarrow\infty$. The $z(s)$-coordinate is strictly increasing, since $\mathrm{sign}(z'(s))=\varepsilon=1$. To finish, note that the behavior of the orbit $\gamma$ in $\Theta_{-1}$ follows easily from the monotonicity properties.
This orbit $\gamma$ has to intersect the axis $y=0$ orthogonally and then converge to the line $y=-1$ (see Figure \[ligual1fases\], bottom left). Note that $\gamma$ cannot converge to a line $\{y=y_0\},\ y_0\in (-1,0)$, by the same reasoning as the one contained in the proof of item *1* in Proposition \[contradicecomparacioncurvaturamedia\]. In this situation, the $x(s)$-coordinate of $\alpha$ is unbounded as $s\rightarrow -\infty$ and $z(s)$ is a strictly decreasing function, reaching its minimum at the instant $s_1$. The $\H_1$-hypersurface generated by rotating $\alpha$ around the $x\n1$-axis is complete, properly immersed and diffeomorphic to $\S^{n-1}\times\R$, with one end converging to the CMC cylinder $C\left(\frac{n-1}{n}\right)$ and the other end being a graph outside a ball in $\R^n$. Note that every such $\H_1$-hypersurface has a self-intersection, hence it is not embedded (see Figure \[ligual1fases\], bottom right). ![Top left: the configuration that cannot happen in $\Theta_1$ for $\gamma$. Top right and bottom left: the phase planes $\Theta_1$ and $\Theta_{-1}$ and the orbit $\gamma$. Bottom right: the corresponding arc-length parametrized curve $\alpha$.[]{data-label="ligual1fases"}](ligual1fases.pdf){width=".8\textwidth"} Finally, we consider the case $0<\lambda<1$. In this situation, for $\varepsilon=1$, the curve $\Gamma_1$ given by Equation is a connected arc in $\Theta_1$ having the point $(0,1)$ as endpoint and the line $y=-\lambda$ as an asymptote. Consequently, in $\Theta_1$ there are four monotonicity regions, $\Lambda_1^+,\dots,\Lambda_4^+$ (see Figure \[lmenor1todo\], top left). For $\varepsilon=-1$, the curve $\Gamma_{-1}$ in $\Theta_{-1}$ is also a connected arc, with $(0,-1)$ as endpoint and the line $y=-\lambda$ as an asymptote as well, so there are three monotonicity regions, denoted $\Lambda_1^-,\Lambda_2^-$ and $\Lambda_3^-$ (see Figure \[lmenor1todo\], bottom left).
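The qualitative description of the curves $\Gamma_\varepsilon$ for the different values of $\lambda$ can also be checked under the same assumed reconstruction of the profile-curve system used earlier (the actual system references are lost in this version): setting $\theta'=n(y+\lambda)-\varepsilon(n-1)\sqrt{1-y^2}/x=0$ gives an explicit formula for the curvature-vanishing curve, whose limits reproduce the endpoints and asymptotes just listed.

```python
import math

# Curvature-vanishing curve Gamma_eps in the assumed reconstruction of
# the phase plane: theta' = 0 gives
#   x_Gamma(y) = eps*(n - 1)*sqrt(1 - y**2) / (n*(y + lam)),
# wherever this expression is positive.

def x_gamma(n, lam, eps, y):
    return eps * (n - 1) * math.sqrt(1.0 - y * y) / (n * (y + lam))

n, lam = 3, 0.5
# Gamma_1 has (0, 1) as endpoint and the line y = -lam as asymptote:
print(x_gamma(n, lam, +1, 1.0))                  # -> 0.0
print(x_gamma(n, lam, +1, -lam + 1e-9) > 1e6)    # -> True
# Gamma_{-1} lives where y + lam < 0, with (0, -1) as endpoint:
print(x_gamma(n, lam, -1, -1.0))                 # -> 0.0
# For lam = 1 there is no y in (-1, 1) with y + 1 < 0, so Gamma_{-1} is
# empty, exactly as in the lambda = 1 phase plane described above.
```

The same formula, evaluated with $\lambda>1$, again gives a single arc from $(0,1)$ with asymptote $y=-\lambda$ cut off at $y=-1$, matching the earlier cases.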
Once again, we begin by describing the $\lHn$ intersecting the axis of rotation orthogonally. On the one hand, by Lemma \[existorbitaext\] we know that there exists a unique orbit $\gamma_+(s)$ in $\Theta_1$ with $(0,1)$ as endpoint. By arguing as in the previous cases, we conclude that $\gamma_+$ has to converge asymptotically to $e_0$ (see Figure \[lmenor1todo\], top left). Therefore, the $\lH$ $\sig_+$ obtained by rotating $\alpha_+$ around the $x\n1$-axis is a properly embedded, simply connected hypersurface converging asymptotically to the CMC cylinder $C\big(\frac{n-1}{\lambda n}\big)$ (see Figure \[lmenor1todo\], right). Additionally, the discussion for $\Sigma_+$ depending on the value of $\lambda$ with respect to $\sqrt{n-1}/2$ is exactly the same as the one obtained in the case $\lambda>1$. On the other hand, Lemma \[existorbitaext\] allows us to assert that there exists a unique orbit $\gamma_-(s)$ in $\Theta_{-1}$ satisfying $\gamma_-(0)=(0,-1)$. Then $\gamma_-$ belongs to $\Lambda_2^-$ for $s<0$ close enough to $s=0$ (see Figure \[lmenor1todo\], bottom left). By monotonicity, $\gamma_-$ cannot intersect the curve $\Gamma_{-1}$, and by properness and Proposition \[contradicecomparacioncurvaturamedia\], $\gamma_-(s)$ has to converge to the line $y=-\lambda$ as $s\rightarrow-\infty$. This implies that the $\lH$ $\sig_-$ obtained by rotating $\alpha_-$ around the $x\n1$-axis is an entire, strictly convex graph (see Figure \[lmenor1todo\], right). ![Left: the phase planes $\Theta_1$ and $\Theta_{-1}$ and the orbits $\gamma_+$ and $\gamma_-$. Right: the corresponding arc-length parametrized curves $\alpha_+$ and $\alpha_-$.[]{data-label="lmenor1todo"}](lmenor1todo.pdf){width=".75\textwidth"} Finally, we analyze the $\lHn$ non-intersecting the axis of rotation. For that, let $\gamma$ be an orbit in $\Theta_1$ passing through a point $(\hat{x},0),\ \hat{x}>0$.
By monotonicity and properness, $\gamma(s)$ has to converge asymptotically to $e_0$ as $s\rightarrow\infty$, either directly or after spiraling around it a finite or an infinite number of times. If we decrease the parameter $s$, and noting that $\gamma$ cannot intersect $\gamma_+$, we see that $\gamma$ has to intersect the axis $y=0$ at a last point $(x_0,0)$. Without loss of generality we can assume that $\gamma$ reaches the point $(x_0,0)$ at the instant $s=0$, and to conclude the discussion we distinguish two cases, according to whether $(x_0,0)$ lies on the right-hand side or on the left-hand side of $e_0=\big(\frac{n-1}{\lambda n},0\big)$. First, suppose that $x_0<\frac{n-1}{\lambda n}$. Decreasing $s<0$, we see that $\gamma(s)$ cannot intersect $\Gamma_1$, since otherwise it would intersect $y=0$ again, and therefore $\gamma$ stays in $\Lambda_3^+$ until reaching some $(x_1,-1)$ as endpoint (see Figure \[lmenor1todofueraeje\], top left, red orbit). Now, the orbit $\gamma$ continues in $\Theta_{-1}$, entering the region $\Lambda_2^-$ and converging to the line $y=-\lambda$ (see Figure \[lmenor1todofueraeje\], bottom left, red orbit). If we denote by $\alpha(s)$ the arc-length parametrized curve generated by $\gamma$, we get that the rotation of $\alpha$ around the $x\n1$-axis gives a properly embedded $\lH$, diffeomorphic to $\S^{n-1}\times\R$, with two ends; one converging to $C\big(\frac{n-1}{\lambda n}\big)$ and the other being a strictly convex graph (see Figure \[lmenor1todofueraeje\], center). Now, suppose that $x_0>\frac{n-1}{\lambda n}$. Decreasing $s<0$, and because $\gamma$ and $\gamma_+$ cannot intersect each other, we see that $\gamma(s)$ stays in $\Lambda_1^+$ until reaching some $(x_2,1)$ as endpoint (see Figure \[lmenor1todofueraeje\], top left, orange orbit). Now, $\gamma$ continues in $\Theta_{-1}$, entering the region $\Lambda_1^-$ and then going into $\Lambda_3^-$ after intersecting the axis $y=0$ orthogonally.
As $\gamma$ cannot stay contained in $\Lambda_3^-$ by virtue of Proposition \[contradicecomparacioncurvaturamedia\], we get that $\gamma(s)$ has to enter $\Lambda_2^-$ and converge to the line $y=-\lambda$ as $s\rightarrow-\infty$ (see Figure \[lmenor1todofueraeje\], bottom left, orange orbit). Hence, the rotational $\lH$ obtained is properly immersed, diffeomorphic to $\S^{n-1}\times\R$ and with two embedded ends; one converging to $C\big(\frac{n-1}{\lambda n}\big)$ and the other being a strictly convex graph (see Figure \[lmenor1todofueraeje\], right). ![Left: The phase planes $\Theta_1$ and $\Theta_{-1}$ and the two possible configurations for the orbit $\gamma$. Center and right: the two corresponding arc-length parametrized curves $\alpha$.[]{data-label="lmenor1todofueraeje"}](lmenor1todofuera.pdf){width=".9\textwidth"} To finish, we summarize the discussion carried out in this section in two classification results for the rotational $\lHn$: the first for those intersecting the axis $x_{n+1}$, and the second for the opposite case. For the very particular case $n=2$, these results agree with the ones obtained in [@Lop]. Let $\Sigma_+$ and $\Sigma_-$ be the complete, rotational $\lHn$ intersecting the axis $x_{n+1}$ with upwards and downwards orientation, respectively. Then: - For any $\lambda>0$, $\Sigma_+$ is properly embedded, simply connected and converges to the CMC cylinder $C(\frac{n-1}{\lambda n})$ of radius $\frac{n-1}{\lambda n}$. Moreover: - If $\lambda>\sqrt{n-1}/2$, $\sig_+$ intersects $C(\frac{n-1}{\lambda n})$ infinitely many times. - If $\lambda=\sqrt{n-1}/2$, $\sig_+$ intersects $C(\frac{n-1}{\lambda n})$ a finite number of times and is a graph outside a compact set. - If $\lambda<\sqrt{n-1}/2$, $\sig_+$ is a proper graph over the disk of radius $\frac{n-1}{\lambda n}$. - For $\lambda>1$, $\sig_-$ is properly immersed (with infinitely many self-intersections), simply connected and has unbounded distance to the axis $x\n1$.
- For $\lambda=1$, $\sig_-$ is a horizontal hyperplane. - For $\lambda<1$, $\sig_-$ is a strictly convex, entire graph. \[Classification1\] Let $\sig$ be a complete, rotational $\lH$ non-intersecting the axis $x\n1$. Then, $\sig$ is properly immersed and diffeomorphic to $\S^{n-1}\times\R$. One end converges to the CMC cylinder $C(\frac{n-1}{\lambda n})$ of radius $\frac{n-1}{\lambda n}$, and: - If $\lambda>1$, the other end has infinitely many self-intersections and unbounded distance to the axis $x\n1$. - If $\lambda\leq 1$, the other end is a graph outside a compact set. Moreover, if $\lambda<1$ and the unit normal of $\sig$ at the points with horizontal tangent hyperplane is $-e\n1$, then $\sig$ is embedded. \[Classification2\] Observe that the end which converges to $C(\frac{n-1}{\lambda n})$ has the same asymptotic behavior as the one observed in item *1.* in Theorem \[Classification1\]. \[References\] A.D. Alexandrov, Uniqueness theorems for surfaces in the large, I, [*Vestnik Leningrad Univ.*]{} [**11**]{} (1956), 5–17. (English translation: [*Amer. Math. Soc. Transl.*]{} [**21**]{} (1962), 341–354). V. Bayle, A. Cañete, F. Morgan, C. Rosales, On the isoperimetric problem in Euclidean space with density, [*Calc. Var. Partial Diff. Equations*]{} [**31**]{} (2008), 27–46. A. Bueno, The Björling problem for prescribed mean curvature surfaces in $\r3$, [*Ann. Glob. Anal. Geom.*]{} [**56**]{} (2019), 87–96. A. Bueno, Half-space theorems for properly immersed surfaces in $\r3$ with prescribed mean curvature, [*Ann. Mat. Pur. Appl.*]{} DOI: 10.1007/s10231-019-00886-1. A. Bueno, J.A. Gálvez, P. Mira, The global geometry of surfaces with prescribed mean curvature in $\R^3$, preprint, arXiv:1802.08146. A. Bueno, J.A. Gálvez, P. Mira, Rotational hypersurfaces of prescribed mean curvature, preprint, arXiv:1902.09405. E.B. Christoffel, Über die Bestimmung der Gestalt einer krummen Oberfläche durch lokale Messungen auf derselben. [*J. Reine Angew.
Math.*]{} [**64**]{} (1865), 193–209. J. Clutterbuck, O. Schnürer, F. Schulze, Stability of translating solutions to mean curvature flow, [*Calc. Var. Partial Diff. Equations*]{} [**29**]{} (2007), no. 3, 281–293. M. Gromov, Isoperimetry of waists and concentration of maps, [*Geom. Funct. Anal.*]{} [**13**]{} (2003), 178–215. B. Guan, P. Guan, Convex hypersurfaces of prescribed curvatures, [*Ann. Math.*]{} [**156**]{} (2002), 655–673. G. Huisken, The volume preserving mean curvature flow, [*J. Reine Angew. Math.*]{} [**382**]{} (1987), 35–48. G. Huisken, C. Sinestrari, Convexity estimates for mean curvature flow and singularities of mean convex surfaces, [*Acta Math.*]{} [**183**]{} (1999), no. 1, 45–70. T. Ilmanen, Elliptic regularization and partial regularity for motion by mean curvature, [*Mem. Amer. Math. Soc.*]{} [**108**]{} (1994). R. López, Invariant surfaces in Euclidean space with a log-linear density, [*Adv. Math.*]{} [**339**]{} (2018), 285–309. T. Marquardt, Remark on the anisotropic prescribed mean curvature equation on arbitrary domains, [*Math. Z.*]{} [**264**]{} (2010), 507–511. F. Martín, A. Savas-Halilaj, K. Smoczyk, On the topology of translating solitons of the mean curvature flow, [*Calc. Var. Partial Diff. Equations*]{} [**54**]{} (2015), no. 3, 2853–2882. H. Minkowski, Volumen und Oberfläche, [*Math. Ann.*]{} [**57**]{} (1903), 447–495. A.V. Pogorelov, Extension of a general uniqueness theorem of A.D. Aleksandrov to the case of nonanalytic surfaces (in Russian), [*Doklady Akad. Nauk SSSR*]{} [**62**]{} (1948), 297–299. J. Spruck, L. Xiao, Complete translating solitons to the mean curvature flow in $\R^3$ with nonnegative mean curvature, [*Amer. J. Math.*]{} (2017), 1–23, arXiv:1703.01003. The first author was partially supported by MICINN-FEDER Grant No. MTM2016-80313-P.
For the second author, this research is a result of the activity developed within the framework of the Programme in Support of Excellence Groups of the Región de Murcia, Spain, by Fundación Séneca, Science and Technology Agency of the Región de Murcia. Irene Ortiz was partially supported by MICINN/FEDER project PGC2018-097046-B-I00 and Fundación Séneca project 19901/GERM/15, Spain. [^1]: *Mathematics Subject Classification:* 53A10, 53C42, 34C05, 34C40.\ *Keywords*: Prescribed mean curvature hypersurface, weighted mean curvature, non-linear autonomous system. [^2]: We can suppose that $r_1>0$, since if $r_1=0$ then $\gamma$ is the orbit corresponding to the $\lH$ intersecting the axis of rotation, already described in Figure \[lmayor1faseseje\].
---
author:
- 'C. Kouveliotou, S.E. Woosley, L. Piro'
- 'Jochen Greiner, MPE, Giessenbachstr. 1, 85740 Garching, Germany'
title: 'Gamma-Ray Bursts'
---

Discoveries enabled by Multi-wavelength Afterglow Observations of Gamma-Ray Bursts
==================================================================================

Introduction
------------

The progress in the Gamma-Ray Burst (GRB) field over the last decade and prior to the launch of [*Fermi*]{} mostly occurred in our understanding of the afterglow emission and the GRB surroundings. Classical observational astronomy, from the radio to X-rays, played a vital role in this progress as it allowed the identification of GRB counterparts by drastically improving the position accuracy of the bursters down to the sub-arcsec level. Once the afterglows were identified, the full power of optical and near-infrared instrumentation came to play, and resulted in an overwhelming diversity of observational results and consequently in the understanding of the properties of the relativistic outflows, their interaction with the circumsource medium, as well as the surrounding interstellar medium (ISM) and the host galaxies. Here we describe the basic multi-wavelength observational properties of afterglows, of both long- and short-duration GRBs, as obtained with space- (Tab. \[sat\]) and ground-based instruments. The present sample consists of $\sim$550 X-ray and $\sim$350 optical afterglows (see http://www.mpe.mpg.de/$\sim$jcg/grbgen.html).
  Mission/Years       Instrument   Energy range   Localisation       GRBs
  ------------------- ------------ -------------- ------------------ --------------
  BSAX: 1996–2002     GRBM         40–700 keV     omni-directional
                      WFC          2–28 keV       some arcmin        $\sim$30/yr
  HETE-2: 2000–2006   FREGATE      6–400 keV      omni-directional
                      WXM          2–25 keV       10 arcmin          $\sim$10/yr
  INTEGRAL: 2001–     ACS          $>$80 keV      omni-directional
                      ISGRI        20–150 keV     3 arcmin           $\sim$10/yr
  Swift: 2004–        BAT          15–150 keV     3 arcmin           $\sim$100/yr
  AGILE: 2007–        SuperAGILE   10–40 keV      5 arcmin           $\sim$6/yr
  Fermi: 2008–        GBM          8–30000 keV    some deg           $\sim$250/yr
                      LAT          0.1–300 GeV    some arcmin        $\sim$7/yr

  : Main Satellite-Missions contributing to the afterglow sample \[sat\]

Early searches for transient optical emission
---------------------------------------------

Over the first two decades after the discovery of GRBs (until 1996), GRB localizations were either [*delayed but accurate*]{}, e.g., with arcmin accuracy, as provided by the Interplanetary Network ([*IPN*]{} [@hur95]) with typical delays of days, or [*rapid but rough*]{}, e.g., within minutes after the GRB trigger, but with at least $2\degs$ error circles as provided by the BATSE Coordinate Distribution Network system [@bbc96]. Correspondingly, several alternative strategies were pursued: (1) searching for quiescent emission in well-localized error boxes (assuming the existence of quiescent persistent GRB sources), (2) [*post facto*]{} correlating optical monitoring observations temporally overlapping with GRB triggers, and (3) quick follow-up observations after a GRB trigger.

### Searching for persistent quiescent GRB emission

Archival searches for [*optical*]{} transients in small GRB error boxes using large photographic plate collections were initiated at Harvard Observatory [@sbb84], and then performed at several other observatories [@hbw87; @gfw87]. Though more than 130 thousand plates were investigated (see Tab.
\[plates\]) and several optical transients were found, no convincing GRB counterpart was identified except the 2008 report on GRB 920925C [@det08]. The first search for [*quiescent X-ray*]{} sources in 5 GRB error boxes was conducted with the [*Einstein*]{} [@piz86] and [*EXOSAT*]{} satellites [@bo88]. [@gbk95] extended these searches to the [*ROSAT*]{} all-sky-survey data for more than 30 (15) GRB error boxes determined with the 2$^{nd}$ (3$^{rd}$) [*IPN*]{} catalogs. While a number of X-ray sources were found, their identification did not reveal any unusual associations, thus none of these X-ray sources was considered a quiescent GRB counterpart.

  Group        Observatories   No. of GRB error boxes   No. of plates   monitoring time (yrs)
  ------------ --------------- ------------------------ --------------- -----------------------
  Schaefer     Harvard         16                       32000           4.25
  Hudec        Ondřejov        21                       30000           10
  Greiner      Sonneberg       15                       35000           2.6
  Moskalenko   Odessa          40                       40000           1.3
  Schwartz     S. Barbara      7                        photoelectric   0.1

  : Archival Search for GRB optical counterparts \[plates\]

### [*Post-facto*]{} correlation analysis

Historically, the hunt for GRB counterparts began with the systematic search in photographic exposures serendipitously taken during the burst event [@gwm74]. This correlation approach was later extended substantially, and was also done in a variety of passbands, including scanning observations with the Cosmic Background Explorer ([*COBE*]{}) [@bon95]. In the optical band, regular, wide-field sky patrols of two kinds were correlated with GRBs detected with the [*Compton Gamma-Ray Observatory (CGRO)*]{}/Burst And Transient Source Experiment (BATSE): (i) the Explosive Transient Camera (ETC) exposures with a total field of view (FoV) of 40$\times$60 [@vkr95], and (ii) the logistic network of photographic patrols performed at a dozen observatories worldwide [@gwh94].
During over 4 years of operation there were five cases when a BATSE GRB occurred during an ETC observation within or near an ETC FoV. No optical transients were detected, resulting in upper limits for the ratio of gamma-ray to optical luminosities, L$_\gamma$/L$_{opt}$ $\ge$ 2–120. Unfortunately, in all cases of simultaneous exposures only a part (20%–80%) of the rather large BATSE error box ($>2\degs$; see also Chapter 3) was covered. The correlation of BATSE GRBs with photographic wide-field plates of a network of 11 observatories identified simultaneous plates for nearly 60 GRBs, with typical limiting magnitudes of m$_{lim}\approx$2–3 mag for a 1 s duration flash [@gwh94]. These limits would correspond to m$_{lim}\approx$11–12 mag for the canonical afterglow durations discovered in the [*Swift*]{} era (see also Chapter 5). Blink comparison of these plates did not reveal any optical transient (but it did find several new variable stars), resulting in limits for the flux ratio of gamma-rays to optical emission of F$_\gamma$/F$_{opt}$ $\ge$ 1–20. We know today that these limits were too high for the detection of a canonical optical afterglow. Instead, the non-detection of an optical counterpart for nearly 60 GRBs is consistent with very bright afterglows like, e.g., GRB990123 and 080319B being very rare, of order 1–2% of the total afterglow population.

### Rapid follow-up observations of GRBs

Early rapid follow-up observations were done well before the discovery of afterglows in 1997 (see also Chapter 4), but due to the relatively large GRB error boxes these searches were not successful in identifying a plausible counterpart. Already in the early 1990s, these rapid follow-up observations relied on the BAtse COordinate DIstribution NEtwork (BACODINE), which computed and distributed coordinates of bright GRBs (which had smaller error boxes) within typically 5 sec after the GRB trigger to interested observers [@bbc96].
However, for the few bursts within the FoV of the imaging [*CGRO*]{}/COMpton TELescope (COMPTEL), coordinates were determined with much better accuracy and were distributed typically 15–30 minutes after detection via the BATSE/COMPTEL/NMSU network [@kip94]. Both optical and X-ray follow-up observations were performed in the 1990s. Notably, GRB940301 was observed seven hours after the GRB trigger with the 1m Schmidt telescope at Socorro, reaching a limiting magnitude of m$_{V}$ $\approx$ 16 mag [@hmp95]; no optical transient was detected. [*ROSAT*]{} pointed observations were initiated within 4 weeks of two GRBs, namely GRB920501 [@li96] and 940301, which due to their close locations had been dubbed the “[*COMPTEL*]{} repeater” GRB930704/940301 [@gbh96]. None revealed a fading X-ray counterpart.

The [*BeppoSAX*]{} afterglow discovery
--------------------------------------

![Sequence of error circles from $\gamma$-rays to optical for GRB970228, the first GRB for which long-wavelength afterglow emission was identified. Left: The underlying image is from a 34 ksec [*ROSAT*]{}/High-Resolution Imager (HRI) observation [@fga98], with the large circle showing the 3$\sigma$ error circle of the X-ray afterglow as determined with the [*BeppoSAX*]{}/Wide-Field Camera (WFC). The smaller circle is the $\approx$1 arcmin error circle of the fading source SAXJ$0501.7+1146$ found with the two [*BeppoSAX*]{}/Narrow-Field Instrument (NFI) pointings, and the two straight lines mark the triangulation circle derived from the [*BeppoSAX*]{} and [*Ulysses*]{} timings [@hur97]. Right: Optical image taken on 1997 February 28 [@vp97] at the William Herschel Telescope (Canary Islands) with the WFC error circle marked as a dashed segment, the NFI error circle with the dotted segment, and the 10 [*ROSAT*]{}/HRI error box as a full circle. The optical transient (OT) falls right into the [*ROSAT*]{}/HRI error box.
[]{data-label="grb970228"}](grb970228_hriml_sw.ps "fig:"){width="4.8cm"} ![Sequence of error circles from $\gamma$-rays to optical for GRB970228, the first GRB for which long-wavelength afterglow emission was identified. Left: The underlying image is from a 34 ksec [*ROSAT*]{}/High-Resolution Imager (HRI) observation [@fga98], with the large circle showing the 3$\sigma$ error circle of the X-ray afterglow as determined with the [*BeppoSAX*]{}/Wide-Field Camera (WFC). The smaller circle is the $\approx$1 arcmin error circle of the fading source SAXJ$0501.7+1146$ found with the two [*BeppoSAX*]{}/Narrow-Field Instrument (NFI) pointings, and the two straight lines mark the triangulation circle derived from the [*BeppoSAX*]{} and [*Ulysses*]{} timings [@hur97]. Right: Optical image taken on 1997 February 28 [@vp97] at the William Herschel Telescope (Canary Islands) with the WFC error circle marked as a dashed segment, the NFI error circle with the dotted segment, and the 10 [*ROSAT*]{}/HRI error box as a full circle. The optical transient (OT) falls right into the [*ROSAT*]{}/HRI error box. []{data-label="grb970228"}](ot_groot_big2.eps "fig:"){width="5.6cm"} The launch in 1996 of the Italian-Dutch Satellite per Astronomia X, [*SAX*]{}, ushered in a major breakthrough in our understanding of GRBs (for a detailed description of the [*SAX*]{} results see also Chapter 4). Its unprecedented localization accuracy ($\sim$5 arcmin; 2–35 keV) and rapid notification (within minutes of the GRB) were coupled with its fast slewing capability (a few hours) and repointing with its co-aligned narrow-field X-ray telescopes. Despite the fact that only $\approx$3.5% of its total observing time (or 1.5% of all observations) was spent on GRBs, [*BeppoSAX*]{} brought a revolution in the field of GRBs, allowing the tools of optical/NIR/radio astronomy to be applied to these fascinating objects. The follow-up observation of GRB970228 led to the discovery of the first X-ray and optical afterglow (Fig.
\[grb970228\]) [@cos97; @vp97]. The next important event, the rapid localization of GRB970508, allowed the first measurement of the GRB distance scale via optical spectroscopy. GRB970508 was also the burst with the first radio afterglow. These multi-wavelength observations provided the first observational evidence for the fireball scenario [@mdk97; @fkn97]. Subsequent measurements within the next two years demonstrated the extragalactic nature of GRBs through more redshift measurements of the optical afterglow emission as well as of the host galaxies, and firmly established GRBs as the most luminous objects known in the Universe. The year 1998 also saw the discovery of GRB980425, which was subsequently associated with a supernova (SN1998bw) [@gvp98]. During its lifetime, [*BeppoSAX*]{} observed 56 GRBs and slewed to 36 of these [@pis04], typically within 5–24 hrs (average around 8 hrs). X-ray afterglows were discovered in over 90% of the cases and their fundamental properties were established. It was found that the X-ray flux fades with a power-law dependence $t^{-\alpha}$, with $\alpha \sim 1.4$ [@piro01]. The X-ray spectrum is well described by a power law $\nu^{-\beta}$ of slope $\beta \sim 0.9$. The observed absorption is, within the errors, always compatible with the Galactic foreground absorption. The observed flux at a given time after the burst, which is proportional to $(1+z)^{\beta - \alpha}$, shows a fairly narrow distribution, since the cosmological spectral redshift (K correction) and temporal decay roughly compensate each other: the mean flux in the $1-10$ keV band at 11 hrs after the burst is about $5\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ [@piro01]. The overall energy emitted in this late afterglow phase ($>6-8$ hrs) is typically a few percent of the GRB energy.
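The rough compensation described here can be checked directly with the quoted numbers. The sketch below (an illustration using the text's $\alpha\approx1.4$, $\beta\approx0.9$, not the authors' own computation) evaluates the $(1+z)^{\beta-\alpha}$ scaling of the flux at a fixed observer time:

```python
def redshift_factor(z, alpha=1.4, beta=0.9):
    # Flux at a fixed observer time scales as (1+z)**(beta - alpha):
    # K correction and temporal decay nearly cancel for these indices.
    return (1.0 + z) ** (beta - alpha)

# Even out to z = 5 the factor only drops by ~2.5x, which is why the
# 1-10 keV fluxes at 11 hr cluster near 5e-13 erg/cm^2/s:
print([round(redshift_factor(z), 2) for z in (0, 1, 3, 5)])  # -> [1.0, 0.71, 0.5, 0.41]
```

A steeper decay (larger $\alpha$) or harder spectrum (smaller $\beta$) would break this near-cancellation and widen the observed flux distribution.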
We briefly note here two major results for which [*BeppoSAX*]{} laid the foundations: (i) Jet breaks: Being a geometrical effect, jet breaks in afterglow light curves are achromatic, and indeed a number of cases with such breaks at $0.5-1$ days after the burst were detected. This provided the early observational evidence of beaming in GRBs. (ii) Confirmation of the basic synchrotron scenario: The broad-band spectral energy distribution (SED) was predicted to consist of four segments with different power-law slopes. The breaks in the SED were found only in very few cases, first in GRB970508 [@wig99], but provided the first observational evidence of a rather low circumburst density (0.03 cm$^{-3}$) and a large equivalent isotropic energy (3$\times$10$^{52}$ erg). Further details on both topics are given in Chapters 8 and 11.

Multiwavelength observations
----------------------------

The detection of the first optical afterglow(s) sparked an international observing effort which was unique, except perhaps for SN1987A. All major ground-based telescopes were used at optical, infrared as well as radio wavelengths, and basically every space-borne observatory since then has observed GRBs. The [*HETE-2*]{} satellite [@ric02], launched in October 2000, continued to provide rapid, arcmin-sized GRB localizations at a rate of about 2 per month after [*BeppoSAX*]{} had been switched off in April 2003. [*Swift*]{}, launched in November 2004, revolutionized our knowledge of the afterglow phenomena. Over the last 13 years (February 1997 – June 2010) a total of 870 GRBs have been localized within a day to error boxes of less than one square degree, and X-ray afterglows have been detected for basically every burst for which X-ray observations were made within a few days (see http://www.mpe.mpg.de/$\sim$jcg/grbgen.html).
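The four-segment SED mentioned in (ii) can be made concrete. The sketch below builds a continuous broken power law with the standard slow-cooling synchrotron slopes ($\nu^{2}$, $\nu^{1/3}$, $\nu^{-(p-1)/2}$, $\nu^{-p/2}$); the break frequencies and electron index $p$ are illustrative placeholders, not fitted GRB970508 values.

```python
# Piecewise power-law SED with the four standard synchrotron segments
# (slow cooling): slopes 2, 1/3, -(p-1)/2, -p/2 between the
# self-absorption (nu_a), injection (nu_m) and cooling (nu_c) breaks.
# All break frequencies and p below are illustrative only.

def sed(nu, nu_a=1e9, nu_m=1e12, nu_c=1e15, p=2.5, f_peak=1.0):
    """Flux density (arbitrary units), continuous across all three breaks."""
    if nu < nu_a:
        return f_peak * (nu_a / nu_m) ** (1.0 / 3.0) * (nu / nu_a) ** 2
    if nu < nu_m:
        return f_peak * (nu / nu_m) ** (1.0 / 3.0)
    if nu < nu_c:
        return f_peak * (nu / nu_m) ** (-(p - 1) / 2.0)
    return (f_peak * (nu_c / nu_m) ** (-(p - 1) / 2.0)
            * (nu / nu_c) ** (-p / 2.0))

# Continuity check at the cooling break:
left, right = sed(1e15 * (1 - 1e-9)), sed(1e15)
print(abs(left - right) / right < 1e-6)  # -> True
```

Measuring the positions of these breaks across the radio-to-X-ray band is what constrained the circumburst density and isotropic energy quoted above for GRB970508.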
### Contemporaneous, prompt multiwavelength emission

  GRB       Brightness (mag)   Filter       Time after GRB (sec)   Reference
  --------- ------------------ ------------ ---------------------- -------------
  080319B   3.8                K$_s$        65                     [@bpl09]
  080319B   5.4                V            53                     [@rks08]
  990123    8.9                white        50                     [@abb99]
  061007    9.9                white        94                     [@raa09]
  060117    10.1               R$_c$        129                    [@jpk06]
  060418    10.2               K$^\prime$   168                    [@mvm07]
  061126    11.0               K$_s$        137                    [@pbb08]
  081203A   11.6               I$_c$        415                    [@wmb08]
  081121    11.6               white        60                     [@yur08]
  090102    11.8               H            102                    [@gkp10]
  030329    11.9               J            8100                   [@nhk03]

  : Top 10 brightest optical afterglows of GRBs. Another 19 afterglows reached a maximum brighter than 15 mag in one color. \[brightAG\]

Some optical afterglows have shown substantial variability at early times. One can distinguish a component which tracks the prompt gamma-rays (GRB041219A [@vest05; @bbs05], GRB050820A [@vest06], GRB080319B [@rks08]) and an afterglow component which starts during or shortly after the prompt phase (GRB990123 [@abb99], 021211 [@lfc03], GRB 060111B [@kgs06]). The former component has been attributed to internal shocks, while the latter was interpreted as reverse shock emission, e.g. [@sap99; @mer99]. The internal shock emission is relativistic, and the timescales in the observer frame are shortened by $\Gamma^{-2}$, with $\Gamma$ being the bulk Lorentz factor, which typically is assumed to be of order 300–500. The reverse shock is predicted to occur with little delay with respect to the gamma-ray emission (unless the Lorentz factor is very small), and the corresponding optical emission decays with a power-law index of $2$ for a constant-density environment, or up to $2.8$ for a wind density profile [@kob00].

### Dark bursts \[darksec\]

Originally, those GRBs with X-ray afterglows but without optical detection (about 50%) were coined “dark GRBs”.
The “darkness” in the optical was assumed to be due to one (or more) of several reasons [@fyn01]: the afterglow could (i) have an intrinsically low luminosity, e.g., due to a low-density environment or low explosion energy, (ii) be strongly absorbed by intervening material, either very local around the GRB, or along the line-of-sight through the host galaxy, or (iii) be at high redshift ($z>6$) so that Ly$\alpha$ blanketing and absorption by intervening Lyman-limit systems would prohibit detection in the $R$ band (most frequently used in the optical). An analysis of a subsample of GRBs, namely those with particularly accurate positions provided with the Soft X-ray Camera on [*HETE-2*]{}, showed that optical afterglows were found for 10 out of 11 GRBs [@villa04]. This suggested that the majority of dark GRBs are neither at high redshift nor strongly absorbed, but just faint, i.e., the spread in afterglow brightness at a given time after the GRB is much larger than previous observations had indicated. However, since 2004 the [*Swift*]{} observations have provided a plethora of locations at the few arcsec level within minutes of the GRB, and the fraction of dark bursts is still above $\sim$30%. Very recently, a sample with nearly complete afterglow detections was reported, which had been created by selecting those GRBs for which observations with the Gamma-Ray Burst Optical/Near-Infrared Detector [*GROND*]{} (operated at the 2.2m telescope at the La Silla Observatory [@gbc08]) started within 30 min after the burst [@gkk10]. With a 95% detection completeness and a simultaneously obtained 7-band spectral energy distribution for all these bursts, rest-frame extinction $A_{\rm V}$ is accurately measured for the first time in a coherent way. Substantially more bursts with $A_{\rm V} >0.5$ mag are found than in previous samples [@kkz10], and in many cases a moderate redshift (in the 1–3 range) enhances the effect in the observer frame. 
The properties of this sample demonstrate that the darkness can be explained by a combination of (i) moderate extinction at moderate redshift, and (ii) a ($\sim$10%) fraction of bursts at redshift $z>5$. This strengthens similar earlier suggestions [e.g. @ckh09; @pcb09], which were based on a combination of early detections and host-galaxy studies of the non-detected afterglows.

### Spectral lines

Line detections have been reported at optical and X-ray wavelengths. Early X-ray line detections were based on [*BeppoSAX*]{}, [*ASCA*]{}, [*Chandra*]{} and XMM-[*Newton*]{} observations. A comprehensive analysis of $>$200 [*Swift*]{} bursts, however, did not reveal any significant X-ray lines [@rcm08; @hvo08]. Therefore, in the following we will confine ourselves to optical lines. Optical/NIR spectroscopy of afterglows usually reveals absorption lines of (typically more than one) system along the line of sight between the GRB and the observer. The system with the largest redshift is then assigned as the redshift of the GRB. Formally, these absorption redshifts are still lower limits, but one would have to assume a contrived empty environment if the GRB were at a much larger redshift than the last absorption system and yet left no measurable imprint in the spectrum. Moreover, the detection of a Lyman cutoff or of (even time-variable) lines from fine-structure levels provides stringent limits. Measurements of the equivalent widths of the absorption features (e.g., Fig. \[UVpump\]) allow us to derive column densities of metal lines and neutral hydrogen, as well as the metallicity and dust content along the line of sight. A special case is that of absorption lines from fine-structure and other metastable levels of ions such as O$^o$, Si$^+$ and Fe$^+$, which are ubiquitous in GRB-Damped Lyman $\alpha$ systems (DLAs) [@vel04; @cpb05; @bpc06; @pcb06], and very likely are excited by the GRB emission.
Interestingly, in about half of the cases the GRB-DLAs exhibit column densities of $N_H \sim 10^{22}$ cm$^{-2}$ or above [@fjp09]. This is in contrast to the very few such systems among QSO-DLAs [@nlp08], and is likely due to the smaller range of galactocentric distances probed by GRB sightlines (for a detailed discussion of GRB-DLAs see also Chapter 13). An interesting puzzle was brought up by @pro06, namely that the number density of strong (equivalent widths $>$ 1 Å) intervening Mg[ii]{} absorbers detected in GRB afterglow spectra at redshifts $0.5<z<2$ is nearly 4 times larger than that in QSO spectra. Similar analyses based on a different dataset found a factor of 2 larger incidence rate (with higher significance than the earlier factor of 4), but only for strong absorbers; for weaker absorbers (equivalent widths in the 0.3–1.0 Å range), the incidence rate was consistent with that in QSO spectra [@vpl09; @tlp09]. A similar study with C[iv]{} absorbers did not reveal any differences between GRB and QSO sightlines [@ssv07]. A number of possible explanations have been proposed [@pvl07], including a dust-extinction bias, different beam sizes of the sources, or lensing amplification, but none has so far provided a conclusive solution to this discrepancy. Depending on the afterglow brightness, early spectra might already show host emission lines. Usually, however, host spectroscopy is done once the GRB afterglow has faded away. To date (2010), there has not been a single case of emission lines being at a larger or smaller redshift than the highest-redshift absorption system, supporting the assumption that the GRB belongs to the corresponding host galaxy. Besides giving the redshift, the observed emission lines of O[ii]{}, \[O[iii]{}\] and the Balmer series are used to infer the global extinction, metallicity and star-formation rate [@sgb09; @lbk10]. Note that in this case these are host-integrated quantities, in contrast to the line-of-sight measurements via absorption lines.
![[**Left:**]{} Absorption line profiles for a variety of transitions detected at the GRB060418 redshift. Red lines show the Voigt-profile fits of low-ionization species. [**Right:**]{} Observed total column densities for the fine-structure lines (open circles), the first metastable level (filled triangle) and the second metastable level (filled squares) of FeII (top) and the total column densities for NiII (bottom). Lines are the best-fit UV pumping model. From @vls07](Vreeswijk_AA468p83_f1.ps "fig:"){width="4.5cm"} ![[**Left:**]{} Absorption line profiles for a variety of transitions detected at the GRB060418 redshift. Red lines show the Voigt-profile fits of low-ionization species. [**Right:**]{} Observed total column densities for the fine-structure lines (open circles), the first metastable level (filled triangle) and the second metastable level (filled squares) of FeII (top) and the total column densities for NiII (bottom). Lines are the best-fit UV pumping model. From @vls07](Vreeswijk_AA468p83_f8.ps "fig:"){width="6.4cm"} \[UVpump\]

### Line Variability

Variability of both absorption and emission lines has been searched for, though on substantially different timescales. Variable absorption lines, involving the fine structure of the ground level and other metastable energy levels of Fe$^+$ and Ni$^+$, were first modelled (Fig. \[UVpump\]) for GRBs020813 and 060418 [@dzcp06; @vls07]. It was demonstrated that these lines are formed by UV pumping, i.e. excitation to an upper level due to the absorption of a UV photon, followed by de-excitation cascades, as suggested by @pcb06. This interpretation allowed the first determination of the distance between the GRB and its DLA, an astonishing 1.7 kpc. Later re-modelling with a different set of atomic abundances increased this distance to 2.0$\pm$0.3 kpc [@lvs09].
For GRB050730, the same authors derived a distance of 440$\pm$30 pc to the near side of the cloud, for a cloud of 520$^{+240}_{-190}$ pc size along the line of sight. This is in contrast to a distance of only about 50–100 pc out to which the GRB radiation can ionize hydrogen. The global picture derived from modelling these variable lines is that of the absorber being a large, diffuse cloud with a broadening parameter and physical size typical of the Galactic ISM, with low metallicity and low dust content, and at a distance of at least 0.1–1 kpc from the GRB [@lvs09]. Variability of the emission lines was expected since the GRB prompt and afterglow emission ionizes its surroundings out to substantial distances. Depending on the density of the circumburst medium, this leads to recombination lines over timescales of years which could compete with the emission lines usually assigned to star formation. In fact, it had been proposed to use the GRB-induced lines for the identification of remnants of GRBs in nearby galaxies [@bah92; @prl00]. Such a search was indeed conducted for the host galaxy of GRB990712, but no variability was found in the \[OIII\] $\lambda$5007 line over a timescale of 6 years [@aky06].

### Continuum variability

#### Early lightcurve behaviour

The early-time GRB afterglow behaviour depends strongly on the wavelength range considered. At soft X-rays, [*Swift*]{} has found surprisingly rapid variability in both short- and long-duration GRBs. Yet, many of the early light curves show a canonical behaviour with three distinct power-law segments [@ncg06]: a bright, rapidly declining ($t^{-\alpha}$, with $\alpha > 3$) emission, which smoothly connects to the prompt emission both temporally and spectrally [@tgc05; @bcg05], followed by a steep-to-shallow transition, which is usually accompanied by a change in the power-law index of the spectrum.
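The canonical three-segment behaviour can be sketched as a continuous piecewise power law. The break times and decay indices below are illustrative values assumed for the sketch, not fits to any particular burst:

```python
def canonical_xray_flux(t, f0=1.0,
                        breaks=(300.0, 1.0e4),     # assumed break times [s]
                        alphas=(3.5, 0.5, 1.3)):   # assumed decay indices
    """Sketch of the canonical X-ray light curve: steep decline,
    shallow phase, then normal afterglow decay.  Segments are matched
    at the break times so the curve is continuous (arbitrary flux units)."""
    t1, t2 = breaks
    a1, a2, a3 = alphas
    if t <= t1:
        return f0 * t ** (-a1)
    f1 = f0 * t1 ** (-a1)            # flux at the steep-to-shallow break
    if t <= t2:
        return f1 * (t / t1) ** (-a2)
    f2 = f1 * (t2 / t1) ** (-a2)     # flux at the shallow-to-normal break
    return f2 * (t / t2) ** (-a3)
```

Evaluating this from tens of seconds to days reproduces the qualitative steep–shallow–normal shape described above.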
The first break has been interpreted [@ncg06; @zfd06] as marking the time when the slowly decaying forward-shock emission becomes dominant over the rapidly declining tail emission of the prompt $\gamma$-rays seen from large angles [@kup00]. The subsequent shallow phase is commonly interpreted as due to continuous energy injection into the external shock [@ncg06; @zfd06], which implies that most of the energy in the afterglow shock was either injected at late times after the prompt $\gamma$-ray emission, or was originally in slow material that would not have contributed to the prompt emission. This shallow phase then transitions into the late afterglow phase with no clear evidence for a spectral change.

![Representative examples of X-ray afterglow light curves of long (left) and short-duration (right) GRBs. From @grf09.[]{data-label="XAG_lc"}](Swift_XAG_ARAA47p567_f6.ps){width="\textwidth"}

![Representative examples of optical light curves of long-duration GRBs as measured with GROND. The light curve diversity is similar to that in X-rays. []{data-label="OAG_lc"}](GROND_multiGRB_lc.eps){width="\textwidth"}

Extended emission lasting about 100 sec has been detected at hard X-rays and gamma-rays in about 25% of the short bursts [@nob06]. Though these tails were already known from [*HETE-2*]{} [@vlr05] and [*CGRO*]{}/BATSE [@lrg01; @conn02; @nob06], a systematic study only became possible with [*Swift*]{} [@ngs10], since this emission is rather soft. It has spurred debate on whether it is afterglow or prompt emission. The optical afterglow behaviour is at least as diverse as the X-ray one (Fig. \[OAG\_lc\]): a large fraction of the afterglows show the canonical smooth power-law decay, but some show a completely different behaviour.
There are rare cases (like GRBs 990123 or 080319B) which are dominated by very bright, fast-decaying emission, usually interpreted as the appearance of the reverse shock ([@abb99; @rks08; @bpl09], but see e.g. [@geg09] for an alternative interpretation). About 10–20% of the optical afterglows exhibit an increase in their brightness during the first few hundred seconds. This has been observed both with the [*Swift*]{}/Ultra Violet Optical Telescope (UVOT) [@ops09] and with fast-slewing telescopes from the ground [@rsp04; @qry06; @yar06; @mvm07; @kkg08; @cdk08; @kgm09]. No color evolution, however, was seen during the rise and the turn-over towards decay. The deceleration of the forward shock by the external medium has been favoured as an explanation for those light-curve shapes where the rise and subsequent decay can be modelled with a broken power law. The time of maximum light was also used to derive initial Lorentz factors of 80–300. Cases where the decay showed another break at early times, and where the power-law indices of the rise and first decay did not match the standard fireball prediction, have been interpreted as a signature of jet emission seen off-axis [@panaitescu08]. Assuming that the jet structure has a power-law angular distribution, there is a correlation between the initial rise time and the slope of the first fading after maximum, which can explain the observed diversity of light curves [@panaitescu08] – though applications to larger samples have not confirmed this trend [@kba09; @kkz10]. An interesting consistency check is now possible with measurements from [*Fermi*]{} and [*INTEGRAL*]{}: the Lorentz factor can also be determined from the variability of the gamma-ray emission [@lis01]. For GRB080928 [@rosk10] this comparison has been attempted for the first time, and the two values of the initial Lorentz factor are indeed broadly consistent.
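As a numerical illustration of the deceleration argument, the bulk Lorentz factor at the light-curve peak can be estimated from the standard thin-shell fireball relation for a constant-density medium. The burst parameters below ($E_{\rm iso}$, density, peak time, redshift, efficiency) are assumed round numbers chosen for the sketch, not measurements of any particular burst:

```python
import math

M_P = 1.6726e-24   # proton mass [g]
C = 2.9979e10      # speed of light [cm/s]

def lorentz_factor_at_peak(e_iso_erg, n_cm3, t_peak_s, z, eta=0.2):
    """Bulk Lorentz factor at the afterglow onset (thin-shell case):
    Gamma(t_peak) ~ [3 E_iso / (32 pi n m_p c^5 eta t_z^3)]^(1/8),
    with t_z = t_peak / (1 + z) and eta the radiative efficiency."""
    t_z = t_peak_s / (1.0 + z)
    return (3.0 * e_iso_erg /
            (32.0 * math.pi * n_cm3 * M_P * C**5 * eta * t_z**3)) ** 0.125

# Assumed round numbers: E_iso = 1e53 erg, n = 1 cm^-3, peak at 300 s, z = 1.
gamma_peak = lorentz_factor_at_peak(1.0e53, 1.0, 300.0, 1.0)
gamma_0 = 2.0 * gamma_peak   # initial Lorentz factor ~ twice the value at peak
```

With these inputs the estimate lands in the upper part of the 80–300 range quoted above; because of the one-eighth power, the result depends only weakly on the poorly known energy and density.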
#### Jet-breaks

Due to relativistic beaming, an observer will detect emission only from within an angle $\sim 1/\Gamma$ of the line of sight to the GRB blast wave (see also Chapter 11). The afterglow is thus a signature of the geometry of the ejecta. Until the blast wave has decelerated to the point where $1/\Gamma$ equals the jet opening angle, its gradual fading is partly compensated by the increasing visible emission region. Once $1/\Gamma$ exceeds the opening angle, the observed emission decays with a power-law index of $>$2, steeper than any spherically symmetric model predicts. Since this transition is a geometric effect, the slope change in the afterglow light decay should be achromatic [@rho99], that is, observable at all wavelengths at the same time. In the pre-[*Swift*]{} era, this achromatic steepening was commonly reported in the optical afterglows and interpreted as the indication of beamed emission. Using the pre-[*Swift*]{} data, collimation factors of $\Omega/4\pi \approx 0.01$, corresponding to half-opening angles of $\sim 8^{\circ}$, were derived from the timing of these breaks [@fks01; @bfk03]. However, with [*Swift*]{} only a small fraction of bursts has been reported with convincing evidence for an X-ray jet break [@rlb09]. Today a general consensus has developed according to which the breaks in [*Swift*]{}-detected bursts occur at later times due to their larger mean redshift, and thus at flux levels below the sensitivity of standard follow-up campaigns. Recent results of a dedicated long-term monitoring of X-ray afterglows with [*Chandra*]{} seem to recover jet breaks for about 40% of the [*Chandra*]{}-observed bursts [@bur10].

#### X-ray flares

GRB050502B provided one of the first examples of the dramatic X-ray flaring activity in the early afterglow evolution [@brf05; @fbl06]. This burst also demonstrated that X-ray flares (measured up to 10 keV) can contain an energy comparable to the one emitted during the prompt GRB phase in the 15–300 keV band.
Surprisingly, X-ray flares have been seen in long- and short-duration GRBs, as well as at low and high redshifts: even GRB090423 at z$\sim$8.2 exhibited a flare with rather standard properties [@cmm10]. The majority of the flares occur during the first 10$^3$ s after the GRB trigger, but some have also been seen as late as 10$^5$ or even 10$^6$ s [@cso08] (see next section). The flares are relatively sharp, with $\Delta t / t \sim 0.1$, and are spectrally different (harder) from the underlying afterglow emission. There is considerable spectral evolution during a flare, with a hardening during the rise followed by a softening during the decay [@gpg07; @kgm07; @gpo07]. The first case where these flares were seen simultaneously in the optical/NIR was GRB071031 [@kgm09], which showed that the peak of the emission shifts at late times from the few-keV band into the UV. Given that the flare phenomenology is very analogous to that of the prompt gamma-ray emission, it is now generally accepted that X-ray flares and gamma-ray pulses are produced by the same mechanism.

#### Early time afterglow features (“humps”)

Some GRB afterglows (GRBs021004, 030329) exhibited “humps” on top of the canonical optical fading at timescales of 10$^4$–10$^5$ s after the GRB onset [@lrc02; @log04]. Originally, these humps were interpreted as the interaction of the blast wave with moderate density enhancements in the ambient medium, with a density contrast of order 10 [@lrc02]; later models employed additional energy-injection episodes [@bgj04]. Optical afterglow variability due to the interaction with the ISM is not expected later than 10$^6$ s because the blast wave, once it has swept up enough interstellar material to produce the canonical afterglow emission, is thought to be only mildly relativistic. It is possible, though not easy to prove due to the lack of X-ray observations, that these humps are related to the X-ray flares discussed in the previous section.
#### Late afterglow features: Supernovae and something else?

There is now general consensus that the long/soft [@kou93] GRBs are intimately connected to the deaths of massive stars. About 70% of core-collapse supernovae (SNe) are of type II; type Ib/c supernovae are one of the peculiar sub-classes that make up the other 30%. While the supernova-GRB connection was proposed some years ago [@gvp98; @iwa98], the unambiguous spectroscopic identification of the lowest-redshift long-duration GRBs as supernovae during the last decade provided convincing evidence for this association [@hjo03; @sta03] (Fig. \[snspec\]). The supernovae in the five spectroscopically confirmed gamma-ray bursts (GRB980425/SN1998bw, 030329/2003dh, 031203/2003lw, 060218/2006aj and 100316D/2010bh) are all of type Ic, with unusually large kinetic energy (very large expansion velocities of order 10–30 thousand km/s were measured after 10 days) and ejected mass of radioactive $^{56}$Ni; such SNe were called hypernovae by @pac98. The latter property in particular suggests progenitors with masses of $\sim$40 $M_{\odot}$ [@nom04], though the detailed analysis of the light curve and spectra of GRB 060218 / SN 2006aj showed that the initial mass was only $\sim$20 $M_{\odot}$, indicating a possibly broader range of progenitor masses leading to a GRB [@mdn06]. Theoretically, SNe Ib/c are favoured over type II because the former typically have smaller envelope masses, and are thus thought to allow an easier break-out of the GRB jet. Moreover, the lack of hydrogen lines in the GRB afterglow spectra is consistent with the collapsar model, where the progenitor star lost its hydrogen envelope to become a Wolf-Rayet star before collapsing.
In contrast to these relatively similar spectroscopic properties among the GRB-SNe, the $\gamma$-ray emission properties of the corresponding GRBs differ in their total emitted energy [@krg07], temporal profile and spectral shape, implying that the $\gamma$-ray properties are not determined by the progenitor mass, but most likely by entirely different properties [@gfp06]. Two noteworthy exceptions to this picture of SN detections associated with the nearest bursts are GRB060505 and GRB060614. A host galaxy at z=0.125 was associated with GRB060614 based on two optical emission lines; moreover, this was clearly a long-duration burst (T$_{90}$=102 s). However, no SN was found in the error box of GRB060614 [@fwt06; @dcp06] to limits about a factor of 100 fainter than previous detections. Both bursts have spurred extensive discussions on the homogeneity of the class of GRB-SNe, and on the classification of GRBs [@gfp06; @gnb06; @zzv09]. Gehrels et al. (2006) proposed a third parameter for GRB classification based on spectral lags (the difference in arrival times between high- and low-energy photons) and peak luminosities. According to this criterion, the spectral lag of GRB060614 would place this burst entirely within the short-duration GRB subclass [@gnb06].

![image](grb030329_snspec_Hjorth.ps){width="8cm"}

Finally, a number of GRBs show optical humps at late times, around $10^5-10^6$ s, earlier than the expected appearance of their related SNe [@mkr10]. Multi-color light curves show that these humps are achromatic, excluding very early SNe. The cause of these humps is still a matter of debate.

#### Very late afterglow evolution \[dipank\]

The expansion of GRB afterglows, while being initially ultra-relativistic, slows down in the course of time and eventually enters a sub-relativistic phase after several tens to hundreds of days.
With the notable exception of GRB060729 [@gbw10], most afterglows are too faint to be detectable at most wavelengths at such late times, and their observations are confined mainly to low-frequency radio bands. Despite a large number of afterglow detections at radio wavelengths [see, e.g. @frail+03], only two well-studied examples exist so far of observations in multiple radio bands deep into the non-relativistic phase: GRB970508 and GRB030329. For the former, radio follow-up in the 1.4 GHz to 8.4 GHz range was conducted for more than 400 days post-burst, while the transition to non-relativistic expansion occurred at $\sim 100$ days [@fwk00]. In the case of GRB 030329, radio observations at several frequencies (610 MHz to 4.8 GHz) over $\sim 1200$ days after the burst have been reported [@horst07]. The non-relativistic transition time in this case was estimated to be $\sim 60-80$ days. Observations well within the non-relativistic phase provide a useful additional tool to derive the physical parameters of the burst, in particular the total (bolometric) energy [@onp05; @krg07]. The dynamics in this regime is governed by the Sedov-Taylor solution, which is different from the Blandford-McKee solution describing the early relativistic phase before the jet break. Burst parameters derived from the non-relativistic phase alone may, therefore, be considered as a set of independent measurements, which serve as a useful check on the quantities derived from the relativistic-phase evolution. Multiband modelling of the relativistic phase needs to include a description of the angular distribution of the energy and Lorentz factor of the outflow, which remains uncertain even in the presence of a well-determined jet break. In the deep non-relativistic phase, however, the expansion of the blast wave is expected to have become nearly isotropic, so the energy estimates are much less prone to uncertainties arising from collimation effects.
The total energy $E_{\rm ST}$ estimated for the Sedov-Taylor non-relativistic phase, together with the isotropic equivalent energy $E_{\rm iso}$ estimated from burst fluence and relativistic phase modelling, provide a useful indicator of the degree of initial collimation of the relativistic outflow. In the case of GRB 970508 the estimated values of $E_{\rm ST}$ and $E_{\rm iso}$ are $\sim 5\times 10^{50}$ erg and $\sim 10^{52}$ erg, respectively, suggesting an initial collimation angle $\leq 20^{\circ}$ [@fwk00]. For GRB 030329, the corresponding estimates are $\sim 8\times 10^{50}$ erg and $\sim 7-8\times 10^{51}$ erg, respectively [@bkf04; @frail+05; @horst07]. Several microphysical quantities may in fact be a function of the dynamical regime, and hence may not have the same value in the relativistic and the non-relativistic phase. These may include parameters such as $\epsilon_{\rm e}$, the fraction of the total energy resident in relativistic electrons, $\epsilon_B$, the fraction of the total energy resident in post-shock magnetic field, and $p$, the power-law index of the electron energy distribution. By modelling the relativistic and the non-relativistic phase evolution separately, one may in principle be able to conclude whether these microphysical parameters are indeed different in the two phases. Obtaining a complete solution for physical parameters in the non-relativistic phase requires the measurement of all three spectral breaks, $\nu_{\rm a}$, $\nu_{\rm m}$ and $\nu_{\rm c}$. Multi-band radio light curves can be used to determine the first two of these breaks, but a direct measurement of the cooling frequency in the non-relativistic phase has not yet been possible, in the absence of high frequency observations. As an approximate estimate, one uses the value of $\nu_{\rm c}$ extrapolated from an earlier, relativistic phase to infer the physical parameters. 
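The collimation argument is simple to make quantitative: equating the beaming fraction $f_b \approx \theta^2/2$ (small-angle approximation for a double-sided jet) with the energy ratio $E_{\rm ST}/E_{\rm iso}$ gives the half-opening angle directly. A minimal sketch, using the GRB 970508 numbers quoted above:

```python
import math

def half_opening_angle_deg(e_st_erg, e_iso_erg):
    """Jet half-opening angle implied by f_b = E_ST / E_iso ~ theta^2 / 2
    (small-angle approximation, double-sided jet)."""
    return math.degrees(math.sqrt(2.0 * e_st_erg / e_iso_erg))

# GRB 970508: E_ST ~ 5e50 erg, E_iso ~ 1e52 erg (values quoted in the text)
theta_970508 = half_opening_angle_deg(5.0e50, 1.0e52)
```

This yields $\theta \approx 18^{\circ}$, consistent with the quoted limit of $\leq 20^{\circ}$.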
Because of this partial lack of information and also the uncertainties inherent in the measurement of spectral parameters, it is not yet possible to state with confidence whether the microphysical parameters are indeed different between the relativistic and the non-relativistic phase. Nevertheless, in both GRB 970508 and in GRB 030329 one finds that in the non-relativistic phase the energy in relativistic electrons and that in the magnetic field are nearly in equipartition [@fwk00; @horst07], while in the relativistic phase the derived estimates of $\epsilon_{\rm B}$ tend to be significantly smaller than those of $\epsilon_{\rm e}$ [see, e.g. @pk01; @pk02]. Another important measurement that is made possible by the long-lasting radio follow-up of an afterglow is that of the expansion rate of the blast wave. In the case of GRB 970508 an apparent superluminal transverse expansion was inferred from the evolution of the modulation index of the scintillating flux at 8.5 GHz [@fkn97]. Early in the evolution, the radio flux showed significant fluctuations (up to $\sim 50$%), as would be expected due to interstellar scintillation of a source of very small angular size. This scintillation gradually decreased with time, and became nearly imperceptible after $\sim 50$ days. This evolution can be attributed to an increase of the angular size of the source with time. The expansion rate derived from these observations was $\sim 3\,{\mu}\mbox{\rm as}$ in $\sim 2$ weeks, which, at the redshift of the source (z=0.835), amounted to a transverse expansion speed of $\sim 4$ times the speed of light. Using the standard interpretation of superluminal motion, this would suggest that the average bulk Lorentz factor of the blast wave $\sim 2$ weeks after the burst was $\sim 4$ [@fkn97]. 
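The $\sim 4$ times the speed of light figure can be checked with a short calculation: convert the angular expansion rate into a physical transverse size via the angular-diameter distance, then divide by the source-frame elapsed time. The flat $\Lambda$CDM parameters below ($H_0 = 71$ km/s/Mpc, $\Omega_m = 0.27$) are assumed for the sketch:

```python
import math

H0 = 71.0           # Hubble constant [km/s/Mpc] (assumed)
OMEGA_M = 0.27      # matter density parameter (assumed, flat universe)
C_KMS = 2.9979e5    # speed of light [km/s]
C_CMS = 2.9979e10   # speed of light [cm/s]
MPC_CM = 3.0857e24  # centimetres per megaparsec

def comoving_distance_mpc(z, steps=10000):
    """Line-of-sight comoving distance D_C = (c/H0) * int_0^z dz'/E(z'),
    evaluated by trapezoidal integration."""
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e_z = math.sqrt(OMEGA_M * (1.0 + zi) ** 3 + (1.0 - OMEGA_M))
        total += (0.5 if i in (0, steps) else 1.0) / e_z
    return (C_KMS / H0) * total * dz

z = 0.835                                                # GRB 970508
d_a_cm = comoving_distance_mpc(z) / (1.0 + z) * MPC_CM   # angular-diameter distance
theta_rad = 3.0e-6 * math.pi / (180.0 * 3600.0)          # 3 micro-arcsec in radians
dt_obs_s = 14.0 * 86400.0                                # ~2 weeks of observer time
size_cm = theta_rad * d_a_cm                             # transverse size increase
v_apparent_over_c = size_cm * (1.0 + z) / dt_obs_s / C_CMS
```

With these inputs the apparent transverse speed comes out near $3.5c$–$4c$, matching the value inferred from the scintillation data.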
In the case of GRB 030329 it has been possible to directly measure the angular extent of the expanding source using Very Long Baseline Interferometry (VLBI) at several epochs over nearly 3 years following the burst [@tfb04; @ptg07]. These measurements show an apparent superluminal transverse expansion rate in the early phase ($v \sim 6c$ at $\sim 20$ days after burst), which gradually becomes sub-luminal around $\sim 1$ yr after the burst. The evolution of the apparent transverse size can be used to distinguish between several possible models of post jet-break lateral expansion of the blast wave – the available measurements on GRB 030329, however, are not strongly constraining in this regard [@granot05; @ptg07]. The non-relativistic transition time derived from the VLBI measurements of GRB 030329 appears to be a factor of $\sim 2$ larger than that required to successfully model the multi-wavelength light curves of the afterglow [@ptg07; @horst07]. The reason for this discrepancy is yet to be fully understood.

### Polarization

One direct consequence of synchrotron emission is that the emission from an individual particle is polarized. Due to the presumably random nature of the post-shock magnetic fields, the polarization is likely to be averaged out, and only a small degree will be left. The time at which linear polarization is detectable is thought to be around the jet-break time. Several (differing) models have been proposed, in which a collimated jet and an off-axis line of sight conspire to produce an asymmetry which leads to net polarization, including one or several 90$^{\circ}$ changes of the polarization angle [@ghi99; @sari99]. This behaviour could provide independent evidence for the jet structure of the relativistic outflow. ![Evolution of the polarization of the afterglow of GRB030329 during the first 38 days. The top and middle panels show the polarization degree in percent and the position angle in degrees.
The bottom panel shows the residual $R$-band light curve after subtraction of a power law t$^{-1.64}$ describing the undisturbed decay during the time interval $0.5-1.2$ days after the GRB, thus leading to a horizontal curve. Gray bars mark re-brightening transitions. Contributions from an underlying supernova (solid curved line) do not become significant until $\sim$10 days after the GRB. From @gkr03 \[030329pola\]](grb030329_pol_v4.ps){width="\textwidth"}

The observed polarization at optical wavelengths at later times is less than 3% [@hjo99; @wij99; @rol00], with one debated exception of 10% [@bers03]. Because of these low levels and the rapid decline of the afterglow brightness during the first day, it has been difficult to observe changes in the polarization as predicted by theory. By far the most extensive observations of a light curve with fast variability in polarization degree and angle (Fig. \[030329pola\]) have been obtained for the afterglow of GRB030329 [@gkr03]. This variability pattern does not follow any of the model predictions, and is also not correlated with brightness. The global behaviour is consistent with the interpretation that the GRB is emitted in a relativistic jet with an initial opening angle of 3$^{\circ}$. However, in this GRB afterglow several re-brightenings superposed on a power-law decline have likely caused deviations from a simple single-jet model, making it difficult to interpret. The low level of polarization implies that the components of the magnetic field parallel and perpendicular to the shock do not differ by more than $\sim$10%, and suggests an entangled magnetic field, probably amplified by turbulence behind the shocks, rather than a pre-existing field. Very recently, an optical polarization measurement of GRB090102 was achieved at a time when the reverse-shock emission was dominating the light curve [@sms10].
The method uses a rotating polaroid, which allows simultaneous measurements of the polarization degree of neighbouring stars but not of the angle. This implies that the constant polarization of the Galactic foreground ISM could not be subtracted, and thus the measured polarization of 10.2$\pm$1.3% is likely an upper limit. This relatively high level has been interpreted as evidence for the presence of large-scale ordered magnetic fields in the relativistic outflow. In the present case, the magnetisation, i.e., the ratio of magnetic to kinetic energy, must have been fine-tuned to near unity: any value substantially larger than 1 would suppress the observed reverse shock, while values well below 1 would not produce a net polarization at the measured level. ![[**Left:**]{} Schematics of an on-axis orphan afterglow: prompt gamma-rays are emitted only by some regions, which can have either a regular (upper left; cross-section in the lower picture) or irregular structure (upper right). The ellipses describe the area seen by an observer at a given time. Observer A detects the early emission from a small region within the gamma-ray emitting region, and later an afterglow from a much larger region (regular GRB and afterglow). Observer B does not detect any gamma-rays, but detects a regular (on-axis orphan) afterglow. [**Right:**]{} An off-axis orphan afterglow is seen by observers which are not within the initial relativistic jet. This emission is seen only after the jet break. Observer A detects both the GRB and the afterglow; observer B detects the same afterglow but no gamma-rays; and observer C detects an off-axis orphan afterglow. From @nap03 \[orph\]](Nakar_NA8p141_f1.ps "fig:"){width="7.5cm"} ![[**Left:**]{} Schematics of an on-axis orphan afterglow: prompt gamma-rays are emitted only by some regions, which can have either a regular (upper left; cross-section in the lower picture) or irregular structure (upper right).
The ellipses describe the area seen by an observer at a given time. Observer A detects the early emission from a small region within the gamma-ray emitting region, and later an afterglow from a much larger region (regular GRB and afterglow). Observer B does not detect any gamma-rays, but detects a regular (on-axis orphan) afterglow. [**Right:**]{} An off-axis orphan afterglow is seen by observers which are not within the initial relativistic jet. This emission is seen only after the jet break. Observer A detects both, the GRB and the afterglow; observer B detects the same afterglow but no gamma-rays, and observer C detects an off-axis orphan afterglow. From @nap03 \[orph\]](Nakar_NA8p141_f2.ps "fig:"){width="7.cm"} . ### Orphan afterglows An exciting consequence of beaming is that there should exist GRBs which develop a less beamed X-ray, optical, or radio afterglow, but for which we miss the prompt GRB emission - the so-called orphan (Fig. \[orph\]) afterglow (for a discussion see, e.g., @rhoad97 [@mes98; @per98]). Archival X-ray data have been searched for such events, but none was found [@grind99; @ghv00]. In the optical, a small number of dedicated surveys was performed; there no candidate event was found in 125 hrs of monitoring of a field of 256 sq. deg. with [*ROTSE-I*]{} to a limiting magnitude of 15.7 [@kab02]. @vlw02 searched for color-selected transients within 1500 sq. deg. of the Sloan Digital Sky Survey ([*SDSS*]{}) down to $R = 19$ and found only one unusual transient which was later identified as a radio-loud AGN exhibiting strong variability [@gof02]. A couple of interesting optical transients were found in the $B$, $V$ and $R$-band Deep Lens Survey ([*DLS*]{}) transient search, within an area of 0.01 deg$^{-2}$ yr$^{-1}$ with a limiting magnitude of 24. None of these could be positively associated with a GRB afterglow [@bwb04] and all were later shown to have been flares from M dwarfs in our Galaxy [@kur06]. 
In another unsuccessful search using the [*ROTSE-III*]{} telescope array, @raa05 placed an upper limit on the rate of fading optical transients with quiescent counterparts dimmer than $\sim$20th magnitude of less than 1.9 deg$^{-2}$ yr$^{-1}$. Finally, a monitoring project of $\sim$12 sq. deg. in 25 nights (at a typical spacing of 2 nights) down to a limiting magnitude of $R \sim 23$ mag found no afterglow candidate, providing a limit on the collimation factor (ratio of the true rate of on-axis optical afterglows to long-duration GRBs which produce observable optical afterglows) of $<$12500 [@rgs06]. In the radio band, orphan afterglows have been searched for by combining the Faint Images of the Radio Sky at Twenty-centimeters ([*FIRST*]{}) and the NRAO VLA Sky Survey ([*NVSS*]{}), with the result of finding 9 afterglow candidates, implying a limit on the beaming factor of $f_b^{-1} \equiv (\theta^2/2)^{-1} > 13$ if all candidates are associated with GRBs, and $f_b^{-1} > 90$ if none are, respectively [@low02]. These authors also noted the, at first glance counter-intuitive, fact that the number of orphan radio afterglows is smaller for smaller jet opening angles in a flux-limited survey (for narrower beams each GRB has a lower energy and, therefore, is more difficult to detect). Later, @gay06 concluded that none of the transient objects was an orphan afterglow and set an upper limit for the beaming factor, $f_b^{-1} > 62$. Recently, evidence for an orphan radio afterglow was found in the search for type Ibc SNe, through the discovery of luminous radio emission from the seemingly ordinary type Ibc SN2009bb, which, however, requires a substantially relativistic outflow powered by a central engine [@scp10]. A mildly relativistic outflow was also observed in SN2007gr [@paragi10]. These detections indicate that, most likely, the relativistic energy content of Ibc SNe varies dramatically, while their total explosion energy may be more standard.
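The beaming limits quoted for the FIRST/NVSS search rest on the small-angle relation between beaming fraction and jet opening angle, $f_b \approx \theta^2/2$. A minimal sketch (using only the survey limits from the text) of the opening angles these limits imply:

```python
import math

def opening_angle_deg(fb_inv):
    """Jet opening angle implied by an inverse beaming fraction f_b^-1,
    using the small-angle relation f_b ~ theta^2 / 2."""
    return math.degrees(math.sqrt(2.0 / fb_inv))

# FIRST/NVSS limits from the text:
print(opening_angle_deg(13))   # f_b^-1 > 13 (all 9 candidates are afterglows) -> theta < ~22 deg
print(opening_angle_deg(90))   # f_b^-1 > 90 (none are)                        -> theta < ~8.5 deg
```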
Constraints from multi-wavelength afterglow observations
--------------------------------------------------------

### Fireball parameters

The evolution of the blast wave in the fireball model is governed by the total energy in the shock, the geometry of the outflow, and the density structure of the ISM into which it is expanding (see also Chapters 7 and 8). The time dependence of the radiated emission depends on the hydrodynamic evolution and the distribution of energy between electrons and magnetic field [@sap99]. Unfortunately, only for a few bursts have sufficient data been collected to derive the fundamental physical parameters: GRBs970508 (Fig. \[SED970508\]) [@gwb98; @wig99], 980329 [@yfh02], 980703 [@fyb03] and 051111 [@blp06]. Despite the small number of GRBs sampled, it is obvious that the diversity in physical parameters is large: the ISM density ranges between 0.1–500 cm$^{-3}$, total energies are 10$^{51}$ to 10$^{53}$ erg, and the energy distribution between electrons and magnetic field is consistent with equipartition. Future observations are clearly warranted to improve our understanding of the distributions in these parameters, and of the extent to which more sophisticated models with more parameters are needed. ![image](Galama_ApJ500L97_f2.ps){width="7.cm"}

### Environment

#### Extinction

Besides deriving the fireball parameters from spectral energy distributions (SED), emphasis has also been given to the curvature of broad-band spectra in the optical/near-infrared (NIR) region due to dust extinction, and in the soft X-ray band due to absorption by gas. Effective neutral hydrogen absorption in excess of the Galactic foreground absorption took a long time to be detected significantly in GRB afterglow spectra. Originally it was not detected at all in the full sample of BeppoSAX bursts [@ppp03]; a re-analysis of the brightest 13 X-ray afterglows then revealed statistically significant absorption in excess of the Galactic one for two bursts [@sfa04].
Of 17 bursts observed with [*Chandra*]{} or XMM-[*Newton*]{} until Oct. 2004, already 8 show excess absorption [@gcp06]. In the Swift era, excess absorption is detected in the majority of bursts, in selected samples up to 85% [@gkk10]. In the optical/NIR, extinction measurements have long been hampered by the lack of proper SED measurements, and by the interrelation of spectral slope, redshift and extinction. Early attempts therefore concentrated on deep NIR observations [e.g. @khg03]. In a first systematic study, @kkz06 collected photometry of 19 bursts from the literature, constructed light curves, shifted measurements of different filters to a common epoch according to the light curve, and derived spectral slope and extinction $A_{\rm V}$. While little evidence was found for substantial $A_{\rm V}$, the prevalence of an SMC-like dust extinction curve was noted. In the Swift era, UVOT observations provided more accurate $A_{\rm V}$ measurements, but for a sample which is strongly biased towards bright and small-$A_{\rm V}$ bursts [@smp07; @spo10]. Recently, the systematic GRB follow-up with the P60 [@cfm06] and GROND instruments [@gbc08] provided the first unbiased view of the extinction properties (see section \[darksec\]), with a substantially larger fraction of bursts with moderate $A_{\rm V}$ [@ckh09; @gkk10].

#### Wind vs. constant density profile

The likely progenitor of long-duration GRBs is the stripped core of a massive star of initial mass $\gtrsim$25 $M_\odot$, similar to a Wolf-Rayet star. The winds from these stars in our Galaxy have velocities of 1000–2500 km/s and mass-loss rates of $10^{-5} - 10^{-4}$ $M_\odot$/yr. Before exploding and creating a GRB, a Wolf-Rayet star is thus expected to be surrounded by a medium with density $\rho \propto r^{-s}$, where $r$ is the distance from the star, and $s$=2 for a stellar wind density profile and $s$=0 for a constant interstellar medium (ISM) density.
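For a steady wind, the $s$=2 profile follows from mass conservation, $\rho(r) = \dot M / (4\pi r^2 v_w)$. A short sketch evaluating the particle density for the Wolf-Rayet parameters quoted above (the radius of $10^{17}$ cm is an assumed, typical afterglow scale, not a value from the text):

```python
import math

M_SUN_G = 1.989e33   # solar mass [g]
YEAR_S  = 3.156e7    # year [s]
M_P     = 1.673e-24  # proton mass [g]

def wind_number_density(r_cm, mdot_msun_yr=1e-5, v_wind_kms=1000.0):
    """Particle density n(r) = Mdot / (4 pi r^2 v_w m_p) for a steady wind (s=2)."""
    mdot_g_s = mdot_msun_yr * M_SUN_G / YEAR_S
    rho = mdot_g_s / (4.0 * math.pi * r_cm**2 * (v_wind_kms * 1e5))
    return rho / M_P   # cm^-3

# density at an assumed radius of 1e17 cm: roughly 30 cm^-3,
# within the 0.1-500 cm^-3 range inferred for afterglow environments
print(f"{wind_number_density(1e17):.1f} cm^-3")
```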
The emission for a thin shell model expanding into a pre-blown wind has been calculated by [@chl99; @chl00], while that for interaction with a constant ISM density ($s$=0) can be found in [@wax97; @spn98]. The appearance of the spectrum (as determined by the power law index $p$ of the electron distribution) at a given time is similar for both cases, but the evolution is different. At high frequency, e.g., optical/X-rays, for $s$=0 the flux evolution goes from adiabatic ($\propto t^{-(3p-3)/4}$) to cooling ($\propto t^{-(3p-2)/4}$), while for $s$=2 it goes from cooling ($\propto t^{-(3p-2)/4}$) to adiabatic ($\propto t^{-(3p-1)/4}$). While cooling, the two cases have the same spectrum and decline. At low frequency (radio), the flux evolution is $\propto t^{1/2}$ for $s$=0, but can make a transition from $\propto t$ to constant for $s$=2 [@chl99]. Although wind models are indicated for some observed afterglows, the majority are better described by constant density environments.

### Progenitors

#### Long-duration GRBs

Based on the observed supernova connection [@wob06], the progenitors of long-duration GRBs are intimately connected to massive-star supernovae. These progenitors must have lost their hydrogen envelope prior to the supernova explosion. In order to explain the observed statistics, the progenitors must be massive and frequent enough: a comparison of the supernova features in GRB afterglow light curves with those of non-GRB related stripped-envelope supernovae shows that the GRB-SNe have, on average, considerably higher kinetic energies and ejected masses [@rich09]. Which special circumstances lead to the final occurrence of a GRB is not fully understood. There are certain mass ranges which make an explosion more difficult, but this depends on the rotation of the progenitor [@fry99; @whw02]. Besides single-star channels, binary channels have also been proposed [e.g. @svr02; @pmn04; @frh05], making a specific prediction difficult.
The role of rotation in supernovae is a long-standing question going back to [@hoy46], but for GRB-SNe there is general consensus that rotation is required. However, the details [@hlw00; @spr02] as well as the question of binarity [@yol05] remain open. The rotation as well as the mass of the GRB progenitor are crucially influenced by mass loss. Angular momentum carried away by material lost from the surface will reduce the rotation rate, and mass loss will make the star lighter at the time of explosion. Preferred GRB scenarios thus have a small mass-loss rate, particularly in the Wolf-Rayet phase, during which the mass-loss rate is smaller at low metallicity [@vdk05]. This has led to the general expectation that GRBs should favour low metallicity regions [@mfw99]. The above line of thought might suggest that high-redshift bursts should have, in general, longer duration than nearby bursts. Lower metallicity in the early phases of the Universe would leave more mass and rotation energy with the progenitor due to less strong winds, which in turn should have a consequence on the time scale of accretion and/or fall-back. With the present sample of GRBs with redshift, no such correlation is seen (Fig. \[zvsT90\]), indicating that the duration measure $T_{90}$ does not (only) depend on mass and rotation. An interesting point, however, is that a fraction of $\sim$8% of long-duration bursts have rest-frame durations $\lesssim$1 sec (independent of redshift; see Fig. \[zvsT90\]). This poses the question of how massive stars can produce bursts of such short duration. Since the fall-back of material from the envelope is of order 100 sec, this high rate of intrinsically short bursts related to massive stars may imply that the burst duration is rather determined by the ejection of the jet or the dissipation of the kinetic energy of the jet.
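The rest-frame durations above are simply the observed $T_{90}$ corrected for cosmological time dilation, $T_{90,\rm rest} = T_{90,\rm obs}/(1+z)$; a one-line sketch with illustrative numbers:

```python
def rest_frame_duration(t90_obs_s, z):
    """Cosmological time dilation: T90(rest) = T90(observed) / (1 + z)."""
    return t90_obs_s / (1.0 + z)

# an observed 10 s burst at z = 9 already has a rest-frame duration of only 1 s
print(rest_frame_duration(10.0, 9.0))   # -> 1.0
```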
Population synthesis models show that for the redshift range 6–10 the majority of GRB progenitors are Population II stars [@bhf10], as Population III (metal free) stars have already finished their evolution and Population I (metal rich) stars are just beginning to form. The peak of the long-duration GRB rate depends on the poorly constrained metallicity evolution and peaks at $z \sim 7$ (3) for efficient/fast (inefficient/slow) mixing of metals [@bhf10]. ![image](grb_Swift_z_T90.ps){width="9.2cm"}

#### Short-duration GRBs

The association of short GRBs with early-type galaxies [@gsb05; @bpp06], and the burst localizations being relatively distant from the center of the host, have supported the earlier conjecture that the progenitors of short GRBs are related to an old stellar population, namely binary systems composed of two compact objects that merge after their orbit has decayed through gravitational wave emission [@elp89]. Interestingly, however, some short GRBs are associated with small, star-forming galaxies, and explode close to their center [@tko08]. In a recent census, actually 4$\times$ more bursts reside in star-forming galaxies than in elliptical galaxies [@ber09]. This association spurred discussion on different families of progenitors for short GRBs, including different compact object types [@bpb06], proto-magnetars [@mqt08], or a tighter connection to the star-formation evolution similar to long-duration bursts [@vzo10]. A comparison of the luminosities, star formation rates and metallicities of a sample of hosts of short and of long-duration bursts shows, however, that short burst hosts appear to be drawn uniformly from the underlying field galaxy distribution. This suggests a wide age distribution of several Gyr for the progenitors of short GRBs [@ber09], though this is also consistent with the possibility that the associations of short GRBs to host galaxies are systematically flawed.
### Jet opening angle

As described above, orphan afterglow searches have not yet been sensitive enough to constrain the beaming fraction in a sensible way. Similarly, polarisation measurements have not (yet) confirmed that breaks seen in light curves are jet breaks. Thus, estimates of jet opening angles rely exclusively on the identification of observed breaks in the afterglow light curves, and their association with a jet break. In the HETE-II and BeppoSAX era, the requirement of achromaticity was only loosely applied, due to the sparse coverage of radio, optical/NIR and/or X-ray measurements [@fks01; @bfk03]. These first attempts found the surprising result that the seemingly most energetic bursts also had the smallest beaming factor, so that the true, beaming-corrected energy release was strongly clustered. In the Swift era, measuring the jet opening angle has been a more challenging task: the much better database of X-ray and optical/NIR follow-up of Swift bursts has made identifying achromatic breaks much more rare [@rlb09]. In the few clear cases, the distribution of jet break times ranges from a few hours to a few weeks with a median of $\sim$1 day [@rlb09], implying opening angles of a few to about 20$^\circ$. Another uncertainty, which already plagued the first attempts, is the problem of constant ISM vs. wind density profile, leading to opening angle estimates differing by up to a factor of 2. With knowledge of the jet opening angle, an estimate of the true rate of GRBs can also be made. In the Universe, there are about 5 supernovae per second [@mdp98]. The exposure- and sky-coverage-corrected GRB rate is about 3 per day. Correcting this for a mean beaming factor of 300 implies that throughout the Universe, the GRB rate is only about 0.2% of the SN rate, and thus a rare phenomenon among core-collapse SNe. If the GRB rate is strongly dependent on metallicity, this fraction will be higher at large redshift.
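The rate comparison above can be checked directly from the numbers in the text: 3 GRBs per day, a mean beaming factor of 300, and 5 SNe per second:

```python
SN_RATE_PER_S    = 5.0    # core-collapse SNe per second in the Universe (text)
GRB_RATE_PER_DAY = 3.0    # exposure- and sky-coverage-corrected GRB rate (text)
BEAMING          = 300.0  # assumed mean beaming factor (text)

# true (beaming-corrected) GRB rate per second, then as a fraction of the SN rate
true_grb_per_s = GRB_RATE_PER_DAY * BEAMING / 86400.0
fraction = true_grb_per_s / SN_RATE_PER_S
print(f"{100 * fraction:.2f}% of SNe")   # prints "0.21% of SNe"
```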
### Distance and Energetics

Beyond the prompt emission fluence, two observables are required to determine the energetics of gamma-ray bursts: their distance, and their jet opening angle. Since the discovery of afterglows, redshifts have been measured for nearly 200 bursts (Fig. \[z-dis\]); their isotropic equivalent energy is in the range of 10$^{51}$ to $10^{54}$ erg. However, with only a few jet opening angles measured, the distribution of beaming-corrected energetics remains poorly constrained. ![image](grb_zdistr.ps){width="8.8cm"} For a handful of bursts detected recently with [*Fermi*]{} in the 0.1–several GeV range, light curve breaks or limits could be derived, and beaming-corrected energies determined. Four of these bursts, namely GRB080916C, 090902B, 090926A and 090323, have beaming-corrected energies $E_{\gamma}$ of $>$2-5 $\times$ 10$^{52}$ erg [@gck09; @mkr10; @cfh10b; @rsk10], among the highest ever measured. Interestingly, their jet opening angles are not particularly narrow. Values in excess of 10$^{52}$ erg have been reported for another [*Fermi*]{}/Large Area Telescope (LAT) detected burst [@gck09] and a number of [*Swift*]{} bursts [@cfh10a]. These results indicate that the distribution of $E_{\gamma}$ is broad (at least a factor of 30) and not compatible with a standard candle [@fks01; @bfk03]. Furthermore, while being compatible with the Amati ($E_{\rm peak} - E_{\rm iso}$) relation at the 2$\sigma$ level [@afg09], these very luminous GRBs with high values of $E_{\rm peak}$ are not compatible with the $E_{\rm peak} - E_{\gamma}$ relationship [@ggl04]. Both these correlations are heuristic, based on prompt gamma-ray emission properties, and have survived over the last decade with measurements by various instruments. $E_{\gamma}$ is the beaming-corrected version of $E_{\rm iso}$, the total bolometric energy released by a burst (see chapter [**???**]{}).
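The conversion from $E_{\rm iso}$ to the beaming-corrected $E_\gamma$ uses the solid-angle fraction of a two-sided jet, $E_\gamma = (1-\cos\theta)\,E_{\rm iso}$. A small sketch (the 5$^\circ$ and 10$^{54}$ erg example values are illustrative, not from the text):

```python
import math

def beaming_corrected_energy(e_iso_erg, theta_deg):
    """E_gamma = (1 - cos theta) * E_iso for a two-sided jet of half-opening angle theta."""
    return (1.0 - math.cos(math.radians(theta_deg))) * e_iso_erg

# a hypothetical burst with E_iso = 1e54 erg and a 5-degree jet:
# the true energy release is almost 3 orders of magnitude smaller
print(f"{beaming_corrected_energy(1e54, 5.0):.1e} erg")   # -> 3.8e+51 erg
```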
Because GRBs are beamed, not only is the energy per burst reduced by 2–3 orders of magnitude, but their true frequency is correspondingly increased, since an observer will miss most of the narrowly beamed events. This in turn has implications for the GRB rate, and its relation to the star-formation rate.

### Cosmology

GRB afterglows are bright enough to be used as pathfinders into the very early universe, independent of whether or not the GRB and/or afterglow phenomenon is fully understood. In contrast to stationary sources at high redshift, GRB afterglows do not appear substantially fainter at increasing $z$. Relativistic time dilation implies that observations of GRBs at the same time $\Delta$t after the GRB event in the observer's frame ([*on Earth*]{}) correspond to different times in the source frame, e.g. to [*earlier*]{} times for more distant GRBs. At these [*earlier*]{} times the GRB is intrinsically brighter, thus partly compensating the larger distance. While it seems unlikely that GRBs will soon be used to derive an accurate Hubble diagram and to constrain cosmological parameters below the accuracy provided by other methods, there are a few other implications of high-$z$ GRB studies for cosmology:

- Since long-duration GRBs are related to the death of massive stars, it is likely that high-$z$ GRBs exist, as exemplified by the recent discoveries of GRBs at redshift 6.7 [@gkf09] and 8.2 [@tfl09; @sdc09]. Theoretical predictions range between a few up to 50% of all GRBs being at $z>5$ [@msc00; @lar01; @bromm03], while observations indicate a level of 5% (see also Chapter 14). With WMAP data and theoretical expectations pointing towards the first star-formation occurring at $z \sim 20-30$ [@kogu03], further redshift records can be expected in the near future.
Hopefully, the spectroscopic follow-up will also be improved, allowing us to use these high-z bursts to expand our understanding of the early Universe with respect to metallicity evolution or re-ionization history.

- WMAP data also suggest that the onset of re-ionisation happened at $z=11-20$ [@kogu03]. Because WMAP only provides an integral constraint on the re-ionisation history of the universe, this has led to the speculation that re-ionisation was either an extended process or happened more than once. Since the intrinsic luminosity as well as the number density of quasars are expected to fade rapidly beyond $z \sim 6$, only GRBs are suitable as bright beacons to illuminate the end of the dark age [@bar01; @loeb01; @mir03], and potentially allow us to probe the re-ionisation history of the early Universe [@ino03].

- Extensive monitoring of afterglows would help to constrain their local environment, and could allow us to tell whether GRB afterglows are decelerated by the intergalactic medium with an increasingly higher density at higher redshift, or by a stratified constant density medium in a bubble cleared by the progenitor star [@gou03].

- Studying the distribution and absorption line properties of GRB host galaxies would shed light on the cosmological structure formation and star forming history [@mao98].

Prospects for the future
------------------------

With the launch of Fermi, the GRB field has entered a new era as emphasis is re-directed again to the main emission mechanism. Yet, there are at least two aspects which relate to the afterglow phenomenon: First, the origin of the delayed GeV emission has been proposed to be afterglow emission [e.g. @ggn10]. Second, it turns out that the very energetic Fermi/LAT bursts are also those with particularly large beaming-corrected energies [@mkr10; @cfh10b].
One might also hope that if GBM positions could be freed of their systematic errors of 5–12 degrees [@bcm09], the recovery of optical afterglows of bright GBM bursts may have a large impact on the question of jet breaks, and consequently on the beaming angle distribution. Current and near-future improvements in our ground-based facilities include

- the routine use of a spectrograph with a wide wavelength coverage from the atmospheric cut-off to the near-infrared (X-Shooter) at the ESO/VLT, allowing the detection of absorption and emission lines over the full wavelength range accessible from the ground;

- the upgrade of the VLA to substantially higher sensitivity (EVLA), and the start of operations of LOFAR and ALMA, the latter covering the peak of the synchrotron spectrum of GRB afterglows, allowing a substantial fraction of, if not all, afterglows to be detected and calorimetry to be used to determine the GRB energetics;

- the upgrades of air Cherenkov telescopes to lower energy thresholds, which will allow us to cover a larger distance range before photon-photon interactions attenuate the signal.

These instruments will change the number of afterglow discoveries and the amount of data per afterglow dramatically, thus allowing completely new studies to be performed. In the field of non-electromagnetic signatures, both neutrino and gravitational wave detectors are getting close to the expected fluxes from GRBs. IceCube [@aaa10] and ANTARES [@bou10] should soon be able to detect the typically 1-10 GeV neutrinos which are expected to be produced in the shocks related to GRBs, or even in inelastic proton-neutron collisions in shock-free environments, and thus would confirm that protons are accelerated. Similarly, the Advanced LIGO interferometer, once coming online around 2014, should detect of order 10 neutron star mergers related to short GRBs, up to distances of 200 Mpc [@gul09], and could provide insight into the inner engine.
[**Acknowledgements:**]{} I thank Dipankar Bhattacharya for writing section \[dipank\] and Evert Rol for discussions during the early stage of this review. I acknowledge S. Klose and A. Rau for comments on an earlier version of this manuscript, and D.A. Kann for help in preparing Tab. \[brightAG\] as well as for proof-reading the manuscript. [99]{} Abbasi R., Abdou Y., Abu-Zayyad T. et al. (2010). *ApJ* **710**, 346. Akerlof C., Balsano R., Barthelmy S., et al. (1999). *Nat.* **398**, 400. Amati L., Frontera F., Guidorzi C. (2009). *A&A* **508**, 173. Band D.L., Hartmann D.H. (1992). *ApJ* **386**, 299. Barkana R., Loeb A. (2001). *Phys. Rep.* **349**, 125. Barthelmy S.D., Butterworth P.S., Cline T.L., et al. (1996). In: *Gamma-Ray Bursts*, Eds. C. Kouveliotou et al., AIP **384**, p. 580. Barthelmy S.D., Cannizzo J.K., Gehrels N. et al. (2005). *ApJ* **635**, L133. Becker A.C., Wittman D.M., Broeshaar P.C., et al. (2004). *ApJ* **611**, 418. Belczynski K., Perna R., Bulik T. et al. (2006). *ApJ* **648**, 1110. Belczynski K., Holz D.E., Fryer C.L. et al. (2010). *ApJ* **708**, 117. Berger E., Kulkarni S.R., Frail D.A. (2004). *ApJ* **612**, 966. Berger E., Penprase B.E., Cenko S.B. et al. (2006). *ApJ* **642**, 979. Berger E. (2009). *ApJ* **690**, 231. Bersier D., et al. (2003). *ApJ* **583**, L63. Björnsson G., Gudmundsson E.H., Johannesson G. (2004). *ApJ* **615**, L77. Blake C.H., Bloom J.S., Starr D.L., et al. (2005). *Nat.* **435**, 181. Bloom J.S., Frail D.A., Kulkarni S.R. (2003). *ApJ* **594**, 674. Bloom J.S., Prochaska J.X., Pooley D. et al. (2006). *ApJ* **638**, 354. Bloom J.S., Perley D.A., Li W. et al. (2009). *ApJ* **691**, 723. Boër M. (1988). *A&A* [**202**]{}, 117. Bontekoe T.J.R., Winkler C., Stacy J.G., Jackson P.D. (1995). *Ap&SS* **231**, 285. Bouwhuis M. on behalf of the ANTARES collaboration (2010). In Proc. of 31st ICRC, Lodz, arXiv:1002.0701. Briggs M.S., Connaughton V., Meegan C.A. et al. (2009). AIP Conf. Proc. **1133**, p. 40.
Bromm V., Loeb A. (2002). *ApJ* **575**, 111. Burrows, D.N., Romano, P., Falcone, A. et al. (2005). *Sci.* **309**, 1833. Burrows D.N. (2010). in [*Deciphering the ancient Universe with GRBs*]{}, Kyoto, Apr. 2010, AIP (in press) Butler N.R., Li W., Perley D. et al. (2006). *ApJ* **652**, 1390. Cenko S.B., Fox D.B., Moon D.-S. et al. (2006). *PASP* **118**, 1396. Cenko S.B., Keleman J., Harrison F.A., et al. (2009). *ApJ* **693**, 1484. Cenko S.B., Frail D.A., Harrison F.A. et al. (2010a). *ApJ* **711**, 641. Cenko S.B., Frail D.A., Harrison F.A. et al. (2010b). *ApJ* (subm.; arXiv:1004.2900). Chen H.-W., Prochaska J.X., Bloom J.S., et al. (2005). *ApJ* **634**, L25. Chevalier R.A., Li Z.-Y. (1999). *ApJ* **520**, L29. Chincarini G., Mao J., Margutti R. et al. (2010). *MN* (subm., arXiv:1004.0901) Chevalier R.A., Li Z.-Y. (2000). *ApJ* **536**, 195. Connaughton V. (2002). *ApJ* **567**, 1028. Costa E., Frontera F., Heise J. (1997). *Nat.* **387**, 783. Covino S., D’Avanzo P., Klotz A. (2008). *MN* **388**, 347. Curran P.A., Starling R.L.C., O’Brien P.T. et al. (2008). *A&A* **487**, 533. Della Valle M., Chincarini G., Panagia N. et al. (2006). *Nat.* **444**, 1050. Denisenko D.V., Terekhov O.V. (2008). *Astron. Lett.* **34**, 298. De Pasquale M., Piro L., Perna R., et al. (2003). *ApJ* **592**, 1018. Dessauges-Zavadsky M., Chen H.-W., Prochaska J.X. et al. (2006). *ApJ* **648**, L89. Eichler D., Livio M., Piran T., Schramm D.N. (1989). *Nat.* **340**, 126. Falcone A.D., Burrows D.N., Lazzati D. et al. (2006). *ApJ* **641**, 1010. Frail, D.A., Kulkarni, S.R., Nicastro L. et al. (1997). *Nat.* **389**, 261. Frail, D.A., Waxman, E., Kulkarni, S.R. (2000). *ApJ* **537**, 191. Frail D.A., Kulkarni, S.R., Sari R., et al. (2001). *ApJ* **562**, L55. Frail, D.A., Kulkarni, S.R., Berger, E. et al. (2003a). *AJ* **125**, 2299. Frail, D.A., Yost S.A., Berger E. et al. (2003). *ApJ* **590**, 992. Frail, D.A., Soderberg, A.M., Kulkarni, S.R. et al (2005). *ApJ* **619**, 994. 
Frontera F., Greiner J., Antonelli L.A., et al. (1998). *A&A* **334**, L69. Fryer C.L. (1999). *ApJ* **522**, 413. Fryer C.L., Heger A. (2005). *ApJ* **623**, 302. Fynbo J.P.U., Jensen B.L., Gorosabel J. et al. (2001). *A&A* **369**, 373. Fynbo J.P.U., Watson D., Thöne C.C. et al. (2006). *Nat.* **444**, 1047. Fynbo J.P.U., Jakobsson P., Prochaska J.X. et al. (2009). *ApJS* **185**, 526. Galama T.J., Vreeswijk P.M., van Paradijs J. et al. (1998a). *Nat.* **395**, 670. Galama T.J., Wijers R.A.M.J., Bremer M. et al. (1998b). *ApJ* **500**, L97. Gal-Yam A., Ofek E.O., Filippenko A.V., et al. (2002). *PASP* **114**, 587. Gal-Yam A., Fox D.B., Price P.A. et al. (2006a). *Nat.* **444**, 1053. Gal-Yam A., Ofek E.O., Poznanski D. et al. (2006b). *ApJ* **639**, 331. Gehrels N., Sarazin C.L., O’Brien P.T. et al. (2005). *Nat.* **437**, 851. Gehrels N., Norris J.P., Barthelmy S.D., et al. (2006). *Nat.* **444**, 1044. Gehrels N., Ramirez-Ruiz E., Fox D.B. (2009). *ARAA* **47**, 567. Gendre B., Corsi A., Piro L. (2006). *A&A* **455**, 803. Gendre B., Klotz A., Palazzi E. et al. (2010). *MN* (in press; arXiv:0909.1167). Genet F., Granot J. (2009). *MN* **339**, 1328. Ghirlanda G., Ghisellini G., Nava L. (2010). *A&A* **510**, L7. Ghisellini G., Lazzati D. (1999). *MN* **309**, L7. Ghirlanda G., Ghisellini G., Lazzati D. (2004). *ApJ* **616**, 331. Goad M.R., Page K.L., Godet O. et al. (2007). *A&A* **468**, 103. Godet O., Page K.L., Osborne J. et al. (2007). *A&A* **471**, 385. Gou L.J., Mészáros P., Abel T., Zhang B. (2004). *ApJ* **604**, 508. Granot J., Ramirez-Ruiz E., Loeb A. (2005). *ApJ* **618**, 413. Greiner J., Flohrer J., Wenzel W., Lehmann T. (1987). *Ap&SS* [**138**]{}, 155. Greiner J., Wenzel W., Hudec R. (1994). [*Gamma-Ray Bursts*]{}, eds. G.J. Fishman et al., AIP [**307**]{}, 408. Greiner J., Boër M., Kahabka P., Motch C., Voges W. (1995). NATO ASI [**C450**]{} [*The Lives of the neutron stars*]{}, eds. M.A. Alpar et al., Kluwer, p. 519.
Greiner J., Bade N., Hurley K., Kippen R.M., Laros J. (1996). 3rd Huntsville workshop 1995, AIP **384**, p. 627. Greiner J., Hartmann D., Voges W., et al. (2000). *A&A* **353**, 998. Greiner J., Klose S., Reinsch K. et al. (2003). *Nat.* **426**, 157. Greiner J., Bornemann W., Clemens C. et al. (2008). *PASP* **120**, 405. Greiner J., Clemens C., Krühler T. et al. (2009a). *A&A* **498**, 89. Greiner J., Krühler T., Fynbo J.P.U. et al. (2009b). *ApJ* **693**, 1610. Greiner J., Krühler T., Klose S. et al. (2010). *A&A* (subm.). Grindlay J.E., Wright E.L., McCrosky R.E. (1974). *ApJ* [**192**]{}, L113. Grupe D., Burrows D.N., Wu X.-F. et al. (2010). *ApJ* **711**, 1008. Grindlay J.E. (1999). *ApJ* **510**, 710. Guetta D., Stella L. (2009). *A&A* **498**, 329. Heger A., Langer N., Woosley S.E. (2000). *ApJ* **528**, 368. Heise J., in’t Zand J., Kippen M., Woods P. (2001). in [*GRBs in the afterglow Era*]{}, Eds. E. Costa et al., ESO-Springer, 16. Harrison T.E., McNamara B.J., Pedersen H., et al. (1995). *A&A* **297**, 465. Hjorth J., et al. (1999). *Sci.* **283**, 2073. Hjorth J., et al. (2003). *Nat.* **423**, 847. Hoyle F. (1946). *MN* **106**, 343. Hudec R., Borovicka J., Wenzel W., et al. (1987). *A&A* [**175**]{}, 71. Hurkett C.P., Vaughan S., Osborne J., et al. (2008). *ApJ* [**679**]{}, 587. Hurley K. (1999). *ApJS* **120**, 399. Hurley K., Costa E., Feroci M. et al. (1997). *ApJ* **485**, L1. Inoue A.K., Yamazaki R., Nakamura T. (2003). *ApJ* **601**, 644. Iwamoto K., et al. (1998). *Nat.* **395**, 672. Jelinek M., Prouza M., Kubanek P. et al. (2006). *A&A* **454**, L119. Kaneko Y., Ramirez-Ruiz E., Granot J., Woosley S., et al. (2007). *ApJ* **654**, 385. Kann D.A., Klose S., Zeh A. (2006). *ApJ* **641**, 993. Kann D.A., Klose S., Zhang B., et al. (2010). *ApJ* (arXiv:0712.2186v2 from Sep. 2009). Kehoe R., Akerlof C.W., Balsano R., et al. (2002). *ApJ* **577**, L159. Kippen R.M., et al. (1994). [*Gamma-Ray Bursts*]{}, eds. G.J. Fishman et al., AIP [**307**]{}, 418. Klose S., Henden A.A., Greiner J. et al. (2003).
*ApJ* **592**, 1025. Klotz A., Gendre B., Stratta G., et al. (2006). *A&A* **451**, L39. Klotz, A., Boër, M., Atteia, J.L., Gendre, B. (2009). **AJ** **137**, 4100. Kobayashi S. (2000). *ApJ* **545**, 807. Kogut A., Spergel D.N., Barnes C. et al. (2003). *ApJS* **148**, 161. Kouveliotou, C., et al. (1993). *ApJ* **413**, L101. Krimm H., Vanderspek R.K., Ricker G.R.: (1994). [*Gamma-Ray Bursts*]{}, eds. G.J. Fishman , AIP [**307**]{}, 423. Krimm H.A., Granot J., Marshall F.E. et al. (2007). *ApJ* **665**, 554. Krühler T., Küpcü Yoldaş A., Greiner J. et al. (2008). *ApJ* **685**, 376. Krühler T., Greiner J., McBreen S. et al. (2009). *ApJ* **697**, 758. Kulkarni S.R., Rau A. (2006). *ApJ* **644**, L63. Kumar P., Panaitescu A. (2000). *ApJ* **541**, L51. Küpcü Yoldaş A., Greiner J., Perna R. (2006). *A&A* **457**, 115. Lamb D.Q., Reichart D.E. (2001). in *GRBs in the afterglow era*, eds. Costa ESO-Springer, p. 226. Lamb D.Q., Donaghy T.Q., Graziani C. (2005). *ApJ* **620**, 355. Lazzati D., Ramirez-Ruiz E., Ghisellini G. (2001). *A&A* **379**, L39. Lazzati D., Rossi E., Covino S., Ghisellini G., Malesani D. (2002). *A&A* 396, L5. Ledoux C., Vreeswijk P.M., Smette A. et al. (2009). *A&A* **506**, 661. Levesque E., Berger E., Kewley L., Bagley M.M. (2010). *AJ* **139**, 694. Levinson A., Ofek E.O., Waxman E., Gal-Yam A. (2002). *ApJ* **576**, 923. Li P., Hurley K., Sommer M., et al. (1993). *BAAS* **25**, 846. Li W., Filippenko A.V., Chornock R., Jha S. (2003). *ApJ* **586**, L9. Lipkin, Y.M., Ofek, E.O., Gal-Yam, A. et al. (2004). *ApJ* **606**, 381. Lithwick Y., Sari R. (2001). *ApJ* **555**, 540. Loeb A., Barkana R. (2001). *ARAA* **39**, 19. Madau P., Della Valle M., Panagia N. (1998). *MN* **297**, L17 Mao S., Mo H.J. (1998). *A&A* **339**, L1. McBreen S., Krühler T., Rau A. et al. (2010). *A&A* **516** (in press; arXiv:1003.3885) MacFadyen A.I., Woosley S.E. (1999). *ApJ* **524**, 262. Mazzali P.A., Deng J., Nomoto K. et al. (2006). *Nat.* **442**, 1018. 
Mészáros P., Rees M.J., Wijers R.A.M.J. (1998). *ApJ* **499**, 301. Mészáros P., Rees M. (1999). *MN* **306**, L39. Metzger M.R., Djorgovski S.G., Kulkarni S.R. et al. (1997). *Nat.* **387**, 878. Metzger B.D., Quatert E., Thompson T.A. (2008). *MN* **385**, 1455. Miralda-Escudé J. (2003). *Sci.* **300**, 1904. Molinari E., Vergani S.D., Malesani D. et al. (2007). *A&A* **469**, L13. Nakar E., Piran T. (2003). *New Astron.* **8**, 141. Nishihara E., Hashimoto O., Kinugasa K. (2003). GCN 2118. Nomoto K., Maeda K., Mazzali P.A. et al. (2004). in *Stellar Collapse*, eds C.L. Fryer, *ASSL* **302**, p. 277. Norris J.P., Bonnell J.T. (2006). *ApJ* **643**, 266. Norris J.P., Gehrels, N., Scargle, Jeffrey D. (2010). *ApJ* **717**, 411. Noterdaeme P., Ledoux C., Petitjean P., & Srinand R. (2008). *A&A* **481**, 327. Nousek J.A., Kouveliotou C., Grupe D., et al. (2006). *ApJ* **642**, 389. Oates S.R., Page M.J., Schady P. (2009). *MN* **395**, 490. Oren Y., Nakar E., Piran T. (2005). *MN* **353**, L35. Paczynski B. (1998). *ApJ* **494**, L45. Panaitescu A., Kumar, P. (2001). *ApJ* **554**, 667. Panaitescu A., Kumar, P. (2002). *ApJ* **571**, 779. Panaitescu A., Vestrand W.T. (2008). *MN* **387**, 497. Paragi Z., Taylor G.B., Kouveliotou C. et al. (2010). *Nat.* *463*, 516. Perley D.A., Bloom J.S., Butler N.R. et al. (2008). *ApJ* **672**, 449. Perley D.A., Cenko S.B., Bloom J.S., et al. (2009). *AJ* **138**, 1690. Perna R., Loeb A. (1998). *ApJ* **509**, L85. Perna, R., Raymond, J., Loeb, A. (2000). *ApJ* **533**, 658. Pihlström, Y.M., Taylor, G.B., Granot, J., Doeleman, S. (2007). *ApJ* **664**, 411. Piro L. (2001). in [*GRBs in the Afterglow era*]{}, eds. E. Costa, F. Frontera, J. Hjorth, Springer, Berlin, p. 97. Piro L., Scarsi L. (2004). in [*The Restless High-Energy Universe*]{}, May 2003, Amsterdam, Eds. E.P.J. van den Heuvel, R.A.M.J. Wijers, J.J.M. in ’t Zand, *Nucl. Phys. B* (Proc. Suppl). **132**, p. 3. Pizzichini G.,  (1986). *ApJ* [**301**]{}, 641. 
Porciani C., Viel M., Lilly S.J. (2007). *ApJ* **659**, 218. Price P.A., Fox D.W., Kulkarni S.R., et al. (2003). *Nat.* **423**, 844. Podsiadlowski P., Mazzali P.A., Nomoto K. et al. (2004). *ApJ* **607**, L51. Prochaska J.X., Chen H.-W., Bloom J.S. (2006). *ApJ* **648**, 95. Prochter G.E., et al. (2006). *ApJ* **648**, L93. Quimby R.M., Rykoff E.S., Yost S.A. (2006). *ApJ* **640**, 402. Racusin J.L., Karpov S.V., Sokolowski M., et al. (2008). *Nat.* **455**, 183. Racusin J.L., Liang E.W., Burrows D.N. . (2009). *ApJ* **698**, 43. Rau A., Greiner J., Schwarz R., (2006). *A&A* **449**, 79. Rau A., Savaglio S., Krühler T. et al. (2010). *ApJ* (subm.; arXiv:1004.3261). Resmi, L., Ishwara-Chandra, C.H., Castro-Tirado A.J. et al (2005). *A&A* **440**, 477. Rhoads J.E., (1997). *ApJ* **487**, L1. Rhoads J.E., (1997). *ApJ* **525**, 737. Richardson D. (2009). *AJ* **137**, 347. Ricker G.R., Atteia J.-L., Crew G.B. et al. (2002). in *GRB and Afterglow Astronomy 2001* ed. G.R. Ricker & R.K. Vanderspek (New York) AIP **662**, p. 3. Rol, E., et al. (2000). *ApJ* **544**, 707. Romano P., Campana S., Mignani R.P., et al. (2008). [*Vizier*]{} Online Data Catalog 348:81221. Rossi A., Schulze S., Klose S. et al. (2010). *A&A* (subm.; arXiv:1007.0383) Rykoff E., Smith D.A., Price P.A. et al. (2004). *ApJ* **601**, 1013. Rykoff E., Aharonian F., Akerlof C.W., et al. (2005). *ApJ* **631**, 1032. Rykoff E., Aharonian F., Akerlof C.W., et al. (2009). *ApJ* **702**, 489. Sakamoto T., Hullinger D., Sato G. et al. (2008). *ApJ* **679**, 570. Salvaterra, R., Della Valle, M., Campana, S. et al. (2009). *Nat.* **461**, 1258. Sari R., Piran T., Narayan R. (1998). *ApJ* **497**, L17. Sari R. (1999). *ApJ* **524**, L43. Sari R., Piran T. (1999). *ApJ* **517**, L109. Savaglio, S., Glazebrook, K., Le Borgne, D. (2009). *ApJ* **691**, 182. Schaefer B.E., Bradt, H., Barat, C. . (1984). *ApJ* [**286**]{}, L1. Schady P., Mason K.O., Page M.J. et al. (2007). *MN* **377**, 273. 
Schady P., Page M.J., Oates S.R. et al. (2010). *MN* **401**, 2773. Schmidt M. (2000). *ApJ* **552**, 36. Smartt S.J., Vreeswijk P.M., Ramirez-Ruiz E. et al. (2002). *ApJ* **572**, L147. Soderberg A.M., Chakraborti S., Pignata G. et al. (2010). *Nat.* **463**, 513. Spruit H. (2002). *A&A* **381**, 923. Stanek K., et al. (2003). *ApJ* **591**, L17. Steele I.A., Mundell C.G., Smith R.J., Kobayashi S., Guidorzi C. (2010). *Nat.* **462**, 767. Stratta G., Fiore F., Antonelli L.A. et al. (2004). *ApJ* **608**, 846. Sudilovsky V., Savaglio S., Vreeswijk P.M. et al. (2007). *ApJ* **669**, 741. Tagliaferri, G., Goad, M., Chincarini, G. et al. (2005). *Nat.* **436**, 985. Tanvir N.R., Fox D.B., Levan A.J. et al. (2009). *Nat.* **461**, 1254. Taylor, G.B., Frail, D.A., Berger, E., Kulkarni, S.R. (2004). *ApJ* **609**, L1. Tejos N., Lopez S., Prochaska J.X. et al. (2009). *ApJ* **706**, 1309. Troja E., King A.R., O’Brien P.T. et al. (2008). *MN* **385**, L10. Vanden Berk D.E., Lee B.C., Wilhite B.C. et al. (2002). *ApJ* **576**, 673 Van der Horst, A.J., Kamble, A., Resmi, L. et al. (2008). *A&A* **480**, 35. Vanderspek R., Krimm H.A., Ricker G.R.: (1994). [*Gamma-Ray Bursts*]{}, eds. G.J. Fishman , AIP [**307**]{}, 438. Vanderspek R.K., Krimm H.A., Ricker G.R.: (1995). *Ap&SS* **231**, 259. Van Paradijs, J. Groot, P.J., Galama, T.  (1997). *Nat.* **386**, 686. Vergani S.D., Petitjean P., Ledoux C. et al. (2009). *A&A* **503**, 771. Vestrand W.T., Borozdin K., Casperson D.J. (2004). *AN* **325**, 549. Vestrand W.T., Wozniak, P.R., Wren, J.A. et al. (2005). *Nat.* **435**, 178. Vestrand W.T., Wren, J.A., Wozniak, P.R. et al. (2006). *Nat.* **442**, 172. Villasenor J., Ricker G., Vanderspek R. et al. (2004). in *Proc. 35th COSPAR Sci. Assembly*, July 2004, Paris, p. 1225. Villasenor J., Lamb D.Q., Ricker G. et al (2005). *Nat.* **437**, 855. Vink J.S., de Koter A. (2005). *A&A* **442**, 587. Virgili F.J., Zhang B., O’Brien P.T., Troja E. (2010). 
*ApJ* (subm.; arXiv:0909.1850) Vreeswijk P.M., Ellison S.L., Ledoux C. et al. (2004). *A&A* **419**, 927. Vreeswijk P.M., Ledoux C., Smette A. et al. (2007). *A&A* **468**, 83. Waxman E. (1997). *ApJ* **489**, L33. West J.P., McLin K., Brennan T. et al. (2008). GCN 8617. Wijers, R.A.M.J., Galama T.J. (1999). *ApJ* **523**, 177. Wijers, R.A.M.J., et al. (1999). *ApJ* **523**, L33. Woosley S.E., Heger A. Weaver T.A. (2002). *Rev. Mod. Phys.* **74**, 1015. Woosley S.E., Bloom J.S. (2006). *ARAA* **44**, 507. Yoon S.-C., Langer N. (2005). *A&A* **443**, 643. Yost S.A., Frail D.A., Harrison F.A. et al. (2002). *ApJ* **577**, 155. Yost S.A., Alatalo K., Rykoff E.S., et al. (2006). *ApJ* **636**, 959. Yuan F., Rujopakarn W. (2008). GCN 8536. Zhang B., Fan Y.Z., Dyks J. et al. (2006). *ApJ* **642**, 354. Zhang, B., Zhang, B.-B., Virgili, F.J. et al. (2009). *ApJ* **703**, 1696.
--- author: - | E.G. Drukarev\ Petersburg Nuclear Physics Institute\ Gatchina, St.Petersburg 188350, Russia title: QCD sum rules as a tool for investigation of characteristics of a nucleon in nuclear matter --- Traditional nuclear physics has made acknowledged progress in describing how the properties of nucleons change in nuclear matter. However, this approach is built on the concept of $NN$ interactions and requires phenomenological input on them. Small distances, of the order of the nucleon radius, turn out to be very important. Moreover, the calculation of each new characteristic requires a further development of the theory: for example, to obtain the neutron-proton mass difference one must develop a theory of charge-symmetry-breaking forces. Finally, some problems are inaccessible to traditional nuclear physics altogether; the swelling of nucleons in nuclei, demonstrated by the EMC group, is an example. This motivates the investigation of an alternative approach. The application of QCD sum rules (SR) at finite density was suggested in \[1\]. QCD SR in vacuum \[2\] had earlier succeeded in describing the properties of free hadrons. The method is based on dispersion relations for the function (correlator) describing the propagation of a system carrying the quantum numbers of the nucleon. An expansion in inverse powers of the squared momentum $q^2$ is related to the observable characteristics of the nucleon; hence, the latter are expressed through the values of QCD condensates. The method starts from the QCD Lagrangian, crucially employs the properties of strong interactions at small distances, and contains confinement as an input. The SR in nuclear matter tie the changes in the values of certain QCD condensates to the characteristics of nucleons in the medium. The single-particle potential energy turns out to be a sum of terms proportional to the expectation values of the quark operators $\bar q\gamma_0q$ and $\bar qq$ \[1\].
The former vanishes in vacuum, while in the latter case the vacuum value must be subtracted. The vector term is exactly linear in density, providing a contribution of the order of $+250$ MeV. The scalar one is about $-300$ MeV. Hence, the method reproduces the main points of relativistic nuclear physics. It also provides a connection between the scalar fields and the pion-nucleon sigma term. The neutron-proton mass difference was found to depend on the isospin-breaking expectation value of the operator $\bar dd-\bar uu$ \[3\]. The neutron was found to be bound more strongly than the proton, with a reasonable value of the mass difference. The charge-symmetry breaking in the scalar channel was shown to be as important as that in the vector one. Thus the approach provides guidelines for traditional nuclear physics. The method was applied to the investigation of the deep inelastic structure functions of nuclei \[4\]. The deviations from those of a system of free nucleons were shown to be determined by the four-quark condensate. The calculated values followed the typical EMC behaviour. However, in this first step, made in \[4\], we did not touch the cumulative aspects of the problem. The method can be used for the calculation of other properties of nucleons in the medium. It can be improved within its own framework by taking into account higher terms of the $q^{-2}$ expansion, containing more complicated in-medium condensates, and by a more detailed description of the spectral density. This activity is supported by the Russian Fund for Fundamental Research (grant \#95-02-03752-a). 1\. E.G. Drukarev, E.M. Levin, Progr. in Part. & Nucl. Phys. [**22**]{} (1991) 77.\ 2. M.A. Shifman, A.I. Vainshtein, V.I. Zakharov, Nucl. Phys. [**B147**]{} (1979) 385.\ 3. E.G. Drukarev, M.G. Ryskin, Nucl. Phys. [**A572**]{} (1994) 560; [**A577**]{} (1994) 375.\ 4. E.G. Drukarev, M.G. Ryskin, Z. f. Phys. A [**356**]{} (1997) 457.
--- abstract: 'The goal of this work is to exhibit the Gevrey type, in an analytic function $P$, of formal power series solutions of some families of first order holomorphic PDEs. The approach is based on the classical majorant series technique, applying Nagumo norms jointly with a division algorithm. Our main result recovers systematically many situations studied in the literature on the Gevrey type of formal solutions of these equations. We also provide a relation between Gevrey series in $P$ and Gevrey series in several variables.' address: - 'Escuela de Ciencias Exactas e Ingeniería, Universidad Sergio Arboleda, Calle 74, $\#$ 14-14, Bogotá, Colombia.' - 'Universidad Privada del Norte, Campus Breña, Avenida Tingo Maria 1122, Lima, Perú' author: - 'Sergio A. Carrillo' - 'Carlos A. Hurtado' title: Gevrey series solutions in analytic functions of first order holomorphic PDEs --- [^1] Introduction ============ In the study of ordinary and partial differential equations at irregular singular points, or in the case of singular perturbation problems, a technique to obtain holomorphic solutions from formal ones is to apply certain summability methods such as Borel–summability and multisummability. These solutions represent asymptotically the formal power series solution as the variables approach the singular locus in adequate domains. In general, the first step of this method is to determine the existence, uniqueness and divergence rate (Gevrey order) of these series. The study of their summability is determined by the nature of the equation and is a much harder problem. We refer to [@CostinTanveer2007; @CostinTanveer2009; @LastraMalek2015; @Ouchi02; @TahataYamazawa2013] for some examples of PDEs, including the Navier–Stokes equation in $\R^3$, which are susceptible to this type of analysis.
The goal of this paper is to prove the Gevrey type of formal power series solutions $\widehat{\yy}$ of holomorphic ordinary and partial differential equations of first order at a singular locus $S$. We will show that under a suitable geometric condition, the germ of an analytic function $P$ that generates $S$ is the generic source of divergence: $\widehat{\yy}$ is $1$–Gevrey in the germ $P$ ($P$-$1$–Gevrey for short). Roughly speaking, this means that we can write $\widehat{\yy}=\sum_{n=0}^\infty y_n P^n$ as a power series in $P$, where the coefficients $y_n$ are holomorphic in a common polydisc $D$ at the origin and $\sup_{\xx\in D}|y_n(\xx)|\leq CA^n n!$, for some constants $C,A>0$. This notion was introduced recently by J. Mozo and R. Schäfke [@Sum; @wrt; @germs] in the framework of asymptotic expansions and summability with respect to a germ of an analytic function, and it generalizes the notion of Gevrey series in one variable. More specifically, if $\xx=(x_1,\dots,x_d)\in(\C^d,\00)$ and $\yy=(y_1,\dots,y_N)\in\C^N$, we consider a germ $P$ of a non-zero holomorphic function on $(\C^d,\00)$ such that $P(\00)=0$, and the system of partial differential equations $$\label{Eq. Main Eq} P(\xx)L(\yy)(\xx)=F(\xx,\yy),\quad \text{ where } L:=a_1(\xx)\d_{x_1}+\cdots+a_d(\xx)\d_{x_d},$$ is a first order differential operator with holomorphic coefficients $a_j$ near the origin (not all identically zero), and $F$ is a $\C^N$-valued holomorphic map defined in some neighborhood of $(\00,\00)\in\C^d\times\C^N$. The *singular locus* of (\[Eq. Main Eq\]) is the germ at the origin of the analytic set $$S:=\{\xx\in(\C^d,\00) : P(\xx)a_j(\xx)=0, j=1,\dots,d\},$$ where the nature of equation (\[Eq. Main Eq\]) changes from a differential to an implicit one. Note that $S$ contains the zero set of $P$, and they coincide if $a_j(\00)\neq 0$, for at least some index $j$.
Furthermore, if $\frac{\partial F}{\partial \yy}(\00,\00)$ is an invertible matrix, $P$ cannot be canceled from (\[Eq. Main Eq\]), so its zero set is a non-removable singular part of the equation. Under these conditions our main result can be stated as follows. \[Thm Main Result\] Consider the partial differential equation (\[Eq. Main Eq\]) where $F(\00,\00)=\00$, and $\mu:=\frac{\partial F}{\partial \yy}(\00,\00)$ is an invertible matrix. If $P$ divides $L(P)$, equation (\[Eq. Main Eq\]) has a unique formal power series solution $\widehat{\yy}\in \C[[\xx]]^N$. Moreover, $\widehat{\yy}$ is a $P$-$1$–Gevrey series. Equation (\[Eq. Main Eq\]) falls into the category of singular first order holomorphic PDEs of the form $$\label{Eq. Main 2} L_1(\yy)(\xx)=F(\xx,\yy),$$ where $L_1=\sum_{j=1}^d X_j(\xx) \d_{x_j}$ is a germ of a holomorphic vector field, singular at $\00\in\C^d$, i.e., $X_j(\00)=0$, for all $j=1,\dots,d$. The convergence vs. rate of divergence of formal power series solutions of (\[Eq. Main 2\]) has been studied extensively by several authors, see e.g., [@Oshima73; @Kaplan79; @Hibino99; @Hibino04; @Yamazawa2000; @Ouchi05]. These growth properties depend on conditions on $S=\{\xx\in\C^d : X_j(\xx)=0, j=1,\dots,d\}$ or on its associated ideal $(X_1,\dots,X_d)\subseteq \C\{\xx\}$, and on non-resonance conditions on $\mu$ and the Jacobian matrix $\Lambda:=\left(\d_{x_i}X_j(\00)\right)_{i,j}$ that we will explain below. Then, if $S$ is an analytic submanifold, c.f. [@Kaplan79; @Yamazawa2000], by choosing a suitable analytic coordinate system $\xxi=(\xi_1,\dots,\xi_d)$ of $(\C^d,\00)$ where $S$ is the zero set of some of these coordinates, and $\Lambda$ is in canonical Jordan form, the convergence or a Gevrey type of solutions can be obtained. For instance, by means of a Newton polyhedron associated to $L_1$ that generalizes the Newton–Malgrange polygon which is familiar in the study of ODEs at irregular singular points. 
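As a concrete one-variable illustration of Theorem \[Thm Main Result\] (our own example, not taken from the paper), set $d=N=1$, $P(x)=x$, $L=x\d_x$ and $F(x,y)=y-x$, so that (\[Eq. Main Eq\]) becomes the Euler equation $x^2y'=y-x$. Here $L(P)=P$ and $\mu=1$ is invertible, so the theorem predicts a unique formal solution that is $x$-$1$–Gevrey; matching powers of $x$ gives the recursion below and the classical divergent series with coefficients $(n-1)!$. A minimal sketch:

```python
from math import factorial

def euler_coefficients(nmax):
    """Coefficients a_n of the formal solution y = sum_{n>=1} a_n x^n of
    x^2 y' = y - x.  Matching powers of x gives a_1 = 1 and the recursion
    a_n = (n - 1) * a_{n-1} for n >= 2, hence a_n = (n-1)!."""
    a = [0, 1]  # a_0 = 0, a_1 = 1
    for n in range(2, nmax + 1):
        a.append((n - 1) * a[-1])
    return a

a = euler_coefficients(12)
# the classical divergent Euler series: a_n = (n-1)!
assert all(a[n] == factorial(n - 1) for n in range(1, 13))
# the 1-Gevrey bound |a_n| <= C A^n n! holds with C = A = 1
assert all(a[n] <= factorial(n) for n in range(1, 13))
```

The coefficients grow factorially, so the series diverges for every $x\neq 0$, exactly the behavior the $P$-$1$–Gevrey notion quantifies.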
Let us set $\text{Spec}(\mu)=\{\mu_1,\dots,\mu_d\}$ and $\text{Spec}(\Lambda)=\{\lambda_1,\dots,\lambda_m,0,\dots,0\}$, where $\mu_k\neq0$, $\lambda_j\neq 0$, and all eigenvalues are repeated according to multiplicity. If $m\geq 1$, the classical non-resonance *Poincaré condition* requires that $$\label{Poincare condition} |\lambda_1\b_1+\dots+\lambda_m\b_m-\mu_k|\geq \nu |\bb|,\quad \text{ for all } \bb\in\N^m, k=1,\dots,d,$$ for some constant $\nu>0$. Then, if (\[Poincare condition\]) is valid and $m=d$, i.e., $\Lambda$ is invertible, the solution of (\[Eq. Main 2\]) is convergent [@Oshima73; @Hibino99; @Hibino04]. Otherwise, the solution is generically divergent, but of some Gevrey order in the variable $\xxi$, depending on the sizes of the blocks of the canonical Jordan form of $\Lambda$ associated to the zero eigenvalue [@Hibino99; @Hibino04]. It is worth remarking that Poincaré condition (\[Poincare condition\]) is better known in the theory of normal forms [@BraaksmaStolov07; @Ouchi05] or in the problem of existence of analytic invariant manifolds [@CS14], both for holomorphic vector fields defined near a singular point. In particular, it appears in the problem of their local analytic linearization, where much more complicated non-resonance rules arise, such as Siegel’s or Bruno’s ones. Returning to our main problem, the linear part of $L_1=P\cdot L$ in equation (\[Eq. Main Eq\]) can be highly degenerate and $\Lambda$ is generically the zero matrix. In fact, $\Lambda=(a_i(\00)p_j)_{i,j}$, where $p_j=\d_{x_j}P(\00)$. This is a very special type of matrix and its canonical Jordan form is the diagonal matrix $\text{diag}(\text{tr}(\Lambda),0,\dots,0)$, where $$\text{tr}(\Lambda)=a_1(\00)p_1+\cdots+a_d(\00)p_d=L(P)(\00).$$ Thus, the only case in which $m\geq 1$, in fact, $m=1$, is when $L(P)(\00)\neq 0$.
Furthermore, Poincaré condition (\[Poincare condition\]) is satisfied if and only if $$\mu_k-nL(P)(\00)\neq 0,\quad\text{ for all } n\in\N, k=1,\dots,d.$$ In the aforementioned papers, our situation ($m=0$ or $1$) is covered by [@Hibino99 Thm. 1.1] and [@Hibino04 Thm. 1.2], which claim the solution is $(1,\dots,1)$–Gevrey when working in the variable $\xxi$; see Section \[Sec. Gevrey series in a germ\] for definitions. For the case $m=0$, Theorem \[Thm Main Result\] improves the divergence rate of the formal solution by showing it is $(1/k,\dots,1/k)$–Gevrey, where $k=o(P)$ is the order of $P$, see Proposition \[Prop. Ps sss Gevrey\]. But more importantly, it identifies a possible variable to study summability phenomena. Finally, in the case $m=1$, $L(P)(\00)\neq 0$, the formal solution is actually convergent. In fact, by reordering the coordinates we can assume $a_1(\00) p_1\neq 0$, thus, $\xi_1=P(\xx), \xi_2=x_2,\dots, \xi_d=x_d$ is a local change of variables in which our differential operator takes the form $$P\cdot L=\xi_1\cdot \left(U(\xxi)\d_{\xi_1}+\overline{a}_2(\xxi)\d_{\xi_2}+\cdots+\overline{a}_d(\xxi)\d_{\xi_d}\right),$$ where $\overline{a}_j(\xxi)=a_j(\xx)$, and $U(\xxi)=\overline{a}_1\d_{x_1}P+\cdots+\overline{a}_d\d_{x_d}P$ is a unit since $U(\00)=L(P)(\00)\neq 0$. Then, a standard majorant argument in the variable $\xi_1$ proves the convergence of the solution. We can also prove this by a slight modification of the proof of Theorem \[Thm Main Result\]. In this way we find our second result. \[Thm 2\] Suppose the hypotheses of Theorem \[Thm Main Result\], but now assume $L(P)(\00)\neq 0$. Then, if $\mu-nL(P)(\00)I_N$ is invertible, for all $n\in\N$, equation (\[Eq. Main Eq\]) has a unique analytic solution at the origin $\widehat{\yy}\in \C\{\xx\}^N$. Theorem \[Thm Main Result\] has a general nature and recovers many examples of Gevrey type formal power series solutions of ODEs and PDEs that have been treated in the literature.
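When $L(P)(\00)\neq 0$, the invertibility hypothesis of Theorem \[Thm 2\] only needs to be verified for finitely many $n$: the eigenvalues of $\mu-nL(P)(\00)I_N$ are $\mu_i-nL(P)(\00)$, which can vanish only for $n\leq \max_i|\mu_i|/|L(P)(\00)|$. A hedged numerical sketch of this finite check (the function name is ours):

```python
import numpy as np

def non_resonant(mu, c, tol=1e-12):
    """Check that mu - n*c*I_N is invertible for every n in N, where
    c = L(P)(0) != 0.  The eigenvalues of mu - n*c*I are mu_i - n*c,
    so only n <= max_i |mu_i| / |c| can make the matrix singular,
    which keeps the check finite."""
    eigs = np.linalg.eigvals(mu)
    nmax = int(np.ceil(max(np.abs(eigs)) / abs(c)))
    return all(abs(e - n * c) > tol for e in eigs for n in range(nmax + 1))

mu = np.diag([1.0, 3.5])
assert non_resonant(mu, c=2.0)      # 1 and 3.5 are not multiples of 2
assert not non_resonant(mu, c=1.0)  # resonance at n = 1: 1 - 1*1 = 0
```

The tolerance only guards against floating-point noise; for exact data the test is a finite exact condition.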
We would like to mention the following situations where it can be applied (see the beginning of Section \[Sec. Nagumo norms\] for notations): 1. Equation (\[Eq. Main Eq\]) includes the case of singularly perturbed and doubly singular ODEs by taking all $a_j$ but one identically zero. Relabeling the variables, and under the previous hypotheses on $F$, we can consider systems of type $$\label{Eq Example 1} Q(\ee)x^{k+1}\frac{\d\yy}{\d x}(x,\ee)=F(x,\ee,\yy),$$ where $x\in(\C,0)$, $\ee=(\varepsilon_1,\dots,\varepsilon_m)\in(\C^m,\00)$, $Q$ is analytic at the origin and $k\geq -1$ is an integer. In the regular case, i.e., $k=-1$ or $0$ and $Q(\00)\neq 0$, if there exists a formal solution, it is convergent. In the irregular case, if $Q(\00)\neq 0$, we can interpret $\ee$ as regular parameters and the classical theory establishes that the formal solution of (\[Eq Example 1\]) is $1/k$–Gevrey in $x$, uniformly in $\ee$ [@Sibuya1]. In our setting, this means precisely that the solution is a $x^k$-$1$–Gevrey series. Equation (\[Eq Example 1\]) was studied by W. Balser and V. Kostov [@BalserKostov02] for $m=1$, $k=0$, and by W. Balser and J. Mozo [@BalserMozo02] for $m=1$, $k\geq 1$, both when $Q(\varepsilon)=\varepsilon$ and in the linear case $F(x,\varepsilon,y)=A(x,\varepsilon)y-f(x,\varepsilon)$, proving the summability of the formal solution in the perturbation parameter $\varepsilon$, in adequate domains of $x$. On the other hand, M. Canalis-Durand, J.P. Ramis, R. Schäfke and Y. Sibuya [@CDRSS] studied this equation when $m=1$, $k=-1$ and $Q(\varepsilon)=\varepsilon^\sigma$, $\sigma\geq 1$ a positive integer. In particular, they showed that the solution is $1/\sigma$–Gevrey in $\varepsilon$, uniformly in $x$. Later on, M. Canalis-Durand, J. Mozo and R. 
Schäfke [@CDMS] considered the case $m=1$, $Q(\varepsilon)=\varepsilon^q$, and $k,q\geq 1$, and they proved the $\varepsilon^q x^k$-$1$–summability of the formal power series solution and the singular directions are determined by the solutions of $\det\left(k\eta^q\xi^kI_N-\mu\right)=0,$ in the two-dimensional $(\xi,\eta)-$Borel space. We can recover all these Gevrey type properties by applying Theorem \[Thm Main Result\] as follows: 1. If $k=-1$ and $Q(\00)=0$, by choosing $P(x,\ee)=Q(\ee)$ and $L=\d_x$, we have $L(P)=0$. Thus, the solution is $Q(\ee)$-$1$–Gevrey. 2. If $k\geq 0$, we take $P(x,\ee)=x^kQ(\ee)$ and $L=x\d_x$, since $L(P)=kx^kQ(\ee)=kP$. Thus, the solution is $x^kQ(\ee)$-$1$–Gevrey. If $Q(\00)\neq 0$, this means the solution is a $x^k$-$1$–Gevrey series. 2. Let $\ee$ and $Q$ be as before, and assume $P(\00)=0$. If $L=a_1(\xx,\ee)\d_{x_1}+\cdots+a_d(\xx,\ee)\d_{x_d}$, the system of PDEs $$\label{Eq. QP} Q(\ee)P(\xx)L(\yy)(\xx,\ee)=F(\xx,\ee,\yy),$$ can be seen as a singularly perturbed problem where the perturbation is given by $Q$ if $Q(\00)=0$. If $P$ divides $L(P)$, then $L(Q P)=Q L(P)$ is divisible by $QP$ and we can apply Theorem \[Thm Main Result\] to conclude the system has a unique formal power series solution which is $Q(\ee)P(\xx)$-$1$–Gevrey. Particular situations are: 1. When $P\in\C[\xx]$ is a quasi–homogeneous polynomial in $\xx$, i.e., $P(t^{\lambda_1}x_1,\dots,t^{\lambda_d}x_d)=t^\lambda P(\xx)$, for some rational numbers $\lambda,\lambda_1,\dots,\lambda_d>0$. Then, the operator $$\label{Eq. L lambda} L_{\boldsymbol{\lambda}}:=\lambda_1 x_1 \d_{x_1}+\cdots+\lambda_d x_d \d_{x_d},$$ satisfies $L_{\boldsymbol{\lambda}}(P)=\lambda P$, and the solution of (\[Eq. QP\]) is $Q(\ee)P(\xx)$-$1$–Gevrey. 2. When $P(\xx)=\xx^\aa$ is a monomial, $\aa=(\alpha_1,\dots,\alpha_d)$ a tuple of non-negative integers, and $L=\sum_{j=1}^d b_j(\xx) x_j\d_{x_j}$, with $b_j$ holomorphic near the origin. 
Then $L(\xx^\aa)=\xx^\aa \sum_{j=1}^d \a_j b_j(\xx)$, thus the solution of (\[Eq. QP\]) is $Q(\ee)\xx^\aa$-$1$–Gevrey. Another simple situation is $L=\frac{\d P}{\d x_i} \d_{x_j}-\frac{\d P}{\d x_j} \d_{x_i}$, $i\neq j$, since $L(P)=0$. 3. Families of PDEs with normal crossings given by $$\ee^{\aa'}\xx^{\aa}L_{\boldsymbol{\lambda}}(\yy)(\xx,\ee)={F}(\boldsymbol{x},\boldsymbol{\varepsilon},\boldsymbol{y}),$$ where $L_{\boldsymbol{\lambda}}$ and $\aa$ are as before, and $\aa'=(\a_1',\dots,\a_m')$ is another tuple of non-negative integers, but $\boldsymbol{\lambda}=(\lambda_1,\dots,\lambda_d)\in(\C\setminus\{0\})^d$. Since $L_{\boldsymbol{\lambda}}(\ee^{\aa'}\xx^{\aa})=\left<\boldsymbol{\lambda},\aa\right>\ee^{\aa'}\xx^{\aa}$, where $\left<\boldsymbol{\lambda},\aa\right>:=\lambda_1\a_1+\cdots+\lambda_d\a_d$, we obtain a $\ee^{\aa'}\xx^{\aa}$-$1$–Gevrey solution. These equations have been studied by H. Yamazawa and M. Yoshino [@YamazawaYoshino15] in the case $m=1$, $\aa=\00$, $\mu=\text{diag}(\mu_1,\dots,\mu_d)$ a diagonal matrix and $\lambda_j, \text{Re}(\mu_k)>0$, for all $j,k=1,\dots,d$. In fact, the authors proved the $1$–summability in $\varepsilon=\eta$ of the formal solution, uniformly in $\xx$. In this trend, and assuming that $\boldsymbol{\lambda}$ has, up to a non-zero constant, positive entries, J. Mozo and the first author [@CM2] studied these equations for the case $d=2$ and $m=0$ proving the solution is actually $x_1^{\a_1}x_2^{\a_2}$-$1$–summable. Later on, this was generalized by the first author [@Carr1] for any $d\geq 2$ and $m$ by using an adapted Borel–Laplace method: the formal solution is $\ee^{\aa'}\xx^\aa$-$1$–summable and the singular directions are determined by the solutions of $\det(\left<\ll,\aa\right>\xxi^{\aa}\boldsymbol{\eta}^{\aa'}I_N-\mu)=0,$ in the $(d+m)$-dimensional $(\xxi,\boldsymbol{\eta})-$Borel space. 4. 
The family of scalar singular first-order linear PDEs of nilpotent type given by $$(\a(x)+\beta(x,y))y\d_xu+(a+b(x,y))y^2 \d_y u +(1+\mathrm{a}(x,y)y)u=f(x,y),$$ where $\a(0)\neq 0$ and $\beta(x,0)\equiv b(x,0)\equiv 0$. We obtain a unique $y$-$1$–Gevrey series solution by taking $P(x,y)=y$ and $L=(\a+\b)\d_x+(a+b)y\d_y$. These equations were studied by M. Hibino [@Hibino2006I; @Hibino2006II], who proved the $1$–summability in $y$, uniformly in $x$, under conditions on $\a$, and on the analytic continuation and exponential growth of $\beta,b, \mathrm{a}$ and $f$. We will give two more examples at the end of the paper, where after punctual blow-ups and ramifications we can apply Theorem \[Thm Main Result\]. One is in the setting of singular PDEs [@Zhang19], Example \[Ex. Zhang\], and the other is in the framework of confluence of singularities of nonlinear ODEs [@Klimes2016], Example \[Ex. Klimes\]. The technique to prove Theorems \[Thm Main Result\] and \[Thm 2\] is based on modified Nagumo norms for several variables, as introduced in [@CDRSS], together with a generalized Weierstrass division theorem that allows us to write a power series as a series in the germ $P$, although the decomposition depends on the monomial order employed. Due to the compatibility of these tools, we can use the typical majorant series argument to establish the results. The structure of the paper is as follows: Sections \[Sec. Nagumo norms\] and \[Sec. The division algorithm\] contain the technical parts of the work, where we explain and develop the properties we will need on modified Nagumo norms, the Weierstrass division theorem and their compatibility. In Section \[Sec. Gevrey series in a germ\] we recall the notions of $(s,\dots,s)$– and $P$-$s$–Gevrey series, $s\geq 0$, and we develop some properties relating them. Section \[Sec. Proof of Theorem \] contains the proofs of Theorems \[Thm Main Result\] and \[Thm 2\], and also Corollary \[Coro.
1\] which explains a simple extension for higher order systems. Finally, in Section \[Sec. Examples\] we include examples, among them one showing that the hypotheses of Theorem \[Thm Main Result\] are necessary to conclude the desired Gevrey type. Nagumo norms {#Sec. Nagumo norms} ============ Let us start by fixing some notation: $\N$ is the set of natural numbers including $0$, $\N^+=\N\setminus\{0\}$, and $\R^+$ is the set of positive real numbers. For a coordinate $t$, we will write $\frac{\d}{\d t}=\d_t$ for the corresponding derivative. If $\boldsymbol{\beta}=(\beta_1,\dots,\beta_d)\in\N^d$, we use the multi-index notation $|\boldsymbol{\beta}|=\beta_1+\cdots+\beta_d$, $\bb!=\beta_1!\cdots\beta_d!$, $\boldsymbol{x}^{\boldsymbol{\beta}}=x_1^{\beta_1}\cdots x_d^{\beta_d}$ and $\frac{\d^{\boldsymbol{\beta}}}{\d \boldsymbol{x}^{\boldsymbol{\beta}}}=\frac{\d^{|\boldsymbol{\beta}|}}{\d x_1^{\beta_1}\cdots \d x_d^{\beta_d}}$. Let $d\geq 1$ be an integer. We will work with $(\C^d,\00)$ and local coordinates $\xx=(x_1,\dots,x_d)$. We also write $\xx'=(x_2,\dots,x_d)$, removing the first coordinate. $\widehat{\mathcal{O}}=\C[[\xx]]$ and $\mathcal{O}=\C\{\xx\}$ denote the rings of formal and convergent power series in $\xx$ with complex coefficients, respectively. $\mathcal{O}^\ast=\{U\in\mathcal{O} \,:\, U(\00)\neq 0\}$ will denote the corresponding group of units. Given $\hat{f}=\sum a_\bb \xx^\bb\in\widehat{\mathcal{O}}$, $o(\hat{f})$ will denote its order: if $\hat{f}=\sum_{n=0}^\infty {f}_n$, ${f}_n=\sum_{|\bb|=n} a_\bb \xx^\bb$, is written as the sum of its homogeneous components, $o(\hat{f})$ is the least integer $k$ such that ${f}_k\neq 0$. For any $\boldsymbol{r}=(r_1,\dots,r_d)\in(\R^+)^d$, $D_{\boldsymbol{r}}=\{\xx\in \C^d \,:\, |x_j|<r_j, j=1,\dots,d\}$ is the polydisc centered at the origin with polyradius $\boldsymbol{r}$. If $r_j=r$, for all $j$, we write $D_\rr=D_r^d$ as a Cartesian product instead.
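The order $o(\hat f)$ just introduced is easy to extract from a finite set of coefficients; the following sketch (our illustration, for polynomial truncations) computes it from a dictionary mapping exponent tuples to coefficients:

```python
def order(coeffs):
    """Order o(f) of a truncated power series: the least total degree |b|
    over monomials x^b with nonzero coefficient.  `coeffs` maps exponent
    tuples to coefficients; returns None for the (truncated) zero series."""
    degrees = [sum(b) for b, c in coeffs.items() if c != 0]
    return min(degrees) if degrees else None

# P = x1^2 x2 + x2^4 has order o(P) = 3
assert order({(2, 1): 1.0, (0, 4): 1.0}) == 3
assert order({(0, 0): 0.0}) is None
```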
By using the norm $|\xx|:=\max_{1\leq j\leq d} |x_j|$, we can write $$D_r^d=\{\xx\in\C^d : |\xx|<r\}.$$ Also, $\mathcal{O}(D_{\boldsymbol{r}})$ and $\mathcal{O}_b(D_{\boldsymbol{r}})$ will denote the sets of holomorphic and bounded holomorphic $\C$-valued functions on the given polydisc. We denote by $J:\mathcal{O}(D_{\boldsymbol{r}})^N\rightarrow \mathcal{O}^N$ the Taylor map sending a vector function to its Taylor series at the origin. Nagumo norms were introduced originally by M. Nagumo in [@Nagumo42] in his study of analytic partial differential equations. We will use a variant as it appears in [@CDRSS] for the case of one complex variable. Let us fix two numbers $0<\rho<r$ and consider the function $$d_r(x)=\left\{\begin{array}{ll} r-|x|&\text{ if } |x|\geq \rho,\\ r-\rho&\text{ if } |x|<\rho, \end{array}\right.$$ which satisfies $$\label{Eq dx-dx} |d_r(x)-d_r(y)|\leq |x-y|,\quad x,y\in D_r.$$ The number $\rho$ can be chosen arbitrarily, but for our purposes we will always choose $\rho=r/2$. Fix a polyradius $\rr=(r_1,\dots,r_d)$. If $f\in\mathcal{O}(D_\rr)$ and $m\in\N$, we consider the family of *Nagumo norms* $$\label{Nagumo n} \|f\|_m:=\sup_{\xx\in D_\rr} |f(\xx)|d_{r_1}(x_1)^m\cdots d_{r_d}(x_d)^m.$$ These norms depend on $\rr$, but to simplify notation we omit this dependence. There is no reason for these values to be finite; for instance, if $m=0$ this norm reduces to the supremum norm. Note that if $\|f\|_k$ is finite and $m>k$, then $$\label{Eq fm f0} \|f\|_m\leq (r_1/2)^{m-k}\cdots (r_d/2)^{m-k} \|f\|_k.$$ In particular, if $k=0$, i.e., if $f\in\mathcal{O}_b(D_\rr)$, all its Nagumo norms are finite. We collect in the next proposition the main properties of these norms we will use in the proof of Theorems \[Thm Main Result\] and \[Thm 2\], including their behavior under the shift operators $$\label{Eq.
Shifts} S_j(f)(\xx)=\left\{\begin{array}{ll} \left(f(\xx)-f(x_1,\dots,x_{j-1},0,x_{j+1},\dots,x_d)\right)/x_j&\text{ if } x_j\neq 0,\\ \frac{\d f}{\d{x_j}}(\xx)&\text{ if } x_j=0. \end{array}\right.$$ \[Prop Nagumo norms\] Consider $m,k\in \N$ and $j=1,\dots,d$. If $f,g\in\mathcal{O}(D_\rr)$, then 1. $\|f+g\|_m\leq \|f\|_m+\|g\|_m$ and $\|fg\|_{m+k}\leq \|f\|_m\|g\|_k$. 2. $\left\|\frac{\d f}{\d x_j}\right\|_{m+1}\leq e(m+1)\prod_{i\neq j} (r_i/2)\|f\|_m$. 3. $\|S_j(f)\|_m\leq \frac{4}{r_j}\|f\|_m$. The inequalities in (1) are clear from the definition. We prove (2) and (3) for the variable $x_1$. If $\xx=(x_1,\xx')\in D_\rr$, we have $$\label{Eq f fm} |f(\xx)|d_{r_1}(x_1)^m\cdots d_{r_d}(x_d)^m\leq \|f\|_m.$$ To establish (2), we use Cauchy’s formula $$\left|\frac{\d f}{\d x_1}(\xx)\right|=\left|\frac{1}{2\pi i}\int_{|\xi-x_1|=R}\frac{f(\xi,\xx')}{(\xi-x_1)^2}d\xi\right|\leq \frac{1}{R}\sup_{|\xi-x_1|=R} |f(\xi,\xx')|,$$ valid for any $0<R<r_1-|x_1|$. Note that if $|\xi-x_1|=R$, then $d_{r_1}(x_1)-R\leq d_{r_1}(\xi)$ by applying inequality (\[Eq dx-dx\]). In particular, if $0<R<d_{r_1}(x_1)$ it holds that $$|f(\xi,\xx')|\leq \|f\|_m d_{r_2}(x_2)^{-m}\cdots d_{r_d}(x_d)^{-m} (d_{r_1}(x_1)-R)^{-m}.$$ If we choose $R=\frac{d_{r_1}(x_1)}{m+1}$, we find that $$\left|\frac{\d f}{\d x_1}(\xx)\right|\leq \frac{(m+1)\|f\|_m}{d_{r_1}(x_1)^{m+1}d_{r_2}(x_2)^m\cdots d_{r_d}(x_d)^m}\left(1+\frac{1}{m}\right)^m.$$ Therefore, by using the well-known inequality $(1+1/m)^m<e$ we conclude that $$\left|\frac{\d f}{\d x_1}(\xx)d_{r_1}(x_1)^{m+1}\cdots d_{r_d}(x_d)^{m+1}\right|\leq e(m+1) d_{r_2}(x_2)\cdots d_{r_d}(x_d)\|f\|_m\leq e(m+1)\left(\frac{r_2}{2}\cdots \frac{r_d}{2}\right)\|f\|_m,$$ as we wanted to show. Finally, to prove (3), note that by inequality (\[Eq f fm\]) we have $$|f(0,\xx')|\leq \|f\|_m (r_1/2)^{-m}d_{r_2}(x_2)^{-m}\cdots d_{r_d}(x_d)^{-m}\leq \|f\|_m d_{r_1}(x_1)^{-m}d_{r_2}(x_2)^{-m}\cdots d_{r_d}(x_d)^{-m},$$ for all $\xx\in D_\rr$.
Hence, if $|x_1|\geq r_1/2$, $$\left|\frac{f(x_1,\xx')-f(0,\xx')}{x_1}\right|\leq \frac{4}{r_1}\|f\|_m d_{r_1}(x_1)^{-m}\cdots d_{r_d}(x_d)^{-m}.$$ For $|x_1|<r_1/2$ we can use the maximum modulus principle and the above estimate to see that $$|S_1(f)(\xx)|\leq \max_{|\xi|=r_1/2} |S_1(f)(\xi,\xx')|\leq \frac{4}{r_1}\|f\|_m (r_1/2)^{-m}d_{r_2}(x_2)^{-m}\cdots d_{r_d}(x_d)^{-m}.$$ Since $d_{r_1}(x_1)=r_1/2$ if $|x_1|<r_1/2$, we find in all cases that $|S_1(f)(\xx)d_{r_1}(x_1)^m\cdots d_{r_d}(x_d)^m|\leq (4/r_1) \|f\|_m$, as required. For vector-valued maps $\boldsymbol{y}=(y_1,\dots,y_N)\in\mathcal{O}(D_{\rr})^N$ and matrix-valued maps $A=(A_{i,j})\in\mathcal{O}(D_{\rr})^{N\times N}$ we extend the Nagumo norms by the rules $$\label{Eq. Nagumo N} \|\yy\|_m:=\max_{1\leq i\leq N} \|y_i\|_m,\quad \|A\|_m:=\max_{1\leq i\leq N} \sum_{j=1}^N \|A_{i,j}\|_m.$$ Then, it is immediate to check that $$\|f\cdot\yy\|_{m+k}\leq \|f\|_m\|\yy\|_k,\quad \|A\cdot\yy\|_{m+k}\leq \|A\|_m\|\yy\|_k,\quad \|A\cdot B\|_{m+k}\leq \|A\|_m\|B\|_k,$$ for all $f\in\mathcal{O}(D_\rr)$, $\yy\in\mathcal{O}(D_\rr)^N$, and $A,B\in\mathcal{O}(D_\rr)^{N\times N}$. The division algorithm {#Sec. The division algorithm} ====================== We recall here a generalized Weierstrass division theorem, following closely [@Sum; @wrt; @germs]; its original version is due to J. M. Aroca, H. Hironaka, and J. L. Vicente [@AHV75]. For the sake of completeness we include the proof for convergent series, including the compatibility of the division algorithm with the Nagumo norms introduced in the previous section. We will use the partial order $\leq$ on $\N^d$ defined by $\aa\leq \bb$ if $\a_j\leq \beta_j$, for all $j=1,\dots, d$. Thus $\aa\not\leq \bb$ means there is an index $j$ such that $\beta_j<\a_j$.
We also use the notation $$\Delta_\aa:=\left\{ \sum g_\bb \xx^{\bb}\in\widehat{\mathcal{O}} : g_\bb=0 \text{ if } \aa\leq \bb\right\}.$$ Given $\boldsymbol{\a}=(\a_1,\dots,\a_d)\in\N^{d}\setminus \{\00\}$, any power series $\hat{f}=\sum_{\bb\in\N^d} f_\bb \xx^\bb\in \widehat{\mathcal{O}}$ can be written uniquely as a series in the monomial $\xx^\aa$ as $$\label{Eq decomp f xa} \hat{f}=\sum_{n=0}^\infty \hat{f}_{\aa,n}(\boldsymbol{x})\boldsymbol{x}^{n\boldsymbol{\a}},\quad \hat{f}_{\aa,n}(\xx)=\sum_{\aa\not\leq\bb}f_{n\boldsymbol{\a}+\boldsymbol{\beta}}\boldsymbol{x}^{\boldsymbol{\beta}}\in \Delta_\aa.$$ This decomposition can be obtained by repeated use of the canonical division algorithm by $\xx^\aa$: given $\hat{f}\in\widehat{\mathcal{O}}$, there are unique $q\in\widehat{\mathcal{O}}$, $r\in \Delta_\aa$ such that $$\hat{f}=q \xx^\aa+r,\quad \text{ where } \quad q=\sum_{\aa\leq \bb} f_\bb \xx^{\bb-\aa},\quad r=\sum_{\aa\not\leq\bb} f_\bb \xx^\bb.$$ Moreover, if $f\in\mathcal{O}(D_\rr)$, then $q,r\in\mathcal{O}(D_\rr)$. We can actually use the shift operators (\[Eq. Shifts\]) introduced in the previous section to write $$q=Q_\aa(f):=S_1^{\a_1}\circ \cdots \circ S_d^{\a_d}(f),\quad r=R_\aa(f):=f-Q_\aa(f)\cdot \xx^{\aa}.$$ In particular, Proposition \[Prop Nagumo norms\] (3) shows that $$\label{Eq Qa Ra} \|Q_\aa(f)\|_m\leq \frac{4^{|\aa|}}{\rr^\aa}\|f\|_m,\quad \|R_\aa(f)\|_m\leq \|f\|_m+4^{|\aa|}\|f\|_m=(1+4^{|\aa|})\|f\|_m,\quad m\in\N.$$ By taking $m=0$ we conclude that $Q_\aa, R_\aa: \mathcal{O}_b(D_\rr)\to \mathcal{O}_b(D_\rr)$ are linear continuous maps. The generalized Weierstrass division allows us to extend the previous considerations by dividing by an element of $\widehat{\mathcal{O}}\setminus\{0\}$ with zero constant term, but not in a canonical way. We will focus on division by an analytic germ $P \in \mathcal{O}\setminus\{0\}$ with $P(\00)= 0$.
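The canonical division by a monomial described above is easy to check numerically on truncated series. The following minimal sketch (not part of the original text; hypothetical helper names, with dictionaries mapping exponent tuples to coefficients standing in for power series) computes $q$ and $r$ with $\hat f=q\,\xx^\aa+r$ and $r$ supported on $\Delta_\aa$:

```python
def divide_by_monomial(f, alpha):
    """Canonical division f = q * x^alpha + r: the quotient q collects the
    terms whose exponent dominates alpha componentwise (shifted by alpha),
    and the remainder r keeps the terms lying in Delta_alpha."""
    q, r = {}, {}
    for beta, c in f.items():
        if all(b >= a for b, a in zip(beta, alpha)):
            q[tuple(b - a for b, a in zip(beta, alpha))] = c
        else:
            r[beta] = c
    return q, r

# f = 3*x^2*y + 2*x*y^2 + 5*y, divided by the monomial x*y
f = {(2, 1): 3, (1, 2): 2, (0, 1): 5}
q, r = divide_by_monomial(f, (1, 1))
print(q)  # {(1, 0): 3, (0, 1): 2}, i.e. q = 3*x + 2*y
print(r)  # {(0, 1): 5},            i.e. r = 5*y has no term divisible by x*y
```

Uniqueness is visible here: each monomial of $f$ goes to exactly one of $q\,\xx^\aa$ or $r$, according to whether its exponent dominates $\aa$.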
The division is determined by $P$ and an injective linear form $\ell:\N^d\rightarrow\R^+$, $\ell(\aa)=\ell_1\a_1+\cdots+\ell_d\a_d$, used to order the monomials by the rule $$\xx^\aa<_\ell\xx^\bb\quad \text{ if }\quad \ell(\aa)<\ell(\bb).$$ Then, any $\hat{f}=\sum f_\bb \xx^\bb\in\widehat{\mathcal{O}}\setminus\{0\}$ has a *minimal exponent* $\nu_\ell(\hat{f})$ with respect to $\ell$, i.e., $\nu_\ell(\hat{f})=\aa$ where $\xx^\aa=\min_\ell\{\xx^\bb : f_\bb\neq 0\}$, and the minimum is taken according to $<_\ell$. The division process, for formal and convergent series, can be stated as follows, cf. [@Sum; @wrt; @germs Lemmas 2.4, 2.6]. \[Generalized Weierstrass Division\] Let $P$ and $\ell$ be as above. For every $\hat{g}\in\widehat{\mathcal{O}}$, there are unique $\hat{q}\in\widehat{\mathcal{O}}$, $\hat{r}\in \Delta_{\nu_\ell(P)}$ such that $$\hat{g}=\hat{q}P+\hat{r}.$$ Moreover, if $\rho>0$ is sufficiently small, then for every $g\in\mathcal{O}_b(D_{\rho(\ell)})$, $\rho(\ell)=(\rho^{\ell_1},\dots,\rho^{\ell_d})$, there are unique $q\in \mathcal{O}_b(D_{\rho(\ell)})$, $r\in \mathcal{O}_b(D_{\rho(\ell)})$ with $J(r)\in \Delta_{\nu_\ell(P)}$ such that $$g=qP+r,\quad Q_{P,\ell}(g):=q,\quad R_{P,\ell}(g):=r.$$ The corresponding operators $Q_{P,\ell}, R_{P,\ell}:\mathcal{O}_b(D_{\rho(\ell)})\rightarrow \mathcal{O}_b(D_{\rho(\ell)})$ are linear and continuous. In fact, if $\rho$ is sufficiently small, then $$\|Q_{P,\ell}(g)\|_m\leq \frac{2\cdot 4^{|\nu_\ell(P)|}}{\rho^{\ell(\nu_\ell(P))}}\|g\|_m,\quad \|R_{P,\ell}(g)\|_m\leq 2(1+4^{|\nu_\ell(P)|})\|g\|_m\quad \text{ for all } m\in\N.$$ By the choice of the polyradius $\rho(\ell)$, we have $|\xx^\bb|\leq \rho^{\ell(\bb)}$ if $\xx\in D_{\rho(\ell)}$. Let us write $\aa=\nu_\ell(P)$. Without loss of generality we can assume $P=\xx^\aa+\widetilde{P}$, where $\widetilde{P}\in\mathcal{O}\setminus\{0\}$ and $\xx^{\nu_\ell(\widetilde{P})}>_\ell \xx^\aa$.
Then, solving $g=qP+r$, or $q\xx^\aa+r=g-q\widetilde{P}$, for $q$ and $r$ is equivalent to finding a fixed point of the equation $$\label{Eq Aux q} q=Q_\aa(g-q\widetilde{P}),$$ together with $r=R_\aa(g-q\widetilde{P})$. If $\rho$ is sufficiently small, we can choose a constant $K>0$ such that $\|\widetilde{P}\|_0\leq K\rho^{\ell(\nu_\ell(\widetilde{P}))}$. Consider the map $\phi_g:\mathcal{O}_b(D_{\rho(\ell)})\to \mathcal{O}_b(D_{\rho(\ell)})$ given by $\phi_g(h)=Q_\aa(g-h\widetilde{P})$. By using the first inequality in (\[Eq Qa Ra\]) for $m=0$ we see that $$\|\phi_g(h_1)-\phi_g(h_2)\|_0=\left\|Q_\aa((h_1-h_2)\widetilde{P})\right\|_0\leq 4^{|\aa|}K\rho^{\ell(\nu_\ell(\widetilde{P}))-\ell(\aa)}\|h_1-h_2\|_0.$$ Thus, $\phi_g$ defines a contraction if $\rho$ is small enough, i.e., if $4^{|\aa|}K\rho^{\ell(\nu_\ell(\widetilde{P}))-\ell(\aa)}<1$, and it has a unique fixed point $q$. This determines the existence and uniqueness of $q$ and $r$. Finally, from equation (\[Eq Aux q\]) we find that $$\|Q_{P,\ell}(g)\|_m\leq \frac{4^{|\aa|}/\rho^{\ell(\aa)}}{1-4^{|\aa|}K\rho^{\ell(\nu_\ell(\widetilde{P}))-\ell(\aa)}}\|g\|_m,$$ and from $r=R_\aa(g-q\widetilde{P})$ and the bound $\|\widetilde{P}\|_0\leq K\rho^{\ell(\nu_\ell(\widetilde{P}))}$, that $$\begin{aligned} \|R_{P,\ell}(g)\|_m&\leq (1+4^{|\aa|})\left(1+\frac{\|\widetilde{P}\|_04^{|\aa|}/\rho^{\ell(\aa)}}{1-4^{|\aa|}K\rho^{\ell(\nu_\ell(\widetilde{P}))-\ell(\aa)}}\right)\|g\|_m\leq\frac{1+4^{|\aa|}}{1-4^{|\aa|}K\rho^{\ell(\nu_\ell(\widetilde{P}))-\ell(\aa)}}\|g\|_m.\end{aligned}$$ Therefore the result follows by taking additionally $\rho>0$ such that $4^{|\aa|}K\rho^{\ell(\nu_\ell(\widetilde{P}))-\ell(\aa)}<1/2$. By a repeated application of the previous proposition [@Sum; @wrt; @germs Coro. 2.5], any $\hat{f}\in\widehat{\mathcal{O}}$ can be written uniquely as $$\label{Eq. Decomposition formal} \hat{f}=\sum_{n=0}^\infty \hat{f}_{P,\ell,n} P^n,\quad \hat{f}_{P,\ell,n}\in \Delta_{\nu_\ell(P)}.$$ For the convergent case, we have a similar result that we state in the following corollary, cf.
[@Sum; @wrt; @germs Coro. 2.7]. \[Coro Decomposition convergent\] If $s>0$ is such that the operators $Q_{P,\ell}$ and $R_{P,\ell}$ are defined over $\mathcal{O}_b(D_{s(\ell)})$, there is $r=r(s)>0$, depending only on $s$, such that for any $f\in\mathcal{O}_b(D_{s(\ell)})$ we can find a unique sequence $(f_n)_{n\in\N}\subset \mathcal{O}_b(D_r^d)$ with $J(f_n)\in \Delta_{\nu_\ell(P)}$, such that $$\label{Eq. Decomposition convergent} f=\sum_{n=0}^\infty f_n P^n,\quad f_n=R_{P,\ell}\circ Q_{P,\ell}^n(f),$$ both series being convergent for $|\xx|<r$. By applying the preceding proposition we obtain $$f=\sum_{n=0}^{N-1} R_{P,\ell}(Q_{P,\ell}^n(f))P^n+Q_{P,\ell}^N(f) P^N,\quad \text{ for all } N\in\N.$$ If we choose $0<r\leq s$ such that $M=\sup_{|\xx|<r} |P(\xx)|<s^{\ell(\nu_\ell(P))}/(2\cdot 4^{|\nu_\ell(P)|})=1/b$, then we can estimate $$\sup_{|\xx|<r} \left|f(\xx)-\sum_{n=0}^{N-1} R_{P,\ell}(Q_{P,\ell}^n(f))(\xx)P(\xx)^n\right|\leq (bM)^N \sup_{\yy\in D_{s(\ell)}} |f(\yy)|.$$ The result follows by taking $N\to+\infty$. To finish this section we would like to remark that if $\hat{f}=\sum f_n P^n$ and $\hat{g}=\sum g_n P^n$, where $f_n, g_n\in \mathcal{O}_b(D_r^d)$ for a common $r$, then decomposition (\[Eq. Decomposition formal\]) of their product is given by $$\begin{aligned} \nonumber \hat{f}\cdot \hat{g}=\sum_{k=0}^\infty \sum_{j=0}^k f_j g_{k-j} P^{k}&=\sum_{k,m=0}^\infty \sum_{j=0}^k R_{P,\ell}(Q_{P,\ell}^m(f_jg_{k-j})) P^{k+m}\\ \label{Eq. Product}&=\sum_{n=0}^\infty \Big(\sum_{k=0}^n\sum_{j=0}^k R_{P,\ell} (Q_{P,\ell}^{n-k}(f_j g_{k-j}))\Big)P^n.\end{aligned}$$ Similar formulas hold for the product of more than two series. Gevrey series {#Sec. Gevrey series in a germ} ============= If $\boldsymbol{s}=(s_1,\dots,s_d)\in\R_{\geq 0}^d$ and $\hat{f}=\sum_{\bb\in\N^d} a_\bb \xx^\bb\in\widehat{\mathcal{O}}$, we say that $\hat{f}$ is an *$\boldsymbol{s}$–Gevrey series* if we can find constants $C,A>0$ such that $|a_\bb|\leq CA^{|\bb|} \bb!^{\boldsymbol{s}}$, for all $\bb\in\N^d$.
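As a quick numerical sanity check of this definition (one variable, illustrative constants only, not part of the original text): the coefficients $a_n=n!$ satisfy the bound for $s=1$ with $C=A=1$, while for $s=0$ a bound with, say, $C=1$ and $A=10$ already fails:

```python
from math import factorial

# a_n = n! is 1-Gevrey: |a_n| <= C * A**n * (n!)**s holds with C = A = s = 1
for n in range(30):
    assert factorial(n) <= 1 * 1**n * factorial(n)**1

# s = 0 (convergence) would need |a_n| <= C * A**n; with C = 1, A = 10
# this fails at n = 30, since 30! is about 2.65e32 > 10**30
assert factorial(30) > 10**30
```

The same factorial comparison lies behind the equivalence of the componentwise and the $|\bb|!$ formulations of the bound used next.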
Note that $\boldsymbol{s}=\00$ means convergence. We will be interested in the case $s_1=\dots=s_d=s>0$. Thanks to the inequalities $$\bb!\leq |\bb|!\leq d^{|\bb|}\bb!,$$ a series $\hat{f}$ is $(s,\dots,s)$–Gevrey if and only if there are constants $C,A>0$ such that $$|a_\bb|\leq CA^{|\bb|} |\bb|!^s,\quad \bb\in\N^d.$$ We denote by $\widehat{\mathcal{O}}_{s}$ the set of $(s,\dots,s)$–Gevrey series. For any $s\geq0$, $\widehat{\mathcal{O}}_{s}$ is closed under sums, products, partial derivatives, and composition, and it contains $\mathcal{O}$. This can be seen as a consequence of a more general situation in the setting of ultradifferentiable functions. In that framework the Gevrey sequence $(n!^s)_{n\in\N}$ is generalized by a sequence of positive numbers $(M_n)_{n\in\N}$ satisfying log-convexity ($M_n^2\leq M_{n-1}M_{n+1}$), stability under derivatives ($M_{n+1}\leq K^nM_n$ for some $K>0$) and the condition $M_n^{1/n}\to +\infty$ as $n\to\infty$; see e.g. [@Thilliez20008; @RainerG2016], including other stability properties in more general contexts. According to the previous remark, if $\hat{f}\in\widehat{\mathcal{O}}_{s}$, the same is true for $\hat{f}(A\xx)$, for all matrices $A\in\C^{d\times d}$, cf. [@Hibino99 Lemma 2.1]. In particular, we highlight the following simple statement we will need later. \[Lemma linear change\] Let $s\geq 0$. Then, $\hat{f}\in\widehat{\mathcal{O}}_{s}$ if and only if $\hat{f}(A\xx)\in \widehat{\mathcal{O}}_{s}$, for all $A\in\text{GL}_d(\C)$. Consider a germ $P\in\mathcal{O}\setminus\{0\}$ such that $P(\00)=0$, and $s\geq 0$. There are equivalent definitions for Gevrey series with respect to the germ $P$ [@Sum; @wrt; @germs Def./Prop. 7.5]. For simplicity, we will use the characterization given in [@CMS19 Lemma 4.1]. A series $\hat{f}\in\widehat{\mathcal{O}}$ is a *$P$-$s$–Gevrey series* if there is a polyradius $\rr$, constants $C,A>0$ and a sequence $\{f_n\}_{n\in\N}\subset\mathcal{O}_b(D_\rr)$ such that $$\label{Def.
P s Gevrey} \hat{f}=\sum_{n=0}^\infty f_n P^n,\quad \text{ where } \sup_{\xx\in D_\rr}|f_n(\xx)|\leq CA^n n!^s.$$ We will use the notation $\widehat{\mathcal{O}}^{P,s}$ for the set of $P$-$s$–Gevrey series. This clearly generalizes the notion of $s$–Gevrey series in $x_j$, uniformly in the other variables ($x_j$-$s$–Gevrey series in our notation). In fact, setting $j=1$ to fix ideas, the classical notion requires that, when we write $\hat{f}=\sum_{n=0}^\infty f_{n} x_1^n$ as a power series in $x_1$, there is a polyradius $\rr'\in \R^{d-1}$ such that $f_n\in\mathcal{O}_b(D_{\rr'})$ and $\sup_{\xx'\in D_{\rr'}}|f_n(\xx')|\leq CA^n n!^{s},$ for adequate constants $C,A$. By using the generalized Weierstrass division we can show the notion of $P$-$s$–Gevrey series is well-defined, in the sense that it is independent of the decomposition (\[Def. P s Gevrey\]). Note that it is enough to check the definition for the decomposition (\[Eq. Decomposition formal\]) induced by a given injective linear form $\ell:\N^d\rightarrow \R^+$. In fact, if (\[Def. P s Gevrey\]) holds, and since all $f_n$ are defined in a common polydisc, we can use decomposition (\[Eq. Decomposition convergent\]) to find $\rho>0$ and sequences $\{f_{n,j}\}_{j\in\N}\subset \mathcal{O}_b(D_\rho^d)$ with $J(f_{n,j})\in\Delta_{\nu_\ell(P)}$, such that $f_{n}=\sum_{j=0}^\infty f_{n,j} P^j$, valid for $|\xx|<\rho$, where $f_{n,j}=R_{P,\ell}\circ Q_{P,\ell}^j(f_n)$. Therefore, the decomposition (\[Eq. Decomposition formal\]) of $\hat{f}$ is given by $$\hat{f}=\sum_{n=0}^\infty g_n P^n,\quad g_n=\sum_{j=0}^n f_{j,n-j}\in \mathcal{O}_b(D_\rho^d),\quad J(g_n)\in \Delta_{\nu_\ell(P)},$$ and the sequence $(g_n)_{n\in\N}$ exhibits $s$–Gevrey bounds since $$|g_n(\xx)|\leq \sum_{j=0}^n \|R_{P,\ell}\| \|Q_{P,\ell}\|^{n-j} \sup_{|\yy|\leq \rho} |f_j(\yy)|\leq \sum_{j=0}^n \|R_{P,\ell}\| \|Q_{P,\ell}\|^{n-j} CA^j j!^s,\quad |\xx|<\rho,$$ as we wanted to show.
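The last estimate says that the $g_n$ inherit $s$-Gevrey growth from the $f_j$. A small numerical check of this kind of bound (the values of $\|R_{P,\ell}\|$, $\|Q_{P,\ell}\|$, $C$, $A$ below are arbitrary illustrative constants, not derived from any concrete germ $P$):

```python
from math import factorial

# b_n = sum_{j<=n} R * Q**(n-j) * C * A**j * (j!)**s  (with s = 1 here)
# stays 1-Gevrey: b_n <= C2 * A2**n * n!  for A2 > max(A, Q).
R, Q, C, A = 2.0, 3.0, 1.0, 1.5
A2 = 2.0 * max(A, Q)   # the extra factor 2 absorbs the (n + 1) summands
C2 = 2.0 * R * C
for n in range(1, 25):
    b_n = sum(R * Q**(n - j) * C * A**j * factorial(j) for j in range(n + 1))
    assert b_n <= C2 * A2**n * factorial(n)
```

The point is that geometric losses coming from $\|Q_{P,\ell}\|^{n-j}$ only enlarge the geometric constant $A$, never the factorial type $n!^s$.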
From the previous definition it is easy to deduce many properties of this type of series. We recall the following, valid for $P,Q\in\mathcal{O}\setminus\{0\}$ such that $P(\00)=Q(\00)=0$, cf. [@CMS19 Coro. 4.2, Lemma 4.3]: 1. $\widehat{\mathcal{O}}^{P,s}$ is stable under sums, products, partial derivatives, and it contains $\mathcal{O}$. 2. For any $k\in\N^+$, $\widehat{\mathcal{O}}^{P^k,ks}=\widehat{\mathcal{O}}^{P,s}$. 3. If $Q$ divides $P$, then $\widehat{\mathcal{O}}^{P,s}\subseteq \widehat{\mathcal{O}}^{Q,s}$. In particular, if $Q=U\cdot P$, $U\in\mathcal{O}^\ast$, then $\widehat{\mathcal{O}}^{P,s}=\widehat{\mathcal{O}}^{Q,s}$. 4. Let $\phi:(\C^d,\00)\to(\C^d,\00)$ be analytic, $\phi(\00)=\00$, and assume $P\circ \phi$ is not identically zero. If $\hat{f}\in \widehat{\mathcal{O}}^{P,s}$, then $\hat{f}\circ \phi\in \widehat{\mathcal{O}}^{P\circ \phi,s}$. 5. If $P(\xx)=\xx^\aa$, $\aa\in\N^d\setminus\{\00\}$, then $\hat{f}=\sum f_{\boldsymbol{\beta}}\boldsymbol{x}^{\boldsymbol{\beta}}\in \widehat{\mathcal{O}}^{\xx^\aa,s}$ if and only if there are constants $C,A>0$ satisfying $$\label{Eq. Gevrey bounds monomials} |f_{\boldsymbol{\beta}}|\leq CA^{|\boldsymbol{\beta}|}\min\{ \beta_j!^{s/\a_j} : j=1,\dots,d, \a_j\neq 0\} ,\quad \boldsymbol{\beta}\in\N^d.$$ It follows from (\[Eq. Gevrey bounds monomials\]) that if $\hat{f}\in \widehat{\mathcal{O}}^{\xx^\aa,s}$, then $\hat{f}\in\widehat{\mathcal{O}}_{s/|\aa|}$. Indeed, this is a consequence of the inequality $\min\{a_1,\dots,a_d\}\leq a_1^{\tau_1}\cdots a_d^{\tau_d}$, valid for all $a_j>0$ and $\tau_j\geq 0$ such that $\tau_1+\cdots+\tau_d=1$, by applying it to $\tau_j=\a_j/|\aa|$. This property can be generalized to an arbitrary germ, and we have the following new inclusion of rings of Gevrey series. \[Prop. Ps sss Gevrey\] Consider $P\in \mathcal{O}$ with $o(P)=k\geq 1$, where $o(P)$ denotes the order of $P$ at the origin. Then, a $P$-$s$–Gevrey series is an $(s/k,\dots,s/k)$–Gevrey series.
In symbols, $$\widehat{\mathcal{O}}^{P,s}\subseteq \widehat{\mathcal{O}}_{s/k}.$$ Write $P=\sum_{j=k}^\infty P_j$ as a sum of homogeneous polynomials. Since $P_k\neq 0$, we can find $\boldsymbol{a}\neq \00$ such that $P_k(\boldsymbol{a})\neq 0$. Choose $A\in\text{GL}_d(\C)$ having $\boldsymbol{a}$ as first column. If we set $Q(\xx)=P(A\xx)$ and we write it as a sum of its homogeneous components $Q=\sum Q_j$, then $Q_j(\xx)=P_j(A\xx)$, and $Q_k(\xx)=P_k(\boldsymbol{a})x_1^k+\cdots$, i.e., $o(Q)=k$ and $Q_k(1,0,\dots,0)\neq 0$. Consider a $P$-$s$–Gevrey series $\hat{f}$. Then $\hat{f}_0(\xx)=\hat{f}(A\xx)=\sum a_\bb \xx^\bb$ is a $Q$-$s$–Gevrey series. We now consider the change of variables $$\label{Eq. Blow up} x_1=z_1,\quad x_2=z_1z_2,\quad \dots, x_d=z_1z_d,$$ which geometrically corresponds to a local expression for the blow-up of the origin in $\C^d$ [@MM80]. If $R(\zz)=Q(\xx)$ and $\hat{f}_1(\zz)=\hat{f}_0(\xx)$, we see that $\hat{f}_1$ is an $R$-$s$–Gevrey series. On the one hand, $$R(\zz)=Q(z_1,z_1z_2,\dots,z_1z_d)=\sum_{j=k}^\infty z_1^j Q_j(1,z_2,\dots,z_d)=z_1^k U(\zz),$$ where $U$ is a unit, since $U(\00)=Q_k(1,0,\dots,0)\neq 0$. Thus, we conclude that $\hat{f}_1$ is $z_1^k$-$s$–Gevrey, or equivalently, a $z_1$-$s/k$–Gevrey series. On the other hand, since $\hat{f}_1$ is written as $$\hat{f}_1(\zz)=\hat{f}_0(z_1,z_1z_2,\dots,z_1z_d)=\sum_{\bb\in\N^d} a_\bb z_1^{|\bb|} z_2^{\beta_2}\cdots z_d^{\beta_d}=\sum_{{(n,\boldsymbol{\gamma})\in\N\times\N^{d-1}}\atop {n\geq |\boldsymbol{\gamma}|}} a_{n-|\boldsymbol{\gamma}|,\boldsymbol{\gamma}} z_1^n \zz'^{\boldsymbol{\gamma}},$$ we can find constants $C,A>0$ such that $|a_{n-|\boldsymbol{\gamma}|,\boldsymbol{\gamma}}|\leq CA^{n+|\boldsymbol{\gamma}|}n!^{s/k}$.
Therefore, in terms of the index $\bb=(n-|\boldsymbol{\gamma}|,\boldsymbol{\gamma})$, we find the bound $$|a_\bb|\leq CA^{\beta_1+2\beta_2+\cdots+2\beta_d} |\bb|!^{s/k},\quad \text{ for all } \bb\in\N^d.$$ This means that $\hat{f}_0$ and $\hat{f}$ are $(s/k,\dots,s/k)$–Gevrey series, thanks to Lemma \[Lemma linear change\]. Proposition \[Prop. Ps sss Gevrey\] and Lemma \[Lemma linear change\] show that if $\hat{f}\in \widehat{\mathcal{O}}^{\xx^\aa,s}$, then $\hat{f}(A\xx)\in\widehat{\mathcal{O}}_{s/|\aa|}$, for all $A\in \C^{d\times d}$. However, being $\xx^\aa$-$s$–Gevrey is not stable under linear changes of variables. We illustrate this by a simple example: the series $\hat{f}(x_1,x_2)=\sum_{n=0}^\infty n! (x_1x_2)^n$ is $x_1x_2$-$1$–Gevrey, but $$\hat{f}_0(\xi_1,\xi_2)=\hat{f}(\xi_1+\xi_2,\xi_1-\xi_2)=\sum_{j,k\geq 0} \binom{j+k}{j} (j+k)! (-1)^k \xi_1^{2j} \xi_2^{2k}$$ is $(1/2,1/2)$–Gevrey in $\xi_1,\xi_2$, but not $\xi_1\xi_2$-$1$–Gevrey. Proof of the main results {#Sec. Proof of Theorem } ========================= The idea of the proof is to write $$\label{Eq. Proof y} \widehat{\yy}=\sum_{n=0}^\infty y_n P^n,$$ as a power series in the germ $P$ according to decomposition (\[Eq. Decomposition formal\]) for a given injective linear form $\ell:\N^d\to\R_+$, and then to find recursively the coefficients $y_n$, $J(y_n)\in \Delta_{\nu_\ell(P)}^N$. Then, with the aid of Nagumo norms and the majorant series technique we will establish the required Gevrey property. Before we start, we note that if $F(\xx,\00)$ is identically zero, the unique formal power series solution is the zero series. Thus, we can assume $c(\xx):=F(\xx,\00)\not\equiv 0$. We divide the proof into several steps. **Step 0** (Preliminaries) Let us fix an injective linear form $\ell:\N^d\to\R_+$ and $\rho'>0$ sufficiently small such that $Q_{P,\ell},R_{P,\ell}:\mathcal{O}_b(D_{\rho'(\ell)})\to\mathcal{O}_b(D_{\rho'(\ell)})$ from Proposition \[Generalized Weierstrass Division\] are defined.
We consider the linear operators $Q,R:\mathcal{O}_b(D_{\rho'(\ell)})^N\to \mathcal{O}_b(D_{\rho'(\ell)})^N$ given by $Q(f_1,\dots,f_N)=(Q_{P,\ell}(f_1),\dots,Q_{P,\ell}(f_N))$, and $R(f_1,\dots,f_N)=(R_{P,\ell}(f_1),\dots,R_{P,\ell}(f_N))$. Then, by using the norms (\[Eq. Nagumo N\]) we see that $$\label{Eq. Q R proof} \|Q(\boldsymbol{f})\|_m\leq \|Q\|\cdot \|\boldsymbol{f}\|_m,\quad \|R(\boldsymbol{f})\|_m\leq \|R\|\cdot \|\boldsymbol{f}\|_m,$$ for all $m\in\N$ and $\boldsymbol{f}\in\mathcal{O}_b(D_{\rho'(\ell)})^N$, where to simplify notation we write $\|Q\|=2\cdot 4^{|\nu_\ell(P)|}/\rho'^{\ell(\nu_\ell(P))}$ and $\|R\|=2(1+4^{|\nu_\ell(P)|})$. The same considerations and inequalities are valid for matrix-valued maps. It will be important later to note that $\|R\|$ is independent of the radius, since we will shrink $\rho'$ during this proof. Since $F$ is analytic we can write it as a convergent power series in $\yy$, say $$F(\xx,\yy)=c(\xx)+(\mu+A(\xx))\yy+\sum_{|I|\geq 2}A_I(\xx) \yy^I,$$ where $c,A_I\in \mathcal{O}_b(D_{r'}^d)^N$, $A\in \mathcal{O}_b(D_{r'}^d)^{N\times N}$, $A(\00)=\00$, and the summation is taken over all $I=(i_1,\dots,i_N)\in\N^N$ such that $|I|=i_1+\cdots+i_N\geq 2$. Furthermore, we can find $K,\delta>0$ such that $$\label{Eq. Bound F K d} \|A_I\|_0\leq K \delta^{|I|},\quad \text{ for all } I\in \N^N.$$ Note that all the previous coefficients are defined in the common polydisc of polyradius $(r',\dots,r')$. By reducing $r'$ if necessary, we can assume that $a_1,\dots,a_d\in \mathcal{O}_b(D_{r'}^d)$, not all of them identically zero. Now, choose $0<s<\rho'$ such that $s^{\ell_j}\leq r'$, for all $j$, in order to be able to apply Corollary \[Coro Decomposition convergent\] to the previous functions.
Then, we conclude there is $r>0$ such that we can write $$a_j=\sum_{n=0}^\infty a_{j,n} P^n,\quad c=\sum_{n=0}^\infty c_n P^n,\quad A=\sum_{n=0}^\infty A_n P^n,\quad A_I=\sum_{n=0}^\infty A_{I,n}P^n,$$ where $A_{n}\in (\mathcal{O}_b(D_r^d)\cap \Delta_{\nu_\ell(P)})^{N\times N}$, $c_n, A_{I,n}\in (\mathcal{O}_b(D_r^d)\cap \Delta_{\nu_\ell(P)})^N$, $a_{j,n}\in \mathcal{O}_b(D_r^d)\cap \Delta_{\nu_\ell(P)}$, for all $j=1,\dots,d$, $I\in\N^N$ and $n\in\N$. **Step 1** (The coefficient $y_0$) Our first step is to determine the first term in (\[Eq. Proof y\]), namely, $y_0$. Note that $\widehat{\yy}(\00)=y_0(\00)=\00$ since $F(\00,\00)=\00$ and $P(\00)=0$. When we plug $\widehat{\yy}$ into equation (\[Eq. Main Eq\]) and equate in common powers of $P$, we find that $y_0$ must be an analytic solution of $$\label{Eq. y0} 0=R\Big(c+(\mu+A)y_0+\sum_{|I|\geq 2} A_{I}y_0^I\Big)=c_0+\mu y_0+R(A_0y_0)+\sum_{|I|\geq 2} R(A_{I,0}y_0^I),$$ such that $J(y_0)\in \Delta_{\nu_\ell(P)}^N$. We will prove this problem has a unique solution in $\mathcal{O}_b(D_r^d)$ if $r>0$ is taken small enough. In order to proceed, we write (\[Eq. y0\]) as the fixed point equation $$y=G(y),\quad G(y):=-\mu^{-1}R\Big(c_0+A_0 y+\sum_{|I|\geq 2} A_{I,0}y^I\Big).$$ We will show we can reduce $r>0$ and find $\epsilon>0$ sufficiently small such that $G:\overline{B}_{\epsilon}\to \overline{B}_{\epsilon}$ is well-defined and a contraction, where $\overline{B}_{\epsilon}:=\{y\in\mathcal{O}_b(D_r^d)^N : \|y\|_0\leq \epsilon, y(\00)=\00 \}$ is a closed set. Then, by the Banach fixed point theorem, $G$ has a unique fixed point. Let us check first that $G$ maps $\overline{B}_{1/(2\delta)}$ to $\mathcal{O}_b(D_r^d)^N$, where $\delta$ is as in (\[Eq. Bound F K d\]): by using (\[Eq. Q R proof\]) for $R$ and (\[Eq.
Bound F K d\]) we see that $$\begin{aligned} \|G(y)\|_0&\leq \|\mu^{-1}\|_0 \|R\|^2\Big(\|c\|_0+\|A\|_0\|y\|_0+\sum_{|I|\geq 2} K\delta^{|I|}\|y\|_0^{|I|} \Big).\end{aligned}$$ But the identity $\sum_{|I|\geq 1} \a^{|I|}=(1-\a)^{-N}-1$, $|\a|<1$, applied with $\a=\delta\|y\|_0\leq 1/2$, shows that $$\|G(y)\|_0\leq \|\mu^{-1}\|_0 \|R\|^2\Big(\|c\|_0+\|A\|_0\|y\|_0+ K\left(2^N-1\right) \Big),$$ thus $\|G(y)\|_0$ is finite, as we wanted to show. On the other hand, if $y+h, y\in \overline{B}_{1/(2\delta)}$ we also have $$\|G(y+h)-G(y)\|_0\leq \|\mu^{-1}\|_0 \|R\|^2\Big(\|A\|_0\|h\|_0+\sum_{|I|\geq 2} K\delta^{|I|}\|(y+h)^I-y^I\|_0\Big).$$ Taking into account the inequality $\|(y+h)^I-y^I\|_0\leq |I| (\|y\|_0+\|h\|_0)^{|I|-1} \|h\|_0$, which follows readily by induction on $|I|$, we obtain $$\begin{aligned} \|G(y+h)-G(y)\|_0&\leq \|\mu^{-1}\|_0 \|R\|^2\Big(\|A\|_0+K \sum_{|I|\geq 2} |I|\delta^{|I|} (\|y\|_0+\|h\|_0)^{|I|-1} \Big)\|h\|_0\\ &= \|\mu^{-1}\|_0 \|R\|^2\Big(\|A\|_0+K\delta g\big(\delta(\|y\|_0+\|h\|_0)\big) \Big)\|h\|_0,\end{aligned}$$ where $g(\a)=\sum_{|I|\geq 2} |I|\a^{|I|-1}=\frac{d}{d\a} \Big( \sum_{|I|\geq 2} \a^{|I|}\Big)=N((1-\a)^{-N-1}-1)$, for $|\a|<1$. Since $g(0)=0$, by its continuity we can choose $0<\epsilon<\min\{1,1/(2\delta)\}$ such that $\|\mu^{-1}\|_0 \|R\|^2K\delta \cdot g(\delta\epsilon)<1/4$. Also, since $A(\00)=\00$, we can reduce $r$ to have $\|\mu^{-1}\|_0 \|R\|^2\|A\|_0<1/4$. Therefore, if $\|y\|_0+\|h\|_0\leq \epsilon$, we conclude $$\|G(y+h)-G(y)\|_0\leq \frac{1}{2}\|h\|_0.$$ But $c(\00)=F(\00,\00)=\00$, and since $\epsilon$ has been fixed, we can reduce $r$ again to have $\|\mu^{-1}\|_0 \|R\|^2\|c\|_0<\epsilon/2$. By applying the previous inequality to $y=0$ we find $\|G(h)\|_0\leq \|G(h)-G(0)\|_0+\|G(0)\|_0\leq \frac{1}{2}\|h\|_0+\epsilon/2$. Thus, $G:\overline{B}_{\epsilon}\to \overline{B}_{\epsilon}$ has the desired properties. Several remarks are in order. First, if $y_0$ is the solution of equation (\[Eq. y0\]), $J(y_0)$ will be the unique formal solution of (\[Eq.
y0\]) and it is convergent. But $R(y_0)$ is another analytic solution of (\[Eq. y0\]), thus $J(y_0)=J(R(y_0))\in \Delta_{\nu_\ell(P)}^N$. Second, there is a direct way to find a solution of (\[Eq. y0\]) as follows: we first find a solution $Y_0(\xx)$ of $F(\xx,\yy(\xx))=\00$, with the aid of the holomorphic implicit function theorem (it can be applied since $F(\00,\00)=\00$ and $\frac{\d F}{\d \yy}(\00,\00)=\mu$ is invertible). Then, it follows by applying $R$ to the previous equation that $y_0=R(Y_0)$ is the solution of (\[Eq. y0\]), since we already know it is unique. **Step 2** (Recurrence equations for $y_n$) We can now assume $y_0=0$ by making the change of variables $y\mapsto y-y_0$ in the initial equation (\[Eq. Main Eq\]). In fact, after doing so, we obtain a similar PDE such that $P$ divides $c$, and we search for a formal solution $\widehat{\yy}=\sum_{n=1}^\infty y_n P^n$ which is divisible by $P$. To find the recurrence equations satisfied by the $y_n$ we start with the right-hand side of (\[Eq. Main Eq\]). By using the identity (\[Eq. Product\]) for the product of series we find $$A\cdot \widehat{\yy}=\sum_{n=1}^\infty \left(\sum_{k=1}^n \sum_{j=1}^k RQ^{n-k}(A_{k-j}y_j)\right)P^n.$$ For the non-linear term we have the decomposition $$\begin{aligned} \sum_{|I|\geq 2} A_I \widehat{\yy}^I=\sum_{k=2}^\infty\, \sum_{\ast_k} A_{I,m} \prod_{{1\leq l\leq N}\atop {1\leq j\leq i_l}} y_{l,n_{l,j}}\,P^k&=\sum_{k=2}^\infty\sum_{p=0}^\infty \, \sum_{\ast_k} RQ^p\bigg(A_{I,m} \prod_{{1\leq l\leq N}\atop {1\leq j\leq i_l}} y_{l,n_{l,j}}\bigg)\,P^{k+p}\\ &=\sum_{n=2}^\infty\left(\sum_{k=2}^n \, \sum_{\ast_k} RQ^{n-k}\bigg(A_{I,m} \prod_{{1\leq l\leq N}\atop {1\leq j\leq i_l}} y_{l,n_{l,j}}\bigg)\right)\,P^{n},\end{aligned}$$ where the sum $\sum_{\ast_k}$ is taken over all $I\in\N^N$ such that $2\leq |I|\leq k$, $m$ satisfying $0\leq m\leq k-|I|$, and $n_{l,j}\geq 1$ such that $k=m+n_{1,1}+\cdots+n_{1,i_1}+\cdots+n_{N,1}+\cdots+n_{N,i_N}$.
Note in particular that $n_{l,j}<k\leq n$, and thus no component of $y_n=(y_{n,1},\dots,y_{n,N})$ appears in the coefficient corresponding to $P^n$. For the left-hand side of (\[Eq. Main Eq\]) we can write $$\label{Eq. P L y} P\cdot L(\widehat{\yy})=\sum_{n=1}^\infty \left(L(y_{n-1}) +ny_nL(P)\right)P^n.$$ At this point we use the hypothesis $L(P)=P\cdot h$, for some $h\in\mathcal{O}_b(D_r^d)$, so the previous equation becomes $$P\cdot L(\widehat{\yy})=\sum_{n=2}^\infty \left(L(y_{n-1}) +(n-1)h y_{n-1}\right)P^n.$$ Now, we can equate both sides of equation (\[Eq. Main Eq\]) in common powers of $P$, to obtain the recurrence $$\begin{aligned} \sum_{k=2}^n RQ^{n-k}(L(y_{k-1})) +\sum_{k=2}^n(k-1)RQ^{n-k}(h y_{k-1})=&c_n+\mu y_n+\sum_{k=1}^n \sum_{j=1}^k RQ^{n-k}(A_{k-j}y_j)\\ &+\sum_{k=2}^n \, \sum_{\ast_k} RQ^{n-k}\bigg(A_{I,m} \prod_{{1\leq l\leq N}\atop {1\leq j\leq i_l}} y_{l,n_{l,j}}\bigg),\end{aligned}$$ or equivalently $$\label{Eq. mu yn} \mu y_n+R(A_0y_n)=b_n:=e_n+\sum_{k=2}^n(k-1)RQ^{n-k}(h y_{k-1}),\quad\text{ for all } n\geq1,$$ where $$\begin{aligned} e_n=&-c_n+\sum_{k=2}^n RQ^{n-k}(L(y_{k-1}))-\sum_{k=1}^{n-1} \sum_{j=1}^k RQ^{n-k}(A_{k-j}y_j)-\sum_{j=1}^{n-1} R(A_{n-j}y_j)\\ &-\sum_{k=2}^n \, \sum_{\ast_k} RQ^{n-k}\bigg(A_{I,m} \prod_{{1\leq l\leq N}\atop {1\leq j\leq i_l}} y_{l,n_{l,j}}\bigg).\end{aligned}$$ Note in particular that $b_1=-c_1$. Equation (\[Eq. mu yn\]) can be solved as follows: consider $Y_n=(\mu+A_0)^{-1}(b_n)$, where we have, if necessary, reduced $r$ to ensure $\mu+A_0(\xx)$ is invertible for all $|\xx|\leq r$. Then $R(Y_n)$ solves (\[Eq. mu yn\]), as we see by applying $R$ to $(\mu+A_0)Y_n=b_n$ and recalling that $R(b_n)=b_n$. To check uniqueness, note that if $y_{n}$ and $w_n$ are solutions, then $R((\mu+A_0)(y_n-w_n))=0$, so $(\mu+A_0)(y_n-w_n)=h_1 P$, for some $h_1\in\mathcal{O}$. Thus, $y_n-w_n=R(y_n-w_n)=R((\mu+A_0)^{-1}h_1P)=0$. In conclusion, we can find recursively the coefficients $y_n$ by means of the formulas $$\label{Eq.
yn=R bn} y_n=R\left((\mu+A_0)^{-1}(b_n)\right),$$ and equation (\[Eq. Main Eq\]) has a unique formal power series solution. **Step 3** (Majorant series) We use the majorant series technique to show that $\widehat{\yy}$ is $P$-$1$–Gevrey by proving that $\sum_{n=1}^\infty \|y_n\|_n \tau^n$ is $1$–Gevrey in $\tau$. We have chosen $r>0$ satisfying all previous requirements in order to find $y_n$ recursively. Now, we take $0<\rho <\min\{r,1\}$ satisfying $\rho^{\ell_j}<r$, $j=1,\dots,d$, in order to apply the bounds (\[Eq. Q R proof\]) for functions in $\mathcal{O}_b(D_{\rho(\ell)})^N$. Let $M=\|(\mu+A_0)^{-1}\|_0>0$. By applying the Nagumo norm $\|\cdot\|_n$ to equation (\[Eq. yn=R bn\]) and taking into account the properties developed in Propositions \[Prop Nagumo norms\] and \[Generalized Weierstrass Division\] we find that $$\begin{aligned} \frac{\|y_n\|_n}{M\|R\|}\leq& \|c_n\|_n+\sum_{k=2}^n \|R\| \|Q\|^{n-k}(\|L(y_{k-1})\|_n+(k-1)\|h y_{k-1}\|_n)\\ &+\sum_{k=1}^{n-1} \|R\| \|Q\|^{n-k}\sum_{j=1}^k \|A_{k-j}\|_{k-j} \|y_j\|_j+\sum_{j=1}^{n-1} \|R\| \|A_{n-j}\|_{n-j} \|y_j\|_j\\ &+\sum_{k=2}^n \|R\| \|Q\|^{n-k}\, \sum_{\ast_k} \|A_{I,m}\|_m \prod_{{1\leq l\leq N}\atop {1\leq j\leq i_l}} \|y_{l,n_{l,j}}\|_{n_{l,j}}.\end{aligned}$$ To bound $\|L(y_{k-1})\|_n$ effectively, note that since $0<\rho<1$, by Proposition \[Prop Nagumo norms\] (2) we have $$\left\|a_j\frac{\d y_{k-1}}{\d x_j}\right\|_n\leq \|a_j\|_{n-k} \left\|\frac{\d y_{k-1}}{\d x_j}\right\|_k\leq ek\prod_{i\neq j}(\rho^{\ell_i}/2)\|a_j\|_{n-k} \|y_{k-1}\|_{k-1}\leq ek\|a_j\|_{n-k} \|y_{k-1}\|_{k-1}.$$ But inequality (\[Eq fm f0\]) implies that $\|a_j\|_{n-k}\leq \|a_j\|_0.$ Therefore, setting $a=\|a_1\|_0+\cdots+\|a_d\|_0$, which is positive by hypothesis, we get $$\|L(y_{k-1})\|_n\leq eak\|y_{k-1}\|_{k-1}.$$ On the other hand, $$\|h y_{k-1}\|_n\leq \|h\|_{n-k+1} \|y_{k-1}\|_{k-1}\leq \|h\|_0\|y_{k-1}\|_{k-1}.$$ Thus, we find that $$\begin{aligned} \label{Eq.
yn MR} \frac{\|y_n\|_n}{M\|R\|}\leq& \|c_n\|_n+\|R\|(ea+\|h\|_0)\sum_{k=2}^n k\|Q\|^{n-k}\|y_{k-1}\|_{k-1} +\|R\|\sum_{k=1}^{n-1} \|Q\|^{n-k} \sum_{j=1}^k \|A_{k-j}\|_{k-j} \|y_j\|_j\\ \nonumber &+\|R\|\sum_{j=1}^{n-1} \|A_{n-j}\|_{n-j} \|y_j\|_j+\|R\|\sum_{k=2}^n \|Q\|^{n-k}\, \sum_{\ast_k} \|A_{I,m}\|_m \prod_{{1\leq l\leq N}\atop {1\leq j\leq i_l}} \|y_{n_{l,j}}\|_{n_{l,j}}.\end{aligned}$$ If we divide by $n!$ and use that $m!\,k!\leq (m+k)!$, we conclude that $$\begin{aligned} \frac{\|y_n\|_n}{M\|R\| n!}&\leq \frac{\|c_n\|_n}{n!}+\|R\|(ea+\|h\|_0)\sum_{k=2}^n \frac{\|Q\|^{n-k}}{(n-k)!}\, \frac{\|y_{k-1}\|_{k-1}}{(k-1)!}+\|R\|\sum_{k=1}^{n-1} \frac{\|Q\|^{n-k}}{(n-k)!} \sum_{j=1}^k \frac{\|A_{k-j}\|_{k-j}}{(k-j)!} \frac{\|y_j\|_j}{j!}\\ &+\|R\|\sum_{j=1}^{n-1} \frac{\|A_{n-j}\|_{n-j}}{(n-j)!} \frac{\|y_j\|_j}{j!}+\|R\|\sum_{k=2}^n \frac{\|Q\|^{n-k}}{(n-k)!}\, \sum_{\ast_k} \frac{\|A_{I,m}\|_m}{m!} \prod_{{1\leq l\leq N}\atop {1\leq j\leq i_l}} \frac{\|y_{n_{l,j}}\|_{n_{l,j}}}{n_{l,j}!}.\end{aligned}$$ Let us define the sequence $z_n$ recursively by $$\begin{aligned} \label{Eq. zn recurrence} \frac{z_n}{M\|R\|}=&\frac{\|c_n\|_n}{n!}+\|R\|(ea+\|h\|_0)\sum_{k=2}^n \frac{\|Q\|^{n-k}}{(n-k)!}\, z_{k-1}+\|R\|\sum_{k=1}^{n-1} \frac{\|Q\|^{n-k}}{(n-k)!} \sum_{j=1}^k \frac{\|A_{k-j}\|_{k-j}}{(k-j)!} z_j\\ \nonumber &+\|R\|\sum_{j=1}^{n-1} \frac{\|A_{n-j}\|_{n-j}}{(n-j)!} z_j+\|R\|\sum_{k=2}^n \frac{\|Q\|^{n-k}}{(n-k)!}\, \sum_{\ast_k} \frac{\|A_{I,m}\|_m}{m!} \prod_{{1\leq l\leq N}\atop {1\leq j\leq i_l}} z_{n_{l,j}},\end{aligned}$$ where $z_1=M\|R\|\|c_1\|_1$. Since the terms of the previous equation are all non-negative real numbers, we find inductively that $$\label{Eq.
yn zn}\frac{\|y_n\|_n}{n!}\leq z_n.$$ On the other hand, we consider the generating power series $$\overline{c}(\tau)=\sum_{n=1}^\infty \frac{\|c_n\|_n}{n!}\tau^n ,\quad \overline{A}(\tau)=\sum_{n=0}^\infty \frac{\|A_n\|_n}{n!}\tau^n,\quad \overline{F}(\tau,Y)=\sum_{m\geq 0, |I|\geq 2} \frac{\|A_{I,m}\|_m}{m!}\tau^mY^{|I|},$$ which are in fact convergent. For instance, the coefficient of $\tau^m Y^j$ is given and bounded by $\sum_{|I|=j}\frac{\|A_{I,m}\|_m}{m!}\leq \sum_{|I|=j} K\|R\| \frac{\|Q\|^m}{m!} \delta^{|I|}=K\|R\|\frac{\|Q\|^m}{m!} \delta^{j}\binom{j+N-1}{N-1}\leq K\|R\| \frac{\|Q\|^m}{m!} \delta^{j}2^{j+N-1}$. By using these series and equation (\[Eq. zn recurrence\]), we find that $Z(\tau)=\sum_{n=1}^\infty z_n \tau^n$ is a formal solution of the analytic equation $$\frac{Z(\tau)}{M\|R\|}=\overline{c}(\tau)+\|R\|(ea+\|h\|_0) e^{\|Q\|\tau} \tau Z(\tau)+\|R\| (e^{\|Q\|\tau}\overline{A}(\tau)-\|A_0\|_0)Z(\tau)+\|R\|e^{\|Q\|\tau}\overline{F}(\tau,Z(\tau)),$$ but the holomorphic implicit function theorem implies that this equation has a unique convergent power series solution at the origin; hence that solution must be $Z(\tau)$, which is therefore convergent. By (\[Eq. yn zn\]) the series $\sum_{n=1}^\infty \|y_n\|_n \tau^n$ is $1$–Gevrey in $\tau$ as we wanted to show. Regarding the previous proof, only some minor changes are required to establish the result. While Step 0 and Step 1 remain the same, in Step 2 the recurrence for $y_n$ takes the form $$\label{Eq. mu yn 2} \mu y_n-nR(L(P)y_n)+R(A_0y_n)=d_n:=e_n+\sum_{k=1}^{n-1} kRQ^{n-k}(L(P)y_{k}),\quad\text{ for all } n\geq1,$$ with $e_n$ as before. Then $d_n$ and $b_n$ differ only in the previous sum, which we will bound by shifting one index, as follows $$\label{Eq. last one} \left\|\sum_{k=2}^{n} (k-1)RQ^{n-k+1}(L(P)y_{k-1})\right\|_n\leq \|R\|\|Q\|\|L(P)\|_0\, \sum_{k=2}^{n} k \|Q\|^{n-k} \|y_{k-1}\|_{k-1}.$$ In the current case, the solution of (\[Eq.
mu yn 2\]) is given by $$y_n=R\left((\mu-nL(P)I_N+A_0)^{-1}(d_n)\right),$$ where $I_N$ is the identity matrix of size $N$. To make this formula meaningful, it is enough to prove $\mu-nL(P)(\xx)I_N+A_0(\xx)$ is invertible for all $n\geq 1$ and $|\xx|\leq r$, if $r$ is sufficiently small. To proceed let us recall that if $B\in\C^{N\times N}$ is such that $|B|<1$ for a matrix norm $|\cdot |$, then $I_N-B$ is invertible, $(I_N-B)^{-1}=\sum_{n=0}^\infty B^n$, and $|(I_N-B)^{-1}|\leq (1-|B|)^{-1}$. Here as before we use $|B|=\max_{1\leq i\leq N} \sum_{j=1}^N |B_{i,j}|$. Now, since $L(P)(\00)\neq 0$, we can choose a small $r>0$ such that $\a=\inf_{|x|\leq r}|L(P)(\xx)|>0$. Thus, if $n>\|\mu+A_0\|_0/\a$, we see that $\left|\mu+A_0(\xx)\right|/|nL(P)(\xx)|\leq \|\mu+A_0\|_0/n\a<1$, for all $|\xx|\leq r$, so $\mu-nL(P)+A_0$ is invertible with inverse bounded by $$\begin{aligned} |(\mu-nL(P)(\xx)+A_0(\xx))^{-1}|&=\frac{1}{n|L(P)(\xx)|}\left|\left(I_N-\frac{1}{nL(P)(\xx)}(\mu+A_0(\xx))\right)^{-1}\right|\\ &\leq \frac{1}{\a n} \frac{1}{1-\frac{\|\mu+A_0\|_0}{\a n}}=\frac{1}{\a n-\|\mu+A_0\|_0}.\end{aligned}$$ For $n\leq \|\mu+A_0\|_0/\a$, by hypothesis the remaining finite number of matrices $\mu-nL(P)(\xx)+A_0(\xx)$ are invertible at the origin. Thus, we can shrink $r$ and assume they are invertible for all $|\xx|\leq r$. In conclusion, all these matrices are invertible, and we can find $M>0$ such that $$\label{Eq. mu Mn} \|(\mu-nL(P)+A_0)^{-1}\|_0\leq M/n,\quad \text{ for all } n\geq 1.$$ At this stage we can proceed with Step 3 by using the Nagumo norms and taking into account (\[Eq. last one\]). However, the factor $M/n$ in (\[Eq. mu Mn\]) improves our bounds and shows that $$\begin{aligned} \frac{\|y_n\|_n}{M\|R\|}\leq& \|c_n\|_n+\|R\|(ea+\|Q\|\|L(P)\|_0)\sum_{k=2}^n \|Q\|^{n-k}\,\|y_{k-1}\|_{k-1}+\cdots,\end{aligned}$$ where the dots indicate the remaining terms are the same as in (\[Eq. yn MR\]). 
In this case it is not necessary to divide by $n!$: defining $z_n$ accordingly, we find $\|y_n\|_n\leq z_n$, for all $n\geq 1$, and $Z(\tau)$ satisfies the analytic equation $$\frac{Z(\tau)}{M\|R\|}=\overline{c}(\tau)+\|R\|(ea+\|Q\|\|L(P)\|_0) \frac{\tau Z(\tau)}{1-\|Q\|\tau}+\|R\| \left(\frac{\overline{A}(\tau)}{1-\|Q\|\tau}-\|A_0\|_0\right)Z(\tau)+\|R\|\frac{\overline{F}(\tau,Z(\tau))}{1-\|Q\|\tau},$$ with analytic coefficients $\overline{c}(\tau)=\sum_{n=1}^\infty \|c_n\|_n\tau^n $, $\overline{A}(\tau)=\sum_{n=0}^\infty \|A_n\|_n \tau^n$, and $\overline{F}(\tau,Y)=\sum_{m\geq 0, |I|\geq 2} \|A_{I,m}\|_m \tau^mY^{|I|}.$ Therefore, by the holomorphic implicit function theorem $Z(\tau)$ and also $\widehat{\yy}$ are convergent as required. There is a straightforward way to extend the theorems to some systems of PDEs of higher order by augmenting the size of the given equation, as we explain in the following result. \[Coro. 1\] Let $P$, $L$ and $F$ be as in Theorem \[Thm Main Result\], fix $u_1,\dots,u_{k-1}\in\mathcal{O}$, and consider the system of PDEs $$\label{Eq. main 3} (P\cdot L)^k(\yy)(\xx)+u_{k-1}(\xx) (P\cdot L)^{k-1}(\yy)(\xx)+\cdots+u_1(\xx) (P\cdot L)(\yy)(\xx)=F(\xx,\yy).$$ Then, the following statements hold: 1. If $P$ divides $L(P)$, (\[Eq. main 3\]) has a unique formal power series solution which is $P$-$1$–Gevrey. 2. If $L(P)(\00)\neq 0$, and $\sigma-nL(P)(\00)\neq 0$, for all $n\in\N$ and all solutions $\sigma$ of the polynomial equation $$\label{Eq. Eigenvalues} p_\mu\left(\sigma^{k}+u_{k-1}(\00)\sigma^{k-1}+\cdots+u_2(\00)\sigma^2+u_1(\00)\sigma\right)=0,$$ where $p_\mu$ is the characteristic polynomial of $\mu$, then (\[Eq. main 3\]) has a unique convergent power series solution.
In the variable $\boldsymbol{w}=(\boldsymbol{w}_0,\boldsymbol{w}_1,\dots,\boldsymbol{w}_{k-1})\in\C^{Nk}$, where $\boldsymbol{w}_0=\yy$, $\boldsymbol{w}_1=(P\cdot L)(\boldsymbol{w}_0), \boldsymbol{w}_2= (P\cdot L)(\boldsymbol{w}_1),\dots, \boldsymbol{w}_{k-1}=(P\cdot L)(\boldsymbol{w}_{k-2})$, (\[Eq. main 3\]) can be written as $$P\cdot L(\boldsymbol{w})=G(\xx,\boldsymbol{w}):=\left(\boldsymbol{w}_1,\boldsymbol{w}_2,\dots,\boldsymbol{w}_{k-1},F(\xx,\boldsymbol{w}_0)-u_1(\xx)\boldsymbol{w}_1-\cdots-u_{k-1}(\xx) \boldsymbol{w}_{k-1}\right),$$ which has the form of equation (\[Eq. Main Eq\]). Then, the results follow from Theorems \[Thm Main Result\] and \[Thm 2\] by noticing that $$\frac{\d G}{\d \boldsymbol{w}}(\00,\00)=\left(\begin{array}{ccccc} 0 & I_N & 0 & \cdots & 0\\ 0 & 0 & I_N & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & I_N\\ \mu & -u_1(\00)I_N & -u_2(\00)I_N & \cdots & -u_{k-1}(\00)I_N\\ \end{array}\right)\in\C^{Nk\times Nk}$$ is an invertible matrix with eigenvalues given by the solutions of (\[Eq. Eigenvalues\]). Examples {#Sec. Examples} ======== The solution of (\[Eq. Main Eq\]) is generically divergent, but there are cases where it can be convergent. This is evidenced already in the case of one variable: while Euler’s equation $x^2y'+y=x$ has the $x$-$1$–Gevrey solution $\widehat{y}(x)=\sum_{n=0}^\infty (-1)^n n!x^{n+1}$, the equation $x^2y'+y=x+x^2$ has $\widehat{y}(x)=x$ as an analytic solution. More examples can be obtained by taking $f\in\C\{z\}$ and $P\in\mathcal{O}$, $P(\00)=0$. If $L(P)=0$, then the solution of $$P(\xx)L(y)=y-f(P(\xx)),\quad \text{ is }\quad \widehat{y}(\xx)=f(P(\xx)),$$ which is convergent. \[Ex. ODE\] We consider the equation $x_1x_2\frac{\d y}{\d x_1}=\mu y-\frac{x_1}{1-x_1}$, where $\mu\neq 0$ is constant. A way to find its unique formal power series solution is to plug $\widehat{y}=\sum_{n=0}^\infty y_n(x_2)x_1^n$ into the equation and then equate common powers of $x_1$.
Thus, we find $y_0(x_2)=0$, $y_n(x_2)=(\mu-n x_2)^{-1}$, $n\geq 1$, and the formal solution is equal to $$\widehat{y}(x_1,x_2)=\sum_{n\geq1, m\geq 0} \frac{n^m}{\mu^{m+1}} x_1^n x_2^m.$$ We see that $\widehat{y}$ is $x_2$-$1$–Gevrey by direct inspection or by applying Theorem \[Thm Main Result\] to $P=x_2$ and $L=x_1\frac{\d}{\d x_1}$ since $L(P)=0$. However, $\widehat{y}$ is not $x_1$-$1$–Gevrey, i.e., $P=x_1$, $L=x_2\frac{\d}{\d x_1}$ is not a valid choice: regarded as a power series in $x_1$, its coefficient $y_n(x_2)$ is analytic only on the disc $\{x_2\in\C : |x_2|<|\mu|/n\}$, so there is no common neighborhood of the origin where all $y_n(x_2)$ are defined. Consider $\aa\in \N^d\setminus\{\00\}$, $\mu\in\C^\ast$, $\boldsymbol{\lambda}=(\lambda_1,\dots,\lambda_d)\in (\C\setminus\{0\})^d$, and $c(\xx)=\sum_{\bb\in\N^d} a_\bb \xx^\bb\in\C\{\xx\}$. Then the PDE $$\xx^\aa L_{\boldsymbol{\lambda}}(\yy)=\xx^\aa \left( \lambda_1 x_1 \d_{x_1}\yy +\cdots+ \lambda_d x_d \d_{x_d}\yy \right)=\mu\yy-c(\xx),$$ generically has a $\xx^\aa$-$1$–Gevrey formal solution $\widehat{y}$. To find it we can reduce the problem to solving a family of ODEs as follows: write $$\label{Eq. Decomposition xa} \widehat{y}(\xx)=\sum_{\aa\not\leq \bb} \xx^\bb \widehat{y}_\bb(\xx^\aa), \quad c(\xx)=\sum_{\aa\not\leq \bb} \xx^\bb c_\bb(\xx^\aa),$$ as power series whose coefficients are series in $\xx^\aa$, where $c_\bb(t)=\sum_{n=0}^{\infty} a_{n\aa+\bb} t^n$. Then, plug $\widehat{y}$ into the equation and equate the common terms in $\xx^\bb$. It follows that the initial problem is equivalent to solving the family of independent ODEs $$\left<\ll,\aa\right> t^2\widehat{y}_\bb'(t)=(\mu-\left<\ll,\bb\right>t)\widehat{y}_\bb(t)-c_\bb(t),\quad \aa\not\leq \bb.$$ Thus each $\widehat{y}_\bb$ is uniquely determined and generically $t$-$1$–Gevrey. Let us consider two explicit examples. First, take $c(\xx)=\xx^{\bb}$.
If $\left<\ll,\aa\right>=0$, the solution is $\widehat{y}(\xx)=\frac{\xx^\bb}{1-\left<\ll,\bb\right>\xx^\aa}$ which is convergent. Otherwise, after some calculations we find the solution $$\widehat{y}(\xx)=\sum_{n=0}^\infty \binom{-\left<\ll,\bb\right>/\left<\ll,\aa\right>}{n}(-1)^n n! \frac{\left<\ll,\aa\right>^n}{\mu^{n+1}} \xx^{n\aa+\bb},$$ which is $\xx^\aa$-$1$–Gevrey. In fact, $\widehat{y}$ is *$\xx^\aa$-$1$–summable in direction $\theta$*, see [@CDMS], with sum given by $$y_\theta(\xx)=\xx^{\bb-\aa}\int_0^{e^{i\theta}\infty} \left(1+\left<\ll,\aa\right>\xi/\mu\right)^{-\left<\ll,\bb\right>/\left<\ll,\aa\right>}e^{-\xi/\xx^\aa} d\xi,\quad \text{ for } \theta\neq \text{arg}(-\mu/\left<\ll,\aa\right>).$$ Note $\widehat{y}$ reduces to a polynomial when $\left<\ll,m\aa+\bb\right>=0$, for some $m\geq 0$. As second example consider $x_1x_2\left( x_1 \d_{x_1} y- x_2 \d_{x_2} y\right)=\mu y-(1-x_1)^{-1}(1-x_2)^{-1}$. Then, decomposition (\[Eq. Decomposition xa\]) takes the form $$\widehat{y}(x_1,x_2)=\sum_{n=0}^\infty y_{(n,0)}(x_1x_2) x_1^n+\sum_{n=1}^\infty y_{(0,n)}(x_1x_2) x_2^n,\quad \sum_{n,m\geq 0} x_1^nx_2^m=\frac{1}{1-x_1x_2}+\sum_{n=1}^\infty \frac{x_1^n+x_2^n}{1-x_1x_2},$$ and we find the coefficients are equal to $$y_{(n,0)}(t)=\frac{1}{(1-t)(\mu-nt)},\quad y_{(0,n)}(t)=\frac{1}{(1-t)(\mu+nt)},\quad \text{ valid for }|t|<\frac{|\mu|}{n}.$$ By using the Taylor series at the origin of the previous functions we can determine $\widehat{y}$. The relation between $\widehat{y}$ and the solution $$y_0(x_1,x_2)=\frac{1}{1-x_1x_2}\sum_{n=0}^\infty \frac{x_1^n}{\mu-n x_1x_2}+\frac{1}{1-x_1x_2}\sum_{n=1}^\infty \frac{x_2^n}{\mu +nx_1x_2},$$ which is analytic on $\{(x_1,x_2)\in\C^2 : |x_1|,|x_2|<1, x_1x_2\neq \mu/n, n\geq 1\}$, is that $y_0$ is the $x_1x_2$-$1$–sum of $\widehat{y}$, c.f. [@CM16 Example 2.1] for more details on similar calculations with these series. 
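Truncated checks of such formal solutions are easy to automate with a computer algebra system. The following sketch (our own illustration, not part of the original text; the truncation order $N=6$ is an arbitrary choice) verifies that the double series of Example \[Ex. ODE\] satisfies $x_1x_2\,\partial y/\partial x_1=\mu y-x_1/(1-x_1)$ up to the truncation order.

```python
import sympy as sp

x1, x2, mu = sp.symbols('x1 x2 mu')
N = 6  # truncation order for the check (arbitrary)

# Truncation of the formal solution from Example [Ex. ODE]:
# y = sum_{n>=1, m>=0} n^m / mu^(m+1) * x1^n * x2^m
y = sum(sp.Integer(n)**m / mu**(m + 1) * x1**n * x2**m
        for n in range(1, N) for m in range(N))

lhs = sp.expand(x1 * x2 * sp.diff(y, x1))
rhs = sp.expand(mu * y - sum(x1**n for n in range(1, N)))  # x1/(1-x1) truncated

# Both sides agree on every monomial of x2-degree below N;
# the surviving residual terms are pure truncation artifacts of order x2^N.
residual = sp.expand(lhs - rhs)
assert all(sp.degree(t, x2) >= N for t in residual.as_ordered_terms())
print("verified up to order", N)
```

The same pattern (truncate, substitute, inspect the residual) applies to the monomial examples above by replacing the series and the right-hand side accordingly.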
Theorem \[Thm Main Result\] can also be applied in other situations after suitable changes of variables of blow–up type. We illustrate this fact in the following example. \[Ex. Zhang\] Consider the system of PDEs given by $$\label{Eq. Singular} x_1^{p_1+1}c_1(\xx) \d_{x_1}\yy+\cdots+x_d^{p_d+1}c_d(\xx) \d_{x_d}\yy=\boldsymbol{F}(\xx,\yy),$$ where $\mu=\frac{\partial \boldsymbol{F}}{\partial \yy}(\00,\00)$ is invertible, and we assume $1\leq p_1\leq p_j$, for all $j$. The system has a unique formal power series solution $\widehat{\yy}$, and we show it is a $(1/p_1,\dots,1/p_1)$–Gevrey series: by using the punctual blow–up (\[Eq. Blow up\]), we find $$z_1\d_{z_1}=x_1\d_{x_1}+\cdots+x_d\d_{x_d},\quad z_j\d_{z_j}=x_j\d_{x_j},\quad j=2,\dots,d,$$ thus (\[Eq. Singular\]) takes the form $$z_1^{p_1}{L}'(\uu)=z_1^{p_1+1}{c}'_1(\zz)\d_{z_1}\uu +z_1^{p_1}\sum_{j=2}^d (z_1^{p_j-p_1}z_j^{p_j}{c}'_j(\zz)-{c}'_1(\zz)) z_j\d_{z_j}\uu=\boldsymbol{F}'(\zz,\uu),$$ where ${c}'_j(\zz)=c_j(\xx)$, $\boldsymbol{F}'(\zz,\uu)=\boldsymbol{F}(\xx,\yy)$, and $\uu(\zz)=\yy(\xx)$. Since ${L}'(z_1^{p_1})=p_1 z_1^{p_1}{c}'_1(\zz)$ and $\frac{\partial {\boldsymbol{F}'}}{\partial \uu}(\00,\00)=\mu$ is invertible, Theorem \[Thm Main Result\] implies that $\widehat{\uu}(\zz)=\widehat{\yy}(\xx)$ is a $z_1^{p_1}$-$1$–Gevrey series, and the proof of Proposition \[Prop. Ps sss Gevrey\] shows that $\widehat{\yy}$ is a $(1/p_1,\dots,1/p_1)$–Gevrey series as desired. This is the generic situation, as we can exemplify with the scalar PDE $$x_1^2\d_{x_1}y+\cdots+x_d^2\d_{x_d}y+y=\xx^\11,\quad \widehat{\yy}(\xx)=\sum_{\bb\in\N^d} (-1)^{|\bb|}|\bb|!\xx^{\bb+\11},$$ where $\11=(1,\dots,1)$, which can be seen as a multidimensional Euler equation. On the other hand, it is worth remarking that (\[Eq. Singular\]) has recently been studied by Z. Luo, H. Chen and C.
Zhang [@Zhang19] for the case $$x_1^2 c_1(\xx)\d_{x_1}u+x_2^2 c_2(\xx) \d_{x_2}u=b(\xx) u-a(\xx),$$ where $a,b,c_1,c_2$ are analytic near $\00\in\C^2$ and $b(\00)c_1(\00)c_2(\00)\neq 0$. This scalar equation has a unique formal power series solution $\hat{u}$ which is Borel–summable in the variables $(x_1,x_2)$. In particular, $\hat{u}$ is $(1,1)$–Gevrey as we showed before. \[Ex. Klimes\] Consider parametric families of ODEs unfolding $k+1$ singularities $$\label{Eq. Unfolding} (x^{k+1}-\varepsilon)\frac{d\yy}{dx}=\mu \yy-f(x,\varepsilon,\yy),$$ where $k$ is a positive integer, $\mu$ is an invertible matrix, $f$ is analytic near the origin in $\C\times\C\times\C^d$, $\frac{\d f}{\d \yy}(0,0,\00) =\00$, and $\varepsilon\in (\C,0)$ is a small parameter. These systems have been studied by M. Klimeš [@Klimes2016] for the case $k=1$ by using an adapted (unfolded) Borel-Laplace method in order to obtain parametric solutions defined and bounded on certain ramified domains attached to both singularities $x=\pm \sqrt{\varepsilon}$, at which they possess a limit in a spiraling manner. As is remarked in Section 2.4 of [@Klimes2016], the system above has a unique formal power series solution $\widehat{\yy}$ which is $\left(\frac{1}{k},\frac{k+1}{k}\right)$–Gevrey in $(x,\varepsilon)$. We can prove this readily as follows: consider the ramification $\varepsilon=\eta^{k+1}$, and afterwards the punctual blow-up $x=z$, $\eta=z\zeta$. In these coordinates equation (\[Eq. Unfolding\]) takes the form $$z^k\left(z\frac{\d \uu}{\d z}-\zeta \frac{\d \uu}{\d \zeta}\right)=(1-\zeta^{k+1})^{-1}\left(\mu \uu-f(z,z^{k+1}\zeta^{k+1},\uu)\right),$$ where $\uu(z,\zeta)=\yy(z,z^{k+1}\zeta^{k+1})=\yy(x,\varepsilon)$. By applying Theorem \[Thm Main Result\] to $P=z^k$ and $L=z\d_z-\zeta \d_\zeta$ we find that $\widehat{\uu}(z,\zeta)=\widehat{\yy}(x,\varepsilon)$ is $z^k$-$1$–Gevrey, since $L(P)=kz^k$.
Thus, $\widehat{\yy}(x,\eta^{k+1})$ is $\left(\frac{1}{k},\frac{1}{k}\right)$–Gevrey in $(x,\eta)$, and therefore $\widehat{\yy}(x,\varepsilon)$ is $\left(\frac{1}{k},\frac{k+1}{k}\right)$–Gevrey in $(x,\varepsilon)$ as we claimed. [999999]{} *The Theory of the Maximal Contact*. Memorias de Matemática del Instituto “Jorge Juan" [29]{}, Instituto “Jorge Juan" de Matemáticas, Consejo Superior de Investigaciones Cientícas, Madrid, 1975. *Multisummability of formal solutions of singular perturbation problems.* J. Differential Equations, vol. 183, (2002) 526–545. *Singular perturbation of linear systems with a regular singularity.* J. Dynam. Control. Syst. 8, 3 (2002) 313–322. *Small Divisors and Large Multipliers.* Ann. Inst. Fourier 57, no. 2 (2007) 603–628. *Monomial summability and doubly singular differential equations.* J. Differential Equations, vol. 233, (2007) 485–511. *Gevrey solutions of singularly perturbed differential equations.* J. Reine Angew. Math, vol. 518, (2000) 95–129. *An extension of Borel-Laplace methods and monomial summability*. J. Math. Anal. Appl, vol. 457, Issue 1, (2018) 461–477. *Tauberian theorems for summability in analytic functions*. Submitted to publication. Available at https://arxiv.org/abs/1903.08898. *Briot-Bouquet’s theorem in high dimension.* Publ. Mat. 58 suppl. (2014) 135–152. *Summability in a monomial for some classes of singularly perturbed partial differential equations.* Submitted to publication. Available at https://arxiv.org/abs/1803.06719. *Tauberian properties for monomial summability with applications to Pfaffian systems*. J. Differential Equations vol. 261 (2016) 7237–7255. *Nonlinear evolution PDEs in $\R^+\times \C^d$: Existence and uniqueness of solutions, asymptotic and Borel summability properties*. Annales De L’Institut Henri Poincaré (C) Analyse Non Linéaire 25 (2007) 795–823. *Short time existence and Borel summability in the Navier–Stokes equation in $\R^3$*. Comm. 
Partial Differential Equations 34(7–9) (2009) 785–817. *Divergence property of formal solutions for singular first order linear partial differential equations.* Publ. Res. Inst. Math. Sci. 35 (1999), no. 6, 893–919. *Formal Gevrey theory for singular first order semi-linear partial differential equations.* Osaka J. Math. 41, no. 1 (2004) 159–191. *Borel Summability of Divergent Solutions for Singular First–order Partial Differential Equations with Variable Coefficients. Part I.* J. Differential Equations 227, no. 2 (2006) 499–533. *Borel Summability of Divergent Solutions for Singular First–order Partial Differential Equations with Variable Coefficients. Part II.* J. Differential Equations 227, no. 2 (2006) 534–563. *Formal and convergent power series solutions of singular partial differential equations.* Trans. Amer. Math. Soc. 256 (1979) 163–183. *Confluence of Singularities of Nonlinear Differential Equations via Borel-Laplace Transformations.* J. Dyn. Control Syst. 22 (2016) 285–324. *On parametric Gevrey asymptotics for some nonlinear initial value Cauchy problems*. J. Differential Equations 259, no. 10 (2015) 5220–5270. *On the summability of divergent power series satisfying singular PDEs.* C. R. Math. Acad. Sci. Paris 357, no. 3 (2019) 258–262. *Holonomie et intégrales premières.* Ann. Sci. École Norm. Sup. 4, no. 13 (1980) no. 4, 469–523. *Asymptotic expansions and summability with respect to an analytic germ.* Publ. Mat. 63 (2019) 3–79. *Über das anfangswertproblem partieller differentialgleichunge*. Jap. J. Math. 18 (1942) 41–47. *On the theorem of Cauchy-Kowalevsky for first order linear differential equations with degenerate principal symbols*. Proc. Japan Acad. Ser. A Math. Sci. 49 (1973) 83–87. *Borel summability of formal solutions of some first order singular partial differential equations and normal forms of vector fields.* J. Math. Soc. Japan 57, no. 2 (2005) 415–460. 
*Multisummability of formal solutions of some linear partial differential equations*. J. Differential Equations 185, no. 2 (2002) 513–549. *Equivalence of stability properties for ultradifferentiable function classes.* Rev. R. Acad. Cienc. Exactas Fis. Nat. Ser. A Math. RACSAM 110, no. 1 (2016) 17–32. *Convergence of formal solutions of meromorphic differential equations containing parameters.* Funkcial. Ekvac. 37 (1994) 395–400. *On Quasianalytic Local Rings.* Expo. Math. 26, no. 1 (2008) 1–23. *Multisummability of formal solutions to the Cauchy Problem for some linear partial differential equations*. J. Differential Equations 255, no. 10 (2013) 3592–3637. *Formal Gevrey Class of formal power series solution for singular first order linear partial differential operators*. Tokyo J. Math. [23]{}, no. 2 (2000) 537–561. *Parametric Borel summability for some semilinear system of partial differential equations.* Opuscula Math. 35, no. 5 (2015) 825–845. [^1]: The first author was supported by the projects “Análisis complejo, ecuaciones diferenciales y sumabilidad" (IN.BG.086.20.002 Univ. Sergio Arboleda) and “Álgebra y Geometría en Dinámica Real y Compleja III" (MTM2013-46337-C2-1-P Ministerio de Economía y Competitividad, Spain).
--- author: - | Peng Wang, Bingliang Jiao, Lu Yang, Yifei Yang, Shizhou Zhang, Wei Wei, Yanning Zhang\ School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an, China\ bibliography: - 'egbib.bib' title: 'Vehicle Re-identification in Aerial Imagery: Dataset and Approach' ---
--- abstract: 'Recently some authors have pointed out that there exist nonclassical correlations which are more general, and possibly more fundamental, than entanglement. For these general quantum correlations and their classical counterparts, under the action of decoherence, we identify three general types of dynamics that include a peculiar sudden change in their decay rates. We show that, under suitable conditions, the classical correlation is unaffected by decoherence. Such dynamic behavior suggests an operational measure of both classical and quantum correlations that can be computed without any extremization procedure.' author: - 'J. Maziero' - 'L. C. Céleri' - 'R. M. Serra' - 'V. Vedral' title: Classical and quantum correlations under decoherence --- It is largely accepted that quantum mutual information is the information-theoretic measure of the total correlation in a bipartite quantum state. Groisman *et al.* [@GroPoWi], inspired by Landauer’s erasure principle [@Landauer], gave an operational definition of correlations based on the amount of noise required to destroy them. From this definition, they proved that the total amount of correlation in any bipartite quantum state ($\rho_{AB}$) is equal to the quantum mutual information \[$\mathcal{I}(\rho_{A:B})=S(\rho_{A})+S(\rho_{B})-S(\rho_{AB})$, where $S(\rho )=-\operatorname*{Tr}(\rho\log_{2}\rho)$ is the von Neumann entropy and $\rho_{A(B)}=\operatorname*{Tr}_{B(A)}(\rho_{AB})$ is the reduced density operator of the partition $A$($B$)\]. Another argument in favor of the claim that quantum mutual information is a measure of the total correlation in a bipartite quantum state was given by Schumacher and Westmoreland [@SchuWest]. They showed that, if Alice and Bob share a correlated composite quantum system that is used as the key for a one-time pad cryptographic system, the maximum amount of information that Alice can send securely to Bob is the quantum mutual information of the shared correlated state. 
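As a concrete aside (our own illustration, not part of the original text), the quantum mutual information $\mathcal{I}(\rho_{A:B})=S(\rho_A)+S(\rho_B)-S(\rho_{AB})$ can be evaluated numerically for any two-qubit density matrix. The minimal numpy sketch below uses a Werner state with the arbitrary choice $c=0.6$ as input.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalue spectrum."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                       # drop numerically zero eigenvalues
    return float(-np.sum(w * np.log2(w)))

def partial_trace(rho, keep):
    """Reduced state of a two-qubit density matrix; keep=0 -> rho_A, keep=1 -> rho_B."""
    r = rho.reshape(2, 2, 2, 2)
    return r.trace(axis1=1, axis2=3) if keep == 0 else r.trace(axis1=0, axis2=2)

def mutual_information(rho):
    """I(A:B) = S(rho_A) + S(rho_B) - S(rho_AB)."""
    return (von_neumann_entropy(partial_trace(rho, 0))
            + von_neumann_entropy(partial_trace(rho, 1))
            - von_neumann_entropy(rho))

# Werner state c|Psi-><Psi-| + (1-c)/4 * I, a standard two-qubit test case
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
c = 0.6
rho = c * np.outer(psi_minus, psi_minus) + (1 - c) / 4 * np.eye(4)
print(round(mutual_information(rho), 4))
```

For $c=0.6$ the marginals are maximally mixed, so $S(\rho_A)=S(\rho_B)=1$ and $\mathcal{I}=2-S(\rho_{AB})\approx 0.643$.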
We are interested here in the dynamics of both quantum and classical correlations under the action of noisy environments. For these purposes, it is reasonable to assume that the total correlation contained in a bipartite quantum state may be separated as $\mathcal{I}(\rho_{A:B})=\mathcal{Q}(\rho_{AB})+\mathcal{C}(\rho_{AB})$, owing to the distinct nature of quantum ($\mathcal{Q}$) and classical ($\mathcal{C}$) correlations [@GroPoWi; @HenVed; @Horo1; @Horo2]. Some proposals for characterization and quantification of $\mathcal{Q}$ and $\mathcal{C}$ in a composite quantum state have appeared in the last few years [@GroPoWi; @HenVed; @OllZur; @Horo1; @Winter; @Piani]. The quantum correlation, $\mathcal{Q}(\rho_{AB})$, between partitions $A$ and $B$ of a composite state can be quantified by the so-called quantum discord, $\mathcal{D}(\rho_{AB})$, introduced by Ollivier and Zurek [@OllZur]. Such a quantum correlation is more general than entanglement, in the sense that separable mixed states can have a nonclassical correlation that leads to a nonzero discord. It measures general nonclassical correlations, including entanglement. For separable mixed states (unentangled states) with nonzero discord, this quantum correlation provides a speed up, in performing some tasks, over the best known classical counterpart, as was shown theoretically [@Caves] and experimentally [@White] in a non-universal model of quantum computation. Therefore, such a nonclassical correlation might have a significant role in quantum information protocols. For pure states, we have a special situation where the quantum correlation is equal to the entropy of entanglement and also to the classical correlation. In other words, $\mathcal{Q}(\rho_{AB})=\mathcal{C}(\rho_{AB})=\mathcal{I}(\rho_{A:B})/2$ [@GroPoWi; @HenVed]. In this case, the total amount of quantum correlation is captured by an entanglement measure. 
On the other hand, for mixed states, the entanglement is only a part of this more general nonclassical correlation, $\mathcal{Q}(\rho_{AB})$ [@OllZur; @White; @Caves]. A quantum composite state may also have a classical correlation, $\mathcal{C}(\rho_{AB})$, which for bipartite quantum states can be quantified via the measure proposed by Henderson and one of us [@HenVed]. Since we assume that the total correlation is given by the quantum mutual information, if we adopt the definition of classical correlation given in [@HenVed], $\mathcal{Q}(\rho_{AB})$ turns out to be identical to the definition of quantum discord in Ref. [@OllZur]; in other words, $\mathcal{Q}(\rho_{AB})=\mathcal{D}(\rho_{AB})=\mathcal{I}(\rho_{A:B})-\mathcal{C}(\rho_{AB})$, as already noted in Ref. [@Luo]. We have identified three different kinds of dynamic behavior of $\mathcal{C}$ and $\mathcal{Q}$ under decoherence, which depend on the geometry of the initial composite state and on the noise channel: $(i)$ $\mathcal{C}$ remains constant and $\mathcal{Q}$ decays monotonically over time; $(ii)$ $\mathcal{C}$ suffers a sudden change in behavior, decaying monotonically until a specific parametrized time, $p_{SC}$ (to be defined below), and remaining constant thereafter, while $\mathcal{Q}$ has an abrupt change in its rate of decay at $p_{SC}$, becoming greater than $\mathcal{C}$ within a certain parametrized time interval; and $(iii)$ $\mathcal{C}$ and $\mathcal{Q}$ decay monotonically. For two-qubit states with maximally mixed marginals we show which conditions lead to the different types of dynamic behavior, for certain noise channels (i.e., phase flip, bit flip, and bit-phase flip). We also recognize a symmetry among these channels and provide a necessary condition for $\mathcal{C}$ to remain constant under decoherence, which enables us to define an operational measure for both classical and quantum correlations.
Let us start with the definition of classical correlation [@HenVed]:$$\mathcal{C}(\rho_{AB})\equiv\underset{\left\{ \Pi_{j}\right\} }{\max}\left[ S(\rho_{A})-S_{\left\{ \Pi_{j}\right\} }(\rho_{\left. A\right\vert B})\right] , \label{CC}$$ where the maximum is taken over the set of projective measurements $\left\{ \Pi_{j}\right\} $ [@note1] on subsystem $B$ [@note2], $S_{\left\{ \Pi_{j}\right\} }(\rho_{\left. A\right\vert B})={\textstyle\sum\nolimits_{j}} q_{j}S\left( \rho_{A}^{j}\right) $ is the conditional entropy of subsystem $A$, given the knowledge (measurement) of the state of subsystem $B$, $\rho _{A}^{j}=\left. \operatorname*{Tr}_{B}\left( \Pi_{j}\rho_{AB}\Pi_{j}\right) \right/ q_{j}$, and $q_{j}=\operatorname*{Tr}_{AB}\left( \rho_{AB}\Pi _{j}\right) $. We consider the scenario of two qubits under local decoherence channels. The evolution of such a system under local environments may be described by a completely positive trace-preserving map, $\varepsilon\left( \cdot\right) $, which, written in the operator-sum representation, is given by [@NieChu; @Dav1] $$\varepsilon\left( \rho_{AB}\right) =\sum_{i,j}\Gamma_{i}^{(A)}\Gamma _{j}^{(B)}\rho_{AB}\Gamma_{j}^{(B)\dagger}\Gamma_{i}^{(A)\dagger}\text{,}$$ where $\Gamma_{i}^{(k)}$ ($k=A,B$) are the Kraus operators that describe the noise channels $A$ and $B$. For simplicity, let us consider a class of states with maximally mixed marginals ($\rho_{A(B)}=\mathbf{1}_{A\left( B\right) }/2$), described by $$\rho_{AB}=\frac{1}{4}\left( \mathbf{1}_{AB}+\sum_{i=1}^{3}c_{i}\sigma_{i}^{A}\otimes\sigma_{i}^{B}\right) ,\label{stateMMM}$$ where $\sigma_{i}^{k}$ is the standard Pauli operator in direction $i$ acting on the subspace $k=A,B$, $c_{i}\in\mathbb{R}$ such that $0\leq\left\vert c_{i}\right\vert \leq1$ for $i=1,2,3$, and $\mathbf{1}_{A(B)}$ is the identity operator in subspace $A$($B$). The state in Eq.
(\[stateMMM\]) represents a considerable class of states including the Werner ($\left\vert c_{1}\right\vert =\left\vert c_{2}\right\vert =\left\vert c_{3}\right\vert =c$) and Bell ($\left\vert c_{1}\right\vert =\left\vert c_{2}\right\vert =\left\vert c_{3}\right\vert =1$) basis states. *Phase flip channel.* This is a quantum noise process with loss of quantum information without loss of energy. For this channel, the Kraus operators are given by [@NieChu; @Dav1] $\Gamma_{0}^{(A)}=diag(\sqrt {1-p_{A}/2},\sqrt{1-p_{A}/2})\otimes\mathbf{1}_{B}$, $\Gamma_{1}^{(A)}=diag(\sqrt{p_{A}/2},-\sqrt{p_{A}/2})\otimes\mathbf{1}_{B}$, $\Gamma _{0}^{(B)}=\mathbf{1}_{A}\otimes diag(\sqrt{1-p_{B}/2},\sqrt{1-p_{B}/2})$, and $\Gamma_{1}^{(B)}=\mathbf{1}_{A}\otimes diag(\sqrt{p_{B}/2},-\sqrt{p_{B}/2})$, written in the subsystem basis $\left\{ |0\rangle_{k},|1\rangle_{k}\right\} ,$ $k=A,B$. We are using $p_{A(B)}$ ($0\leq p_{A(B)}\leq1$) as parametrized time in channel $A(B)$. We consider here the symmetric situation in which the decoherence rate is equal in both channels, so $p_{A}=p_{B}\equiv p$. The description of the dynamical evolution of the system under the action of a decoherence channel using the parametrized time $p$ is more general than that using a specific functional dependence on time $t$, in the sense that it accounts for a large range of physical scenarios. For example, for the phase damping channel (the phase damping and phase flip channels are the same quantum operation [@NieChu]), we have $p=1-\exp(-\gamma t)$, where $\gamma$ is the phase damping rate [@Eberly06]. The density operator in Eq. 
(\[stateMMM\]) under the multimode noise channel, $\varepsilon(\rho_{AB})$, has the eigenvalue spectrum: $$\begin{aligned} \lambda_{1} & =\frac{1}{4}\left[ 1-\alpha-\beta-\gamma\right] ,\quad \lambda_{2}=\frac{1}{4}\left[ 1-\alpha+\beta+\gamma\right] ,\nonumber\\ \lambda_{3} & =\frac{1}{4}\left[ 1+\alpha-\beta+\gamma\right] ,\quad \lambda_{4}=\frac{1}{4}\left[ 1+\alpha+\beta-\gamma\right] ,\label{lamb}\end{aligned}$$ with $\alpha=\left( 1-p\right) ^{2}c_{1}$, $\beta=\left( 1-p\right) ^{2}c_{2}$, $\gamma=c_{3}$, and the von Neumann entropies of the marginal states remain constant under phase flip for any $p$, $S\left[ \operatorname*{Tr}_{A(B)}\varepsilon\left( \rho_{AB}\right) \right] =1$. To compute the classical correlation (\[CC\]) under phase flip, we take the complete set of orthonormal projectors $\left\{ \Pi_{j}=\left\vert \Theta _{j}\right\rangle \left\langle \Theta_{j}\right\vert ,j=\parallel ,\perp\right\} $, where $\left\vert \Theta_{\parallel}\right\rangle \equiv\cos(\theta)\left\vert 0\right\rangle +e^{i\phi}\sin(\theta)\left\vert 1\right\rangle $ and $\left\vert \Theta_{\perp}\right\rangle \equiv e^{-i\phi }\sin(\theta)\left\vert 0\right\rangle -\cos(\theta)\left\vert 1\right\rangle $. Then the reduced measured density operator of subsystem $A$ under phase flip, $\widetilde{\rho}_{A}^{j}=\left. \operatorname*{Tr}_{B}\left[ \Pi _{j}\varepsilon(\rho_{AB})\Pi_{j}\right] \right/ q_{j}$, will have the following eigenvalue spectrum: $$\begin{aligned} \xi_{1,2}^{(j)} & =\frac{1}{4}\left\{ 2\pm\left[ 2\gamma^{2}+\alpha ^{2}+\beta^{2}+\left( 2\gamma^{2}-\alpha^{2}-\beta^{2}\right) \cos\left( 4\theta\right) \right. \right. \nonumber\\ & \left. \left. +2(\alpha^{2}-\beta^{2})\cos\left( 2\phi\right) \sin ^{2}\left( 2\theta\right) \right] ^{1/2}\right\} \text{,}\label{eigenval}\end{aligned}$$ and $q_{j}=1/2$, for $j=\parallel,\perp$. From Eq.
(\[CC\]), it follows that $$\mathcal{C}\left[ \varepsilon(\rho_{AB})\right] =1-\underset{\theta,\phi }{\min}\left[ S\left( \widetilde{\rho}_{A}^{\parallel}\right) \right] ,\label{CCM}$$ since $\xi_{1,2}^{(\parallel)}=\xi_{1,2}^{(\perp)}$ and hence $S\left( \widetilde{\rho}_{A}^{\parallel}\right) =S\left( \widetilde{\rho}_{A}^{\perp}\right) $. The classical correlation and the quantum correlation under phase flip may be written, respectively, as $$\begin{aligned} \mathcal{C}\left[ \varepsilon\left( \rho_{AB}\right) \right] & =\sum_{k=1}^{2}\frac{1+(-1)^{k}\chi}{2}\log_{2}(1+(-1)^{k}\chi),\label{Cpf}\\ \mathcal{Q}\left[ \varepsilon\left( \rho_{AB}\right) \right] & =2+\sum_{k=1}^{4}\lambda_{k}\log_{2}\lambda_{k}-\mathcal{C}\left[ \varepsilon(\rho_{AB})\right] ,\label{Qpf}\end{aligned}$$ where $\chi=\max\left( \left\vert \alpha\right\vert ,\left\vert \beta\right\vert ,\left\vert \gamma\right\vert \right) $, which depends on the relation between the coefficients $c_{i}$ in state (\[stateMMM\]) and on the parametrized time $p$. $(i)$ If $\left\vert c_{3}\right\vert \geq\left\vert c_{1}\right\vert ,\left\vert c_{2}\right\vert $ in (\[stateMMM\]), the minimum in (\[CCM\]) is obtained by $\theta=\phi=0$. The classical and the quantum correlations under phase flip are given by Eqs. (\[Cpf\]) and (\[Qpf\]), respectively, with $\chi=\left\vert c_{3}\right\vert $. In this case, the classical correlation $\mathcal{C}\left[ \varepsilon(\rho_{AB})\right] $ is constant (it does not depend on the parametrized time $p$) and equal to the mutual information of the completely decohered state ($p=1$), $\mathcal{C}(\rho_{AB})=\mathcal{C}\left[ \varepsilon(\rho_{AB})\right] =\mathcal{I}\left[ \left. \varepsilon(\rho_{A:B})\right\vert _{p=1}\right] $, while the quantum correlation \[Eq. (\[Qpf\])\] decays monotonically.
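The maximization in Eq. (\[CC\]) can also be brute-forced numerically, which makes the constancy in case $(i)$ easy to check. The sketch below (our own illustration, not part of the original text; the parameters $c_1=c_2=0.2$, $c_3=0.4$ and the finite measurement grid are arbitrary choices) applies the phase flip channel in Kraus form and scans projective measurements on $B$.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def S(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def state(c1, c2, c3):
    """Two-qubit state with maximally mixed marginals, Eq. (stateMMM)."""
    return (np.eye(4) + c1 * np.kron(sx, sx) + c2 * np.kron(sy, sy)
            + c3 * np.kron(sz, sz)) / 4

def phase_flip(rho, p):
    """Local phase flip channel with equal parametrized time p on A and B."""
    K = [np.sqrt(1 - p / 2) * I2, np.sqrt(p / 2) * sz]
    return sum(np.kron(Ka, Kb) @ rho @ np.kron(Ka, Kb).conj().T
               for Ka in K for Kb in K)

def classical_correlation(rho, grid=41):
    """Brute-force the maximization in Eq. (CC) over projectors on B."""
    rhoA = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    best = -np.inf
    for th in np.linspace(0, np.pi / 2, grid):
        for ph in np.linspace(0, np.pi, grid):
            ket = np.array([np.cos(th), np.exp(1j * ph) * np.sin(th)])
            P = np.outer(ket, ket.conj())
            cond = 0.0
            for Pj in (P, I2 - P):       # the two measurement outcomes
                M = np.kron(I2, Pj)
                sigma = M @ rho @ M
                q = sigma.trace().real
                if q > 1e-12:
                    cond += q * S(sigma.reshape(2, 2, 2, 2)
                                       .trace(axis1=1, axis2=3) / q)
            best = max(best, S(rhoA) - cond)
    return best

# case (i): |c3| >= |c1|, |c2|  ->  C should not depend on p
rho0 = state(0.2, 0.2, 0.4)
vals = [classical_correlation(phase_flip(rho0, p)) for p in (0.0, 0.4, 0.8)]
print([round(v, 4) for v in vals])
```

For this choice $\chi=|c_3|=0.4$ for every $p$, so all three values agree with the closed form (\[Cpf\]), $\mathcal{C}\approx 0.1187$.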
$(ii)$ If $\left\vert c_{1}\right\vert \geq\left\vert c_{2}\right\vert ,\left\vert c_{3}\right\vert \ $or $\left\vert c_{2}\right\vert \geq\left\vert c_{1}\right\vert ,\left\vert c_{3}\right\vert $; and $\left\vert c_{3}\right\vert \neq0$, we have a peculiar dynamics with a sudden change in behavior. $\mathcal{C}$ decays monotonically until a specific parametrized time, $p_{SC}=1-\sqrt{\left. \left\vert c_{3}\right\vert \right/ \max(\left\vert c_{1}\right\vert ,\left\vert c_{2}\right\vert )}$, and from then on $\mathcal{C}$ remains constant. For $p<p_{SC}$, the minimum in (\[CCM\]) is achieved when $\theta=\pi/4,$ $\phi=0$ (if $\left\vert c_{1}\right\vert \geq\left\vert c_{2}\right\vert $) or $\phi=\pi/2$ (if $\left\vert c_{1}\right\vert <\left\vert c_{2}\right\vert $), and $\chi=\left( 1-p\right) ^{2}\max(\left\vert c_{1}\right\vert ,\left\vert c_{2}\right\vert )$. Thus, $\mathcal{C}$ decays monotonically. On the other hand, for $p\geq p_{SC}$, the choice $\theta=\phi=0$ leads to the minimum in (\[CCM\]) and $\chi=\left\vert c_{3}\right\vert $. Then $\mathcal{C}$ suddenly becomes constant at $p=p_{SC}$, $\mathcal{C}\left[ \left. \varepsilon(\rho_{AB})\right\vert _{p\geq p_{SC}}\right] =\mathcal{I}\left[ \left. \varepsilon(\rho_{A:B})\right\vert _{p=1}\right] $, and the decay rate of $\mathcal{Q}$ changes suddenly at $p=p_{SC}$. In Fig. 1, we depict this peculiar behavior for a given choice of parameters and, in Fig. 2, we show the values of the sudden change parametrized time, $p_{SC}$, as a function of $c_{1}$ and $c_{2}$. $(iii)$ Finally, if $\left\vert c_{3}\right\vert =0$, we have a monotonic decay of both correlations $\mathcal{C}$ and $\mathcal{Q}$. ![](Fig01.eps) The dynamic behavior of correlations under the phase flip channel described in Fig. 1 is quite general. Such a sudden change in behavior occurs also when we consider the bit flip and the bit-phase flip channels \[of course under other conditions on the $\left. c_{k}\right.
$’s in state (\[stateMMM\])\]. Moreover, these results contradict the early conjecture that $\mathcal{C}\geq\mathcal{Q}$ for any quantum state [@GroPoWi; @HenVed; @Horo3]. Here, we have shown that the quantum correlation may be greater than the classical one for some states, for example $\left. \varepsilon(\rho_{A:B})\right\vert _{p=p_{SC}}$. It is worth mentioning that this peculiar sudden change in behavior is a different phenomenon from entanglement sudden death [@Dav1; @Sudd; @Eberly]. Indeed, it seems that these correlations do not present sudden death [@Werlang]. ![](Fig02.eps) *Bit flip channel.* The Kraus operators are [@NieChu; @Dav1] $\Gamma_{0}^{(A)}=diag(\sqrt{1-p/2},\sqrt{1-p/2})\otimes\mathbf{1}_{B}$, $\Gamma_{1}^{(A)}=\sqrt{p/2}\sigma_{x}^{(A)}\otimes\mathbf{1}_{B}$, $\Gamma_{0}^{(B)}=\mathbf{1}_{A}\otimes diag(\sqrt{1-p/2},\sqrt{1-p/2})$, and $\Gamma_{1}^{(B)}=\mathbf{1}_{A}\otimes\sqrt{p/2}\sigma_{x}^{(B)}$. The eigenvalue spectrum of $\varepsilon\left( \rho_{AB}\right) $ is given by (\[lamb\]), where the variables now take the form $\alpha=c_{1}$, $\beta=\left( 1-p\right) ^{2}c_{2}$, and $\gamma=\left( 1-p\right) ^{2}c_{3}$. The correlations can again be written as (\[Cpf\]) and (\[Qpf\]). The dynamic behavior of $\mathcal{C}$ and $\mathcal{Q}$ under bit flip is symmetrical to that for the phase flip channel (just exchanging $c_{1}$ and $c_{3}$). Type $(i)$ dynamics is obtained when $\left\vert c_{1}\right\vert \geq\left\vert c_{2}\right\vert ,\left\vert c_{3}\right\vert $. Type $(ii)$ occurs for $\left\vert c_{3}\right\vert \geq\left\vert c_{1}\right\vert ,\left\vert c_{2}\right\vert \ $or $\left\vert c_{2}\right\vert \geq\left\vert c_{1}\right\vert ,\left\vert c_{3}\right\vert $, and $\left\vert c_{1}\right\vert \neq0$, with a sudden change in behavior of $\mathcal{C}$ and $\mathcal{Q}$ at $p_{SC}=1-\sqrt{\left. \left\vert c_{1}\right\vert \right/ \max(\left\vert c_{2}\right\vert ,\left\vert c_{3}\right\vert )}$.
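The type $(ii)$ sudden change can be reproduced directly from Eq. (\[Cpf\]): as $p$ grows, $\chi$ switches from the decaying branch $(1-p)^{2}\max(|c_{1}|,|c_{2}|)$ to the constant $|c_{3}|$ exactly at $p_{SC}$. A short sketch under the phase-flip parametrization, with hypothetical coefficients of a valid Bell-diagonal state:

```python
import numpy as np

def classical_corr(c1, c2, c3, p):
    """Classical correlation under phase flip, Eq. (Cpf),
    with chi = max(|alpha|, |beta|, |gamma|)."""
    chi = max(abs((1 - p) ** 2 * c1), abs((1 - p) ** 2 * c2), abs(c3))
    C = 0.0
    for k in (1, 2):
        x = 1 + (-1) ** k * chi
        if x > 0:                      # 0 log 0 -> 0
            C += 0.5 * x * np.log2(x)
    return C

# type (ii): |c1| > |c3| > 0 -> C decays until p_SC, then stays constant
c1, c2, c3 = 0.6, 0.0, 0.3             # hypothetical coefficients
p_sc = 1 - np.sqrt(abs(c3) / max(abs(c1), abs(c2)))
ps = np.linspace(0, 1, 201)
C = np.array([classical_corr(c1, c2, c3, p) for p in ps])

# after p_SC the p-independent term chi = |c3| dominates: C is flat
assert np.allclose(C[ps >= p_sc], classical_corr(c1, c2, c3, 1.0))
```

Plotting `C` against `ps` reproduces the kink of Fig. 1: a monotonic decay followed by a plateau at the fully decohered mutual information.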
Finally, if $\left\vert c_{1}\right\vert =0$, we have type $(iii)$ dynamics. *Bit-phase flip channel.* Now, the Kraus operators are [@NieChu; @Dav1] $\Gamma_{0}^{(A)}=diag(\sqrt{1-p/2},\sqrt{1-p/2})\otimes\mathbf{1}_{B}$, $\Gamma_{1}^{(A)}=\sqrt{p/2}\sigma_{y}^{(A)}\otimes\mathbf{1}_{B}$, $\Gamma_{0}^{(B)}=\mathbf{1}_{A}\otimes diag(\sqrt {1-p/2},\sqrt{1-p/2})$, and $\Gamma_{1}^{(B)}=\mathbf{1}_{A}\otimes\sqrt {p/2}\sigma_{y}^{(B)}$. The variables in Eq. (\[lamb\]) turn out to be $\alpha=\left( 1-p\right) ^{2}c_{1}$, $\beta=c_{2}$, and $\gamma=\left( 1-p\right) ^{2}c_{3}$. $\mathcal{C}$ and $\mathcal{Q}$ under bit-phase flip can again be written as (\[Cpf\]) and (\[Qpf\]), respectively. Once more, the conditions for the various types of dynamics are obtained by swapping $c_{2}$ and $c_{3}$ in the phase flip channel. For type $(ii)$ dynamics, we now have $p_{SC}=1-\sqrt{\left. \left\vert c_{2}\right\vert \right/ \max(\left\vert c_{1}\right\vert ,\left\vert c_{3}\right\vert )}$. Necessary conditions for $\mathcal{C}$ to remain constant under decoherence are the following: $$\left[ \Pi_{j},\Gamma_{k}^{(B)}\right] =0,\text{ \ }\forall\text{ \ }j,k.\label{COM}$$ These relations depend on the angles $\theta$ and $\phi$ that define the minimum in (\[CCM\]). For the channels mentioned above, $\Gamma_{0}^{(B)}\propto\mathbf{1}_{B}$ and $\Gamma_{1}^{(B)}\propto\sigma_{i}^{(B)}$ with $i=1$ for the bit flip, $i=2$ for the bit-phase flip, and $i=3$ for the phase flip. Hence, condition (\[COM\]) will be satisfied when the projective measurements that reach the minimum in Eq. (\[CCM\]), $\Pi_{j}$, are performed on eigenstates of $\sigma_{i}^{(B)}$ [@note3]. On the other hand, the angles $\theta$ and $\phi$ that define the minimum in Eq. (\[CCM\]) depend on the geometry of the initial state. When the largest component of the state in Eq. (\[stateMMM\]) is in the direction $1$, $2$, or $3$, $\mathcal{C}$ remains constant under bit flip, bit-phase flip or phase flip, respectively.
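Condition (\[COM\]) can be verified in a few lines: projectors onto $\sigma_{z}^{(B)}$ eigenstates ($\theta=\phi=0$) commute with the phase-flip Kraus operators, while projectors onto $\sigma_{x}^{(B)}$ eigenstates ($\theta=\pi/4$, $\phi=0$) do not. A sketch:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def commutes(A, B):
    return np.allclose(A @ B, B @ A)

p = 0.3
# phase-flip Kraus operators on B: Gamma_0 ~ 1_B, Gamma_1 ~ sigma_z
kraus_pf = [np.sqrt(1 - p / 2) * I2, np.sqrt(p / 2) * sz]

# projectors onto sigma_z eigenstates (theta = phi = 0)
Pz = [np.diag([1., 0.]), np.diag([0., 1.])]
# projectors onto sigma_x eigenstates (theta = pi/4, phi = 0)
Px = [0.5 * (I2 + sx), 0.5 * (I2 - sx)]

assert all(commutes(P, K) for P in Pz for K in kraus_pf)      # Eq. (COM) holds
assert not all(commutes(P, K) for P in Px for K in kraus_pf)  # violated for sigma_x
```

The analogous checks for the bit flip and bit-phase flip channels just exchange $\sigma_{z}$ for $\sigma_{x}$ or $\sigma_{y}$ in the Kraus operators and in the measurement basis.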
The fact that, for a given state, the classical correlation can remain unaffected by a suitable choice of noise channel, $\varepsilon$, immediately suggests an operational way (without any extremization procedure) of computing classical and quantum correlations. It could be done as follows: depending on the state geometry, we send its component parts through local channels that preserve its classical correlation, so that the quantum correlation will be given simply by the difference between the state mutual information $\mathcal{I}(\rho_{A:B})$ and the mutual information of the completely decohered state, $\mathcal{I}\left[ \left. \varepsilon (\rho_{A:B})\right\vert _{p=1}\right] $:$$\mathcal{Q}(\rho_{AB})\equiv\mathcal{I}(\rho_{A:B})-\mathcal{I}\left[ \left. \varepsilon(\rho_{A:B})\right\vert _{p=1}\right] ,$$ since $\mathcal{I}(\rho_{A:B})=\mathcal{Q}(\rho_{AB})+\mathcal{C}(\rho_{AB})$ and $$\mathcal{C}(\rho_{AB})=\mathcal{I}\left[ \left. \varepsilon(\rho _{A:B})\right\vert _{p=1}\right] .$$ A suitable channel for the class of states described by Eq. (\[stateMMM\]) is chosen so as to satisfy condition (\[COM\]), as discussed above. A problem to be addressed before such a measure can be used for a general state is to establish a protocol to find the map (if this map exists) which leaves the classical correlation unaffected [@note4]. This suggests an interesting research program to develop an operational way of investigating the role of quantum and classical correlations in many scenarios, such as quantum phase transitions [@Sarandy], non-equilibrium thermodynamics [@Vlatko], etc. We thank E. I. Duzzioni for discussions at the very beginning of this study. J.M., L.C.C., and R.M.S. acknowledge the funding from UFABC, CAPES, FAPESP, CNPq, and Brazilian National Institute of Science and Technology for Quantum Information. V.V.
acknowledges the Royal Society, the Wolfson Trust, the Engineering and Physical Sciences Research Council (UK) and the National Research Foundation and Ministry of Education (Singapore) for their financial support. [99]{} B. Groisman, S. Popescu, and A. Winter, Phys. Rev. A **72**, 032317 (2005). R. Landauer, IBM J. Res. Dev. **5**, 183 (1961). B. Schumacher and M. D. Westmoreland, Phys. Rev. A **74**, 042305 (2006). L. Henderson and V. Vedral, J. Phys. A **34**, 6899 (2001); V. Vedral, Phys. Rev. Lett. **90**, 050401 (2003). J. Oppenheim, M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. **89**, 180402 (2002). D. Yang, M. Horodecki, and Z. D. Wang, Phys. Rev. Lett. **101**, 140501 (2008). H. Ollivier and W. H. Zurek, Phys. Rev. Lett. **88**, 017901 (2001). D. Kaszlikowski, A. Sen(De), U. Sen, V. Vedral, and A. Winter, Phys. Rev. Lett. **101**, 070502 (2008). M. Piani, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. **100**, 090502 (2008); M. Piani, M. Christandl, C. E. Mora, and P. Horodecki, Phys. Rev. Lett. **102**, 250503 (2009). A. Datta, A. Shaji, and C. M. Caves, Phys. Rev. Lett. **100**, 050502 (2008). B. P. Lanyon, M. Barbieri, M. P. Almeida, and A. G. White, Phys. Rev. Lett. **101**, 200501 (2008). S. Luo, Phys. Rev. A **77**, 042303 (2008). Here, without loss of generality, we consider projective measurements instead of the more general positive operator-valued measures (POVMs) used in the original definition of the classical correlation in Ref. \[4\]. In fact, in Ref. [@Hamieh], Hamieh *et al.* show that for a two-qubit system the projective measurement is the POVM that maximizes Eq. (\[CC\]). We restrict our analysis to states which have $S(\rho _{A})=S(\rho_{B})$. In this scenario, the classical correlation computed on measuring subsystem $A$ is equal to that given by Eq. (\[CC\]) on measuring subsystem $B$. M. A. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, U.K., 2000). A.
Salles, F. de Melo, M. P. Almeida, M. Hor-Meyll, S. P. Walborn, P. H. Souto Ribeiro, and L. Davidovich, Phys. Rev. A **78**, 022322 (2008). T. Yu and J. H. Eberly, Phys. Rev. Lett. **97**, 140403 (2006). M. Horodecki, P. Horodecki, R. Horodecki, J. Oppenheim, A. Sen, U. Sen, and B. Synak-Radtke, Phys. Rev. A **71**, 062307 (2005). T. Yu and J. H. Eberly, Phys. Rev. Lett. **93**, 140404 (2004); M. F. Santos, P. Milman, L. Davidovich, and N. Zagury, Phys. Rev. A **73**, 040305(R) (2006); M. P. Almeida, F. de Melo, M. Hor-Meyll, A. Salles, S. P. Walborn, P. H. Souto Ribeiro, and L. Davidovich, Science **316**, 579 (2007). T. Yu and J. H. Eberly, Science **323**, 598 (2009), and references therein. T. Werlang, S. Souza, F. F. Fanchini, and C. J. Villas Boas, Phys. Rev. A **80**, 024103 (2009); A. Ferraro, L. Aolita, D. Cavalcanti, F. M. Cucchietti, and A. Acín, e-print arXiv:0908.3157. These conditions do not depend on the representation of the Kraus operators. A detailed analysis of such a measure for general states will be presented elsewhere. M. S. Sarandy, Phys. Rev. A **80**, 022108 (2009); R. Dillenschneider, Phys. Rev. B **78**, 224413 (2008). V. Vedral, J. Phys. Conf. Ser. **143**, 012010 (2009). S. Hamieh, R. Kobes, and H. Zaraket, Phys. Rev. A **70**, 052325 (2004).
--- abstract: 'We examine the role of thermal fluctuations in two-species Bose-Einstein condensates confined in quasi-two-dimensional (quasi-2D) optical lattices using the Hartree-Fock-Bogoliubov theory with the Popov approximation. The method, in particular, is ideal to probe the evolution of quasiparticle modes at finite temperatures. Our studies show that the quasiparticle spectrum in the phase-separated domain of the two-species Bose-Einstein condensate has a discontinuity at some critical value of the temperature. Furthermore, low-lying modes like the slosh mode become degenerate at this critical temperature, and this is associated with the transition from the immiscible side-by-side density profile to the miscible phase. Hence, the rotational symmetry of the condensate density profiles is restored, and so is the degeneracy of the quasiparticle modes.' author: - 'K. Suthar' - 'D. Angom' bibliography: - 'tbec\_2d\_temp.bib' title: Thermal fluctuations enhanced miscibility of binary condensates in optical lattices --- Introduction ============ Ultracold atoms in an optical lattice offer fascinating prospects to study phenomena in many-body physics associated with strongly correlated systems in a highly controllable environment [@jaksh_98; @orzel_01; @greiner_02; @bloch_12]. These systems are recognized as ideal tools to explore new quantum phases [@demler_02; @kuklov_03; @kuklov_04], complex phase transitions [@pal_10; @sungsoo_11; @lin_15; @jurgensen_15], quantum magnetism [@trotzky_08; @simon_11], quantum information [@bloch1_08] and to simulate transport and magnetic properties of condensed-matter systems [@lewenstein_07; @bloch_08]. Moreover, the effects of phase separation [@mishra_07; @zhan_14], quantum emulsions and coherence properties [@greiner_01; @roscilde_07; @buonsante_08], and multicritical behaviour [@ceccarelli_15; @ceccarelli_16] of the mixtures have been explored in the past decade.
Among the various observations made in the two-species Bose-Einstein condensates (TBECs) of ultracold atomic gases, the most remarkable is the phenomenon of phase separation, and it has been a long-standing topic of interest in chemistry and physics. For repulsive on-site interactions, the transition to the phase-separated domain or immiscibility is characterized by the parameter $\Delta = U_{11} U_{22}/U^{2}_{12} - 1$, where $U_{11}$ and $U_{22}$ are the intraspecies on-site interactions and $U_{12}$ is the interspecies on-site interaction. When $\Delta < 0$, an immiscible phase occurs, in which the atoms of species $1$ and $2$ have relatively strong repulsion, whereas $\Delta\geqslant 0$ implies a miscible phase [@ho_96; @timmermans_98; @esry_99]. The presence of an external trapping potential, however, modifies this condition as the trap introduces an additional energy cost for the species to spatially separate [@wen_12]. In experiments, the unique feature of phase separation has been successfully observed in TBECs with harmonic trapping potential [@papp_08; @tojo_10; @mccarron_11]. Previously, in the context of superfluid helium at zero temperature, the phase separation of the bosonic mixtures of isotopes of different masses has also been predicted in Refs. [@chester_55; @miller_78]. The recent experimental realizations of TBECs in optical lattices, either of two different atomic species [@catani_08] or two different hyperfine states of the same atomic species [@gadway_10; @soltan_11], provide the motivation to study these systems in detail. In recent works, we have examined the miscible-immiscible transition, and the quasiparticle spectra of the TBECs at zero temperature [@suthar_15; @suthar_16]. In other theoretical studies, the finite temperature properties of TBECs have been explored [@ohberg_99; @shi_00; @kwangsik_07].
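The criterion $\Delta = U_{11}U_{22}/U_{12}^{2} - 1$ is trivial to evaluate; for the on-site interaction strengths used later in this work ($U_{11}=0.07E_R$, $U_{22}=0.02E_R$, $U_{12}=0.15E_R$) it immediately signals immiscibility:

```python
def miscibility_parameter(U11, U22, U12):
    """Delta = U11*U22/U12**2 - 1; Delta < 0 signals phase separation."""
    return U11 * U22 / U12 ** 2 - 1

# on-site interaction strengths quoted later in the paper (units of E_R)
assert miscibility_parameter(0.07, 0.02, 0.15) < 0   # immiscible regime
```

Note that, as remarked above, this homogeneous-system criterion is only indicative once a confining potential is present.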
For TBECs in the continuum, with harmonic confining potentials alone, we have explored the suppression of phase separation due to the presence of thermal fluctuations [@arko_15]. However, a theoretical understanding of the finite temperature effects on the topology and the collective excitations of TBECs in optical lattices is yet to be developed. The Bose-Einstein condensation and hence, the coherence in a system of bosons depends on the interplay between various parameters, such as temperature, interaction strength, confinement, and dimensionality [@proukakis_06]. In particular, in the low-dimensional Bose gases, the coherence can only be maintained across the entire spatial extent at a temperature much below the critical temperature. The coherence property has already been observed experimentally [@dettmer_01; @hellweg_03; @richard_03; @esteve_06; @plisson_11]. With attention towards this unexplored physics, we study the finite temperature effects in quasi-2D trapped TBECs in optical lattices. In the present work, we address the topological phase transition in the TBECs of two different isotopes of Rb with temperature as a control parameter in the domain $T<T_c$, where $T_c$ is the critical temperature of either of the species of the mixture. We study the evolution of the quasiparticle spectra of TBEC in quasi-2D optical lattices with temperature. For this work, we use the Hartree-Fock-Bogoliubov (HFB) formalism with the Popov approximation, and, starting from the phase-separated domain at zero temperature, we vary the temperature. We observe a topological transition of the TBEC at a critical value of the temperature. This transition is accompanied by a discontinuity in the quasiparticle excitation spectrum, and in addition, the slosh modes corresponding to the two species become degenerate. Furthermore, we compute the equal-time first order spatial correlation functions, which are a measure of the coherence and phase fluctuations present in the system.
It describes off-diagonal long range order which is the defining characteristic of BEC [@penrose_56]. This is an important theoretical tool to study the many body effects in atomic physics experiments [@burt_97; @tolra_04]. At finite temperature the decay in the coherence of the TBECs is examined using the first order correlation function. This paper is organized as follows. In Sec. \[theory\_2s2d\] we describe the HFB formalism, and the numerical techniques used in the present work. The evolution of the quasiparticle modes and the density distributions with the temperature are shown in Sec. \[results\]. Finally, our main results are summarized in Sec. \[conc\]. Theory and methods {#theory_2s2d} ================== HFB-Popov approximation for quasi-2D TBEC ----------------------------------------- We consider a binary BEC confined in an optical lattice with a pancake-shaped configuration of the background harmonic trapping potential. Thus, the trapping frequencies satisfy the condition $\omega_{\perp} \ll \omega_z$ with $\omega_x = \omega_y = \omega_{\perp}$. In this system, the excitation energies along the axial direction are high, and the degree of freedom in this direction is frozen. The excitations, both the quantum and thermal fluctuations, are considered only along the radial direction. In the tight-binding approximation (TBA) [@chiofalo_00; @smerzi_03], the Bose-Hubbard (BH) Hamiltonian [@fisher_89; @lundh_12; @hofer_12] describing this system is $$\begin{aligned} \hat{H} = && \sum_{k=1}^2 \bigg[- J_k \sum_{\langle \xi\xi'\rangle} \hat{a}^{\dagger}_{k\xi}\hat{a}_{k\xi'} + \sum_\xi(\epsilon^{(k)}_{\xi} - \mu_k) \hat{a}^{\dagger}_{k\xi}\hat{a}_{k\xi}\bigg] \nonumber\\ &+& \frac{1}{2}\!\!\sum_{k=1, \xi}^{2}\!\!
U_{kk}\hat{a}^{\dagger}_{k\xi} \hat{a}^{\dagger}_{k\xi}\hat{a}_{k\xi}\hat{a}_{k\xi} + U_{12}\!\!\sum_\xi \hat{a}^{\dagger}_{1\xi}\hat{a}_{1\xi} \hat{a}^{\dagger}_{2\xi}\hat{a}_{2\xi}, \label{bh2d} \end{aligned}$$ where $k = 1,2$ is the species index, $\mu_k$ is the chemical potential of the $k$th species, and $\hat{a}_{k\xi}$ ($\hat{a}^\dagger_{k\xi}$) is the annihilation (creation) operator of the two different species at the $\xi$th lattice site. The index is such that $\xi \equiv (i,j)$ with $i$ and $j$ as the lattice site index along $x$ and $y$ directions, respectively. The summation index $\langle \xi\xi'\rangle$ represents the sum over the nearest neighbours of the $\xi$th site. The TBA is valid when the depth of the lattice potential is much larger than the chemical potential, $V_0 \gg \mu_k$; the BH Hamiltonian then describes the system when the bosonic atoms occupy the lowest energy band. A detailed derivation of the BH-Hamiltonian is given in our previous works [@suthar_15; @suthar_16]. In the BH-Hamiltonian, $J_k$ are the tunneling matrix elements, $\epsilon^{(k)}_{\xi}$ is the offset energy arising due to the background harmonic potential, and $U_{kk}$ ($U_{12}$) are the intraspecies (interspecies) interaction strengths. In the present work all the interaction strengths are considered to be repulsive, that is, $U_{kk},U_{12}>0$. In the weakly interacting regime, under the Bogoliubov approximation [@griffin_96; @amrey_04], the annihilation operators at each lattice site can be decomposed as $\hat{a}_{1\xi} = (c_{\xi} + \hat{\varphi}_{1\xi})e^{-i \mu_1 t/\hbar}$, $\hat{a}_{2\xi} = (d_{\xi} + \hat{\varphi}_{2\xi})e^{-i \mu_2 t/\hbar}$, where $c_{\xi}$ and $d_{\xi}$ are the complex amplitudes describing the condensate phase of each of the species. The operators $\hat{\varphi}_{1\xi}$ and $\hat{\varphi}_{2\xi}$ represent the quantum and thermal fluctuation parts of the field operators.
From the equation of motion of the field operators with the Bogoliubov approximation, the equilibrium properties of a TBEC are governed by the coupled generalized discrete nonlinear Schrödinger equations (DNLSEs) $$\begin{aligned} \mu_1 c_\xi = &-& J_1 \sum_{\xi'} c_{\xi'} + \left [\epsilon^{(1)}_\xi + U_{11} (n^{c}_{1\xi} + 2 \tilde{n}_{1\xi}) + U_{12} n_{2\xi} \right ] c_\xi, \nonumber \\~\\ \mu_2 d_\xi = &-& J_2 \sum_{\xi'} d_{\xi'} + \left [\epsilon^{(2)}_\xi + U_{22} (n^{c}_{2\xi} + 2 \tilde{n}_{2\xi}) + U_{12} n_{1\xi} \right ] d_\xi, \nonumber \\ \end{aligned}$$ \[dnls2d\] where $n^{c}_{1\xi} = |c_\xi|^2$ and $n^{c}_{2\xi} = |d_\xi|^2$ are the condensate, $\tilde{n}_{k\xi} = \langle {\hat{\varphi}}^{\dagger}_{k\xi}\hat{\varphi}_{k\xi} \rangle$ are the noncondensate and $n_{k\xi} = n^{c}_{k\xi} + \tilde{n}_{k\xi}$ are the total density of the species. We use the Bogoliubov transformation $$\hat\varphi_{k\xi} = \sum_l\left[u^l_{k\xi}\hat{\alpha}_l e^{-i \omega_l t} - v^{*l}_{k\xi}\hat{\alpha}^{\dagger}_l e^{i \omega_l t}\right], \label{bog_trans_2d}$$ where $\hat{\alpha}_l (\hat{\alpha}^{\dagger}_l)$ are the quasiparticle annihilation (creation) operators, which satisfy the Bose commutation relations, $l$ is the quasiparticle mode index, $u^l_{k\xi}$ and $v^l_{k\xi}$ are the quasiparticle amplitudes for the $k$th species, and $\omega_l = E_l/\hbar$ is the frequency of the $l$th quasiparticle mode with $E_l$ the mode excitation energy.
Using the Bogoliubov transformation, we obtain the following HFB-Popov equations [@suthar_16]: $$\begin{aligned} E_l u^l_{1,\xi} = &-& J_1(u^l_{1,\xi-1} + u^l_{1,\xi+1}) + \mathcal{U}_1 u^l_{1,\xi} - U_{11} c^2_\xi v^l_{1,\xi} \nonumber\\ &+& U_{12} c_\xi(d^{*}_\xi u^l_{2,\xi} - d_\xi v^l_{2,\xi}),\\ E_l v^l_{1,\xi} = &~& J_1(v^l_{1,\xi-1} + v^l_{1,\xi+1}) + \underline{\mathcal{U}}_1 v^l_{1,\xi} + U_{11} c^{*2}_\xi u^l_{1,\xi} \nonumber\\ &-& U_{12} c^{*}_\xi(d_\xi v^l_{2,\xi} - d^{*}_\xi u^l_{2,\xi}),\\ E_l u^l_{2,\xi} = &-& J_2(u^l_{2,\xi-1} + u^l_{2,\xi+1}) + \mathcal{U}_2 u^l_{2,\xi} - U_{22} d^2_\xi v^l_{2,\xi} \nonumber\\ &+& U_{12} d_\xi(c^{*}_\xi u^l_{1,\xi} - c_\xi v^l_{1,\xi}),\\ E_l v^l_{2,\xi} = &~& J_2(v^l_{2,\xi-1} + v^l_{2,\xi+1}) + \underline{\mathcal{U}}_2 v^l_{2,\xi} + U_{22} d^{*2}_\xi u^l_{2,\xi} \nonumber\\ &-& U_{12} d^{*}_\xi(c_\xi v^l_{1,\xi} - c^{*}_\xi u^l_{1,\xi}), \end{aligned}$$ \[hfb\_eq\_2sp\] where $\mathcal{U}_1 = 2 U_{11} (n^{c}_{1\xi} + \tilde{n}_{1\xi}) + U_{12} (n^{c}_{2\xi} + \tilde{n}_{2\xi}) + (\epsilon^{(1)}_\xi - \mu_1)$, $\mathcal{U}_2 = 2 U_{22} (n^{c}_{2\xi} + \tilde{n}_{2\xi}) + U_{12} (n^{c}_{1\xi} + \tilde{n}_{1\xi}) + (\epsilon^{(2)}_\xi - \mu_2)$ with $\underline{\mathcal{U}}_k = -\mathcal{U}_k$. To solve the above eigenvalue equations, we use a basis set of on-site Gaussian wave functions, and define the quasiparticle amplitudes as linear combinations of the basis functions. The condensate and noncondensate densities are then computed through the self-consistent solution of Eqs. (\[dnls2d\]) and (\[hfb\_eq\_2sp\]). The noncondensate atomic density at the $\xi$th lattice site is $$\tilde{n}_{k\xi} = \sum_l \left[ (|u^l_{k\xi}|^2 + |v^l_{k\xi}|^2)N_0(E_l) + |v^l_{k\xi}|^2 \right],$$ where $N_0(E_l) = (e^{\beta E_l} - 1)^{-1}$ with $\beta = (k_{B}T)^{-1}$ is the Bose-Einstein distribution factor of the $l$th quasiparticle mode with energy $E_l$ at temperature $T$.
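The structure of these equations can be illustrated with a much simpler single-species analogue: for a uniform condensate on a periodic chain, the discrete Bogoliubov-de Gennes matrix built from the same hopping and interaction terms reproduces the analytic lattice dispersion $E_q=\sqrt{\epsilon_q(\epsilon_q+2Un^c)}$ with $\epsilon_q = 2J(1-\cos q)$. The sketch below is *not* the paper's coupled two-species HFB-Popov matrix (which also carries the $U_{12}$ off-diagonal blocks and the trap offsets); the filling $n^c$ is hypothetical, while $J$ and $U$ echo the single-species values quoted later in the text.

```python
import numpy as np

M, J, U, n0 = 16, 0.66, 0.07, 4.0         # sites, hopping, interaction, filling
mu = -2 * J + U * n0                      # uniform-lattice chemical potential

# nearest-neighbour hopping matrix with periodic boundary conditions
T = np.zeros((M, M))
for j in range(M):
    T[j, (j + 1) % M] = T[j, (j - 1) % M] = 1.0

A = -J * T + (2 * U * n0 - mu) * np.eye(M)
B = U * n0 * np.eye(M)
bdg = np.block([[A, -B], [B, -A]])        # non-Hermitian BdG matrix
E_num = np.sort(np.abs(np.linalg.eigvals(bdg).real))

# analytic Bogoliubov dispersion on the lattice, q = 2 pi k / M
q = 2 * np.pi * np.arange(M) / M
eps = 2 * J * (1 - np.cos(q))
E_ana = np.sqrt(eps * (eps + 2 * U * n0))
assert np.allclose(E_num, np.sort(np.concatenate([E_ana, E_ana])), atol=1e-6)
```

The eigenvalues come in $\pm E_q$ pairs, with the $q=0$ Goldstone mode at zero energy; in the paper's case the same diagonalization is done with LAPACK's ZGEEV on the full two-species matrix.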
The last term in $\tilde{n}_{k\xi}$ is independent of the temperature and hence represents the quantum fluctuations of the system. To examine the role of temperature we measure the miscibility of the condensates in terms of the overlap integral $$\Lambda = \frac{\left[\int n_1(\mathbf r) n_2(\mathbf r) d\mathbf{r}\right]^2} {\left[\int n^2_1(\mathbf r) d\mathbf{r} \right] \left[\int n^2_2(\mathbf r) d\mathbf{r} \right]}.$$ Here $n_k (\mathbf r)$ is the total density of the $k$th condensate at position $\mathbf r \equiv (x,y)$. If the two condensates of the TBEC completely overlap each other, then the system is in the miscible phase with $\Lambda=1$, whereas for the completely phase-separated case $\Lambda=0$. Field-field correlation function -------------------------------- To define a measure of the coherence in the condensate we introduce the first order correlation function $g^{(1)}_{k} (\mathbf r, \mathbf r')$, which can be expressed as expectations of products of field operators at different positions and times [@glauber_63; @naraschewski_99; @bezett_08; @bezett_12]. These are normalized to attain unit modulus in the case of perfect coherence. Here, we restrict ourselves to discussing spatial correlation functions at a fixed time. In terms of the quantum Bose field operator $\hat{\Psi}_{k}$ the first-order spatial correlation function is $$g^{(1)}_{k} (\mathbf r, \mathbf r') = \frac{\langle \hat{\Psi}^{\dagger}_{k}(\mathbf r)\hat{\Psi}_{k}(\mathbf r')\rangle} {\sqrt{{\langle \hat{\Psi}^{\dagger}_{k}(\mathbf r) \hat{\Psi}_{k}(\mathbf r) \rangle} {\langle \hat{\Psi}^{\dagger}_{k}(\mathbf r')\hat{\Psi}_{k}(\mathbf r') \rangle}}},$$ where $\langle\cdots\rangle$ represents the thermal average. It is important to note that the local first order correlation function is equal to the density, [i.e.]{} $g^{(1)}_{k}(\mathbf r, \mathbf r) = n_k(\mathbf r)$.
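On a discretized grid, the overlap integral $\Lambda$ defined above reduces to sums over grid points. A small sketch with two hypothetical Gaussian clouds, checking the two limiting cases:

```python
import numpy as np

def overlap_integral(n1, n2, dA):
    """Miscibility measure Lambda evaluated on a discretized 2D grid."""
    num = (np.sum(n1 * n2) * dA) ** 2
    den = (np.sum(n1 ** 2) * dA) * (np.sum(n2 ** 2) * dA)
    return num / den

x = np.linspace(-10, 10, 201)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2
gauss = lambda x0, y0: np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / 4)

# identical clouds: fully miscible, Lambda = 1
assert np.isclose(overlap_integral(gauss(0, 0), gauss(0, 0), dA), 1.0)
# side-by-side clouds separated along y: Lambda close to 0
assert overlap_integral(gauss(0, -4), gauss(0, 4), dA) < 0.01
```

The densities used here are placeholders; in the paper $n_k(\mathbf r)$ are the self-consistent total densities obtained from the HFB-Popov solution.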
The expression of $g^{(1)}_{k} (\mathbf r, \mathbf r')$ can also be written in terms of the condensate and noncondensate correlations as $$g^{(1)}_{k} (\mathbf r, \mathbf r') = \frac{n^{c}_k(\mathbf r, \mathbf r') + \tilde{n}_k(\mathbf r, \mathbf r')} {\sqrt{n_k(\mathbf r) n_k(\mathbf r')}}, \label{corr_eq}$$ where $$\begin{aligned} n^{c}_k(\mathbf r,\mathbf r') &=& \psi^{*}_k(\mathbf r) \psi_k(\mathbf r'), \\ \tilde{n}_k (\mathbf r, \mathbf r') &=& \sum_l \big[\left\{ u^{*l}_{k}(\mathbf r) u^{l}_{k}(\mathbf r') + v^{*l}_{k}(\mathbf r) v^{l}_{k}(\mathbf r') \right\}N_0(E_l) \\ &&+ v^{*l}_{k}(\mathbf r) v^{l}_{k}(\mathbf r') \big], \\ n_{k}(\mathbf r) &=& n^{c}_k(\mathbf r) + \tilde{n}_k (\mathbf r) \end{aligned}$$ are the condensate density correlation, noncondensate correlation and total density of the $k$th species, respectively. In the above expressions the $n^{c}_k(\mathbf r,\mathbf r')$ and $\tilde{n}_k (\mathbf r, \mathbf r')$ are obtained by expanding the complex amplitudes ($c_{\xi},d_{\xi}$) and the quasiparticle amplitudes ($u^l_{k,\xi},v^l_{k,\xi}$) in the localized Gaussian basis. At $T=0$ K, the entire condensate cloud has complete coherence, and therefore $g^{(1)}=1$ within the condensate region. In TBECs, the transition from the phase-separated to the miscible domain at $T \neq 0$ has a characteristic signature in the spatial structure of $g^{(1)}_{k} (\mathbf r, \mathbf r')$. Numerical methods ----------------- To solve the coupled DNLSEs, Eqs. (\[dnls2d\]), we scale and rewrite the equations in the dimensionless form. For this we choose the characteristic length scale as the lattice constant $a=\lambda_L/2$ with $\lambda_{L}$ as the wavelength of the laser which creates the lattice potential. Similarly, the recoil energy $E_R = \hbar^2k_L^2/2m$ with $k_L=2\pi/\lambda_L$ is chosen as the energy scale of the system. We use the fourth order Runge-Kutta method to solve these equations for zero as well as finite temperatures.
To start the iterative solution of the equations, appropriate initial guess values of $c_{\xi}$ and $d_{\xi}$ are chosen. For the present work we chose the values corresponding to the side-by-side profile as it gives quasiparticle excitation energies which are real, and not complex. This is important as it shows that the solution we obtain is a stable one, and not metastable. The stationary ground-state wave-function of the TBEC is obtained through imaginary time propagation. In the tight-binding limit, the width of the orthonormalized Gaussian basis functions localized at each lattice site is $0.3a$. Furthermore, to study the quasiparticle excitation spectrum, we cast Eqs. (\[hfb\_eq\_2sp\]) as matrix eigenvalue equations and diagonalize the matrix using the routine ZGEEV from the LAPACK library [@anderson_99]. For finite temperature computations, to take into account the thermal fluctuations, we solve the coupled Eqs. (\[dnls2d\]) and (\[hfb\_eq\_2sp\]) self-consistently. The solution of the DNLSEs is iterated until it satisfies the convergence criteria in terms of the number of condensate and noncondensate atoms. In general, the convergence is not smooth, and most of the time we encounter severe oscillations in the number of atoms. To remedy these oscillations by damping, we use a successive over- (under-) relaxation technique while updating the condensate (noncondensate) atoms. The new solutions after an iteration cycle (IC) are $$\begin{aligned} c^{\rm new}_{\xi,\rm IC} = r^{\rm ov} c_{\xi,\rm IC} + (1 - r^{\rm ov}) c_{\xi,\rm IC-1}, \\ d^{\rm new}_{\xi,\rm IC} = r^{\rm ov} d_{\xi,\rm IC} + (1 - r^{\rm ov}) d_{\xi,\rm IC-1}, \\ \tilde{n}^{\rm new}_{k\xi,\rm IC} = r^{\rm un} \tilde{n}_{k\xi,\rm IC} + (1 - r^{\rm un}) \tilde{n}_{k\xi,\rm IC-1}, \end{aligned}$$ where $r^{\rm ov} > 1$ ($r^{\rm un} < 1$) is the over (under) relaxation parameter.
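The effect of the relaxation update can be illustrated on a toy scalar self-consistency problem: a plain fixed-point iteration that oscillates around its solution converges much faster once each update is mixed with the previous iterate. The map $x=\cos x$ below is purely illustrative and not part of the HFB-Popov cycle:

```python
import math

def relax_update(new, old, r):
    """Successive relaxation: r > 1 over-relaxes, r < 1 damps oscillations."""
    return r * new + (1 - r) * old

# hypothetical self-consistency problem x = cos(x); the plain iteration
# oscillates around the fixed point, mimicking the oscillating atom numbers
x_star = 0.7390851332151607            # fixed point of cos(x)
x_plain = x_mixed = 0.0
for _ in range(20):
    x_plain = math.cos(x_plain)
    x_mixed = relax_update(math.cos(x_mixed), x_mixed, r=0.5)

# the damped (under-relaxed) iteration is closer to the fixed point
assert abs(x_mixed - x_star) < abs(x_plain - x_star)
```

In the actual computation the same mixing is applied component-wise to $c_{\xi}$, $d_{\xi}$, and $\tilde{n}_{k\xi}$, with the over- and under-relaxation parameters chosen per quantity as stated above.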
[![The condensate density profiles at different temperatures in the phase-separated domain of $^{87}$Rb -$^{85}$Rb TBEC. The condensate density distribution of the first species (upper panel) and the second species (lower panel) are shown at $T/T_c = 0, 0.08, 0.17$, and $0.2$, which correspond to $T = 0, 30, 60$, and $70$ nK. Here $x$ and $y$ are measured in units of the lattice constant $a$.[]{data-label="den_cond_rb"}](cond.jpg "fig:"){width="8.5cm"}]{} [![The noncondensate density profile at different temperatures in the phase-separated domain of $^{87}$Rb -$^{85}$Rb TBEC. The density distribution of the noncondensate atoms of the first species (upper panel) and the second species (lower panel) are shown at $T/T_c = 0, 0.08, 0.17$, and $0.2$, which correspond to $T = 0, 30, 60$, and $70$ nK. Here $x$ and $y$ are measured in units of the lattice constant $a$.[]{data-label="den_nc_rb"}](noncond.jpg "fig:"){width="8.5cm"}]{} Results and discussions {#results} ======================= To examine the effects of thermal fluctuations on the quasiparticle spectra we consider the $^{87}$Rb -$^{85}$Rb TBEC with $^{87}$Rb labeled as species $1$ and $^{85}$Rb labeled as species $2$. The radial trapping frequencies of the harmonic potential are $\omega_x = \omega_y = 2\pi\times 50$ Hz with the anisotropy parameter $\omega_z/\omega_{\perp} = 20.33$, and these parameters are chosen based on the experimental work of Gadway and collaborators [@gadway_10]. It is important to note that we consider equal background trapping potential for species $1$ and $2$. The laser wavelength used to create the 2D lattice potential and the lattice depth are $\lambda_L = 1064$ nm and $V_0 = 5E_R$, respectively. We then take the total number of atoms $N_1 = N_2 = 100$ confined in a ($40\times40$) quasi-2D lattice system. It must be mentioned that the number of lattice sites considered is much larger than the spatial extent of the condensate cloud.
Although the computations are more time consuming with the larger lattice size, we chose it to ensure that the spatial extent of the thermal component is confined well within the lattice considered. The tunneling matrix elements are $J_1 = 0.66 E_R$ and $J_2 = 0.71 E_R$, which correspond to an optical lattice potential with a depth of $5 E_R$. The intraspecies and interspecies on-site interactions are set as $U_{11} = 0.07 E_R$, $U_{22} = 0.02 E_R$ and $U_{12} = 0.15 E_R$, respectively. For this set of parameters the ground state density distribution of $^{87}$Rb -$^{85}$Rb TBEC is phase-separated with side-by-side geometry. This is a symmetry-broken profile where one species is placed to the left and the other to the right of the trap center along the $y$-axis. The evolution of the ground state from miscible to the side-by-side density profile due to decrease in the $U_{22}$ is reported in our previous work [@suthar_16]. In the present work, we demonstrate the role of temperature in the phase-separated domain of the binary condensate. [![The quasiparticle energies of the low-lying modes as a function of the temperature in the phase-separated domain of $^{87}$Rb -$^{85}$Rb TBEC. At $T/T_c = 0.185$ the energies of the Kohn and higher modes become degenerate and the system transforms from the side-by-side to the miscible density profile. In the figure, the slosh mode (SM), Kohn mode (KM), breathing mode (BM), and quadrupole mode (QM) are marked by the black arrows. Here the excitation energy $E_l$ and the temperature $T$ are scaled with respect to the recoil energy $E_R$ and the critical temperature $T_c$ of $^{87}$Rb.[]{data-label="mode_rb"}](mode_rb.pdf "fig:"){width="8.5cm"}]{} Zero temperature ---------------- At $T = 0$ K, in the phase-separated domain, the energetically preferable ground state of TBEC is the side-by-side geometry, which is reported in our previous work [@suthar_16].
Unlike in the one-dimensional system [@suthar_15], in the quasi-2D system the presence of quantum fluctuations does not alter the ground state. We start with the phase-separated $^{87}$Rb -$^{85}$Rb TBEC, which has the overlap integral $\Lambda = 0.10$. The density distributions of the condensate and noncondensate atoms of the two species at $T = 0$ K are shown in Fig. \[den\_cond\_rb\] and Fig. \[den\_nc\_rb\]. This is a symmetry-broken side-by-side geometry with the noncondensate atoms more localized at the edges of the condensate along the $y$-axis. [![The mode function of the first excitation mode (slosh mode) as a function of the temperature in the phase-separated domain of the $^{87}$Rb -$^{85}$Rb TBEC. The slosh mode is an out-of-phase mode, where the density flow of the first species (upper panel) is in the opposite direction to the flow of the second species (lower panel). The value of $T/T_c$ is shown at the upper left corner of each plot in the upper panel. These values correspond to $T = 0, 30, 64$, and $66$ nK. Here $x$ and $y$ are in units of the lattice constant $a$.[]{data-label="mode_fn1"}](mode1.jpg "fig:"){width="8.5cm"}]{} [![The mode function of the second excitation mode (slosh mode), which at $T/T_c > 0.185$ becomes degenerate with the mode shown in Fig. \[mode\_fn1\] of the $^{87}$Rb -$^{85}$Rb TBEC. Here the density flow of the first species (upper panel) is out of phase with the flow of the second species (lower panel). The value of $T/T_c$ is shown at the upper left corner of each plot in the upper panel. These values correspond to $T = 0, 30, 64$, and $66$ nK. Here $x$ and $y$ are in units of the lattice constant $a$.[]{data-label="mode_fn2"}](mode2.jpg "fig:"){width="8.5cm"}]{} [![The evolution of the interface mode in the phase-separated domain of the $^{87}$Rb -$^{85}$Rb TBEC with temperature. At $T/T_c > 0.185$, this mode transforms into the breathing mode as the system acquires rotational symmetry.
These are out-of-phase modes, as the density perturbation of the first species (upper panel) is in the opposite direction to that of the second species (lower panel). The value of $T/T_c$ is shown at the upper left corner of each plot in the upper panel. Here $x$ and $y$ are in units of the lattice constant $a$.[]{data-label="mode_fn14"}](mode14.jpg "fig:"){width="8.5cm"}]{} Finite temperatures ------------------- At $T\neq0$, in addition to the quantum fluctuations, which are present at zero temperature, the thermal cloud also contributes to the noncondensate density. As shown in Figs. \[den\_cond\_rb\] and \[den\_nc\_rb\], at $T = 30$ nK, the condensate density profiles of both species begin to overlap; in other words, the two species are partly miscible. This is also evident from the value of $\Lambda=0.16$, which shows a marginal increase compared to the value of 0.10 at zero temperature. Upon further increase in temperature, at $T = 60$ nK, $\Lambda = 0.36$, which indicates an increase in the miscibility of the two species. Another important feature at $30$ and $60$ nK is the localization of the noncondensate atoms at the interface. This is due to the repulsion from the condensate atoms and the lower thermal energy, which is insufficient to overcome this repulsion energy. At higher temperatures, the extent of overlap between the condensate density profiles increases, and the TBEC is completely miscible at $T = 70$ nK. This is reflected in the value of $\Lambda = 0.95$, and the condensate as well as the noncondensate densities acquire rotational symmetry. The transition from the phase-separated into the miscible domain can further be examined from the evolution of the quasiparticle modes as a function of the temperature. The evolution of a few low-lying mode energies with temperature is shown in Fig. \[mode\_rb\], with the temperature defined in units of the critical temperature $T_c$ of the $^{87}$Rb atoms.
It is evident from the figure that there are mode energy bifurcations with the increase in temperature. These are associated with the restoration of rotational symmetry when the TBEC is rendered miscible through an increase in temperature. As is to be expected, the two lowest-energy modes are the zero-energy or Goldstone modes, which are the result of the spontaneous symmetry breaking associated with the condensation. In the phase-separated domain, these modes correspond to one for each species. The first two excited modes are the non-degenerate Kohn or slosh modes of the two species, and these remain non-degenerate in the domain $T/T_c \leq 0.185$. The structure of these modes is shown in Figs. \[mode\_fn1\] and \[mode\_fn2\]. When $T/T_c > 0.185$ the TBEC acquires rotational symmetry and the slosh modes become degenerate, related by a $\pi/2$ rotation. A key feature in the quasiparticle mode evolution is that the energies of all the out-of-phase modes increase for $T/T_c > 0.185$, whereas all the in-phase modes remain steady. Here, out-of-phase and in-phase mean that the amplitudes $u_1$ and $u_2$ of a quasiparticle are of different and same phases, respectively. Among the low-energy modes, the Kohn mode is in-phase, whereas the breathing and quadrupole modes are out-of-phase in nature. One unique feature of the TBEC in the immiscible phase is the presence of interface modes, which have amplitudes prominent around the interface region. The existence of these modes is reported in our previous work [@suthar_16], and they were investigated in other works [@ticknor_13; @ticknor_14] for TBECs confined in harmonic potentials alone at zero temperature. One of the low-energy interface modes is shown in Fig. \[mode\_fn14\]. It is evident from the figure that the mode is out-of-phase in nature, and it is transformed into the breathing mode of the miscible domain when $T/T_c > 0.185$.
In the miscible domain, the breathing mode becomes degenerate with the quadrupole mode and gains energy. The quasiparticles of the miscible domain have a well-defined azimuthal quantum number, and the modes undergo rotations as $T$ is further increased. [![The first-order off-diagonal correlation function $g^{(1)}_{k} (0,\mathbf r)$ of $^{87}$Rb (upper panel) and $^{85}$Rb (lower panel) at $T/T_c = 0, 0.08, 0.17,$ and $0.2$, which correspond to $T = 0, 30, 60$, and $70$ nK. Here $x$ and $y$ are measured in units of the lattice constant $a$.[]{data-label="corr_fn"}](corr.png "fig:"){width="8.5cm"}]{} To investigate the spatial coherence of the TBEC at equilibrium, we examine the trends in $g^{(1)}_{k} (0, \mathbf r)$, defined earlier in Eq. (\[corr\_eq\]), which are shown in Fig. \[corr\_fn\] for various temperatures. As mentioned earlier, at $T=0$ K, $n_k(\mathbf r) \approx n^{c}_{k}(\mathbf r)$, and there is complete phase coherence; therefore, $g^{(1)}_{k} = 1$ within the extent of the condensates, as shown in Fig. \[corr\_fn\]. At zero temperature, or in the limit $\tilde{n}_k \equiv 0$, the correlation function, Eq. (\[corr\_eq\]), resembles a Heaviside function, and the small contribution from the quantum fluctuations smooths out the sharp edges as $g^{(1)}_{k}$ drops to zero. More importantly, in the numerical computations this causes a loss of numerical accuracy, as it involves the division of two small numbers in Eq. (\[corr\_eq\]) [@gies_04]. However, at finite temperatures the presence of the noncondensate atoms modifies the nature of the spatial coherence present in the system. The decay rate of the correlation function increases with the temperature, and this is evident from Fig. \[corr\_fn\], which shows $g^{(1)}_{k} (0, \mathbf r)$ at $T=30, 60$, and $70$ nK. In addition, the transition from the phase-separated to the miscible TBEC is also reflected in the decay trends of $g^{(1)}_{k} (0, \mathbf r)$.
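Two diagnostics are used repeatedly above: the overlap integral $\Lambda$ and the correlation function $g^{(1)}_k$. Since Eq. (\[corr\_eq\]) is defined in an earlier part of the paper, the sketch below only illustrates plausible discretized forms, a normalized density-overlap measure for $\Lambda$ and the condensate-only (fully coherent) limit of $g^{(1)}$; both forms are assumptions for illustration, not the paper's exact expressions:

```python
import numpy as np

def overlap_integral(n1, n2):
    """Normalized overlap of two density profiles on a common grid:
    ~1 for fully miscible (identical) profiles, ~0 for phase-separated ones.
    (Assumed form, not necessarily the definition used in the paper.)"""
    return np.sum(n1 * n2) ** 2 / (np.sum(n1 ** 2) * np.sum(n2 ** 2))

def g1_condensate_only(phi, n):
    """g^(1)(r0, r) ~ phi*(r0) phi(r) / sqrt(n(r0) n(r)) with r0 at the
    density peak; equals 1 wherever n ~ |phi|^2 (T = 0, full coherence)."""
    i0 = np.argmax(n)
    return np.real(np.conj(phi[i0]) * phi) / np.sqrt(n[i0] * n)

# Toy 1D profiles: side-by-side (phase-separated) versus miscible
x = np.linspace(-10.0, 10.0, 401)
side1 = np.exp(-((x + 3.0) ** 2))
side2 = np.exp(-((x - 3.0) ** 2))
mixed = np.exp(-(x ** 2))
```

With these forms, the side-by-side pair gives $\Lambda \approx 0$, identical profiles give $\Lambda = 1$, and a pure condensate (`phi = sqrt(n)`) gives $g^{(1)} = 1$ everywhere, matching the $T = 0$ behavior described in the text.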
Conclusions {#conc} =========== We have examined the finite temperature effects on the phenomenon of phase separation in TBECs confined in quasi-2D optical lattices. As the temperature is increased, the phase-separated side-by-side ground state geometry is transformed into the miscible phase. For the case of the TBEC comprising $^{87}$Rb and $^{85}$Rb, the transformation occurs at $T/T_c\approx 0.185$. This demonstrates the importance of thermal fluctuations, which can make TBECs miscible even though the interaction parameters satisfy the criterion of phase separation. The other key observation is that the transition from the phase-separated to the miscible domain is associated with a change in the nature of the quasiparticle energies. The low-lying out-of-phase modes, in particular the slosh modes, become degenerate and increase in energy. On the other hand, the in-phase modes, such as the Kohn mode, remain steady as the temperature ($T<T_c$) is increased. The interface modes, which are unique to the phase-separated domain, in addition to a change in energy, are geometrically transformed into rotationally symmetric breathing modes in the miscible domain. The temperature-driven immiscible to miscible transition is also evident in the profile of the correlation functions. We thank Arko Roy, S. Gautam, S. Bandyopadhyay and R. Bai for useful discussions. The results presented in the paper are based on the computations using Vikram-100, the 100TFLOP HPC Cluster at Physical Research Laboratory, Ahmedabad, India.
--- abstract: | Spherical collapse of the Bose-Einstein Condensate (BEC) dark matter model is studied in the Thomas-Fermi approximation. The evolution of the overdensity of the collapsed region and its expansion rate are calculated for two scenarios. We consider the case of a sharp phase transition (which happens when the critical temperature is reached) from the normal dark matter state to the condensate one, and the case of a smooth first order phase transition where there is a continuous conversion of “normal” dark matter to the BEC phase. We present numerical results for the physics of the collapse for a wide range of the model’s parameter space, i.e., the mass of the scalar particle $m_{\chi}$ and the scattering length $l_s$. We show the dependence of the transition redshift on $m_{\chi}$ and $l_s$. Since small scales collapse earlier, and eventually before the BEC phase transition, the evolution of collapsing halos in this limit is indeed the same in both the CDM and the BEC models. Differences are expected to appear only on the largest astrophysical scales. However, we argue that the BEC model is almost indistinguishable from the usual dark matter scenario concerning the evolution of nonlinear perturbations above typical cluster scales, i.e., $\gtrsim 10^{14}M_{\odot}$. This provides an analytical confirmation for recent results from cosmological numerical simulations \[H.-Y. Schive [*et al.*]{}, Nature Physics, [**10**]{}, 496 (2014)\].\ **Key-words**: Cosmology; Dark matter; Bose-Einstein condensates PACS numbers: 98.80.-k; 95.35.+d; 67.85.Hj, 67.85.Jk author: - 'Rodolfo C. de Freitas' - Hermano Velten title: 'Non-linear clustering during the BEC dark matter phase transition' --- Introduction ============ It is widely accepted that dark matter is one of the main components of the universe. Due to the strong observational evidence corroborating its existence, many different areas of physics have incorporated dark matter related investigations in their agenda.
According to the standard cosmological model, dark matter composes around $1/4$ of the universe’s energy budget and $5/6$ of the total matter. Baryons represent the remaining fraction of the latter. This picture has been confirmed by different data, most remarkably by the latest [*Planck*]{} results [@PlanckCosmoParam]. The crucial aspects of these studies concern the particle nature and the astrophysical/cosmological behavior of such a component. At the particle level, candidates belonging to the WIMP (weakly interacting massive particles) category produce a viable model (see [@Baer] and references therein for a very recent review). Also, for the homogeneous, isotropic and expanding background, the dark matter ensemble should present a vanishing pressure in order to enable structure formation [@cdmpressure]. Despite the success of the Cold Dark Matter (CDM) scenario, it is important to mention some of its drawbacks. The theoretical clustering patterns (calculated via numerical simulations) of CDM particles at the galactic level correspond to the NFW profile [@NFW], which is cuspy at the centre of the particle distribution. This seems to be in clear contradiction with the observed velocities in the central region of galaxies, which demand a cored distribution. At the same time, the simulated distribution of satellites around typical Milky Way-like galaxies shows a one-order-of-magnitude excess of sub-structures which are not observed. These two issues are known as the cusp-core problem and the missing satellites problem, respectively. Even if baryonic physics in such simulations could eventually alleviate these problems, it is not clear so far whether or not CDM is the correct model for the dark matter phenomena. See [@gov] and references therein. One can argue that dark matter is a pathological manifestation of choosing Einstein’s general relativity (GR) as the gravitational theory. This suspicion is the pillar of a research line in which modified gravity theories are invoked.
See [@Mod] for reviews on modified gravity models and their observational constraints. However, reliable experiments at the solar system level confirm GR predictions with great accuracy [@Will]. Therefore, this fact seems to be powerful enough to keep, at least at first, GR as our standard description of the gravitational interaction. Since there is no confirmed evidence to abandon GR, dark matter remains essential, and therefore one needs new alternatives within this context. In this case, the possibilities are also vast. The classical ones were hot dark matter (HDM) [@HDM] and warm dark matter (WDM) [@wdm]. While the former has been ruled out due to the positive observation of galaxies below the Jeans mass scale of relativistic dark matter particles, the latter is one of the leading rivals of CDM. Indeed, particles with masses $m\sim$ keV fit the WDM scenario. They are not as light as HDM particles, therefore allowing the existence of structures, and, at the same time, not as heavy as CDM, in such a way that there would exist some suppression mechanism able to alleviate the small scale problems of the CDM paradigm (see however [@wdm2] for a recent discussion of WDM results). Models with a similar clustering dynamics as WDM are, for instance, fuzzy dark matter [@fuzzy], self-interacting dark matter [@sidm] and viscous dark matter [@vdm]. In this work we study a dark matter model which has a different nature. Let us assume spin-$0$ DM particles, which therefore obey a bosonic distribution. As predicted, and already observed in the laboratory, bosonic particles are able to condense [@BEClab] (see also [@BEClab2]), occupying the same energy state and forming the so-called Bose-Einstein condensates when their temperature reaches the critical value $T_{crt}$. Of course, this phenomenon occurs under very controlled experimental situations, but one might wonder in principle what would happen if the same occurred on astrophysical scales.
Although quite hypothetical, this description could serve as an effective approach for understanding dark matter as a cosmological scalar field $\phi$ whose dynamics is driven by some repulsive potential $V(\phi)$. This gives rise to the Bose-Einstein Condensate (BEC) dark matter model, which has been widely studied [@Siki; @Bohmer; @harko1; @Abril; @Maxim; @LiShapiro]. The main idea is that normal, i.e., non-condensate, dark matter undergoes a phase transition at some critical redshift $z_{crt}$ during the universe’s evolution. Then, independently of the details of the transition, all the dark matter converts into the condensate state forming a BEC “fluid” [^1]. The dynamics of BEC systems is studied via the Gross-Pitaevskii equation, which is a nonlinear Schrödinger equation [@BEC]. From this starting point, the Madelung decomposition is used to transform the BEC dynamics into a set of fluid equations resulting in an effective positive pressure. With such a fluid picture one is able to investigate astrophysical/cosmological problems. This procedure will be shown in more detail in the next section. The general aspects of this model concerning the background evolution and the linear perturbations are already very well understood [@MaximZel; @AbrilSF; @wamba; @Chavanis; @BenKain; @rodolfo; @Alcu]. But, in order to fully understand the final clustering patterns of the BEC dark matter model, high resolution hydrodynamical/N-body simulations are still needed [@simu]. More recently, Ref. [@Mocz] has formulated smoothed-particle hydrodynamics numerical methods for solving the general Gross-Pitaevskii-Poisson system. Schive et al. [@Schive] recently provided high-resolution cosmological simulations for the model. They found that there is a remarkable difference at the internal galactic level, i.e., in the density profile. The latter result is indeed desired. However, they found that BEC DM is indistinguishable from CDM at large cosmological scales.
Our focus in this work is to understand this latter claim. From the theoretical point of view, a first step on this issue is the study of the nonlinear gravitational collapse in a cosmological background. Concerning the BEC dark matter model, recently Ref. [@HarkoBECCollapse] addressed the collapse of “already formed BEC condensates”, i.e., only the post-transition stage. Nevertheless, a realistic configuration can be more complicated since it also involves the dynamics of the baryonic component as the universe evolves from the matter to the dark energy domination epochs. Moreover, the phase transition can also take place during the evolution of the collapsed region. Therefore, especially for galaxy cluster scales, the evolution of the background cosmological dynamics should be taken into account. We will perform in this work a natural extension of Ref. [@HarkoBECCollapse], which analysed the “free-fall” collapse of a BEC dark matter sphere. However, we assume a more realistic cosmological scenario where dark matter coexists with baryons and a cosmological constant. Then, we address the correct case where the transition occurs during the nonlinear clustering process. Fundamental quantities here are the condensate parameters, namely, the mass of the particle $m_{\chi}$ and the scattering length $l_s$. They determine, for example, the moment at which the phase transition takes place, $z_{crt}$, and the speed of sound in the condensate fluid. After the critical redshift $z_{crt}$ one can admit two different dynamics. The simplest case is to assume an abrupt transition, i.e., for $z<z_{crt}$ all dark matter obeys the Bose-Einstein dynamics. This seems to be a reasonable approximation to the problem. This situation will be studied in section \[SectionIV\]. One can also assume the case in which the full conversion of all dark matter occurs in a finite time and finishes at a redshift $z_{BEC}<z_{crt}$.
Therefore, the phase transition lasts a finite time in which a mixture of “normal” and condensate dark matter makes up the total matter component. We study this case in section \[SectionV\]. We present our results covering many orders of magnitude in the model parameter space $10^{-6}$ meV$< m_{\chi} < 10^{4}$ meV$ ; 10^{-12}$ fm$ < l_s < 10^{12} $ fm. Interesting quantities to be found here are the final (at $z=0$) values of the density contrast and the expansion rate, and the redshift of the turnaround $z_a$, i.e., the moment at which the collapsed region detaches from the background. In summary, this paper has the following structure. In the next section we develop the background dynamics of the BEC dark matter. We present in section \[SectionIII\] the general equations for the spherical top-hat collapse formalism. These equations will be studied in more detail in sections \[SectionIV\] and \[SectionV\] where, respectively, we address the case of an abrupt transition and of the usual phase transition. We conclude in the final section. The background dynamics of the Bose-Einstein Condensate dark matter =================================================================== In this work we always have a flat background dynamics composed of baryons, dark matter and a cosmological constant. The expansion rate reads $$H^2=\frac{8\pi G}{3}\left(\rho_b+\rho_{dm}+\rho_{\Lambda}\right).$$ The post-decoupling dynamics of the baryonic component is assumed to be pressureless, $P_b=0$, and therefore $\rho_b=\rho_{b0}(1+z)^3$, where $\rho_{b0}$ is its density today at $z=0$. Its value is such that $\rho_{b0}=\Omega_{b0} \rho_{c0}$, where $\rho_{c0}=3H^2_0 / 8\pi G$. We can safely adopt $\Omega_{b0}=0.05$ according to nucleosynthesis constraints. The Hubble constant assumed here is $H_0=70$ Km/s/Mpc. We will also fix $\Omega_{dm0}=0.25$ or, equivalently by flatness, $\Omega_{\Lambda}=0.70$. The difference here from the standard $\Lambda$CDM model will be the dark matter dynamics.
Before the transition takes place, at temperatures $T>T_{crt}$, or redshifts $z>z_{crt}$, DM behaves as an isotropic gas in thermal equilibrium. From kinetic theory the pressure of a non-relativistic gas in this regime is given by $$\label{pdm} p_{dm}=\frac{g_s}{3h^3}\int\frac{q^2 c^2}{E}f(q)\,d^3q\approx\frac{4\pi g_s}{3h^3}\int\frac{q^4}{m}f(q)\,dq=\sigma^2 \rho_{dm},$$ with $\sigma^2=\left\langle v^2\right\rangle /3 c^2$, where $g_s$ is the number of spin degrees of freedom, $h$ the Planck constant, $q$ the momentum of a particle with energy $E=\sqrt{q^2c^2+m^2c^4}$ and distribution function $f$. A typical value for the velocity dispersion is $\sigma^2=3 \times 10^{-6}$. In practice, since this quantity can be seen as the dark matter equation of state parameter $w_{dm}=p_{dm}/\rho_{dm}$, this value is consistent with the assumption of a pressureless fluid usually adopted for CDM. Note that the fully relativistic fluid is obtained when $\left\langle v^2\right\rangle = c^2$. After dark matter’s conversion it obeys the condensate dynamics, which is governed by the Gross-Pitaevskii equation $$\label{GP} i \hbar \frac{\partial \Psi}{\partial t}=-\frac{\hbar^2}{2 m_{\chi}}\nabla^2 \Psi + V(r,t)\Psi +g(\left|\Psi\right|) \Psi,$$ where $m_{\chi}$ is the mass of the particle and $V(r,t)$ is the trapping potential. The non-linearity term with only two-body interparticle interactions (quadratic) reads $$g(\left|\Psi\right|)=U_0 \left|\Psi\right|^2,$$ where $U_0=4\pi \hbar^2 l_s/m^3_{\chi}$. This definition contains the fundamental parameters of the model, namely the scattering length $l_s$ and the particle mass $m_{\chi}$. The former is associated with the nature of the short range self-interactions in the condensate. For example, in laboratory systems, it can be either positive (the case of $^{87}$Rb atoms with $l_s=5.45$ nm and thus repulsive interactions) [@Ju] or negative (the case of $^{7}$Li atoms with $l_s=-1.45$ nm and thus attractive interactions) [@Abdu].
In this work we will consider only cases where $l_s>0$. The impact of $l_s$ on the mass-radius configurations of astrophysical BECs has been investigated in [@delfini]. Note that there appears some degeneracy in the $U_0$ parameter, i.e., there are infinitely many combinations of $l_s$ and $m_{\chi}$ capable of producing the same $U_0$ value. We discuss this degeneracy and the admissible numerical values of these parameters in the next sections. In order to apply the Gross-Pitaevskii equation to astrophysical problems one proceeds with the so-called Madelung decomposition. In this procedure, the wave function is replaced by $$\Psi=\sqrt{\rho(r,t)}\, e^{\frac{i}{\hbar}S(r,t)},$$ where $\rho=\left|\Psi\right|^2$ is the number density of the system and $S$ is the velocity potential. The mass/energy density can be written in terms of the mass of each individual particle as $\rho_{\chi}=m_{\chi}\rho$. Therefore, the BEC system can be described in terms of a hydrodynamical set of equations, which are $$\begin{aligned} & & \frac{\partial\vec{u}}{\partial t}+\left(\vec{u}\cdot\nabla\right)\vec{u}=-\frac{\nabla p_{\chi}}{\rho_\chi}-\nabla\left(\frac{V}{m_\chi}\right)-\frac{\nabla Q}{m_\chi} \,, \\ & & \frac{\partial \rho_\chi}{\partial t}+\nabla\cdot\left(\rho_{\chi} \vec{u}\right)=0 \,,\end{aligned}$$ where we define $$\begin{aligned} \vec{u} &=& \frac{\hbar}{m_\chi}\nabla S \,, \\ Q &=& -\frac{\hbar^2}{2m_{\chi}}\frac{\nabla^2\sqrt{\rho_\chi}}{\sqrt{\rho_\chi}} \,.\end{aligned}$$ The particle self-interaction of this specific BEC-inspired fluid gives rise to a pressure of polytropic form $$\label{Pbec} p_{\chi}=\frac{2\pi\hbar^2 l_s}{m^3_{\chi}}\rho^2_{\chi}.$$ On the other hand, the quantum potential $Q/m_\chi$ results in the so-called quantum pressure[^2].
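The quantum potential $Q$ defined above can be checked numerically. The following sketch (in units $\hbar = m_\chi = 1$, one dimension, second-order central differences; the grid choices are arbitrary) recovers the analytic result $Q = 1/4 - x^2/8$ for a Gaussian density $\rho_\chi \propto e^{-x^2/2}$:

```python
import numpy as np

def quantum_potential(rho, dx, hbar=1.0, m_chi=1.0):
    """Q = -(hbar^2 / 2 m_chi) * laplacian(sqrt(rho)) / sqrt(rho) on a 1D grid
    (central differences; the wrapped boundary points are unreliable)."""
    s = np.sqrt(rho)
    lap = (np.roll(s, -1) - 2.0 * s + np.roll(s, 1)) / dx ** 2
    return -(hbar ** 2) / (2.0 * m_chi) * lap / s

x = np.linspace(-8.0, 8.0, 1601)
dx = x[1] - x[0]
rho = np.exp(-x ** 2 / 2.0)        # Gaussian density profile
Q = quantum_potential(rho, dx)
Q_exact = 0.25 - x ** 2 / 8.0      # analytic Q for this profile
```

Note that for this smooth profile $Q$ is of order unity near the center, which is the regime where the quantum pressure would matter; the Thomas-Fermi discussion below argues it is negligible on the scales of interest.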
We can use the identity $$\frac{\partial_{j}p_{ij}}{\rho_\chi} \equiv \frac{\partial_i Q}{m_\chi} \,,$$ where $p_{ij}$ is the quantum anisotropic pressure tensor [@delfini], given by $$\label{anisotropic} p_{ij} = \frac{\hbar^2}{2m^{2}_{\chi}}\left(\frac{\partial_i\rho\partial_j\rho}{\rho_\chi}-\delta_{ij}\nabla^2\rho\right) \,.$$ For the problem we have in mind, the potential $V(r,t)$ in (\[GP\]) is in fact the gravitational potential, which is sourced by $\rho_{\chi}$ via the Poisson equation $\nabla^2 V = 4 \pi G \rho_{\chi}$. This allows us to solve the system of equations. In the cases where the pressure due to the particle self-interaction dominates, the quantum anisotropic pressure can be neglected. This is the so-called Thomas-Fermi approximation. In [@RindlerDaller] the authors estimated, for the case of BEC dark matter halos, when the Thomas-Fermi limit is valid. They consider the forces associated with both pressures, which balance the gravitational collapse, and find that the Thomas-Fermi regime is valid when $$\frac{\kappa}{\kappa_H} \gg 2 \, ,$$ where $$\kappa = 4\pi \hbar^2\frac{l_s}{m_\chi} \,.
$$ Adopting $R$ as the mean radius and $M$ as the mass of a BEC dark matter halo, it is found that $$\label{kappaH} \kappa_H = \frac{2}{3}\pi \hbar^2 \frac{R}{M} \,.$$ The quantity (\[kappaH\]) can be written in characteristic values as $$\kappa_H = 2.252\times 10^{-64}\left(\frac{R}{100~\textup{kpc}}\right)\left(\frac{10^{12}~M_{\odot}}{M}\right)~\textup{eV}~\textup{cm}^3 \,.$$ If we consider halos with sizes between the Milky Way ($M=10^{12}~M_{\odot}$ and $R=100~\textup{kpc}$) and a typical dwarf galaxy ($M=10^{10}~M_{\odot}$ and $R=10~\textup{kpc}$) we can constrain $\kappa_H$ to the range $$\kappa_H \approx 2\times \left(10^{-64} \textrm{ -- } 10^{-63}\right) ~\textup{eV}~\textup{cm}^3 \,.$$ Using the model parameter range that will be adopted in this work ($10^{-6}$ meV$< m_{\chi} < 10^{4}$ meV$ ; 10^{-12}$ fm$ < l_s < 10^{12} $ fm) we calculate that $$\kappa \approx 2\times \left(10^{-43} \textrm{ -- } 10^{27}\right) ~\textup{eV}~\textup{cm}^3 \,,$$ which indicates that the Thomas-Fermi approximation can be adopted. A further justification for the use of the Thomas-Fermi approximation relies on the fact that we are going to focus on the largest cosmological scales. For example, in Fourier space density perturbations are affected by the quantum pressure contribution proportionally to $k^4$, while usual pressure contributions modify the evolution of the density contrast (which will be defined soon) according to $k^2$ [@Chavanis]. Therefore, the quantum pressure corrections could be relevant only for the very small scales. Besides, in the top-hat spherical collapse the density of all fluids inside the spherical overdense region is homogeneous [@Abramo2], and the anisotropic pressure (\[anisotropic\]) should be zero.
A cosmological dark matter fluid with the above pressure leads to the background expansion $$H^2=\frac{8\pi G}{3}\left(\rho_b+\rho_{\chi}+\rho_{\Lambda}\right),$$ where $\rho_{\chi}$ is the BEC dark matter density, which in the Thomas-Fermi limit is determined by the pressure (\[Pbec\]) via the continuity equation. More details will be discussed in sections \[SectionIV\] and \[SectionV\]. In fact, we will follow in this work the background expansion determined in ref. [@harko1]. The nonlinear top-hat collapse {#SectionIII} ============================== Here we present the basic equations that describe the evolution of a spherical collapsing matter region in an expanding background. This is the ideal technique for studying the clustering patterns of dark matter halos. We will follow the standard calculations presented in Refs. [@Abramo:2007iu; @Abramo2; @Rui; @carames]. For general fluids, we define quantities such as $$\begin{aligned} \vec{v}^c &=& \vec{u}^0 + \vec{v}^p, \\ \rho^c&=&\rho\left(1+\delta\right) , \\ p^c&=&p + \delta p.\end{aligned}$$ They are, respectively, the velocity, density and pressure of the collapsed region. The background expansion velocity is given by $\vec{u}^0$ and is associated with Hubble’s law. Peculiar motions are denoted by $\vec{v}^p$. The total density within this spherical region under collapse, $\rho^c$, is written as the sum of the background density and the overdensity $\delta \rho$. The same happens with the pressure definition. The rate at which the overdense region expands reads $$h=H+\frac{\theta}{3}(1+z),$$ where $\theta=\vec{\nabla} \cdot \vec{v}^p$. Energy conservation is also required for the collapsing region.
Therefore, each component $i$ obeys a separate equation of the type $$\label{eqdeltaorig} \dot{\delta_i}=-3H(c^2_{eff_i}-w_i)\delta_i-\left[1+w_i+(1+c^2_{eff_i})\delta_i\right]\frac{\theta}{a},$$ where the energy density contrast is defined as $$\label{deltadef} \delta_i = \left(\frac{\delta \rho}{\rho}\right)_i,$$ and the effective speed of sound is computed following $c^2_{eff_i} = (\delta p / \delta \rho)_i$. Note that an overdot means the derivative with respect to the cosmic time $t$. The dynamical evolution of the homogeneous spherical region will be governed by the Raychaudhuri equation $$\label{eqthetaorig} \dot{\theta}+H\theta+\frac{\theta^2}{3a}=-4\pi Ga \sum_i (\delta\rho_i + 3\delta p_i)\ .$$ For a cosmological model composed of $N$ distinct fluids one has to solve $N+1$ equations: one of the type (\[eqdeltaorig\]) for each fluid and, since we adopt the top-hat profile, a single equation for the velocity potential $\theta$, which is sourced by the density fluctuations of the $N$ fluids. Since we will use the standard $\Lambda$CDM universe as our reference model, here we show its equations for the spherical collapse. Both the baryonic and the dark matter components are assumed to be pressureless fluids. Therefore, we can write down $$\label{blambda} \dot{\delta}_b=-\left(1+\delta_b\right)\frac{\theta}{a},$$ $$\label{dmlambda} \dot{\delta}_{dm}=-\left(1+\delta_{dm}\right)\left(1+\sigma^2\right)\frac{\theta}{a},$$ $$\label{thetalambda} \dot{\theta}+H\theta+\frac{\theta^2}{3a}=-4\pi G a \left[\rho_b\delta_b+\rho_{dm}\delta_{dm}(1+\sigma^2)\right]\ .$$ Note that there is no equation for the clustering of the cosmological constant, since it is treated as a background quantity. Therefore, it influences this set of equations only via the expansion rate $H\equiv H(\rho_b, \rho_{dm}, \Lambda)$.
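A minimal sketch of how the system (\[blambda\])-(\[thetalambda\]) can be integrated numerically, using the scale factor $a$ as the evolution variable ($\dot X = aH X'$), units $H_0 = 1$, $\sigma = 0$, and growing-mode initial conditions $\theta_i = -a_i H_i \delta_i$; the density parameters and seed amplitude are illustrative, not the paper's runs:

```python
import numpy as np
from scipy.integrate import solve_ivp

Ob, Odm, OL = 0.05, 0.25, 0.70           # flat background, units H0 = 1

def E(a):                                 # H(a)/H0
    return np.sqrt((Ob + Odm) / a**3 + OL)

def rhs(a, y):
    db, ddm, th = y
    Ea = E(a)
    ddb  = -(1.0 + db)  * th / (a**2 * Ea)      # Eq. (blambda), sigma = 0
    dddm = -(1.0 + ddm) * th / (a**2 * Ea)      # Eq. (dmlambda), sigma = 0
    src = 1.5 * (Ob * db + Odm * ddm) / a**2    # 4 pi G a sum_i rho_i delta_i
    dth = (-Ea * th - th**2 / (3.0 * a) - src) / (a * Ea)  # Eq. (thetalambda)
    return [ddb, dddm, dth]

a_i, d_i = 1e-3, 1e-5                     # start near decoupling, tiny seed
th_i = -a_i * E(a_i) * d_i                # linear growing mode
sol = solve_ivp(rhs, (a_i, 1.0), [d_i, d_i, th_i], rtol=1e-8, atol=1e-12)
db0, ddm0, th0 = sol.y[:, -1]
```

For this tiny seed the evolution stays linear, and the final contrast simply reflects the $\Lambda$CDM growth factor (roughly a factor $\sim 780$ from $a_i = 10^{-3}$); larger seeds drive the turnaround and collapse studied in the following sections.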
In order to numerically solve (\[blambda\]-\[thetalambda\]) one usually specifies the initial conditions for $\delta_b$, $\delta_{dm}$ and $\theta$ at the redshift of decoupling, $z_{dec}\sim 1000$, from which one can treat baryons as an independent fluid. Abrupt phase transition {#SectionIV} ======================= The temperature $T_{crt}$ sets the beginning of the BEC phase transition. This is in fact a process which takes some finite time $\Delta t$ until all the normal dark matter has been converted into the BEC phase. As estimated in [@harko1], $\Delta t$ is of order of $10^{6}$ years. Although the latter value is parameter dependent, it is in general an almost negligible fraction of the universe’s lifetime. Therefore, the assumption that at $z_{crt}$ there is an instantaneous conversion to the BEC phase seems plausible, and it will be considered in this section. For $z>z_{crt}$ the dark matter equation of state calculated in (\[pdm\]) reads $$\label{p1} p_{dm} = \sigma^2\rho_{dm} \,,$$ where $\sigma^2 \equiv \left\langle \vec{v}^2 \right\rangle / 3c^2$. Applying this to the continuity equation one finds $$\label{rho1} \rho_{\chi} = \rho_{crt}\left(\frac{1+z}{1+z_{crt}}\right)^{3(1+\sigma^2)} \,, \quad z\geq z_{crt} \, ,$$ where $z_{crt}$ is the redshift at the transition point and $\rho_{crt}\equiv\rho(z_{crt})$. For $z<z_{crt}$ the effective equation of state of the BEC dark matter is $$p_\chi = u_0 \rho_\chi^2 \,, \quad u_0 \equiv \frac{2 \pi \hbar^2 l_s}{m^3_{\chi}} \, ,$$ and again, using the continuity equation, we find $$\rho_\chi = \frac{\rho_{crt}}{(1+\omega_{crt})(\frac{1+z_{crt}}{1+z})^3-\omega_{crt}} \, , \quad z\leq z_{crt} \,,$$ where $\rho_\chi$ is a continuous function at $z_{crt}$ and $\omega_{crt} \equiv \frac{p_{crt}}{\rho_{crt}}=\sigma^2$.
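The two background branches above join continuously at $z_{crt}$, which can be verified with a small piecewise implementation ($z_{crt}$ and $\rho_{crt}$ are treated as free inputs here; in the model they are fixed by pressure continuity and Eq. (\[redshiftcrt\])):

```python
def rho_chi(z, z_crt, rho_crt, sigma2=3e-6):
    """Background DM density across the BEC transition (omega_crt = sigma^2)."""
    w = sigma2
    if z >= z_crt:
        # normal phase: rho ~ (1+z)^{3(1+sigma^2)}
        return rho_crt * ((1.0 + z) / (1.0 + z_crt)) ** (3.0 * (1.0 + w))
    # condensate phase
    return rho_crt / ((1.0 + w) * ((1.0 + z_crt) / (1.0 + z)) ** 3 - w)

z_c, r_c = 50.0, 1.0   # illustrative transition redshift, density in units of rho_crt
above = rho_chi(z_c + 1e-8, z_c, r_c)
below = rho_chi(z_c - 1e-8, z_c, r_c)
```

Evaluating at $z = 0$ reproduces the expression for $\rho_{\chi 0}$ used below; for tiny $\sigma^2$ the condensate branch is numerically very close to the usual $(1+z)^3$ dilution, which is why the background expansion barely distinguishes the two phases.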
At this point the continuity of the pressure (see discussion in [@harko1]) sets $$\label{eq:pressaoconstante} \sigma^2 \rho_{crt} = u_0 \rho_{crt}^2 \quad \Rightarrow \quad \rho_{crt} = \frac{\sigma^2}{u_0} \, ,$$ which, of course, depends on the model parameters. From this definition, $$\rho_{\chi 0} = \frac{\rho_{crt}}{(1+\omega_{crt})(1+z_{crt})^3-\omega_{crt}}$$ resulting in $$\label{redshiftcrt} (1+z_{crt})^3 = \frac{\frac{\Omega_{crt}}{\Omega_{\chi 0}}+\omega_{crt}}{1+\omega_{crt}} \, ,$$ where $\Omega_{\chi 0} = 0.25$ is today's fractional dark matter density parameter. The critical temperature at the point of Bose-Einstein condensation is $$\begin{aligned} &T_{crt}&=\frac{2\pi \hbar^2 \rho_{crt}^{2/3}}{\zeta(3/2)^{2/3}m_\chi^{5/3}k_B} =\left[\frac{2\pi \hbar^2}{\zeta(3/2)^2 k_B^3}\frac{m_\chi\sigma^4}{l_s^2}\right]^{1/3} \\ &=& 6.87 \left(\frac{m_\chi}{1~\textup{meV}}\right)^{1/3}\left(\frac{\sigma^2}{3\times 10^{-6}}\right)^{2/3}\left(\frac{l_s}{1~\textup{fm}}\right)^{-2/3}\textup{eV} \,, \nonumber\end{aligned}$$ where $\zeta(3/2)$ is the Riemann zeta function and $k_B$ is the Boltzmann constant. Note that before the phase transition we have $c_s^2=\sigma^2 = \omega_{crt}$. After this point, the equation of state parameter and the adiabatic ($c^2_s=\partial p / \partial \rho$) speed of sound associated with this fluid read, respectively, $$\omega_\chi(z) = u_0 \rho_\chi(z) \, , \quad c^2_{s_\chi} = 2u_0 \rho_\chi(z) = 2\omega_\chi(z) \, .$$ Concerning the perturbed region, the effective speed of sound is actually given by the expression $$\label{ceffchi} c^2_{eff_{\chi}}=\frac{\delta p_\chi}{\delta \rho_\chi}=\frac{p^c_\chi-p_\chi}{\rho^c_\chi-\rho_\chi}=w_\chi\frac{(1+\delta_\chi)^2-1}{\delta_\chi}=w_\chi(2+\delta_{\chi}),$$ and expanding for small values of $\delta$ one finds $c^2_{eff} \rightarrow c^2_s$, as expected. Since a crucial issue in this model is the determination of the moment at which the transition happens, in Fig. 
\[zcA\] we show the dependence of $z_{crt}$ on the model parameters $m_{\chi}$ and $l_s$. The figure is obtained numerically by solving Eq. (\[redshiftcrt\]). A given $z_{crt}$ value represents a curve in the $m_{\chi}$ [*versus*]{} $l_s$ plane. The solid line sets the parameter values for which the transition happens today at $z=0$. Therefore, only for parameter values below the solid line is the BEC dark matter model able to leave some imprint on the observations. Note, for example, that the configuration $(m_{\chi} , l_s)=(10^{-4} $ meV $ , 10^5 $ fm$)$ is an acceptable one. However, in this case, it would be impossible to probe the bosonic nature of dark matter since the transition would happen in the far future. On the other hand, over the long-dashed line the transition happens at the time of photon-baryon decoupling. In principle, $z_{crt}>1000$ is also allowed, but its possible effect on the primordial CMB anisotropies is still not clearly known. Although this issue has not yet been investigated in detail, we keep for convenience $0<z_{crt}<1000$, where we can consider a matter dominated universe (apart from late $\Lambda$ effects) and pressureless baryons. This redshift range corresponds to the gray region in this plot. The short-dashed line corresponds to $z_{crt}=10$ and is shown to guide the reader on how $z_{crt}$ evolves in this plane. We also show in this figure the usual range for axion masses $10^{-3}$meV $ < m_{\rm axion} < 1 $meV. Taking typical axion scattering lengths $<10^{-16}$ fm, Fig. \[zcA\] correctly indicates that axion condensation happens indeed very early in the universe's history. In our work we are not advocating in favor of any specific DM particle candidate. It is nevertheless desirable that most of the successes of the standard CDM paradigm be kept. Indeed, it has been realised long ago that axions are very promising candidates for CDM [@axionCDM]. 
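The $z_{crt}$ contours of Fig. \[zcA\] can be reproduced from Eq. (\[redshiftcrt\]). The sketch below works in natural units ($\hbar=c=k_B=1$, energies in eV) and assumes $h=0.7$ and $\Omega_{\chi 0}=0.25$; the conversion constants are standard values, not taken from the paper, so the output differs slightly from the quoted numbers depending on the cosmology adopted.

```python
import numpy as np

# z_crt from Eq. (redshiftcrt) with rho_crt = sigma^2/u0, natural units.
# Assumed cosmology: h = 0.7, Omega_chi0 = 0.25; sigma^2 = 3e-6.
hbar_c = 1.97327e-7              # eV m, converts l_s from metres to 1/eV
rho_crit0 = 8.098e-11 * 0.7**2   # critical density today in eV^4
Omega_chi0 = 0.25
sigma2 = 3e-6

def z_crt(m_eV, ls_m):
    ls = ls_m / hbar_c                       # scattering length in 1/eV
    u0 = 2.0 * np.pi * ls / m_eV**3          # 1/eV^4
    rho_crt = sigma2 / u0                    # eV^4
    cube = (rho_crt / (Omega_chi0 * rho_crit0) + sigma2) / (1.0 + sigma2)
    return cube**(1.0 / 3.0) - 1.0

print(z_crt(20e-3, 1e-9))   # m_chi = 20 meV, l_s = 10^6 fm: ~3.2
```

For $m_\chi=20$ meV and $l_s=10^6$ fm this gives $z_{crt}\approx 3.2$, close to the value $z_{crt}=3.19$ quoted in the text; increasing $l_s$ (or decreasing $m_\chi$) pushes the transition to later times.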
Therefore, the existence of such particles exemplifies the validity of our approach since it guarantees the non-relativistic behavior of the DM component before the phase transition takes place. Of course, actual CDM axion models, which condense much earlier in the universe's history, bear no direct relation to our approach. Notice also that axions are characterized by an attractive self-interaction. We use them merely as an example of light CDM particles. At the same time, our approach relies on the fact that before the transition we are dealing with CDM-like particles. Therefore, one should avoid much lighter particles, since they would be associated with warm/hot dark matter models. The meaning of the mass of the dark matter particle is quite clear. But in the cosmological context, what does the scattering length $l_s$ mean? Typical BEC experiments work with values in the range $10^{6} $ fm$<l_s <10^{9}$ fm. For these values the condition $z_{crt}>0$, i.e., ensuring that the transition has already occurred, is satisfied for masses $m > 10$ meV and $100$ meV, respectively. Of course, usual BEC experiments with atoms cannot guide us in our search for viable DM parameters. However, we also note that by extrapolating the contours to lighter particles, as for example ultra-light masses of order $m \sim 10^{-22}$ meV, $z_{crt}>0$ requires almost negligible $l_s$ values, which can be much smaller than the Planck length ($l_{plk}\sim 10^{-20}$ fm). It is also worth noting that the parameter space indicated by the gray region is consistent with the stability of BEC dark matter halos as calculated in Ref. [@haloenergy]. However, see also a related discussion on the non-stability of BEC halos in Ref. [@halo]. ![The redshift of the phase transition ($z_{crt}$) in the parameter plane $l_s$ [*versus*]{} $m$. The solid line sets the parameters for which the transition happens today at $z=0$. 
The long dashed line sets the parameters for which the transition takes place around the decoupling time $z_{crt}=1000$. The axion mass range is shown only for the sake of comparison.[]{data-label="zcA"}](Abrup1.eps){width="47.00000%"} In order to solve for the evolution of the perturbed quantities during the collapse we adopt the following strategy. We solve numerically the $\Lambda$CDM equations taking initial conditions at a redshift $z_i=1000$ with the values $\delta_{dm}(z_i)=3.5 \times 10^{-3}$, $\delta_{b}(z_i)=10^{-5}$ and $\theta(z_i)=0$ [@Rui; @carames]. These values represent the standard amplitudes in the linear perturbation spectrum associated with today's cluster scales around the decoupling time. Indeed, the top-hat profile remains appropriate for such scales. Notice that cluster scales collapsed at low redshifts and therefore already within the BEC dark matter epoch. Smaller scales, which collapsed before the BEC phase transition, preserve the CDM structure; only differences in the final virial configuration would exist, which is beyond the scope of this work. With such initial conditions this set of equations is evolved until the critical redshift $z_{crt}$. At this point, the quantities $\delta_{dm}(z_{crt})$, $\delta_{b}(z_{crt})$ and $\theta(z_{crt})$ are used as initial conditions for the BEC dark matter equations, which use (\[ceffchi\]), from the critical redshift to $z=0$. We have studied in great detail the parameter space $m_\chi$ and $l_s$ and, although the BEC dark matter model indeed yields a distinct dynamics at the nonlinear level, this difference is, in practice, almost negligible. This feature is shown in the left panel of Fig. \[fig2\], where the expansion of the collapsed region is plotted. The solid red line represents the standard cosmology while the dashed black line was calculated for a mass $m_{\chi}=20$ meV and a scattering length $l_s=10^6$ fm. 
With this choice the transition occurs at $z_{crt}=3.19$, as indicated by the vertical dashed line. Both curves are in practice indistinguishable. The effective speed of sound is plotted in the right panel of Fig. \[fig2\]. This explains why there are no significant changes in the evolution. We remark again that this result is not due to the specific choice $m_{\chi}=20$ meV and $l_s=10^6$ fm. It is a general feature of the model. ![Expansion rate (left) and effective speed of sound (right) of the collapsed region. In both plots we have $m_{\chi}=20$ meV and $l_s=10^6$ fm.[]{data-label="fig2"}](Abruph1.eps "fig:"){width="43.00000%"} ![Expansion rate (left) and effective speed of sound (right) of the collapsed region. In both plots we have $m_{\chi}=20$ meV and $l_s=10^6$ fm.[]{data-label="fig2"}](Abrupceff1.eps "fig:"){width="43.00000%"} Smooth phase transition {#SectionV} ======================= We deal now with the situation in which there is a gradual conversion of “normal” dark matter into the condensed phase, which starts at a redshift $z_{crt}$ and is finished at a redshift $z_{BEC}$. This is indeed the more realistic case. The dynamics shown in this section was also developed for the first time in Ref. [@harko1]. As mentioned in the last section, the estimated duration $\Delta t=t(z_{BEC})-t(z_{crt})$ of this transition is of order $\Delta t \sim 10^6$ years, a small fraction of the universe's lifetime $t_U\sim 10^{10}$ years [@harko1]. However, $\Delta t$ depends on the model parameters $l_s$ and $m_{\chi}$. We calculate here again $\Delta t$ for some values of $l_s$ and $m_{\chi}$ and plot it in Fig. \[fig.Deltat\]. In the right panel of this figure, there is a maximum value $\Delta t_{\mathrm{max}} = 3.4\times 10^{9}$ years assuming, for instance, a mass $m_{\chi} = 1$ meV and $l_{s} \sim 3.1\times 10^2~\mathrm{fm}$. There are of course other combinations of $l_s$ and $m_{\chi}$ which produce similar $\Delta t$ values. 
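Durations of this kind can be cross-checked with the standard age integral $t(z)=\int_z^\infty dz'/[(1+z')H(z')]$ in a flat matter plus $\Lambda$ background. The sketch below assumes $h=0.7$ and $\Omega_m=0.3$ and uses the transition interval quoted later for $m_\chi=20$ meV and $l_s=10^6$ fm.

```python
import numpy as np
from scipy.integrate import quad

# Delta t = t(z_BEC) - t(z_crt) in a flat matter + Lambda background.
# h = 0.7 and Omega_m = 0.3 are illustrative assumptions.
Om_m, Om_L = 0.3, 0.7
gyr_per_inv_H0 = 9.778 / 0.7     # 1/H0 in Gyr for h = 0.7

def age(z):
    """Cosmic time at redshift z, in Gyr."""
    integrand = lambda zp: 1.0 / ((1.0 + zp) * np.sqrt(Om_m * (1.0 + zp)**3 + Om_L))
    val, _ = quad(integrand, z, np.inf)
    return gyr_per_inv_H0 * val

dt = age(1.43) - age(3.19)       # transition interval quoted in the text
print(round(dt, 2))              # ~2.4 Gyr
```

The result, about $2.4\times 10^9$ years, agrees with the interval quoted below for this parameter choice and is indeed a non-negligible fraction of the universe's lifetime.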
The lowest values of $\Delta t$ we have found are $\sim 10^6$ years. Therefore, this analysis shows that, contrary to previous estimates, the phase transition can last a non-negligible fraction of the universe's lifetime. It is worth noting that recently Ref. [@harko2015] has pointed out the preferred values $m_{\chi}\sim 10^{-3}$ meV and $l_s \sim 10^{-7} ~\mathrm{fm}$, which according to our Fig. \[fig.Deltat\] maximize the duration of the phase transition. As we will see below, the background dynamics and the evolution of the perturbations for the smooth phase transition differ significantly from the abrupt case studied in the last section. One can thus expect to observe some distinguishable feature of the BEC dark matter nonlinear collapse. Let us now develop the dynamics during the smooth phase transition. Before the transition starts, we have the same dynamics of an isotropic non-relativistic gas, as described in the last section by equations (\[p1\]) and (\[rho1\]). During the phase transition we can define the fraction of converted dark matter as $$\label{eq:deffracao} f(z) = \frac{\rho(z)-\rho_{crt}}{\rho_{\mathrm{BEC}}-\rho_{crt}} \,,$$ where $\rho(z)$ is the dark matter density along the transition, $\rho_{crt}$ is the dark matter density before the transition and $\rho_{\mathrm{BEC}}$ its value afterwards. The function $f(z)$ is defined in such a way that at $z_{crt}$ we have $f(z_{crt})=0$. When the dark matter has been fully converted to the BEC phase, $f(z_{\mathrm{BEC}}) = 1$. Inserting (\[eq:deffracao\]) into the continuity equation and integrating it from $z_{crt}$ down to $z\geq z_{\mathrm{BEC}}$ we find $$\label{eq:fracao} f(z) = \frac{1+\omega_{crt}}{\frac{\Omega_{\mathrm{BEC}}}{\Omega_{crt}}-1}\left[\left(\frac{1+z}{1+z_{crt}}\right)^3-1\right] \, .$$ ![The phase transition time length $\Delta t$ as a function of the model parameters $l_{\mathrm s}$ and $m$, where we fixed $\sigma^2=3 \times 10^{-6}$. 
[]{data-label="fig.Deltat"}](Deltatsmoothm.eps "fig:"){width="43.00000%"} ![The phase transition time length $\Delta t$ as a function of the model parameters $l_{\mathrm s}$ and $m$, where we fixed $\sigma^2=3 \times 10^{-6}$. []{data-label="fig.Deltat"}](Deltatsmoothls.eps "fig:"){width="43.00000%"} Then, the dark matter density evolution becomes $$\begin{aligned} &&\rho_\chi=\rho_{crt}\left(\frac{1+z}{1+z_{crt}}\right)^{3(1+\sigma^2)} \,, \quad z \geq z_{crt} \, ; \\ \label{eq:densBECuni} &&\rho_\chi=\rho_{crt}\left\{1+(1+\omega_{crt})\left[\left(\frac{1+z}{1+z_{crt}}\right)^3-1\right]\right\}, \\ \nonumber &&\quad z_{crt} \geq z \geq z_{\mathrm{BEC}} \, ; \\ \label{eq:BECcondensada} &&\rho_\chi=\rho_0 \frac{(1+z)^3}{(1+\omega_0)-\omega_0(1+z)^3} \,, \quad z \leq z_{\mathrm{BEC}} \, .\end{aligned}$$ We still have to determine the redshift $z_{\mathrm{BEC}}$ at which the phase transition is over. With the condition $f(z_{\mathrm{BEC}})=1$ inserted in (\[eq:fracao\]) we find $$\label{eq:eqOmegaBEC} \left[ \frac{\Omega_{\mathrm{BEC}}}{\Omega_{crt}}-1 + (1+\omega_{crt})\right]\left(\frac{1+z_{crt}}{1+z_{\mathrm{BEC}}}\right)^3 = (1+\omega_{crt}) \, ,$$ and using $z=z_{\mathrm{BEC}}$ in the expression (\[eq:BECcondensada\]) for the condensed dark matter density we have $$\label{eq:eqzBEC} \frac{\Omega_{\mathrm{BEC}}}{\Omega_{crt}} = \frac{\frac{\Omega_0}{\Omega_{crt}}(1+z_{\mathrm{BEC}})^3}{(1+\omega_0)-\omega_0(1+z_{\mathrm{BEC}})^3} \, .$$ Eqs. (\[eq:eqOmegaBEC\]) and (\[eq:eqzBEC\]) can now be solved for $z_{\mathrm{BEC}}$ and $\Omega_{\mathrm{BEC}}$. As said before, during the phase transition both non-condensed and condensed dark matter coexist and the dark matter pressure is constant, having the same value for both components in the interval $z_{\mathrm{BEC}}\leq z \leq z_{\mathrm{crt}}$, as given by Eq. (\[eq:pressaoconstante\]). We will assume that the same happens for the collapsed pressure $p^{\mathrm c}$. 
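Given a transition interval, the condensed fraction follows directly from Eq. (\[eq:fracao\]). In the sketch below $\Omega_{\mathrm{BEC}}/\Omega_{crt}$ is fixed by requiring $f(z_{\mathrm{BEC}})=1$, and the redshift pair is the interval quoted later for $m_\chi=20$ meV, $l_s=10^6$ fm.

```python
# Condensed fraction f(z) during the smooth transition, Eq. (eq:fracao).
w_crt = 3e-6                   # omega_crt = sigma^2
z_crt, z_bec = 3.19, 1.43      # transition interval quoted in the text

def y3(z):
    return ((1.0 + z) / (1.0 + z_crt))**3

# Omega_BEC/Omega_crt chosen so that f(z_bec) = 1 exactly
ratio_bec = 1.0 + (1.0 + w_crt) * (y3(z_bec) - 1.0)

def f(z):
    """Fraction of dark matter already converted to the condensate."""
    return (1.0 + w_crt) / (ratio_bec - 1.0) * (y3(z) - 1.0)
```

By construction $f(z_{crt})=0$ and $f(z_{\mathrm{BEC}})=1$, with $f$ increasing monotonically as $z$ decreases through the transition interval.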
This allows us to find the constraint $$1+\delta_\sigma^{\mathrm{crt}} = (1+\delta_{\mathrm B}^{\mathrm{crt}})^2 \, ,$$ where we used the expression $\rho_\chi(z) = \rho_\sigma(z)+\rho_B(z)$, which compared with Eq. (\[eq:fracao\]) allows us to identify $\rho_\sigma(z) = \rho_{\mathrm{crt}}(1-f(z))$ as the non-condensed dark matter density and $\rho_B(z) = \rho_{\mathrm{BEC}}f(z)$ as the density of the condensed state. The continuity of the dark matter fluid pressure enables us to treat both components as one single fluid also at the perturbed level. In this case, the effective fluid sound velocity during the phase transition becomes $$c^2_{eff_\chi} = \frac{p^c_{crt} - p_{crt}}{\rho_\chi \delta_\chi} = \sigma^2\frac{\rho_{\mathrm{crt}}}{\rho_\chi}\frac{\delta_{\mathrm{crt}}}{\delta_\chi} = \omega(z)\frac{\delta_{\mathrm{crt}}}{\delta_\chi} \,,$$ where $\omega(z)=\omega_{\mathrm{crt}}\rho_{\mathrm{crt}}/\rho_\chi(z)$ is the equation of state parameter for the dark matter fluid during the phase transition. After the phase transition is completed, i.e., when $z \leq z_{\mathrm{BEC}}$, the effective fluid sound velocity of the BEC dark matter will be $$c^2_{eff_\chi} = \frac{p^c_{crt} - p_{crt}}{\rho_\chi \delta_\chi} = \frac{\sigma^2\rho_{crt}\delta_{crt}}{\rho_\chi \delta_\chi} = \omega(z)(2+\delta_\chi) \,,$$ where $\omega(z)=\omega_{\mathrm{crt}}\rho_\chi(z)/\rho_{\mathrm{crt}}$ is the equation of state parameter for the dark matter after the phase transition. Since the velocity dispersion $\sigma^2$ of the dark matter particles before the BEC phase transition is small, the same assumptions made for $z_{\mathrm{crt}}$ in the previous section are still valid here, and we will consider values for the model parameters ($m_\chi,l_{\mathrm s}$) such that $0<z_{\mathrm{crt}}<1000$. We will also consider only cases where $z_{\mathrm{BEC}}\geq0$. This set of equations is evolved until the critical redshift $z_{crt}$ assuming the same initial conditions at $z_i$ as before. 
At this point, the quantities $\delta_{dm}(z_{crt})$, $\delta_{b}(z_{crt})$ and $\theta(z_{crt})$ are used as initial conditions for the phase transition perturbed Eqs. (\[eqdeltaorig\]) and (\[eqthetaorig\]) with the suitable background parameters. This set of equations is again evolved until $z_{\mathrm{BEC}}$, and the quantities $\delta_{dm}(z_{\mathrm{BEC}})$, $\delta_{b}(z_{\mathrm{BEC}})$ and $\theta(z_{\mathrm{BEC}})$ are used as initial conditions for the BEC dark matter perturbed equations. ![Expansion rate of the collapsed region for the smooth phase transition approach.[]{data-label="fig.hSmooth"}](h_Smooth_1.eps "fig:"){width="43.00000%"} ![Expansion rate of the collapsed region for the smooth phase transition approach.[]{data-label="fig.hSmooth"}](h_Smooth_2.eps "fig:"){width="43.00000%"} ![Dark matter density contrast for the smooth phase transition approach.[]{data-label="contrast1"}](delta_Smooth_1.eps "fig:"){width="43.00000%"} ![Dark matter density contrast for the smooth phase transition approach.[]{data-label="contrast1"}](delta_Smooth_2.eps "fig:"){width="43.00000%"} In Fig. \[fig.hSmooth\] we show the expansion of the collapsed region for the smooth phase transition model, where the solid red curve represents the standard $\Lambda$CDM model and the black dashed curve represents the BEC model, for $m_\chi = 20$ meV in the left panel, $m_\chi=10$ meV in the right panel and $l_\mathrm{s} = 10^6$ fm in both cases. The dashed vertical lines show the initial and the final points of the transition phase. In the left panel, $z_\mathrm{crt}=3.19$ and $z_\mathrm{BEC}=1.43$, while in the right panel $z_\mathrm{crt}=1.10$ and $z_\mathrm{BEC}=0.45$. These intervals correspond to $2.40\times 10^9$ years and $3.38\times 10^9$ years, respectively. As in the abrupt transition model there are no major differences between CDM and BEC dark matter. The evolution of the non-linear density perturbations is shown in Fig. 
\[contrast1\], where $\delta_\mathrm{dm} \equiv \delta \rho_{\mathrm{dm}}/\rho_{\mathrm{dm}}$ is the dark matter density contrast. Again, the red curve represents the standard $\Lambda$CDM model, while the black dashed curve shows the behavior of the BEC model for $m_\chi = 20$ meV in the left panel, $m_\chi = 10$ meV in the right panel and $l_\mathrm{s} = 10^6$ fm in both cases. The curves are again indistinguishable. The turnaround redshift $z_{\mathrm{ta}}$ marks the moment when the perturbed region starts to decrease its physical radius. This happens when $h=0$, i.e., $z_{\mathrm{ta}}=z(h=0)$. For the $\Lambda$CDM model $z_{\mathrm{ta}}^{\Lambda\mathrm{CDM}} = 0.2113$, and for the cases seen in both panels of Fig. \[fig.hSmooth\] we have $|z_{\mathrm{ta}}^{\mathrm{BEC}}-z_{\mathrm{ta}}^{\Lambda \mathrm{CDM}}| \approx 10^{-4}$. Conclusions =========== We have studied the nonlinear clustering properties of the Bose-Einstein condensate dark matter model. In this scenario, bosonic dark matter particles undergo a phase transition as their temperature reaches the critical one $T_{crt}$, which corresponds to some critical redshift $z_{crt}$. The main questions here are: i) how does $z_{crt}$ depend on the fundamental model parameters $m_{\chi}$ (the particle mass) and $l_s$ (the scattering length)? and ii) what is the background and perturbative dynamics during the phase transition? Fig. \[zcA\] shows in detail the expected degeneracy of $z_{crt}$ values in the $l_s$ [*versus*]{} $m_{\chi}$ plane, i.e., for a given $z_{crt}$, there are many admissible parameter configurations. This result identifies the parameter values for which $z_{crt}>0$ and which are therefore able to leave imprints on large scale structure observations. At the same time, if the actual parameter values of the BEC model lie in the region $z_{crt}<0$, then the bosonic nature of the dark matter particles cannot be accessed via cosmological observables. 
If the present model is employed for BEC phase transitions, ultra-light candidates ($m_{\chi} \lesssim 10^{-22}$ eV) would only lead to possible observational imprints for $l_s$ of order of the Planck length or smaller. Note that this claim is limited to the fluid description used here. Recent calculations on the full dynamics of the ultra-light axion scalar field show that there are indeed possible observable imprints in the cosmological data [@Pedro]. Our strategy was to identify specific signatures of the BEC dark matter nonlinear clustering. Since there is a positive pressure associated with the BEC dark fluid, one can expect that the corresponding effective speed of sound will somehow modify the agglomeration rate. We tried to understand this process via both the abrupt and the smooth phase transition approaches. In the former scenario the dark matter dynamics changes suddenly at $z_{crt}$. In the latter, there is a continuous conversion from the “normal” to the BEC phase. We showed that the smooth transition can indeed last quite a significant fraction of the universe's lifetime, so one might expect this case to lead to a distinctive dynamics. However, in both approaches to the phase transition we could not identify any relevant difference between the BEC model and the standard CDM model. This is mostly because the model parameters leading to $z_{crt}>0$ produce almost negligible $c_{eff}$ values. On one hand, this guarantees that the nonlinear clustering patterns of the BEC model at large scales are very similar to the CDM model. We have provided a theoretical confirmation of the recent numerical results of Schive et al [@Schive], which claim that the differences between BEC DM and standard CDM appear only in the internal structure of DM halos rather than in the cosmological large scale distribution. 
On the other hand, this eliminates the cosmological nonlinear perturbative study as a possible technique to probe the bosonic nature of dark matter particles. It is also worth noting that the typical value for the critical overdensity for collapse $\delta_c=1.686$ remains unchanged for the BEC parameter space probed here. Perhaps this conclusion is in part due to the fact that we have assumed a simple version of the first order phase transition of the BEC DM model. Properly taking into account, for example, the latent heat released during the transition and the resulting dynamics associated with the nucleation of the new bubbles, we could end up with a very drastic effect on the non-linear clustering. We will leave this analysis for a future work. [**Acknowledgments:**]{} We acknowledge T. Harko for useful correspondence and the anonymous referee for his/her remarks that substantially improved this work. We thank CNPq (Brazil) and FAPES (Brazil) for partial financial support. HV also acknowledges the financial support of A\*MIDEX project (n° ANR-11-IDEX-0001-02) funded by the “Investissements d’avenir" French Government program, managed by the French National Research Agency (ANR). [00]{} P. A. R. Ade, [*et al.*]{}, arXiv:1502.01589 \[astro-ph.CO\]. H. Baer, K. Y. Choi, J. E. Kim and L. Roszkowski, Phys. Reports, [**555**]{}, 1 (2015). L. Xu, Y. Chang, Phys. Rev. D [**88**]{}, 127301 (2013); R. Hlozek, D. Grin, D. J. E. Marsh and P. G. Ferreira Phys. Rev. D [**91**]{}, 103512 (2015). J. F. Navarro, C. Frenk, S. White, The Astrophysical Journal [**463**]{}, 563 (1996). D. H. Weinberg, J. S. Bullock, F. Governato, R. Kuzio de Naray, A. H. G. Peter, arXiv:1306.0913v1 \[astro-ph.CO\]; J. Oñorbe [*et. al.*]{} arXiv:1502.02036v1 \[astro-ph.GA\]. A. De Felice and S. Tsujikawa, Living Rev. Relativity [**13**]{}, 3 (2010); S. Capozziello, M. De Laurentis Physics Reports, [**509**]{} 4, 167 (2011); T. Clifton, P. G. Ferreira, A. Padilla, C. 
Skordis, Physics Reports [**513**]{}, 1 (2012). C. M. Will, Living Rev. Rel.  [**17**]{}, 4 (2014). S. Tremaine and J. E. Gunn. Phys. Rev. Lett. [**42**]{}, 407 (1979). Paul Bode [*et al.*]{} ApJ [**556**]{}, 93 (2001); H. J. de Vega, P. Salucci, N. G. Sanchez, New Astronomy, [**17**]{}, 653 (2012); C. Destri, H. J. de Vega, N. G. Sanchez, Phys. Rev. D[**88**]{}, 083512 (2013); M. Viel, G.D. Becker, J.S. Bolton and M.G. Haehnelt, Phys. Rev, D [**88**]{} 043502 (2013). A. Schneider, D. Anderhalden, A. Maccio, J. Diemand, Mon.Not.Roy.Astron.Soc. [**441**]{}, 6 (2014). W. Hu, R. Barkana and A. Gruzinov. Phys. Rev. Lett. [**85**]{}, 1158 (2000). M. Rocha, A. H. G. Peter, J. S. Bullock, M. Kaplinghat, S. Garrison-Kimmel, J. Onorbe and L. A. Moustakas, MNRAS [**430**]{}, 81 (2013). H. Velten, D. J. Schwarz, J. C. Fabris, W. Zimdahl, Physical Review D [**88**]{}, 103522 (2013); H. Velten , IJGMMP [**11**]{}, 02, 1460013 (2014); H. Velten, T.R.P. Caramês, J. C. Fabris, L. Casarini, R. C. Batista, Phys. Rev. D [**90**]{}, 123526 (2014). C. C. Bradley, C. A. Sackett, J. J. Tollett, and R. G. Hulet, Physical Review Letters [**75**]{}, 1687 (1995); M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell, Science [**269**]{}, 198 (1995); E. A. Cornell and C. E. Wieman, Rev. Mod. Phys. [**74**]{}, 875 (2002); W. Ketterle, Rev. Mod. Phys.[**74**]{}, 1131 (2002). P. Sikivie and Q. Yang, Phys.Rev.Lett. [**103**]{}, 111301 (2009). C.G. Böhmer and T. Harko, JCAP [**06**]{} (2007) 025. T. Harko, Phys. Rev. D [**83**]{}, 123515 (2011). A. Suarez, Victor H. Robles, T. Matos, Astrophysics and Space Science Proceedings [**38**]{}, Chapter 9 (2013). M.Yu.Khlopov, A.S.Sakharov and D.D.Sokoloff, Nucl.Phys. B (Proc. Suppl.) [**72**]{}, 105-109 (1999); I.G.Dymnikova, M.Yu.Khlopov, Mod. Phys. Lett. A [**15**]{} 2305 (2000). B. Li, T. Rindler-Daller, and Paul R. Shapiro, Phys. Rev. [**D89**]{} (2014) 083536. A. H. Guth, M. P. Hertzberg, C. 
Prescod-Weinstein \[arXiv:1412.5930 \[astro-ph.CO\]\]. F. Dalfovo, S. Giorgini, L.P. Pitaevskii and S. Stringari, Rev. Mod. Phys. [**71**]{} (1999) 463. M. Yu. Khlopov, B. E. Malomed and Ya. B. Zeldovich, MNRAS [**215**]{}, 575 (1985). A. Suarez, T. Matos, Mon.Not.Roy.Astron.Soc. [**416**]{}, 87 (2011). H. Velten and E. Wamba, Phys. Lett. B [**709**]{}, 1 (2012). P.-H. Chavanis, A&A [**537**]{},A127 (2012). B. Kain and H. Y. Ling, Phys. Rev. D [**85**]{}, 023527 (2012). R. C. Freitas and S. V. B. Gonçalves, JCAP [**04**]{}, 049 (2013). M. Alcubierre, A. de la Macorra, A. Diez-Tejedor, J. M. Torres, arxiv:1501.06918 \[gr-qc\]. V. Springel, Astronomische Nachrichten, [**333**]{}, Issue 5-6, 515 (2012). P. Mocz and S. Succi, \[arXiv:1503.03869 \[physics.comp-ph\]\]. H.-Y. Schive , T. Chiueh and T. Broadhurst, Nature Physics, [**10**]{}, 496 (2014); H.-Y Schive, M.-H. Liao, T.-P Woo, S.-K. Wong, T. Chiueh, T. Broadhurst and W.-Y.P. Hwang, Phys. Rev. Lett., [**113**]{}, 261302 (2014). T. Harko, Phys. Rev. D [**89**]{}, 084040, (2014). P. S. Julienne, F. H. Mies, E. Tiesinga, and C. J. Williams, Phys. Rev. Lett. [**70**]{}, 1880 (1997). F. Kh. Abdullaev, B. B. Baizakov, S. A. Darmanyan, V. V. Konotop, and M. Salerno, Phys. Rev. A [**64**]{}, 043606 (2001). Pierre-Henri Chavanis, Phys. Rev. D [**84**]{}, 043531 (2011) ; P.H. Chavanis, L. Delfini, Phys. Rev. D [**84**]{}, 043532 (2011). T. Rindler-Daller and P. R. Shapiro, Mon. Not. Roy. Astron. Soc.  [**422**]{}, 135 (2012). L. R. Abramo, R. C. Batista, L. Liberato, and R. Rosenfeld, Phys. Rev. D [**79**]{}, 023516 (2009). L. R. Abramo, R. C. Batista, L. Liberato and R. Rosenfeld, JCAP [**0711**]{}, 012 (2007). R. A. A. Fernandes, J. P. M. de Carvalho, A. Yu. Kamenshchik, U. Moschella, A. da Silva, Phys. Rev. D [**85**]{}, 083501 (2012). Thiago R. P. Caramês, Júlio C. Fabris, Hermano E. S. Velten, Phys. Rev. D [**89**]{}, 083533 (2014); Hermano E. S. Velten, Thiago R. P. Caramês, Phys. Rev. D [**90**]{}, 063524 (2014). J. 
Preskill, M. Wise, and F. Wilczek, Phys. Lett. B [**120**]{}, 127 (1983); L. Abbott and P. Sikivie, Phys. Lett. B [**120**]{}, 133 (1983); M. Dine and W. Fischler, Phys. Lett. B [**120**]{}, 137 (1983); P. Sikivie, Lect. Notes Phys. [**741**]{}, 19 (2008). J. C. C. de Souza, M. O. C. Pires, JCAP[**03**]{}, 010 (2014). F. S. Guzman, F. D. Lora-Clavijo, J. J. Gonzalez-Aviles, F. J. Rivera-Paleo, JCAP [**09**]{}, 034 (2013). T. Harko, P. Liang, S.-D. Liang, G. Mocanu, [*Testing the Bose-Einstein Condensate dark matter model at galactic cluster scale* ]{}, arXiv: 1510.06275. D. Marsch and P. Ferreira, Phys. Rev. D [**82**]{}, 103528 2010; R. Hlozek, D. Grin, D. J.E. Marsh, P. G. Ferreira, Phys. Rev. D [**91**]{}, 103512 (2015). [^1]: A recent controversial claim challenging the existence of such astronomical BEC condensates has been discussed in [@Guth]. [^2]: Note that both quantum pressure and self-interaction pressure are of quantum mechanical origin.
--- abstract: 'We examine in detail an alternative method of retrieving the information written into an atomic ensemble of three-level atoms using electromagnetically induced transparency. We find that the behavior of the retrieved pulse is strongly influenced by the relative collective atom-light coupling strengths of the two relevant transitions. When the collective atom-light coupling strength for the retrieval beam is the stronger of the two transitions, regeneration of the stored pulse is possible. Otherwise, we show the retrieval process can lead to creation of soliton-like pulses.' author: - 'Amy Peng, Mattias Johnsson, and Joseph J. Hope' title: 'Pulse retrieval and soliton formation in a non-standard scheme for dynamic electromagnetically induced transparency' --- Introduction ============ Recent progress in the coherent control of light-matter interactions has led to many interesting possibilities and practical applications. Amongst them is the concept of electromagnetically induced transparency (EIT), first proposed by Harris [@Harris1997], in which a strong coherent field (“control") is used to make an otherwise opaque medium transparent near atomic resonance for a second weak (“probe") field. This EIT scheme can be used to store and retrieve the full quantum information in a weak probe field by changing the strength of the control field while the pulse is inside the atomic sample [@Fleischhauer2000]. In this paper we examine in detail an alternative method of retrieving the stored information that was first proposed by Matsko [*et al.*]{} [@Matsko2001], and investigate the parameter regime under which this process is feasible. The usual way of retrieving the stored information consists of a time-reversed version of the writing process [@Liu2001; @Phillips2001]. 
The elegant physics behind this scheme was described by Fleischhauer and Lukin, who noted that the combined atomic and optical state adiabatically follows a dark state polariton [@Fleischhauer2002]. The “writing" process involves turning the control field to zero, storing the quantum information of the light beam in the purely atomic form of the polariton. When the control field is returned to its original value, the polariton switches back to photonic form, identical to the original optical pulse. In work describing some of the detailed behavior of that process, Matsko [*et al.*]{} noted an alternate scheme that may also store and reproduce a copy of a weak probe pulse [@Matsko2001]. In this scheme, the writing process remains the same, but retrieval of the pulse involves applying a coherent control field to the transition originally coupled by the probe field. This causes the probe pulse to be regenerated on the transition originally coupled by the control field. This alternative scheme cannot be explained in terms of dark state polaritons, and behaves quite differently for different parameters of the system. We analyze this scheme in detail in this paper. As it is important to distinguish between the different control fields applied at different times, we will refer to the control field as the “writing" field during the first part of the process (storage) and as the “retrieval" field during the second part (the regeneration of the probe pulse). We will show that when the collective atom-light coupling strength for the retrieval beam is the larger of the two transitions, retrieval is possible and the retrieved pulse is amplified, time reversed, stretched or compressed in time and phase conjugated compared to the input pulse. 
Conversely, if the collective coupling strength of the retrieval beam is not the larger of the two transitions, we find that the retrieved pulse differs substantially from the input pulse and the retrieval process can lead to the creation of a soliton-like combination of electromagnetic fields and atomic coherences that propagates without change in shape. Model ===== To analyze the system we use a quasi one-dimensional model, consisting of two copropagating pulses passing through an optically thick medium of length $l$ composed of three-level atoms. The atoms have two metastable ground states $|b \rangle$ and $|c \rangle$ which interact with two fields $\hat{\mathcal{E}}_p(z,t)$ and $\hat{\mathcal{E}}_c(z,t)$ as shown in Figure \[threelevel\]. $\hat{\mathcal{E}}_i$ are the slowly varying amplitudes related to the positive frequency parts of the electric fields, given by $$\begin{aligned} \hat{E}_p^{+} & = & \sqrt{\frac{\hbar \omega_{ab}}{2 \epsilon_0 V}} \hat{\mathcal{E}}_p (z,t) e^{\frac{i \omega_{ab}}{c}(z-ct)} \nonumber \\ \hat{E}_c^{+} & = & \sqrt{\frac{\hbar \omega_{ac}}{2 \epsilon_0 V}} \hat{\mathcal{E}}_c (z,t) e^{\frac{i \omega_{ac}}{c}(z-ct)}. \nonumber\end{aligned}$$ Here $\omega_{\mu \nu} = (E_\mu - E_\nu)/\hbar$ is the resonant frequency of the $|\mu \rangle \leftrightarrow | \nu \rangle$ transition and $V$ the quantization volume, here taken as the interaction volume. As the pulses are co-propagating we are able to neglect Doppler effects. ![Level structure of the atoms[]{data-label="threelevel"}](threelevel.eps){width="6cm" height="2.8cm"} To perform a quantum analysis of the light-matter interaction it is useful to use locally-averaged atomic operators. 
We take a length interval $\delta z$ over which the slowly-varying field amplitudes do not change much, containing $n \mathcal{A} \delta z \gg 1$ atoms, where $n$ is the atomic density and $\mathcal{A}$ is the cross sectional area of the pulses, and introduce the continuous atomic operators $$\hat{\sigma}_{\mu \nu}(z,t) = \frac{1}{n \mathcal{A} \delta z} \sum_{i, z \leq z_i < z + \delta z} | \mu^i(t) \rangle \langle \nu^i(t) | e^{ \frac{i \omega_{\mu \nu}}{c}(z - ct)}, \label{contatom}$$ where $z_i$ is the position of the $i$th atom and $| \mu^i(t) \rangle$ is the $| \mu \rangle$ state wavefunction for the $i$th atom. Using these continuous atomic operators, the interaction Hamiltonian under the rotating wave approximation is given by $$\hat{\mathcal{H}} = - \int_0^{l} \frac{N \hbar}{l} [ g_p \hat{\mathcal{E}}_p(z,t) \hat{\sigma}_{ab}(z,t) + g_c \hat{\mathcal{E}}_c(z,t) \hat{\sigma}_{ac}(z,t) + H.c.] dz \label{hamiltonian}$$ where $l$ is the length of the cell, $N$ is the number of atoms in the interaction region and the coupling constants are $g_p = d_{ab} \sqrt{\omega_{ab}/2 \epsilon_0 V \hbar}$, $g_c = d_{ac} \sqrt{\omega_{ac}/2 \epsilon_0 V \hbar}$ where $d_{ab}$ and $d_{ac}$ are the dipole moments of the $|a \rangle \leftrightarrow | b \rangle$ and $| a \rangle \leftrightarrow |c \rangle$ transitions respectively.
This Hamiltonian leads to the following equations of motion for the density matrix elements and fields $$\begin{aligned} \dot{\rho}_{bb} & = & \gamma_b \rho_{aa} - i \Omega_p \rho_{ba} + i\Omega_p^{\ast} \rho_{ab} \label{rhobb} \\ \dot{\rho}_{cc} & = & \gamma_c \rho_{aa} - i \Omega_c \rho_{ca} + i \Omega_c^{\ast} \rho_{ac} \nonumber \\ \dot{\rho}_{ab} & = & - \gamma_{ab} \rho_{ab} + i \Omega_p (\rho_{bb} - \rho_{aa}) + i \Omega_c \rho_{cb} \nonumber \\ \dot{\rho}_{cb} & = & - i \Omega_p \rho_{ca} + i \Omega_c^{\ast} \rho_{ab} \nonumber \\ \dot{\rho}_{ca} & = & - \gamma_{ca} \rho_{ca} - i \Omega_p^{\ast} \rho_{cb} + i \Omega_c^{\ast}( \rho_{aa} - \rho_{cc} ) \nonumber \\ \left( \frac{\partial}{\partial t} \right. & + & \left. c \frac{\partial}{\partial z} \right) \Omega_p = i \alpha_p \rho_{ab} \nonumber \\ \left( \frac{\partial}{\partial t} \right. & + & \left. c \frac{\partial}{\partial z} \right) \Omega_c = i \alpha_c \rho_{ac} \label{motion}\end{aligned}$$ where the $\gamma_\mu$ are phenomenological decay rates, we have defined the two Rabi frequencies as $\Omega_i = g_i \langle \hat{\mathcal{E}}_i \rangle$, and the constants $\alpha_p$ and $\alpha_c$ are the collective atom-light coupling constants for the transitions $| b \rangle \leftrightarrow | a \rangle$ and $| c \rangle \leftrightarrow | a \rangle$ respectively. We now proceed to consider the modified version [@Matsko2001; @Zibrov2002] of the storage and retrieval scheme using dynamic EIT. With all the atoms initially optically pumped into the ground state $| b \rangle$, we turn on the writing beam driving the $| c \rangle \leftrightarrow | a \rangle$ transition to its maximum value $\Omega_c^0$, while a small input pulse of maximum amplitude $\Omega_p^0 \ll \Omega_c^0$ is sent into the medium on the $| b \rangle \leftrightarrow | a \rangle$ transition.
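To make these dynamics concrete, the following minimal Python sketch (not the simulation code used for the figures; all parameter values are illustrative) integrates the atomic equations with a fourth-order Runge-Kutta step for constant resonant fields. Starting with all atoms in $| b \rangle$, the medium is optically pumped into the EIT dark state, whose ground-state coherence $\rho_{cb} = -\Omega_p \Omega_c/(\Omega_p^2 + \Omega_c^2)$ is exactly the quantity that later stores the pulse:

```python
import numpy as np

# Optical Bloch equations of the resonant three-level system for constant
# fields.  State vector: (rho_bb, rho_cc, rho_ab, rho_cb, rho_ca); the closed
# system gives rho_aa = 1 - rho_bb - rho_cc.  All decay rates set to gamma.
gamma = 1.0
Op, Oc = 0.1, 1.0                 # weak probe, strong writing beam (units of gamma)

def rhs(r):
    bb, cc, ab, cb, ca = r
    aa = 1.0 - bb - cc
    return np.array([
        gamma*aa - 1j*Op*np.conj(ab) + 1j*np.conj(Op)*ab,           # rho_bb
        gamma*aa - 1j*Oc*ca + 1j*np.conj(Oc)*np.conj(ca),           # rho_cc
        -gamma*ab + 1j*Op*(bb - aa) + 1j*Oc*cb,                     # rho_ab
        -1j*Op*ca + 1j*np.conj(Oc)*ab,                              # rho_cb
        -gamma*ca - 1j*np.conj(Op)*cb + 1j*np.conj(Oc)*(aa - cc),   # rho_ca
    ])

r = np.array([1.0, 0, 0, 0, 0], dtype=complex)   # all atoms start in |b>
dt = 0.01
for _ in range(20000):            # integrate to t = 200/gamma
    k1 = rhs(r); k2 = rhs(r + dt*k1/2); k3 = rhs(r + dt*k2/2); k4 = rhs(r + dt*k3)
    r += dt*(k1 + 2*k2 + 2*k3 + k4)/6

rho_cb_dark = -Op*Oc/(Op**2 + Oc**2)   # ground-state coherence of the dark state
print(r[3].real, rho_cb_dark)          # the medium settles into the dark state
```

This coherent population trapping into the dark state is the mechanism by which the writing beam and probe imprint the coherence $\rho_{cb}$ during storage.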
EIT effects cause the group velocity of the input pulse to be drastically reduced to a new value given by [@Fleischhauer2002] $$v_g = \frac{c}{ 1 + \alpha_p / |\Omega_c^0|^2} \label{vg}$$ which can be much less than the speed of light. This leads to significant spatial compression of the input pulse as it enters the medium, allowing the entire pulse (the typical pulse length outside the cell is of order of a few kilometers) to be stored inside a vapor cell (of length a few centimeters). Once the input pulse is completely inside the medium, we slowly turn off the writing beam on the $| c \rangle \leftrightarrow | a \rangle$ transition. This writes the information carried by the input pulse onto a collective atomic coherence for storage [@Fleischhauer2002]. After a controllable storage time $T_s$, the stored information can be retrieved by turning on a retrieval beam driving the $| b \rangle \leftrightarrow | a \rangle$ transition as proposed in [@Matsko2001; @Zibrov2002]. Note that this is in contrast to the usual retrieval scheme for EIT light storage in which the retrieval beam is on the $| c \rangle \leftrightarrow | a \rangle$ transition and the output pulse is generated on the $| b \rangle \leftrightarrow | a \rangle$ transition [@Fleischhauer2002]. For normal EIT in the ideal case, the output pulse is identical to the input pulse. Here, because the time reversed version of the writing beam is on the $| b \rangle \leftrightarrow |a \rangle$ transition, the output pulse is generated on the $| c \rangle \leftrightarrow | a \rangle$ transition. As a consequence the output pulse need not have the same frequency or polarization as the input pulse. It is not immediately clear why any fields made by this process would be correlated with the stored pulse.
After all, the original pulse produces a small population in state $|c\rangle$ and leaves most atoms in state $|b\rangle$, so the control beam is mainly interacting with states that have not been affected by the original pulse in any significant way. Also, as the strong retrieval beam is initiated on the most populated transition, it will lead to significant spontaneous emission, which would appear to dominate any kind of coherent process necessary for the retrieval. However, as will be shown later, our simulation indicates that under some parameter regimes, it is indeed possible to recover an output related in amplitude and phase to the input pulse. Retrieving the probe pulse ========================== We will start with an example that illustrates a successful pulse retrieval, and then go on to analyze other possible behaviors of this system. We assume standard atomic initial conditions for EIT given by $\rho_{bb}(z,0)=1$; $\rho_{cc}(z,0) = \rho_{ab}(z,0) = \rho_{cb}(z,0) = \rho_{ca}(z,0) = 0$. In order to better identify the properties of the new retrieval scheme and to judge the quality of the retrieval process, we choose an input pulse at $z=0$ to be the sum of two Gaussians of different height (so the total input pulse is non-symmetric) with a time-varying phase factor. The boundary conditions for the fields at $z=0$ are of the form $$\begin{aligned} \Omega_p(0,t) & = & \Omega_p^0 f(t) e^{i f(t)} + \frac{\Omega_c^0}{2} \left[ 1 + \tanh \left( \frac{t - T_{on}}{T_s} \right) \right] \label{omegapin} \\ \Omega_c(0,t) & = & \frac{\Omega_c^0}{2} \left[ 1 - \tanh \left( \frac{t-T_{off}}{T_s} \right) \right] \label{omegacin}\end{aligned}$$ where $f(t)$ is a unit amplitude envelope function.
The first part of equation (\[omegapin\]) describes the input (signal) pulse that we wish to store, whereas the second part of equation (\[omegapin\]) describes the turning on of the retrieval beam (amplitude $\Omega_c^0$) at time $T_{on}$ with a switching time of approximately $T_s$. $\Omega_c$ describes the turning off of the writing beam at time $T_{\mbox{\it off}}$. These boundary conditions for the fields are plotted in Figure \[inputpulses\]. With these initial conditions, the equations of motion (\[motion\]) can be solved numerically in the moving frame defined by $\xi = z$, $\tau = t - z/c$ using the method described in Shore [@Shore], implementing the numerical integration with a fourth order Runge-Kutta method [@XMDS]. For convenience, we choose $\gamma_b = \gamma_{ab} = \gamma$ and $\gamma_c = \gamma_{ca} = (\alpha_c/\alpha_p) \gamma$. ![Envelope of pulses entering the medium at $z=0$ as a function of time. a) $\Omega_p$ is the field driving the $| b \rangle \leftrightarrow | a \rangle$ transition and b) $\Omega_c$ is the field driving the $| c \rangle \leftrightarrow |a \rangle$ transition. The parameters are $T_{\mbox{\it off}} = 140 \gamma^{-1}$, $T_s = 18.85 \gamma^{-1}$, $T_{on} = 259 \gamma^{-1}$, $\Omega_p^0 = 0.0265 \gamma$ and $\Omega_c^0 = 2.6526 \gamma$. Amplitudes of the pulses are displayed in units of $\gamma$. The input pulse to be stored is $\Omega_p$ before the start of storage. The retrieved pulse is $\Omega_c$ after the storage process.[]{data-label="inputpulses"}](InputPulses.eps){width="9cm" height="8cm"} ![Comparison of a) amplitude and b) phase of both the retrieved and input pulses. The solid line represents the retrieved pulse and the dashed line the input pulse (magnified by a factor of ten). The parameters are $\alpha_p = 30177 c\gamma$, $\alpha_c = 29272 c\gamma$, $l = 4$cm. The input pulses are as shown in Figure \[inputpulses\].
The large amplitude on the retrieval pulse transition at earlier times is because the field on this transition was used as the writing beam initially. Similarly the large value on the input pulse transition at later times represents the retrieval beam.[]{data-label="result1"}](result1.eps){width="8cm" height="8cm"} Figure \[result1\] shows a comparison of the phase and amplitude of the retrieved and input pulses, both propagating in the positive $z$ direction. In practice, this output pulse of well-defined shape will be superimposed on top of spontaneously emitted photons, so for the purposes of observing this output, it would be better to choose the lowest atomic density allowed for EIT to work. From the graphs, we observe that the retrieved pulse is time reversed, amplified, widened in time and is the phase conjugate of the input pulse. We can understand this behavior by examining the interaction of the retrieval beam with the stored coherence and how the retrieved pulse is generated. As the writing part of our process is identical to the usual dynamic EIT setup [@Fleischhauer2002], we know that at the end of the storage process, the only variable of the system aside from $\rho_{bb}$ that is significantly nonzero is $\rho_{cb}$ whose spatial variation, shown in Figure \[stored\], encodes the phase and amplitude information of the original input pulse. ![Amplitude a) and phase b) of $\rho_{cb}$ during storage.[]{data-label="stored"}](stored.eps){width="8cm" height="8cm"} ![Spatial variation of the retrieval beam divides the medium into three regions as described in the text. At any time, the retrieved pulse is generated due to the dynamics that occur in region II where the retrieval beam is in the process of pumping population out of state $| b \rangle$. In region I, all atoms are in state $|c \rangle$. 
The beam has not yet penetrated to region III.[]{data-label="RetrievalBeam"}](RetrievalBeam.eps){width="8cm" height="3.5cm"} The retrieval beam enters the medium driving the transition $| b \rangle \leftrightarrow | a \rangle$ while almost all the atoms are in state $| b \rangle$. As the medium is optically thick, the retrieval beam is strongly absorbed and its initial wavefront moves across the medium at a speed much less than the speed of light. The spatial variation of the retrieval beam at any time during the retrieval process has the general shape shown in Figure \[RetrievalBeam\]. At any time, this shape divides the medium into three regions with distinct dynamics. In region I, the retrieval beam $\Omega_p$ has attained its maximum value as it has optically pumped all atoms into the state $| c \rangle$. In region II, the wavefront of the retrieval beam is severely attenuated due to absorption and in region III, not yet reached by the retrieval beam, all the atoms are still in the polariton state left by the writing process. This means that most of the atoms are in state $| b \rangle$ here. We note that at any point in time, the interesting dynamics related to the retrieval of the stored information is occurring only in region II. In this region the retrieval beam $\Omega_p$ is in the process of pumping atoms from $| b \rangle$ to $| a \rangle$ at a rate of $| \Omega_p |^2 / \gamma$. Before a significant population has accumulated in $| c \rangle$, the retrieval beam can coherently scatter off the stored coherence $\rho_{cb}$, generating a retrieved pulse on the $| a \rangle$ to $| c \rangle$ transition. These generated photons will encode some of the properties of the input pulse and propagate out of the medium at the speed of light through region III, as this region contains only atoms in $| b \rangle$. On long time-scales optical pumping alters $\rho_{cb}$ and will overwrite any coherence that was initially stored there. 
At this point, photons emitted when atoms decay from $| a \rangle$ to $| c \rangle$ will bear no relation to the input pulse. Furthermore, any field on the $| c \rangle \leftrightarrow | a \rangle$ transition will be strongly damped due to the significant population in state $| c \rangle$. Note that in contrast to the adiabatic following in the usual EIT procedure [@Fleischhauer2002], the retrieval process here is non-adiabatic and as demonstrated in Figure \[popa\](a), the population in the excited state $| a \rangle$ can be quite significant during the retrieval process. ![a) $\rho_{cc}$ (dashed) and $\rho_{aa}$ (solid) as a function of $z$ during the retrieval process. A significant population resides in the excited state during the retrieval process. b) $|\rho_{cb}(0.024,\tau)|$ as a function of $\tau \gamma^{-1}$. The spike in the graph shows the effect of optical pumping by the retrieval beam which overwrites the stored coherence.[]{data-label="popa"}](PopulationInA.eps){width="8cm" height="7cm"} Using this description of the dynamics, one can explain many of the features exhibited by the retrieved pulse. The time reversal occurs because the wavefront of the retrieval beam moves slowly from left to right, so the part of the stored coherence that is closest to the cell entrance sees the retrieval beam first. This part of the coherence near the medium entrance corresponds to the tail end of the input pulse (i.e. the part of the input pulse that entered the cell last), so the tail end of the input pulse will be retrieved first, resulting in an output that is time reversed. Figure \[result1\] also demonstrates that the retrieved pulse can be amplified, and in this particular example the amplitude is increased by a factor of about twenty. This occurs because the new retrieval scheme has a larger reservoir of atoms available for producing photons. 
In the normal EIT setup, the photon number in the retrieved pulse is limited by the number of atoms in state $| c \rangle$ during the storage process, which is strictly less than the number of photons in the input pulse. In this scheme the number of photons generated is limited by the maximum number of atoms in state $|c \rangle$ that the medium can sustain without destroying the stored coherence. This is a property of both the size of the stored coherence and the medium (e.g. optical density). It is, however, independent of the properties of the retrieval beam. For an optically dense medium, this is generally much larger than the number of photons in the input pulse. ![Comparison of a) amplitude and b) phase of retrieved (solid) and input (dashed) pulses. The input pulse has been magnified by a factor of ten. The amplitude of the retrieval beam is now $\Omega_p^0 = 5.3052 \gamma$, twice the value in Figure \[result1\]. We see that the retrieved pulse is generated earlier and is narrower in time because of the increased pumping rate by the retrieval beam. All other parameters are identical to those in Figure \[result1\].[]{data-label="result4"}](result4.eps){width="8cm" height="7cm"} It is also clear that the width of the retrieved pulse is determined by the speed at which the wavefront of the retrieval beam propagates across the medium and not the initial pulse length. Figure \[result4\] shows the retrieved pulse when the amplitude of the retrieval beam is doubled compared to Figure \[result1\] while keeping all other parameters identical. We see that the output pulse is narrower and its amplitude larger while the total number of photons (as indicated by the area underneath the graph of intensities) remains approximately the same.
This is due to the more intense retrieval beam now being able to pump atoms out of $| b \rangle$ at a higher rate, enabling it to move across the medium more quickly and ensuring a shorter time interval between the retrieval of the front and back part of the original input pulse. Since the total number of generated photons is unaffected by the intensity of the retrieval beam, energy conservation necessitates that the retrieved pulse has greater amplitude. The phase conjugation of the output pulse is most easily understood by examining the equations of motion (\[motion\]) and (\[rhobb\]). On a short time scale, we have $\rho_{bb} \approx 1 \gg \rho_{cc}, \rho_{aa}$, and the following set of equations describes the generation of the retrieved pulse from the conjugate of the stored coherence $\rho_{cb}^{\ast}$ $$\begin{aligned} \left( \frac{\partial}{\partial t} \right. & + & \left. c \frac{\partial}{\partial z} \right) \Omega_c = i \alpha_c \rho_{ac} \label{outputeq} \\ \rho_{ac}(z,t) & = & i \int_{T_{on}}^t e^{- \gamma (t-s)} \Omega_p(z,s) \rho_{cb}^{\ast}(z,s) ds \label{rhoacexp}.\end{aligned}$$ Since $\rho_{cb}^{\ast}$ stores the conjugate of the input phase, the retrieved pulse is therefore the phase conjugate of the input pulse. From examining the phase of the generated output field compared to the input, it is clear that the quality of the retrieval process (in terms of extracting a pulse of the same shape as the time-reversed input) is not perfect. For example, from Figure \[result1\] (b), we see that the phase of the retrieved pulse fails to decay to zero after a certain time. This feature is even more prominent when we set the collective atom-light coupling constants to be equal, $\alpha_p = \alpha_c$, as shown in Figure \[result2\]. Again we see that the phase of the retrieved pulse plateaus, this time near the peak of the second Gaussian.
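The sign of the phase in equation (\[rhoacexp\]) can be verified directly. In the toy Python sketch below (constant retrieval field and a stored coherence of constant phase $\phi$; all values are arbitrary), the driven coherence $\rho_{ac}$, and hence the generated field, emerges with phase $\pi/2 - \phi$, i.e., carrying the conjugate of the stored phase:

```python
import numpy as np

# Evaluate rho_ac(t) = i * int_0^t exp(-gamma (t-s)) Omega_p rho_cb*(s) ds
# on a grid (taking T_on = 0).  A stored coherence with phase phi drives a
# coherence with phase pi/2 - phi: the retrieved field is phase conjugated.
gamma, Op, phi = 1.0, 0.5, 0.7
t = np.linspace(0.0, 10.0, 20001)
rho_cb = 0.05*np.exp(1j*phi)*np.ones_like(t)       # toy stored coherence
integrand = np.exp(-gamma*(t[-1] - t))*Op*np.conj(rho_cb)
rho_ac = 1j*np.sum(integrand)*(t[1] - t[0])        # simple Riemann sum
print(np.angle(rho_ac), np.pi/2 - phi)             # equal: conjugated phase
```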
Increasing $\alpha_c$ further to move into the regime $\alpha_c > \alpha_p$, we see from Figure \[result3\] that the quality of the retrieval process is so bad that the output pulse does not even display the characteristic double peak of the input pulse. ![Comparison of a) amplitude and b) phase of retrieved (solid) and input (dashed) pulses. The input pulse has been magnified by a factor of ten. The abrupt cut off of the retrieved pulse near $\tau \gamma^{-1} = 600$ is due to the arrival of the retrieval beam at the exit of the cell $z=0.04$, which pumps all atoms into state $| c \rangle$ and prevents any field existing on this transition. The parameters are $\alpha_p = \alpha_c = 30177 c\gamma$, $l = 4$cm. The input pulses are as given in Figure \[inputpulses\].[]{data-label="result2"}](result2.eps){width="8cm" height="7cm"} ![Comparison of a) amplitude and b) phase of retrieved (solid) and input (dashed) pulses. The input pulse has been magnified by a factor of ten. The parameters are $\alpha_p = 30177 c \gamma < \alpha_c = 31082 c \gamma$, $l = 4$cm. The input pulses are as given in Figure \[inputpulses\].[]{data-label="result3"}](result3.eps){width="8cm" height="7cm"} To further investigate the behavior of the retrieved pulse as the relative strength of the collective coupling constant is varied, we consider a Gaussian input envelope as this makes clearer the difference in retrieval quality. Figure \[gaussamp\] shows the three cases $\alpha_p>\alpha_c$, $\alpha_p = \alpha_c$, and $\alpha_p< \alpha_c$. ![Comparison of amplitudes of input pulse (dashed) magnified by a factor of ten and output pulse (solid) for the three parameter regimes $\alpha_p>\alpha_c$ (a), $\alpha_p=\alpha_c$ (b) and $\alpha_p<\alpha_c$ (c).[]{data-label="gaussamp"}](gaussamp.eps){width="8cm" height="7cm"} For $\alpha_p > \alpha_c$, the generation of the output pulse automatically ceases so that the retrieved pulse will always display the falling end of the Gaussian input.
However, as $\alpha_c$ approaches the value of $\alpha_p$, the pulse generation takes a longer time to cease, giving the asymmetric long tail shape shown in Figure \[gaussamp\] (a). When $\alpha_p = \alpha_c$, eventually a constant value of the generated field is maintained until the retrieval beam reaches the exit of the cell and pumps all the atoms to $| c \rangle$. Continuing the trend, Figure \[gaussamp\] (c) shows that in the regime of $\alpha_p < \alpha_c$, the generated pulse amplitude grows until the retrieval beam has optically pumped all the atomic population into $| c \rangle$. The phase variation of the retrieved pulse confirms the general trend that the quality of the retrieval process decreases as we move across the three parameter regimes. Soliton-Like Behavior ===================== Our numerical results strongly indicate the existence of a soliton solution at the critical point $\alpha_p = \alpha_c$. At this point the field $\Omega_c$ generated from the stored coherence is able to induce a nonzero coherence $\rho_{cb}$ by a co-operative action with the retrieval beam $\Omega_p$. The induced $\rho_{cb}$ in turn can then be used to generate $\Omega_c$. This steady state cycling action explains why the system can continue to generate an output even when the ‘dynamic’ region (region II in Figure \[RetrievalBeam\]) has moved beyond where the input pulse was originally stored. More specifically, when $\alpha_p > \alpha_c$, the induced coherence is not large enough to maintain the cycling action, causing the effect to die out; if $\alpha_p = \alpha_c$, the effect is self-sustaining: the induced $\rho_{cb}$ is just sufficient to maintain the value of $\Omega_c$ that generated it; and for $\alpha_p<\alpha_c$, a greater coherence ($\rho_{cb}$) is induced, leading to an output at the cell exit that is continually amplified in time until cut off by the arrival of the retrieval beam.
However, we have observed that when the cell length is increased, the amplitude of the output pulse appears to tend to a limiting value. Figure \[sameRcb\] shows the induced $\rho_{cb}$, $\Omega_p$ and $\Omega_c$ for $\alpha_p = \alpha_c$ when the initially stored coherence no longer exists. We note that the shape and size of $\rho_{cb}$ and the fields propagate unchanged across the cell, indicating behavior characteristic of solitons. We also found that the shape and size of the final output are independent of the values of the coherences originally stored, indicating that the final field-coherence formation is a characteristic of the system independent of the storage/retrieval process, requiring an initial nonzero $\rho_{cb}$ coherence only as a seed. To demonstrate analytically that our results are in fact solitons we substitute the ansatz $\Omega_i(z,t) = \Omega_i(z-vt)$, $\rho_{\mu \nu}(z,t) = \rho_{\mu \nu}(z-vt)$ into equations (\[motion\]) for the case $\alpha_p = \alpha_c$, where $v$ is the soliton parameter that designates the constant speed with which the soliton propagates across the medium. This allows us to determine the following relationship between the two pulses and the coherence generated as $$\rho_{cb}(s) = - \frac{c - v}{\alpha_0 v} \Omega_p(s) \Omega_c^{\ast}(s) \label{coherencegen}$$ where $\alpha_p = \alpha_c = \alpha_0$ is the collective light-atom coupling constant identical for both transitions and $s=z-vt$. We also obtain the following relationship between the limiting values of the two fields ![$|\Omega_p|$ (dash), $|\Omega_c|$ (dash-dot) and $|\rho_{cb}| \times 5$ (solid) as a function of $z$ at times $\tau \gamma^{-1} = 477.6$ (a), $489$ (b), $501.6$ (c) for the $\alpha_p = \alpha_c$ case.
The field and atomic coherence propagate together unchanged across the cell.[]{data-label="sameRcb"}](soliton.eps){width="8cm" height="7cm"} $$\frac{c-v}{\alpha_0 v} (|\Omega_p^{\infty}|^2 + |\Omega_c^{\infty}|^2) = 2 \label{limitvalues}$$ where $ \Omega_p^{\infty} = \lim_{s \rightarrow -\infty} \Omega_p(s)$ and $ \Omega_c^{\infty} = \lim_{s \rightarrow \infty} \Omega_c(s)$. When we compare equations (\[coherencegen\]) and (\[limitvalues\]) against the results of our numerics for various soliton parameters (the soliton parameter can be changed by varying the amplitude of the retrieval beam), we find good agreement. After further elimination of all the atomic variables, we obtain the two soliton equations relating the two fields $$\begin{aligned} v \left[ \Omega_p \frac{d^2 \Omega_c^{\ast}}{ds^2} - \Omega_c^{\ast} \frac{d^2 \Omega_p}{ds^2} \right] - \gamma \left[\Omega_p \frac{d \Omega_c^{\ast}}{ds} \right. & - & \left. \Omega_c^{\ast} \frac{d \Omega_p}{ds} \right] \nonumber \\ = - \Omega_p \Omega_c^{\ast} \left( \frac{|\Omega_p^{\infty}|^2}{v} \right. & - & \left. \frac{\alpha_0}{c-v} \right)\end{aligned}$$ $$\begin{aligned} - \Omega_p \left( v \frac{d^3 \Omega_p}{ds^3} - \gamma \frac{d^2 \Omega_p}{ds^2} \right) & + & \left(\frac{d \Omega_p}{ds} + \frac{2 \gamma \Omega_p}{v} \right) \left(v \frac{d^2 \Omega_p}{ds^2} - \gamma \frac{d \Omega_p}{ds} \right) \nonumber \\ & = & \frac{2 \Omega_p^2}{v} \frac{d}{ds} \left(|\Omega_c|^2 + |\Omega_p|^2 \right) \nonumber \\ & + & \frac{\gamma}{v^2} \Omega_p^2(|\Omega_p^{\infty}|^2 - |\Omega_c|^2 - |\Omega_p|^2 ) \label{soliton2}\end{aligned}$$ from which the dispersive and nonlinear terms are clearly visible. We believe the photons generated from the cycling action outlined above bear little relation to the input pulse and are therefore not useful as far as quantum information retrieval is concerned.
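Relation (\[limitvalues\]) can be solved for the soliton speed, $v = cS/(S + 2\alpha_0)$ with $S = |\Omega_p^{\infty}|^2 + |\Omega_c^{\infty}|^2$, showing that the structure propagates deeply subluminally in an optically thick medium. A quick numerical sketch (in units of $c = 1$; the values are arbitrary):

```python
# Soliton speed from relation (limitvalues): (c - v) * S = 2 * alpha0 * v,
# with S = |Op_inf|^2 + |Oc_inf|^2.  Units: c = 1; numbers are illustrative.
alpha0 = 1.0e4                 # collective coupling, equal for both transitions
Op_inf, Oc_inf = 2.0, 1.5      # limiting Rabi frequencies of the two fields
S = Op_inf**2 + Oc_inf**2
v = S/(S + 2.0*alpha0)         # soliton speed in units of c
print(v)                       # deeply subluminal
```

Larger limiting field amplitudes increase $S$ and hence $v$, consistent with the soliton parameter being tunable through the retrieval-beam amplitude, as noted above.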
However, the existence of the soliton-like solution and its sensitivity with respect to the coupling parameters is likely to lead to other interesting possibilities. A further analysis of the generation and properties of the solitons is beyond the scope of this paper, although the possibility that solitons can exist in atomic $\Lambda$ systems has previously been considered [@Rybin2004; @Konopnicki1981]. In conclusion, we have demonstrated the feasibility of an alternative retrieval scheme for dynamic EIT under certain parameter regimes and provided physical explanations for its behavior. Our numerical simulation also demonstrated the ability of this new scheme to create soliton-like features that are sensitive to the relative coupling strength of the two transitions. Due to its sensitivity to parameter change, we believe this solitonic behavior could prove useful within the context of magnetometry or high-precision measurement. We thank Elena Ostrovskaya for helpful discussions regarding the soliton features. [12]{} S. E. Harris, Phys. Today [**[50]{}**]{}, 36 (1997) M. Fleischhauer and M. D. Lukin, Phys. Rev. Lett. [**[84]{}**]{}, 5094 (2000) A. B. Matsko, Y. V. Rostovtsev, O. Kocharovskaya, A. S. Zibrov and M. O. Scully, Phys. Rev. A [**[64]{}**]{}, 043809 (2001) C. Liu, Z. Dutton, C. Behroozi and L. V. Hau, Nature [**[409]{}**]{}, 490 (2001) D. F. Phillips, A. Fleischhauer, A. Mair, R. L. Walsworth, and M. D. Lukin, Phys. Rev. Lett. [**[86]{}**]{}, 783 (2001) M. Fleischhauer and M. D. Lukin, Phys. Rev. A [**[65]{}**]{}, 022314 (2002) A. S. Zibrov, A. B. Matsko, O. Kocharovskaya, Y. V. Rostovtsev, G. R. Welch and M. O. Scully, Phys. Rev. Lett. [**[88]{}**]{}, 103601 (2002) B. W. Shore, The Theory of Coherent Atomic Excitation, Wiley, New York (1990) G. Collecutt, P. D. Drummond, P. Cochrane and J. J. Hope, “Extensible Multi-Dimensional Simulator,” documentation and source available from http://www.xmds.org A. V. Rybin and I. P. Vadeiko, J. Opt. B [**[6]{}**]{}, 416 (2004) M.
J. Konopnicki and J. H. Eberly, Phys. Rev. A [**24**]{}, 2567 (1981)
--- abstract: 'In this work, we propose a powerful probe of neutrino effects on the large-scale structure (LSS) of the Universe, namely Minkowski functionals (MFs). The morphology of LSS can be fully described by four MFs. This tool, with strong statistical power, is robust to various systematics and can comprehensively probe all orders of N-point statistics. By using a pair of high-resolution N-body simulations, for the first time, we comprehensively studied the subtle neutrino effects on the morphology of LSS. For an ideal LSS survey of volume $\sim1.73$ Gpc$^3$/$h^3$, neutrino signals are mainly detected from void regions, at significance levels up to $\thicksim 10\sigma$ and $\thicksim 300\sigma$ for the CDM and total matter density fields, respectively. This demonstrates its enormous potential for substantially improving the neutrino mass constraint in the data analysis of upcoming ambitious LSS surveys.' author: - Yu Liu - Yu Yu - 'Hao-Ran Yu' - Pengjie Zhang bibliography: - 'mf.bib' title: 'Neutrino effects on the morphology of cosmic large-scale structure' --- Introduction ============ The neutrino mass problem is one of the major challenges in fundamental physics. The $Z$ boson lifetime measurements found that the number of active neutrinos is 3 ($N^{active}_{\nu}$ = $2.9840\pm0.0082$) [@2006PhR...427..257A], and the neutrino oscillation experiments also revealed that at least two of the three neutrino eigenstates are massive [@1992PhRvD..46.3720B; @1998PhRvL..81.1562F; @2004PhRvL..92r1301A]. However, the oscillation experiments only give the mass-squared splittings between the neutrino eigenstates, which imply lower bounds on the sum of neutrino masses, $\Sigma m_{\nu}$, of 0.05 and 0.1 eV for the normal and inverted mass hierarchies (e.g., [@2008PhRvL.101m1802A]), respectively. The beta decay and neutrinoless double-beta decay experiments are promising laboratory-based experiments for obtaining the absolute neutrino mass scale.
Nevertheless, due to current technical limitations in particle physics experiments (e.g., [@2010NIMPA.623..442W; @2017JPhG...44e4004A]), further accurate measurement of the absolute neutrino mass will be challenging. In cosmology, the analysis of cosmological observables (e.g., the anisotropies of the CMB, the distribution of LSS) can provide crucial complementary information on neutrino masses beyond particle physics experiments. At present, the strongest constraint on the upper bound of the neutrino mass sum, $\Sigma m_{\nu} < 0.12$ eV (2$\sigma$), comes from cosmology, via a combined analysis of CMB and BAO data assuming $\Lambda$CDM cosmology [@2018arXiv180706209P]. The next-generation LSS surveys (e.g., SKA [@SKA], DESI [@2016arXiv161100036D], LSST [@2009arXiv0912.0201L], WFIRST [@wfirst], Euclid [@euclid]) and CMB surveys (e.g., the Simons Observatory [@2019JCAP...02..056A] and CMB-S4 [@2016arXiv161002743A]) will map the cosmic large-scale structure with high precision, providing a great opportunity to improve the measurements of the neutrino mass sum upper bound and other cosmological parameters. Cosmic neutrinos with large thermal velocities can suppress the density perturbations below their free-streaming scale, $\lambda_{fs}(m_{\nu},z) = a(2\pi/k_{fs}) \simeq 7.7\,(1+z)/[\Omega_{\Lambda}+\Omega_{m}(1+z)^3]^{1/2}\,(1\,\mathrm{eV}/m_{\nu})$ Mpc/$h$ [@1998PhRvL..80.5255H; @article; @2011ARNPS..61...69W; @2013neco.book.....L]. The damping amplitude of the density perturbations on nonlinear scales depends on the total neutrino mass, which has been commonly used to constrain and forecast $\Sigma m_{\nu}$ (e.g., [@1998PhRvL..80.5255H; @2008PhRvL.100s1301S; @2016MNRAS.462.4208P; @2017JCAP...02..052A; @2019arXiv190706666C]). In linear theory, the damping amplitudes, $|\Delta P/P|$, on small scales, $k\lambda_{fs}\gg1$, in the total matter power spectrum and in the CDM power spectrum are $\sim 8f_{\nu}$ and $\sim 6f_{\nu}$, respectively [@2019arXiv190706598B].
Here, the neutrino mass fraction is defined by $f_{\nu} \equiv \Omega_{\nu}/\Omega_{m}$, and the density parameter of non-relativistic neutrinos is given by $\Omega_{\nu}=\Sigma m_{\nu}/(93.14\,h^2\,\mathrm{eV})$ [@article]. On large scales, $k\lambda_{fs}\ll1$, neutrinos cluster just as CDM and baryonic matter do. However, the damping of the power spectrum (two-point statistics) is small for realistic neutrino masses, $f_{\nu} \lesssim 1\%$, which makes the damping effect easily contaminated by uncertainties from different sources, e.g., non-linear bias, redshift space distortions (RSDs), baryonic effects [@2019JCAP...01..010P] and degeneracies with $\sigma_8$ [@2018ApJ...861...53V]. Worse still, two-point statistics can only capture Gaussian information, missing substantial higher-order information since the density field is highly non-Gaussian in the late Universe, while neutrino signals are mostly detected around nonlinear scales. These deficiencies downgrade their power for neutrino mass constraints. Other possible unknown systematics beyond standard $\Lambda$CDM cosmology may also mimic the neutrino effect on the matter power spectrum and consequently affect neutrino mass constraints (e.g., nonzero curvature, dynamical dark energy, modified gravity, interactions in the dark sector, etc.). For these reasons, there is strong motivation to investigate new neutrino effects (e.g., [@2019arXiv190500361Z; @2019PhRvD..99l3532Y]) and novel alternative methods beyond two-point statistics (e.g., [@2018JCAP...03..003R; @2019JCAP...05..043C; @2019PhRvD..99f3527L; @2019JCAP...06..019M]). Meanwhile, accurate modeling of neutrino effects is also becoming increasingly essential and critical to the neutrino study in cosmology. In this work, we propose a powerful non-Gaussian probe of neutrino effects on LSS, namely Minkowski functionals (MFs), toward improving the constraining power on $\Sigma m_{\nu}$ in the data analysis of upcoming LSS surveys.
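For orientation, the formulas above are easy to evaluate; the short Python sketch below uses illustrative Planck-like parameter values (not fits from this work):

```python
# Free-streaming scale and linear damping from the expressions above,
# for illustrative Planck-like parameters.
Om, OL, h = 0.31, 0.69, 0.68
sum_mnu = 0.12                       # eV; current 2-sigma upper bound
Onu = sum_mnu/(93.14*h**2)           # non-relativistic neutrino density parameter
f_nu = Onu/Om                        # neutrino mass fraction

def lambda_fs(m_nu, z):
    """Free-streaming scale in Mpc/h for a neutrino of mass m_nu [eV]."""
    return 7.7*(1.0 + z)/(OL + Om*(1.0 + z)**3)**0.5*(1.0/m_nu)

print(f_nu)                          # ~0.009: below one percent
print(lambda_fs(0.04, 0.0))          # ~190 Mpc/h for a 0.04 eV neutrino today
print(8*f_nu, 6*f_nu)                # linear damping of P_m and P_cdm
```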
This method can comprehensively capture all orders of N-point statistics [@2017PhRvL.118r1301F] of LSS and is robust to various systematic effects [@Park_2010; @2001MNRAS.327.1041B; @2012ApJ...747...48W; @2014MNRAS.437.2488B; @10.1093/pasj/55.5.911; @2017PhRvL.118r1301F], e.g., nonlinear evolution, nonlinear bias, RSDs, etc. Its potential for constraining $\Sigma m_{\nu}$ has so far only been addressed for the $2D$ weak lensing (WL) convergence field in Ref. [@2019JCAP...06..019M], where the neutrino effects on WL correspond to those on the LSS projected along the line of sight between source and observer. In this work, we mainly focus on the analysis of neutrino effects on LSS using $3D$ MFs. In comparison with previous case-by-case studies (e.g., neutrino impacts on voids [@2015JCAP...11..018M; @2019MNRAS.488.4413K] and halos/clusters [@2012PhRvD..85f3521I; @2013JCAP...12..012C], which can only capture local information on neutrino effects on LSS), analysis using MFs helps to comprehensively understand the subtle neutrino effects on different density regions of LSS. Moreover, we find that the neutrino signals in MFs are mainly detected from underdense regions, which potentially allows neutrino detections to avoid various systematics from high-density regions. By including higher-order information, non-Gaussian tools (e.g., $2D$ MFs [@2012PhRvD..85j3513K; @2017MNRAS.466.2402S], peak statistics, three-point statistics [@2017MNRAS.466.2402S; @2010APh....32..340V; @2019arXiv190911107H], etc.) combined with other probes also help break parameter degeneracies in various cosmological studies. Minkowski functionals ===================== Minkowski Functionals are a set of morphological descriptors. They are additive and motion invariant, which makes them insensitive to observational effects, e.g., the survey shape [@10.1093/pasj/55.5.911]. 
This tool, originally derived from the theory of convex bodies and integral geometry, was first introduced to cosmology by Ref. , and has since been commonly used to detect deviations from Gaussianity (e.g., ). According to Hadwiger’s theorem [@Hadwiger1957Vorlesungen], the morphological properties of any pattern in $d$-dimensional space can be fully characterized by $d+1$ MFs, which allows MFs to comprehensively probe all orders of N-point statistics at once. Therefore, MFs can serve as a powerful non-Gaussian statistical tool in cosmology, providing extra information beyond the popular two-point statistics and improving the power of cosmological parameter constraints (e.g. on $\Omega_m$, $\sigma_8$, $w$ and $\Sigma m_{\nu}$ in $2D$ weak lensing convergence field analyses [@2012PhRvD..85j3513K; @2019JCAP...06..019M]). For $3D$ LSS analysis in cosmology, the most commonly used patterns (other patterns can also be found in the literature, e.g., ) are the excursion sets ($F_{\nu}$) of the matter density field (or halo/galaxy field), where the density threshold ($\nu$) is adopted as the diagnostic parameter for displaying the morphological features. Here, the excursion set $F_{\nu}$ is the set of all points $\mathbf{x}$ with density $\nu(\mathbf{x})\geq \nu$. 
The Minkowski Functionals measure the volume ($V_0$), surface area ($V_1$), integrated mean curvature ($V_2$), and Euler characteristic ($V_3$) of the excursion set, normalized by the whole field volume $|\mathscr{D}|$, $$\label{1} \begin{aligned} &V_{0}(\nu)=\frac{1}{|\mathscr{D}|}\int_{F_{\nu}}d^3x,\\ &V_{1}(\nu)=\frac{1}{6|\mathscr{D}|}\int_{\partial F_{\nu}}dS(\mathbf{x}),\\ &V_{2}(\nu)=\frac{1}{6\pi|\mathscr{D}|}\int_{\partial F_{\nu}}(\frac{1}{R_1(\mathbf{x})}+\frac{1}{R_2(\mathbf{x})})dS(\mathbf{x}),\\ &V_{3}(\nu)=\frac{1}{4\pi|\mathscr{D}|}\int_{\partial F_{\nu}}\frac{1}{R_1(\mathbf{x})R_2(\mathbf{x})}dS(\mathbf{x}),\\ \end{aligned}$$ where $R_1(\mathbf{x})$ and $R_2(\mathbf{x})$ are the principal radii of curvature of the excursion set’s surface, oriented toward the lower-density region. The first two MFs describe the size of the excursion set, and the last two MFs characterize the shape (geometrical property) and connectivity (topological property) of the set surface (isodensity contours at level $\nu$), respectively. The last MF is simply related to the genus ($G=1-V_3$), which is the first topological descriptor commonly used in cosmology (e.g., [@1986ApJ...306..341G; @2012ApJ...747...48W]). The topological Euler characteristic $\chi$, obtained through a surface integration of the Gaussian curvature according to the Gauss-Bonnet theorem, is proportional to $V_3$ with a factor of 2, $\chi=2V_3$. $V_3$ is related to the number of isolated regions (balls) with density above a given threshold, empty regions inside balls (bubbles) and holes in ball surfaces (tunnels) per unit volume, $V_3=\frac{1}{|\mathscr{D}|}(N_{\rm{ball}}+N_{\rm{bubble}}-N_{\rm{tunnel}})$. This makes it more convenient to use than $G$ due to its additivity. Moreover, it is also insensitive to systematic effects [@Park_2010; @2001MNRAS.327.1041B; @2012ApJ...747...48W; @2014MNRAS.437.2488B], since the intrinsic topology is well conserved under deformation. 
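The counting interpretation of $V_3$ can be illustrated with a minimal pure-Python sketch (this is not the Crofton-formula code used in this work; it assumes the excursion set is supplied as a set of occupied voxels). For the cubical complex built from those voxels, the alternating cell count gives $N_{\rm ball}+N_{\rm bubble}-N_{\rm tunnel}$, i.e. $V_3|\mathscr{D}|$ in the normalization above (the Gauss-Bonnet $\chi$ of the boundary surface is twice this).

```python
def euler_characteristic(voxels):
    """Euler characteristic of a 3D excursion set given as a set of occupied
    voxel coordinates (i, j, k): chi = #vertices - #edges + #faces - #cubes,
    with each shared cell of the cubical complex counted once."""
    vertices, edges, faces = set(), set(), set()
    for (i, j, k) in voxels:
        # the 8 corner vertices of the unit cube at (i, j, k)
        corners = [(i+a, j+b, k+c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
        vertices.update(corners)
        # the 12 edges: corner pairs at Manhattan distance 1
        for u in corners:
            for v in corners:
                if sum(abs(x - y) for x, y in zip(u, v)) == 1:
                    edges.add(frozenset((u, v)))
        # the 6 faces: quadruples of corners sharing one fixed coordinate
        for axis in range(3):
            for side in (0, 1):
                faces.add(frozenset(c for c in corners
                                    if c[axis] == (i, j, k)[axis] + side))
    return len(vertices) - len(edges) + len(faces) - len(voxels)

print(euler_characteristic({(0, 0, 0)}))             # one ball: chi = 1
print(euler_characteristic({(0, 0, 0), (5, 5, 5)}))  # two balls: chi = 2
```

A solid ring of voxels (one tunnel) gives $\chi = 0$, matching $N_{\rm ball} - N_{\rm tunnel} = 1 - 1$; additivity over disjoint regions is manifest in the counting.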
There are two standard numerical methods (i.e., the Koenderink invariant and Crofton’s formula) developed by [@Schmalzing_1997] for measuring a density field’s MFs. The MFs of a Gaussian random field have analytic expressions, which agree remarkably well with these numerical results [@MELOTT19901; @Schmalzing_1997]. In this work, we quote results from the Crofton’s-formula method, as the two methods give consistent results. N-body simulations ================== Beyond the attempts to understand neutrino effects on LSS analytically (e.g., [@2012PhRvD..85f3521I; @2015JCAP...03..046F]), cosmological N-body simulations including neutrinos are essential to study neutrino nonlinear dynamics. Various approaches have been proposed to incorporate massive neutrinos into standard N-body simulations, e.g., the particle-based, the grid-based [@2009JCAP...05..002B], the linear response [@2013MNRAS.428.3375A], hybrid approaches between the particle-based and the grid-based [@2010JCAP...01..021B] (or the linear response [@2018MNRAS.481.1486B]), and even fluid techniques [@2016JCAP...11..015B; @2017PhRvD..95f3535I]. In general, the grid-based and linear-response approaches cannot accurately resolve the non-linear neutrino structure formation on small scales, which can be alleviated by the hybrid approaches. The particle-based approach, by contrast, naturally captures the full non-linear neutrino clustering, but it is hindered by Poisson noise on small scales (induced by the large thermal motions of neutrinos), which has to be reduced by increasing the number of neutrino particles in the simulation. Our neutrino N-body simulation (*TianNu*) adopts the particle-based approach. To reduce Poisson noise, *TianNu* incorporates an extremely large number of neutrino particles, which makes it currently one of the world’s largest cosmological N-body simulations [@2017RAA....17...85E]. 
Specifically, we adopt a pair of high-resolution N-body simulations (i.e., *TianZero* with $\Sigma m_{\nu}=0$ eV and *TianNu* with $\Sigma m_{\nu}=0.05$ eV [@2017RAA....17...85E]), realized using the publicly available code CUBEP3M [@2013MNRAS.436..540H], to resolve the subtle interplay between neutrinos and CDM, especially on non-linear scales [@2017PhRvD..95h3518I; @2017NatAs...1E.143Y]. CUBEP3M here is optimized using a hybrid-parallelized Particle-Mesh (PM) algorithm for the long-range gravitational force calculation, plus an adjustable Particle-Particle (PP) algorithm ($r_{soft} = L/(20n_{p}^{1/3})$) for increasing the resolution below the mesh scale. Both simulations were initialized at $z = 100$ with the same initial conditions, parameterized by \[$\Omega_c$, $\Omega_b$, $h$, $n_s$, $\sigma_8$\] = \[$0.27$, $0.05$, $0.67$, $0.96$, $0.83$\], evolving $n_p = 6912^3$ CDM particles with a mass resolution of $7\times10^8 M_\odot$ in a periodic cubic box of width $L = 1200$ Mpc/$h$ (volume $\sim1.73$ Gpc$^3$/$h^3$). In *TianNu*, $13824^3$ neutrino particles with a mass resolution of $3\times10^5 M_\odot$ are incorporated into the mixture with $\Omega_m$ fixed, in order to cleanly extract neutrino effects. Here, the minimal normal-hierarchy mass model is chosen to simulate neutrinos, with one massive species ($m_\nu = 0.05$ eV) treated as particles and the other two light species ($m_\nu = 0$ eV) included in the background cosmology via the CLASS [@2011JCAP...07..034B] transfer function. Data ==== Analysis and results in this work are based on density fields at $z = 0.01$, which is instrumental for forecasting neutrino signatures from a shallow, low-redshift galaxy survey with high number density (e.g., the Bright Galaxy Survey (BGS) sample within $0.05 < z < 0.4$ in DESI [@2016arXiv161100036D]). Here, the advantage of performing the analysis on density fields is that they help us better understand the subtle neutrino effects on LSS. 
Both the CDM field ($\Phi_{dm}$ in *TianZero* and *TianNu*) and the total matter field ($\Phi_{total}$ in *TianNu*) are computed by the Cloud-In-Cell (CIC) interpolation technique onto $N_g = 2048^3$ regular grids. For the interpolation of $\Phi_{total}$ in *TianNu*, each particle is weighted by a factor of $\Omega_i/(\Omega_m N_i)$, where $\Omega_i$ and $N_i$ are the energy fraction and number of particles of species $i$, respectively. We subsequently smooth these fields separately by two Gaussian window functions with different smoothing scales, $R_G$ (i.e., $0.2L_g = 0.12$ Mpc/$h$ and $0.4L_g = 0.24$ Mpc/$h$, where $L_g \equiv L/N_g^{1/3}$ is the grid size), to obtain the smoothed fields. These Gaussian smoothed fields serve to investigate the impact of smoothing on our results. The MFs are then measured for all these fields as a function of $\rho/\overline{\rho} \equiv 1 + \delta$, which is the density threshold used to define the excursion set. We compare the MFs measured from the different cosmological models (i.e., $\Lambda$CDM and $\nu\Lambda$CDM) to highlight neutrino signatures and analyze the neutrino effects on LSS. Neutrino effects on the morphology of LSS ========================================= Our results are presented in Figure \[fig:mf1\]. Left panels show the MFs themselves, while the differences in MFs between $\nu\Lambda$CDM and $\Lambda$CDM cosmology, the $\Delta V_i$s, are displayed in the right panels. The results are well visualized by a logarithmic x-axis in the range \[0.003, 1000\], considering that the probability distribution function (PDF) of the density field roughly obeys a lognormal form at low redshift [@1991MNRAS.248....1C]. The error bars are estimated by the standard errors [@numerical.book] of the MFs of $\Phi_{dm/total}$ (i.e., $\Phi_{dm}$ or $\Phi_{total}$), $s_e = \sigma / \sqrt{n}$, where $\sigma$ is the standard deviation of the MFs measured from $n$ ($8^3 = 512$) sub-fields ($L_{sub} = 150$ Mpc$/h$) obtained by equally subdividing $\Phi_{dm/total}$. 
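The sub-field error estimate is simple enough to state in code; the helper below is an illustrative sketch (the function name and list-based interface are hypothetical, not from the paper's pipeline), assuming each entry is one MF value measured on one of the $n$ equal sub-fields.

```python
import statistics

def subfield_standard_error(subfield_values):
    """Standard error s_e = sigma / sqrt(n) of an MF value estimated from
    n sub-field measurements (n = 8^3 = 512 in the text)."""
    n = len(subfield_values)
    sigma = statistics.stdev(subfield_values)  # sample std. dev. over sub-fields
    return sigma / n**0.5

# Toy usage: four sub-field measurements of some V_i at one threshold.
print(round(subfield_standard_error([1.0, 2.0, 3.0, 4.0]), 4))  # → 0.6455
```

With $n = 512$ sub-fields the standard error shrinks by a factor of $\sqrt{512} \approx 23$ relative to the sub-field scatter, which is what makes the small $\Delta V_i$ signals resolvable.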
The $\Delta V_i$s are measured for two cases, i.e., $\Delta V_i^{dm/total} \equiv V_i(\Phi_{dm/total})_{\rm{TianNu}} - V_i(\Phi_{dm})_{\rm{TianZero}}$, considering that $\Phi_{dm}$ and $\Phi_{total}$ can in principle be inferred from galaxy clustering and from weak lensing [@2007Natur.445..286M] (or the integrated Sachs-Wolfe effect) in various cosmological surveys, respectively. In the following, the neutrino effects on LSS are resolved by interpreting the $\Delta V_i$s; we also mention the $V_i$s where they are necessary to aid the understanding of the $\Delta V_i$s. In linear theory, the cosmic neutrino background slows down the growth of CDM perturbations: e.g., on scales $k \gg k_{nr} \simeq 0.018\,\Omega_m^{1/2}(m_{\nu}/1\,\mathrm{eV})^{1/2}~h\,\mathrm{Mpc}^{-1}$, the growth $\delta_{dm} \propto a$ is replaced by $\delta_{dm} \propto a^{p_+}$ with $p_+ \simeq 1-\frac{3}{5}f_{\nu}$ during matter domination, and $\delta_{dm} \propto ag(a)$ is replaced by $\delta_{dm} \propto [ag(a)]^{p_+}$ during $\Lambda$ domination, where $g(a)$ is a damping factor normalized to $g = 1$ for $a \ll a_{\Lambda}$ [@article] (corresponding to the global slowdown of structure growth caused by $\Lambda$). Overall, in Figure \[fig:mf1\] the $\Delta V_i$s measured for the two cases have the same trend, which can be well interpreted by the aforementioned neutrino effects. For *TianNu*, $\Phi_{total}$ is partially contributed by neutrinos, i.e., $\delta_{total} = f_{dm}\delta_{dm} + f_{\nu}\delta_{\nu}$, where $f_{dm} \equiv \Omega_c/\Omega_m$ and $f_{\nu} \approx 0.37\%$. However, the clustering of neutrinos is much weaker than that of CDM, due to neutrino free streaming ($\lambda_{fs}(0.05\,\mathrm{eV}, z=0.01) \approx 20$ Mpc/$h$). Therefore, the matter perturbations in $\Phi_{total}$ are slightly lower than those in $\Phi_{dm}$, which makes the amplitudes of the $\Delta V_i^{total}$s relatively larger than those of the $\Delta V_i^{dm}$s (cf. Figure \[fig:mf1\]). 
When $\rho/\overline{\rho}$ is low enough, the complement of the excursion set consists of isolated void regions with closed surfaces whose positive directions point inward, which leads to a negative mean curvature ($\overline{K}$) of the excursion set’s surface, i.e., $V_2 < 0$. Specifically, in the range $\rho/\overline{\rho} \lesssim 0.2$, we find $\Delta V_0 > 0$ and $\Delta V_1 < 0$, which means that voids shrink and their inner matter becomes denser in the presence of massive neutrinos [@2015JCAP...11..018M]. The mean curvature ($\overline{K}$) of the excursion set’s surface correspondingly becomes smaller, i.e., $\Delta\overline{K} < 0$. These results can be well understood, since neutrinos contribute to the interior mass of underdense regions and slow down CDM evacuation from voids [@2015JCAP...11..018M]. The trend of $\Delta V_2$ in this range is more complicated, since $\Delta V_2$ combines the effects of $\Delta V_1$ and $\Delta \overline{K}$. We note that $V_2$ can be roughly expressed as $ V_2\sim\overline{K} \cdot V_1$ ($\Delta V_2 \sim \Delta \overline{K} \cdot V_1 + \overline{K} \cdot \Delta V_1$), where $V_1$ is always positive [@2017PhRvL.118r1301F]. Therefore, when dominated by $\Delta V_1$, $\Delta V_2$ follows the opposite trend to $\Delta V_1$ in the range $\rho/\overline{\rho} \lesssim 0.08$, considering $\overline{K} < 0$, i.e., $\Delta V_2 > 0$; when dominated by $\Delta \overline{K}$, $\Delta V_2$ shares the same trend as $\Delta\overline{K}$ in the range $0.08 \lesssim \rho/\overline{\rho} \lesssim 0.2$, i.e., $\Delta V_2 < 0$. For higher $\rho/\overline{\rho}$, the excursion set turns into the non-virialized web-like skeletons surrounding voids, which yields a positive $\overline{K}$, i.e., $V_2 > 0$. The transition of $V_2$ from negative to positive happens at $\rho/\overline{\rho} \in [0.2, 0.3]$ in our study. 
Here, we note that the precise $V_i$s depend on the smoothing ($R_g$) and resolution ($N_g$) of the density field, which can result in a different transition point for $V_2$. In the range $0.2 \lesssim \rho/\overline{\rho} \lesssim 1$, we find $\Delta V_0 > 0$, $\Delta V_1 > 0$ and $\Delta V_2 < 0$, since the neutrino background delays structure growth, making the web-like skeletons bigger and looser. Here, $\overline{K}$ still becomes smaller, i.e., $\Delta\overline{K} < 0$, and $\Delta V_2$ is dominated by $\Delta\overline{K}$. When $\rho/\overline{\rho}$ is high enough, for the same reason, the over-dense regions ($\rho/\overline{\rho} \gtrsim 1$) shrink in size, making the excursion set smaller and resulting in $\Delta\overline{K} > 0$. Therefore, we see that $\Delta V_0$ and $\Delta V_1$ transition from positive to negative in the range $\rho/\overline{\rho} \gtrsim 1$. Meanwhile, we find $\Delta V_2 > 0$ in the vicinity of $\rho/\overline{\rho} \approx 1$, where $\Delta\overline{K}$ plays the key role in $\Delta V_2$. When $\rho/\overline{\rho}$ reaches a high enough level ($\rho/\overline{\rho} \gg 1$), $\Delta V_2$ is dominated by $\Delta V_1$, making them share the same trend, i.e., $\Delta V_2 < 0$. To understand $\Delta V_3$, we need deep insight into hierarchical void formation, since the topology of the excursion set relies heavily on the fine structures of LSS. In the void hierarchy [@2004MNRAS.350..517S], voids fall into two classes: big *void-in-void* voids embedded in larger underdense regions (larger distinct voids), and small *void-in-cloud* voids embedded within a larger-scale overdensity. Here, *void-in-void* voids form at early epochs and then collide and merge with one another at late epochs, forming larger distinct voids. 
In this process, matter between them is squeezed and evacuated along walls and filaments towards the enclosing boundary of the larger newly formed void, leaving a faint and gradually fading imprint of the initial internal substructures. The same basic process repeats as this rearrangement of structure develops to larger scales. In contrast, *void-in-cloud* voids are squeezed by the larger-scale overdensity and vanish when the region around them has collapsed completely. Specifically, we find $\Delta V_3 < 0$ for $\rho/\overline{\rho} \lesssim 0.05$ due to the decline in the number of isolated underdense troughs (bubbles), corresponding to the suppression of the number function of big voids in neutrino cosmology [@2015JCAP...11..018M; @2019MNRAS.488.4413K]. Cosmic neutrinos slow down the *void-in-void* process, making the faint regions in sheet-like structures denser and more even. As a result, as $\rho/\overline {\rho}$ is adjusted to higher values, it becomes harder to pierce through their thinner parts to form tunnels in the excursion set’s surface. At $\rho/\overline{\rho} \approx 0.5$, we find that $V_3$ stops rising and starts falling, which can reasonably be attributed to the emergence of tunnels. Therefore, in the range $0.05 \lesssim \rho/\overline{\rho} \lesssim 0.2$, we see $\Delta V_3 < 0$ due to the decrease in the number of tunnels in $\nu\Lambda$CDM cosmology. Neutrinos suppress matter clustering on small scales, which is well understood from the minimum of the “spoon” shape around $k = 1$ $h$Mpc$^{-1}$ (corresponding to the size of massive halos) in $P_{m}^{\nu}/P_{m}^{fiducial}$ at $z \sim 0$ (e.g., [@2011MNRAS.410.1647A; @2014JCAP...12..053M]). Due to this neutrino effect, matter in virialized objects is smeared around and fills the *void-in-cloud* voids. In addition, this smeared matter also patches the relatively thin parts of the denser sheet-like structures. 
Therefore, as $\rho/\overline{\rho}$ rises ($\rho/\overline{\rho} > 0.2$), we first see a trend similar to the former two scenarios but with milder amplitude; once $\rho/\overline{\rho}$ is high enough, the excursion set turns into isolated virialized density peaks (balls), and finally $\Delta V_3$ goes below zero, corresponding to the suppression of the mass function of massive halos in neutrino cosmology [@2012PhRvD..85f3521I; @2013JCAP...12..012C; @2018JCAP...03..049L]; i.e., $\Delta V_3 < 0$, then $\Delta V_3 > 0$, and finally $\Delta V_3 < 0$. As $\rho/\overline{\rho}$ rises higher still, we find that $\Delta V_3$ asymptotically approaches zero, since small halos with higher concentrations [@1997ApJ...490..493N] are less impacted by massive neutrinos, corresponding to the upturn at high $k$ ($> 1$ $h$Mpc$^{-1}$) in $P_{m}^{\nu}/P_{m}^{fiducial}$ [@2018JCAP...03..049L]. Smoothing effects ================= Gaussian smoothing is usually used to reduce the noise contribution to fields before MF measurement (e.g., [@Schmalzing_1997; @2013MNRAS.429.2104D; @2017PhRvL.118r1301F]). However, this process also erases non-Gaussian information from the original fields [@2013MNRAS.429.2104D], which can degrade the discriminative power of MFs when resolving neutrino signatures. In our work, we note that appropriately smoothing the density fields (e.g., $R_g = 0.2L_g$) can improve the signal-to-noise (S/N) ratios of neutrino signals, $|\Delta V_i|/\sigma$, for $\Delta V_0$, $\Delta V_1$ and $\Delta V_2$ within a narrow range around $\rho/\overline{\rho} = 0.03$, while it depresses the S/N ratios elsewhere. Conversely, it consistently decreases the S/N ratios of $\Delta V_3$ over the whole range, regardless of $R_g$. This may be because topology ($V_3$) is more susceptible than the other $V_i$s to this artificial smearing of LSS, even granting the noise reduction. 
From the decline ratios of the $\Delta V_i$ amplitudes and the S/N ratios of neutrino signatures in the $\Delta V_i$s caused by different smoothings in Figure \[fig:mf1\] and Figure \[fig:mf2\], we can preliminarily infer that the sensitivities of the MFs to non-Gaussianity (and to $\sum m_{\nu}$) roughly obey $V_0 < V_1 < V_2 \lesssim V_3$, which is consistent with previous studies of $2D$ MFs of weak lensing and the CMB (e.g., [@2012PhRvD..85j3513K; @2013MNRAS.429.2104D; @2019JCAP...06..019M]). Summary and Conclusion ====================== In the past decade, cosmology has achieved great success in constraining the neutrino mass. However, further improvement of neutrino mass constraints using LSS is mainly hindered by challenges of statistical methodology and systematics. The key problems are as follows: 1. LSS has evolved to be highly non-Gaussian in the late Universe, while traditional methods for extracting neutrino information are based on two-point statistics, which can only probe the Gaussian information in LSS, missing substantial higher-order information. 2. Moreover, the neutrino signals extracted by traditional methods are mainly contributed by the neutrino effects on small scales in the high-density regions of LSS. This neutrino information therefore suffers from contamination by nonlinear effects, baryonic physics, etc. To solve these critical problems (and improve the constraining power on $\sum m_{\nu}$ in the data analysis of upcoming LSS surveys), we propose an alternative powerful non-Gaussian probe of neutrino effects on LSS, i.e., Minkowski functionals (MFs), in this work. This tool not only has strong statistical power but is also robust to systematics. It can extract the full information encoded in LSS, circumventing the more complicated N-point statistics formalism. 
Better yet, the neutrino information extracted by this method comes mainly from low-density regions [@2013MNRAS.431.3670V], which potentially makes the extracted neutrino signals largely avoid the various contaminations of high-density regions. Therefore, the problems faced in the past are expected to be greatly alleviated. Using this novel method, we comprehensively study, for the first time, the subtle neutrino effects on the morphology of LSS, which further deepens our understanding of neutrino effects and provides essential and critical information for accurate modeling of neutrino effects in the future. For an ideal LSS survey of volume $\sim1.73$ Gpc$^3$/$h^3$, we show the compelling result that the neutrino signals can be extracted at significance levels of up to $\thicksim 10\sigma$ and $\thicksim 300\sigma$ for the CDM and total matter density fields, respectively, with an individual MF measurement (cf. Figure \[fig:mf2\]). These results demonstrate the great potential of MFs for improving neutrino mass constraints in the data analysis of forthcoming LSS surveys. Nevertheless, we must mention that matter fields cannot be directly obtained from galaxy surveys. In reality, the underlying matter fields are mapped by biased tracers, i.e., halos and galaxies. Our results can therefore be treated as a theoretical upper limit on the neutrino effects on the halo/galaxy distribution. In view of the strong statistical power of MFs [@2017PhRvL.118r1301F], these neutrino probes can probably survive in ambitious galaxy surveys with large galaxy number densities (e.g., the BGS sample in DESI [@2016arXiv161100036D]). We defer such a comprehensive study to ongoing work, where stochasticity is reduced in the mass-weighted halo field [@2010PhRvD..82d3515H] and mock galaxies are constructed by the halo occupation distribution (HOD) technique [@2005ApJ...633..791Z]. Acknowledgements ================ We thank the anonymous referee for the useful comments and suggestions. Y.L. 
thanks Thomas Buchert and Wenjuan Fang for helpful communications. This work was supported by the National Key Basic Research and Development Program of China (No. 2018YFA0404504), and the National Science Foundation of China (grants No. 11773048, 11621303, 11890691). HRY is supported by National Science Foundation of China 11903021.
--- abstract: 'A recently proposed technique correlating electric fields and particle velocity distributions is applied to single-point time series extracted from linearly unstable, electrostatic numerical simulations. The form of the correlation, which measures the transfer of phase-space energy density between the electric field and plasma distributions and had previously been applied to damped electrostatic systems, is modified to include the effects of drifting equilibrium distributions of the type that drive counter-streaming and bump-on-tail instabilities. By using single-point time series, the correlation is ideal for diagnosing dynamics in systems where access to integrated quantities, such as energy, is observationally infeasible. The velocity-space structure of the field-particle correlation is shown to characterize the underlying physical mechanisms driving unstable systems. The use of this correlation in simple systems will assist in its eventual application to turbulent, magnetized plasmas, with the ultimate goal of characterizing the nature of mechanisms that damp turbulent fluctuations in the solar wind.' author: - 'Kristopher  G. Klein' title: 'Characterizing Fluid and Kinetic Instabilities using Field-Particle Correlations on Single-Point Time Series' --- Introduction {#sec:intro} ============ A significant goal of plasma physics research is the characterization of mass, momentum, and energy transport in a wide variety of complex systems. In particular, the question of what mechanisms mediate the transfer of energy between turbulent fields and distributions of plasma particles, leading to the eventual damping and dissipation of turbulence, is open. One system which displays turbulent behavior is the solar wind, a hot, diffuse emanation from the Sun’s surface that fills the heliosphere. 
While lacking the precise control over conditions afforded in a laboratory setting, the large volume of in situ measurements of the solar wind over the last half century has led to the accrual of observations of turbulence with a wide variety of plasma parameters. Such measurements have proven useful to the study of phenomena in magnetized turbulence.[@Bruno:2005] A limitation of these in situ observations is that they are taken at a single point in space at a given time.[^1] This raises at least two significant complications: one must disentangle the dynamics associated with spatial and temporal variation, and track the evolution of spatially integrated quantities, such as the energy content of a field or distribution of charged particles, given only single-point measurements. The first of these complications is addressed by invoking Taylor’s Hypothesis,[@Taylor:1938] the conjecture that for single-point measurements of sufficiently fast flows, the time evolution is essentially frozen and the measurement traces out the spatial structure of the turbulence; a review of the application of Taylor’s Hypothesis to solar wind observations can be found in Klein et al 2014.[@Klein:2014b] To address the second complication of inaccessible spatially integrated quantities, one may consider the dynamics of the phase-space energy density rather than the total energy. A technique has been recently proposed to measure the local-in-phase-space energy transfer between fields and plasma distributions using single-point time series of simple plasma systems.[@Klein:2016a; @Howes:2016] By correlating the product of the electric field and velocity derivative of the particle distribution measured at a single point in space, the velocity structure of the transfer of energy between fields and particles is obtained. 
By averaging this correlation over a selected time interval, the oscillatory energy transfer between the fields and particles is removed, leaving only the secular energy transfer. The mechanisms responsible for this energy transfer can be identified by the velocity space structure of the field-particle correlation. Initial work applied this correlation to systems that damp via the Landau resonance.[@Landau:1946] Here, we consider the transfer of energy in linearly unstable systems, and show that field-particle correlations are able to identify the presence of both fluid and kinetic instabilities in such systems. The unstable systems under consideration are reviewed in Section \[sec:drifts\]. In Section \[sec:vp\], we present the nonlinear numerical code employed in this work, `VP`, as well as the three simulations under consideration. In Section \[sec:fpc\], the field-particle correlation is presented and applied to the three simulations. Analysis and discussion related to the correlations are found in Section \[sec:method\]. Application of field-particle correlations to simple, homogeneous, linearly unstable systems enables the construction of signatures of basic energy transfer mechanisms. Combined with signatures for all relevant energy transfer mechanisms, such correlations may be usefully employed to diagnose the behavior of more complex systems. 
![image](f1.eps){width="16.5cm"} Linearly Unstable Systems {#sec:drifts} ========================= The 1D1V electrostatic systems of interest in this work are governed by the Vlasov and Poisson equations $$\frac{{\partial }f_s}{{\partial }t} + v \frac{{\partial }f_s}{{\partial }x} - \frac{q_s}{m_s}\frac{{\partial }\phi}{{\partial }x} \frac{{\partial }f_s}{{\partial }v} = 0, \label{eqn:vlasov.ld}$$ and $$\frac{{\partial }^2 \phi}{{\partial }x^2} = -4 \pi \sum_s q_s \int_{-\infty}^\infty dv f_s \label{eqn:poisson.ld}$$ which evolve the distribution of species $s$, $f_s(x,v,t)$ and the electric field $E(x,t)=-{\partial }\phi(x,t)/{\partial }x$. Recent work has applied field-particle correlations to stable electrostatic systems as a means of extracting the velocity-dependent signature of Landau damping from single-point time series.[@Klein:2016a; @Howes:2016] Here, we consider electrostatic systems unstable to either fluid or kinetic instabilities, and characterize the signature of the associated growth using field-particle correlations. For fluid instabilities, the behavior of the plasma depends on the bulk parameters of the system and the energy transfer is not organized by characteristic velocities such as the resonant wave phase speed; energy transfer for kinetic instabilities depends on such characteristic velocities, as the electrostatic field acts to exchange particles of higher and lower kinetic energies near the field’s resonant velocity. For both types of instabilities, one or more of the distributions lose energy, while the fields and other distributions gain energy, resulting in an inverse of damping. 
General discussion of plasma instabilities, as well as particular treatments of the electron drift instabilities of interest in this work, can be found in many plasma textbooks.[@Krall:1973; @Stix:1992; @Hazeltine:2004]

           $n_{e1}/n_{i}$   $v_{d1}/v_{te}$   $n_{e2}/n_{i}$   $v_{d2}/v_{te}$
  -------- ---------------- ----------------- ---------------- -----------------
  Case 1   $0.5$            $0.75$            $0.5$            $-0.75$
  Case 2   $0.5$            $1.75$            $0.5$            $-1.75$
  Case 3   $0.9$            $0.00$            $0.1$            $4.25$

  : Electron Bulk Parameters \[tb:params\]

To highlight the distinct velocity space structure of energy transfer in unstable systems, we consider three cases, each with distinct sets of parameters for one population of ions and two populations of electrons. The equilibrium distributions for the three populations take the Maxwellian form $$F_{s0}(v)=\frac{n_{sj}}{\sqrt{2\pi}v_{ts}}\exp\left[ \frac{-\left( v-v_{dsj}\right)^2}{2 v_{ts}^2} \right] \label{eqn:F0.fpc}$$ where $v_{dsj}$ is the population’s drift velocity. The linear normal mode behavior for such equilibria is governed by solutions of the dispersion relation $$\underline{\underline{D}}\left(\omega,k\right)=k^2\lambda_{De}^2 + \sum_j \left(\frac{q_s}{q_e}\right)^2\frac{n_{sj}}{n_i}\frac{T_e}{T_s} \left[1+ \xi_{sj} Z_0(\xi_{sj}) \right] \label{eqn:disp.fpc}$$ where $\underline{\underline{D}}$ is a function of wavenumber $k$ and complex frequency $(\omega,\gamma)$, $q_s$ is the species charge, $Z_0$ is the plasma dispersion function[@Fried:1961] with argument $\xi_{sj}=\left(\omega/ \omega_{pe} k \lambda_{De} \right) \left(\sqrt{T_e m_s/2 T_s m_e} \right) - v_{dsj}/v_{ts}$, with the sum taken over the three plasma populations $j$. The electron plasma frequency $\omega_{pe} \equiv \sqrt{4 \pi \sum_j n_{ej}q^2/m_e}$ and Debye length $\lambda_{De}=\sqrt{T_e/4 \pi \sum_jn_{ej}q^2}$, both defined using the total electron density, normalize the time and length scales of our system. 
Values for the normalized density of electron population $j$, $n_{ej}/n_i$, and bulk velocity, normalized by the electron thermal velocity $v_{te}=\sqrt{T_e/m_e}$, are given in Table \[tb:params\]. For all three cases, the electron populations have equal temperatures; the ions are singly ionized, and initialized with $T_i=T_e$, $m_i=100m_e$, and $v_{di}=0$. Case 1 is stable to the effects of the counter-streaming electrons, while the increase in $|v_{dej}|$ for case 2 yields the classic counter-streaming instability. Case 3 is unstable to the bump-on-tail instability. Cases 2 and 3 serve as examples of fluid and kinetic instabilities respectively. In Fig. \[fig:linear\], solutions to Eqn. \[eqn:disp.fpc\] for the three cases are presented. In panel a, the complex frequency solutions $(\omega, \gamma)/\omega_{pe}$ are given for fixed $k\lambda_{De}=0.2$. For case 1 (black circles) all modes are shown to be damped. For case 2 (red triangles), the increase in $|v_{dej}|$ leaves the frequency and damping rate of the Langmuir modes, the modes with $|\omega| \approx \omega_{pe}$, largely unaffected. The pair of least damped acoustic modes from case 1 are now both unstable, having $\omega=0$ and two distinct growth rates, $\gamma>0$. The parametric path from stability ($|v_{dej}|=0.0$, open triangles) to instability ($|v_{dej}|=1.75$, filled triangles) for these two modes is illustrated as a function of complex frequency in panel b. For case 3, the $\omega<0$ Langmuir mode and the least damped acoustic modes are negligibly affected by the bump distribution. By parametric variation of $v_{de2}$ from $0.0$ (open diamonds in panel c) to $4.25$ (filled diamonds), it is observed that the growing mode for the bump-on-tail instability arises from an acoustic mode which is strongly damped in the stable regime (solid line), while the $\omega>0$ Langmuir mode (dashed line) becomes heavily damped.
The dispersion relations for the least damped and/or fastest growing Langmuir (acoustic) modes as a function of $k \lambda_{De}$ for the three cases are presented in panels d and f (e and g) illustrating the wavelengths for which linear instabilities arise for cases 2 and 3 (solid lines in panel g). Numerical Simulations {#sec:vp} ===================== To evaluate the time evolution of these systems, we have extended the Vlasov-Poisson solver `VP`[@Howes:2016] to allow for the inclusion of an arbitrary number of drifting plasma populations. `VP` solves the nonlinear Vlasov-Poisson system using second-order finite differencing for spatial and velocity derivatives and a third-order Adams-Bashforth scheme in time. As a test of this extension, we compare numerical solutions of Eqn. \[eqn:disp.fpc\] for the three cases described in section \[sec:drifts\] to frequencies and damping rates extracted from time traces of the electrostatic field energy from linear and small-amplitude nonlinear `VP` simulations for a range of wavevectors, given as points in Fig. \[fig:linear\]. Agreement between the dispersion relation and `VP` is close as long as the mode of interest is not heavily damped and unstable modes supported by the system do not grow too quickly with respect to the damped modes. ![image](f2.eps){width="16.5cm"} For the evaluation of the field-particle correlation, we perform three nonlinear simulations corresponding to the three cases from section \[sec:drifts\]. For these simulations, we add a sinusoidal perturbation to the ion’s equilibrium distribution of the form $0.1 F_{i0}(v) \sin(k_0 x)$ with $k_0\lambda_{De}=0.2$. 256 (128) points in velocity (coördinate) space are resolved over the interval $v/v_{ts}\in[-8,8]$ $(x /\lambda_{De}\in[-5 \pi,5 \pi])$.
Each simulation is run for longer than $t = 40 \omega_{pe}^{-1}$ and the total energy in the system $$W_{\rm total}=\int dx \frac{E^2}{8 \pi} + \sum_s \int dx \int dv \frac{m_s v^2}{2} f_s \label{eqn:energy.fpc}$$ is conserved to better than a few tenths of a percent. Changes in the electrostatic energy $W_\phi = \int dx E^2/8 \pi$ as well as energy in the three plasma populations, $W_j = \int dx \int dv m_j v^2 f_j/2$, from their initial values are shown in Fig. \[fig:energy\], with energy normalized by $T_e$. In run 1, most of the energy damped from the electric field is equally partitioned between the two electron populations, with little energy transferred to the ions. For run 2, an instability is clearly triggered, with significant losses of energy from the electrons and both $W_\phi$ and $W_i$ growing from their initial values. A more virulent instability is triggered in run 3, with the bump electron population losing a significant fraction of its energy to the core electron population, with little energy transferred to the ions. Field-Particle Correlations {#sec:fpc} =========================== While tracking the change in energy is sufficient to identify instabilities in systems with complete knowledge of spatial and velocity structure, we seek the signature of unstable behavior given limited, single-point measurements of a system of the type available to spacecraft in the solar wind. The application of field-particle correlations to such measurements allows for a local observation of secular energy transfer. We define the field-particle correlation for a discrete set of measurements of $f_s(x,v,t)$ and $E(x,t)$ with timestep $dt$, taken at a single point $x=x_0$ as $$C_E(x_0,v,t_i,N)=-\frac{1}{N}\sum_{j=i}^{i+N} \frac{q_sv^2}{2}\frac{\partial f_s(x_0,v,t_j)}{\partial v} E(x_0,t_j). \label{eqn:FPC.fpc}$$ The correlation averages the field-particle interaction term in the Vlasov equation, the third term in Eqn. 
\[eqn:vlasov.ld\], over a time interval of length $\tau = N dt$. As the ballistic term, the second term in Eqn. \[eqn:vlasov.ld\], does not lead to net energy transfer,[@Howes:2016] the product in $C_E$ represents the energy density transferred at one point in phase space between the electric field and velocity distribution. By averaging over a selected time interval $\tau$, oscillatory energy transfer between $E$ and $f_s$ is removed, leaving only the secular energy transfer. To track the accumulated change of the phase-space energy density, we integrate the correlation over time, defining $\Delta w_s(x,v,t,N)\equiv \int_0^t dt' C_E(x,v,t',N)$. ![The accumulated change in the electron phase-space energy density for damped, counter-propagating Langmuir waves calculated using the perturbed electron distribution in panel a, Eqn. 2 from Klein & Howes 2016[@Klein:2016a] and the total electron distribution in panel b, Eqn. \[eqn:FPC.fpc\]. The resonant velocities for the system are indicated by dashed grey lines.[]{data-label="fig:compare"}](f3.eps){width="16.5cm"} Unlike Eqn. 2 in Klein & Howes 2016,[@Klein:2016a] we include the full distribution function in our definition of $C_E$, rather than only the perturbed component $\delta f_s$. Previous studies had focused on particular cases where the equilibria $F_{s0}$ were even with respect to $v=0$, ensuring that they would not contribute to net energy transfer. For the cases under consideration in this work, the equilibrium electron distributions have odd components and therefore may contribute to a secular transfer of energy between the fields and distributions. To show that the two forms of the correlation obtain similar results when $F_{s0}$ is even, we apply both correlations to single-point field and distribution data from a Landau damped counter-propagating Langmuir wave simulation, case 1 in Klein & Howes 2016, and plot $\Delta w_e$ at $x=0$ with $\tau \omega_{pe} = 6.28$ in Fig. \[fig:compare\].
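The sliding-window average of Eqn. \[eqn:FPC.fpc\] and the accumulated change $\Delta w_s$ are straightforward to sketch in numpy. The function names and array layout below are illustrative conventions, not the actual `VP` diagnostic; the default charge is an assumed normalized electron charge.

```python
import numpy as np

def field_particle_correlation(f, E, v, q=-1.0, N=100):
    """C_E of Eqn. (FPC.fpc) at a single spatial point x0.
    f: (nt, nv) time series of f_s(x0, v, t); E: (nt,) field time series;
    v: (nv,) velocity grid.  Returns a (nt - N, nv) array of C_E(v, t_i, N)."""
    dfdv = np.gradient(f, v, axis=1)              # d f_s / dv
    prod = -q * 0.5 * v**2 * dfdv * E[:, None]    # unaveraged transfer rate
    win = np.ones(N + 1) / N                      # sum_{j=i}^{i+N} (...) / N
    return np.apply_along_axis(
        lambda a: np.convolve(a, win, mode='valid'), 0, prod)

def accumulated_dw(C, dt):
    """Delta w_s(v, t) = int_0^t dt' C_E(v, t'), by cumulative sum."""
    return np.cumsum(C, axis=0) * dt
```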
We see that both correlations produce qualitatively similar structure in the accumulated phase-space energy density, especially with regard to the production of a plateau surrounding the resonant velocities of the system, $|v_{\rm res}|= 2.86 v_{te}$, which serves as the key velocity-space signature of Landau damping. ![image](f4.eps){width="15.5cm"} With our correlation defined, we next select an appropriate correlation interval $\tau$ for the three simulations. By averaging over a particular interval, we remove the transfer of energy between the fields and particles which oscillates with frequencies of order $\omega \sim 2 \pi/ \tau$, leaving the secular, or non-oscillatory, component. Plotted in Fig. \[fig:tau\] are velocity integrated correlations, $\int dv C_E(x,v,t,\tau)$, between $E$ and the three plasma populations for a range of correlation lengths $\tau \omega_{pe} \in [0,30]$ at a single spatial location $x=0$. There is significant oscillatory transfer for small $\tau$ correlations for run 1, panels a-c of Fig. \[fig:tau\]. This oscillatory transfer is reduced for longer intervals, with nearly all of the oscillations removed for $\tau \omega_{pe} = 5.64$, a correlation length corresponding to the period of Langmuir waves supported by the system with frequency $\omega = 1.11 \omega_{pe}$. This $\tau$ leaves the velocity integrated correlations for all three distribution functions nearly monotonic, while slightly longer correlations reintroduce some oscillatory behavior. As the Langmuir wave is the least damped, finite frequency linear mode supported by this system, correlating over its period is physically justified. For run 2, panels d-f, correlating over the interval $\tau \omega_{pe} = 5.15$, which corresponds to the Langmuir wave frequency $\omega = 1.21 \omega_{pe}$, removes significant oscillatory energy transfer.
Averaging over the time scale associated with the least damped modes, rather than the unstable modes, is motivated by the fact that the unstable modes of this system have zero frequency; see Fig. \[fig:linear\]. For run 3, we see that there does not exist a single value of $\tau$ for which the oscillatory energy transfer is completely removed. This is due to the fact that the system supports both a weakly damped Langmuir wave as well as a finite-frequency unstable acoustic mode. Correlating over the Langmuir period retains some of the acoustic oscillations, while correlating over the acoustic period retains some of the Langmuir oscillations. We choose $\tau \omega_{pe}=7.11$, corresponding to the period of the growing acoustic mode with $\omega = 0.88 \omega_{pe}$, and acknowledge that some oscillatory contributions from Langmuir waves will persist. Velocity-Space Structure of Energy Transfer {#sec:method} =========================================== With an appropriate interval $\tau$ selected, we calculate the velocity dependent field-particle correlation for the three simulations. By retaining the velocity dependence, we are able to address the question of where the energy transfer occurs in phase space, and use the structure of this energy transfer to characterize the nature of the underlying instability. ![image](f5.eps){width="15.5cm"} For the stable, counter-streaming electron case, run 1, the phase-space energy transfer obtained from the field-particle correlation is a fairly regular function of velocity. At a given point in coördinate space, for example $x = 0 \lambda_{De}$ shown in the first row of Fig. \[fig:C\_single\], one of the electron populations gains energy from, while the other population loses energy to, the electric field. The energy transfer to the ions from the electric field is an odd function of velocity, meaning that when the correlation is integrated over $v$, there is no net energy transfer to the ions.
There is no evidence for the dependence of the phase-space energy transfer on drift velocities (black vertical dashed lines) or the resonant velocities of either the Langmuir (green lines) or acoustic (magenta) modes for any of the three populations. When the speed of the counter-streaming electrons is increased for run 2, shown in panels e-h of Fig. \[fig:C\_single\], the structure of the energy transfer is altered. The sign of the field-particle correlation changes at $v_{dej}$ due to a change in sign of ${\partial }f_{ej}/{\partial }v$. Unlike for case 1, this change in sign occurs for electrons which exchange significant energy with the electric field. For the ions, a small, even component in the field-particle correlation arises, yielding a net transfer of energy from the fields to the ions, as seen in panel h. This transfer of energy to the ions, which does not depend on either the Langmuir or acoustic resonant velocities, serves as a phase-space signature for the fluid instability that arises for this system. For the bump-on-tail instability, case 3 shown in panels i-l of Fig. \[fig:C\_single\], the velocity-space structure of the field-particle correlation is significantly different. As we have correlated over the unstable acoustic period, the oscillatory structure from the weakly damped Langmuir mode is evident. We also see in the bump electron population that the energy transfer changes sign across the acoustic resonance and at later times across the Langmuir resonance. This resonant structure is the signature of the transfer of energy from the bump population to the core electron population as mediated by the electric field. We note that the resonant structure is maintained for other choices of $\tau$, including an interval corresponding to the Langmuir wave period, not shown.
The velocity-integrated correlation confirms that the core population has a net gain of energy, and as expected for this instability, we see that the electrons that receive the energy are near the resonant velocities. This explicit dependence of the phase-space energy transfer on resonant coupling between the fields and particles serves as a distinct signature of kinetic instabilities when compared to the fluid instability in run 2. ![image](f6.eps) The field-particle correlations presented in Fig. \[fig:C\_single\] were calculated at a single point in coördinate space. An obvious question arises as to how the correlation and the associated phase-space energy transfer change as a function of position within the simulation. To assess this question, we calculate $C_E$ at five other points within the simulation and present the velocity integrated accumulated change of the phase-space energy density, $\int dv \Delta w_s$, in Fig. \[fig:int\_C\]. For the stable case, run 1, we see that the energy transfer to or from the two electron populations changes sign and amplitude as a function of the position in the simulation, with the transfer passing through zero at the two nodes of the initial standing wave pattern at $\pm 7.9 \lambda_{De}$. The correlation with the ions maintains an even velocity space structure such that the ions continue to neither lose nor gain net energy from the electric field regardless of spatial position. For the unstable counter-streaming electrons, run 2, the same pattern of shifting sign and amplitude for the energy transfer to and from the electrons holds. The ions gain energy from the electric field regardless of which electron population is gaining or losing energy. The amplitude of the ion energy gain changes with the amplitude of the electron field-particle correlation, going to zero at the nodes and having its largest value at the anti-nodes. This phase space structure serves as a distinct signature of growing fluid instabilities.
For the bump-on-tail instability, run 3, there is no regular spatial variation in the transfer of energy between the beam and core electron populations; the core gains energy at the expense of the beam, with the energy density accumulating at nearly the same rate regardless of spatial position, as expected for a kinetic instability. Conclusion {#sec:conc} ========== Field-particle correlations of the form defined in Eqn. \[eqn:FPC.fpc\], modified from Klein & Howes 2016[@Klein:2016a] to account for drifting equilibrium distributions, are applied to a set of simulations where fluid and kinetic instabilities are present. The structure of the resulting correlations, which can be interpreted as the secular phase-space energy density transferred between the fields and distributions, can be used to identify the presence of instabilities as well as to characterize the mechanisms driving the unstable growth. The form of the correlation allows for such characterizations to be made from observations at a single point, or a few points, in coördinate space, as opposed to requiring knowledge of spatially integrated quantities typically not accessible to experimental measurements. We consider simplified 1D-1V electrostatic systems in an attempt to characterize field-particle correlations as measurements of secular energy transfer in advance of future work applying such correlations to systems of higher dimensionality, where magnetization, turbulence, and inhomogeneities may complicate the interpretation of the correlation. 
By determining signatures of basic plasma physics phenomena responsible for energy transfer between fields and distributions, such as Landau damping and one-dimensional instabilities, we lay the foundation for the determination of the velocity distribution signatures of more complex interactions, such as cyclotron[@Stix:1992] and transit time damping,[@Barnes:1966] heating by large amplitude, stochastic fluctuations,[@Chandran:2010a] and magnetic reconnection,[@Yamada:2010] which have all been proposed to play a role in the dissipation of turbulent fluctuations. Future work will also consider the effects of solar wind expansion, electron conduction, and other inhomogeneous mechanisms on the plasma to clearly identify the role of such energy transfer mechanisms in the solar wind. By constructing the correlation to be obtained from single-point measurements, we allow for the identification of such damping mechanisms from in situ observation of the solar wind on current and future missions including *Deep Space Climate Observatory (DSCOVR)*, *MMS*,[@Burch:2016] and *Solar Probe Plus*.[@Fox:2015] The author would like to thank Gregory Howes, Justin Kasper, and Jason TenBarge for insightful discussions regarding aspects of this work. This research was supported by the NASA HSR grant NNX16AM23G. [10]{} R. [Bruno]{} and V. [Carbone]{}, Living Rev. Solar Phys. [**2**]{}, 4 (2005). Notable exceptions to the single-point limitation are the *Cluster*,[@Escoubet:2001] *THEMIS*, and *Magnetospheric Multiscale (MMS)*[@Burch:2016] missions, which are comprised of four or five spacecraft arranged in particular geometric configurations. G. I. [Taylor]{}, Royal Society of London Proceedings Series A [**164**]{}, 476 (1938). K. G. [Klein]{}, G. G. [Howes]{}, and J. M. [TenBarge]{}, Astrophys. J. Lett. [**790**]{}, L20 (2014). K. G. [Klein]{} and G. G. [Howes]{}, Astrophys. J. Lett. [**826**]{}, L30 (2016). G. G. [Howes]{}, K. G. [Klein]{}, and T. C. [Li]{}, J. Plasma Phys.
(under review). L. D. Landau, J. Phys. (USSR) [**10**]{}, 25 (1946). N. A. [Krall]{} and A. W. [Trivelpiece]{}, , McGraw-Hill, 1973. T. H. [Stix]{}, , American Institute of Physics, 1992. R. D. Hazeltine and F. L. Waelbroeck, , Westview, 2004. B. D. [Fried]{} and S. D. [Conte]{}, , Academic Press, 1961. A. [Barnes]{}, Phys. Fluids [**9**]{}, 1483 (1966). B. D. G. [Chandran]{}, B. [Li]{}, B. N. [Rogers]{}, E. [Quataert]{}, and K. [Germaschewski]{}, Astrophys. J. [**720**]{}, 503 (2010). M. [Yamada]{}, R. [Kulsrud]{}, and H. [Ji]{}, Reviews of Modern Physics [**82**]{}, 603 (2010). J. L. Burch, T. E. Moore, R. B. Torbert, and B. L. Giles, Space Science Reviews [**199**]{}, 5 (2016). N. J. [Fox]{} et al., Space Sci. Rev. (2015). C. P. [Escoubet]{}, M. [Fehringer]{}, and M. [Goldstein]{}, Annales Geophysicae [**19**]{}, 1197 (2001). V. Angelopoulos, Space Science Reviews [**141**]{}, 5 (2008). [^1]: Notable exceptions to the single-point limitation are the *Cluster*,[@Escoubet:2001] *THEMIS*,[@Angelopoulos:2008] and *Magnetospheric Multiscale (MMS)*[@Burch:2016] missions, which are comprised of four or five spacecraft arranged in particular geometric configurations.
--- abstract: 'In this mostly expository note we take advantage of homotopical and algebraic advances to give a modern account of power operations on the mod 2 homology of $\mathbb{E}_{\infty}$-ring spectra. The main advance is a quick proof of the Adem relations utilizing the Tate-valued Frobenius as a homotopical incarnation of the total power operation. We also give a streamlined derivation of the action of power operations on the dual Steenrod algebra.' author: - Dylan Wilson bibliography: - 'Bibliography.bib' nocite: '[@*]' title: Mod 2 power operations revisited ---
--- author: - Tobias Sehnke - Matthias Schultalbers - Rolf Ernst bibliography: - 'D:/Promotion/02\_Bibliothek/04\_Literaturverzeichnis/Literatur.bib' title: | Temporal Properties in Component-Based Cyber-Physical Systems\ Appendix --- In this document, we provide supplementary material to [@Sehnke2018], which includes a more detailed description of the requirement transformations outlined in Section 4.2 of the paper. For this purpose, we also provide a formal description of the temporal semantics model. The Temporal Semantics Model ============================ Events and Signals ------------------ The temporal semantics model represents software as a composed set of components $C=\left\{c_1,c_2,\ldots\right\}$. An implementation of a component consists of a set of behaviors and ports, also called interfaces. Each behavior assigns values to output ports $\underline{\mathbf{y}}$ based on inner states and the values on the input ports $\underline{\mathbf{u}}$. The sampling ports $\underline{\mathbf{s}}$ and actuation ports $\underline{\mathbf{z}}$ provide a link to the physical environment. Throughout this document we use $\underline{\mathbf{x}}$ as a placeholder for any port, while $\mathbf{\underline{x}_{(i,j)}}$ addresses the $j^{th}$ interface of the component $c_i$. Each component consists of one or more executable units called runnables, which are assigned to schedulable units called tasks $\tau$. Each occurrence at an interface $\underline{\mathbf{x}}$ is described by a data event $x^k \forall k\in\{1,\ldots,n \}$. An event is defined as the triple $x^k = \left (v_x^k, \hat{t}_x^k, t^k_x \right)$ where $v^k_x$ is the *value*, $\hat{t}^k_x$ the *tag* or timestamp, and $t^k_x$ the so-called *logical timestamp*. The logical timestamp describes the temporal context of the physical state that is represented by the value. The ordered set of events that occur at the interface $\mathbf{\underline{x}}$ is called a signal $x = \left(x^1,\ldots,x^n \right) $.
To each signal a set of signal paths $e_{x}=\{e_{x}^1,\ldots,e_{x}^n\}$ can be attributed, which describe the information flow to the corresponding interface of a signal. More specifically, a signal path $e_{x}^m \forall m\in\{1,\ldots,n \}$ is an ordered tuple, whose elements can be read, write, sampling or actuator interfaces. The causal relation of events is called the causal chain. To describe this more specifically, consider a given signal $x$ with a signal path $e^m_x$ which connects a sampling interface $\mathbf{\underline{s}}$ to the port of the signal $\mathbf{\underline{x}}$. Then, the causal chain $P^{(m,k,i)}_x$ describes a set of events $P^{(m,k,i)}_x = \left(s^r,\ldots,x^k\right)$, which are causally related to the event $x^k$. This set includes exactly one event for each interface in $e^m_x$. The set of all causal chains that can be assigned to an event is described by $P_x$. If a component changes the temporal context of information, we call this behavior algorithmic delay. The sum of all algorithmic delays in a signal path is denoted by $d_x^k \forall k\in\{1,\ldots,n \}$, where we generally assume that $d_x$ is constant. Given a causal chain $P^{(m,k,i)}_x = \left(s^r,\ldots,x^k\right)$, which relates an event $x^k$ to a sampling event $s^r$, we can compute the logical timestamp formally as the difference of the tag $\hat{t}_s^r$ and the sum of algorithmic delays $d_x^k$ $$t_x^k = \hat{t}_s^r - d_x^k \quad \forall \left( k,r \right) \in P_x. \label{eq:lgtimes}$$ The behavior of real-time systems can be measured by the *latency* $h$ and the *data event distance* $\Delta \hat{t}$. The latency describes the difference between two tags in a causal event chain, i.e., the difference between the tags $\hat{t}_s^r$ and $\hat{t}_x^k$: $$h_x^k = \hat{t}_x^k - \hat{t}_s^r \quad \forall \left(k,r \right) \in P_x. \label{eq:latency}$$ It is used to describe the age of information.
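These definitions can be made concrete in a few lines. The following is an illustrative sketch only — the class and function names are invented here, and the algorithmic delay is taken with the convention that it shifts the represented physical state into the past, which matches the data-age relation $a_x^k = h_x^k + d_x^k$ used below.

```python
from dataclasses import dataclass

@dataclass
class Event:
    value: float  # v_x^k
    tag: float    # \hat{t}_x^k, time of occurrence at the interface
    delay: float  # accumulated algorithmic delay d_x^k along the signal path

def logical_timestamp(s, x):
    """t_x^k for a causal chain relating event x back to sampling event s;
    algorithmic delay ages the represented information."""
    return s.tag - x.delay

def latency(s, x):
    """h_x^k = \hat{t}_x^k - \hat{t}_s^r."""
    return x.tag - s.tag

def logical_data_age(s, x):
    """a_x^k = \hat{t}_x^k - t_x^k, which equals h_x^k + d_x^k."""
    return x.tag - logical_timestamp(s, x)
```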
The data event distance describes the difference between the occurrences of two events at the same interface $$\Delta \hat{t}_x^k = \hat{t}_x^k - \hat{t}_x^{k-1} \quad \forall k \in \left \{1,\ldots,n \right\}. \label{eq:dataeventdistance}$$ It is used as a measure for the sampling of information. Signal Properties ----------------- In the following we provide formal definitions for the signal properties *sampling rate*, *bandwidth*, *aliasing*, *time delay* and *synchronicity*. We define these properties for individual events and then derive a description for signals. We also show how these properties can be related to the known real-time measurements. #### Logical Data Age The logical data age $a_{x}^k$ of an event $x^k$ is the difference between the tag $\hat{t}_{x}$ and the logical timestamp $t_{x}$: $$a_{x}^k = \hat{t}^k_{x} - t^k_{x} \label{eq:age}$$ For the entire signal the logical data age can be described as an absolute value $a_x$ if $a_x^k$ is constant. Otherwise, it can be described using a bound of the form $a_x^- \leq a_x^k \leq a_x^+ \forall k \in \left\{ 1,\ldots,n\right \}$. The logical data age is similar to the latency; the difference is that the logical data age also accounts for algorithmic delays. The relationship between these properties can be obtained by inserting (\[eq:lgtimes\]) and (\[eq:latency\]) into (\[eq:age\]). Thereby we obtain the expression: $$\begin{aligned} a_x^k = \hat{t}_x^k-\hat{t}_s^r +d_x^k = h_x^k + d_x^k. \end{aligned} \label{eq:calcage}$$ Based on this relationship, we can also determine the logical timestamp from a known latency and occurrence of an event $$t_x^k = \hat{t}_x^k - a_x^k = \hat{t}_x^k - h_x^k - d_x^k. \label{eq:calclogical}$$ This expression can be obtained by inserting (\[eq:calcage\]) into the definition of the logical data age (\[eq:age\]).
#### Data Synchronicity The synchronicity of data $\zeta_{x_1,x_2}$ describes the difference of the logical timestamps of two values that are computed simultaneously, such that: $$\zeta_{x_1,x_2}^k = t^k_{x_1} - t^k_{x_2} \quad \forall k \in \left\{ 1,\ldots,n\right \}. \label{eq:syncorg}$$ This property is again described as a property of an event. Similar to the logical data age, we can express it for the whole signal using a bounded or an absolute expression. The data synchronicity can also be expressed as the difference of the latencies and the delays in the following form: $$\zeta_{x_1,x_2}^k = \left(h^k_{x_1} - d^k_{x_1}\right)-\left(h^k_{x_2}-d^k_{x_2}\right) \quad \forall k \in \left\{ 1,\ldots,n\right \}. \label{eq:sync}$$ This expression is obtained by expressing the logical timestamps in (\[eq:syncorg\]) through the latencies and delays, assuming that the tags are equal. #### Logical Sampling Rate The logical sampling rate $\Delta t_{x}^k$ of an event $x^k$ measures the difference of its logical timestamp to the logical timestamp of its preceding event: $$\Delta t_{x}^k = t_x^k-t_x^{k-1} \quad \forall k \in \left\{ 1,\ldots,n\right \} \label{eq:samplingrate}$$ For the entire signal the logical sampling rate can again be described either by an absolute value or by bounds. The logical sampling rate $\Delta t_x^k$ can be expressed as a function of the data event distance and the difference of the latencies in the following form: $$\Delta t_x^k = t_x^k - t_x^{k-1} = \Delta \hat{t}_x^k - \left(h_x^k - h_x^{k-1}\right) \label{eq:calcsr}$$ This expression is obtained by inserting (\[eq:calclogical\]) and (\[eq:dataeventdistance\]) into (\[eq:samplingrate\]), assuming that the algorithmic delay is constant. #### Logical Band Limit Consider a signal whose values can be described by a spectrum. Then the logical band limit $$l_{x} = 1/(2 f_{x}^{\max}) \label{eq:highesfreq}$$ describes the highest frequency $f_{x}^{\max}$ at which a signal $x$ can have a nonzero amplitude.
If there exists no spectrum (e.g. if the signal represents a discrete state), the band limit describes a lower bound on the time during which the signal does not change its values. As signals can generally not represent frequencies that are larger than their sampling frequency, the band limit of a signal is bounded by the logical sampling rate. An additional bound is provided by the filter operations in the components, described by $g_y$. Thus, the band limit for each pairing $\left(\mathbf{\underline{y}}, \mathbf{\underline{u}}\right)$ and $\left(\mathbf{\underline{u}}, \mathbf{\underline{y}}\right)$ in a signal path is defined by $$l_u = \max \left\{l_y, \Delta t_u \right\}, \qquad l_{y} = \max {\left \{g_{y}, \Delta t_{y}\right\}}, \label{eq:bandlimicomp}$$ which means that it generally has to be determined iteratively. #### Logical Aliasing Given a signal which is sampled uniformly, aliasing occurs when data is undersampled, i.e., when the sampling rate of a read interface is larger than the band limit of the sender. This is the case exactly when for any pair $\left(\mathbf{\underline{y}}, \mathbf{\underline{u}}\right)$ in the signal path $e_{x}$ the expression $$l_{y} \geq \Delta t_{u} \label{eq:alias}$$ is not true. Relation of Signal- and Timing Requirements =========================================== In the following we discuss the relation between signal requirements and timing requirements. This enables the transformation of specified requirements into standard formats, given that the respective signal paths are known. We assume that constraints on a signal property $v_x^k$ are formulated in a bounded form $$v_x^-\leq v_x^k\leq v_x^+$$ and that delays and filter parameters are constant. This assumption is realistic when dealing with control systems, which are often periodic.
A bounded logical data age constraint of the form $a_x^-\leq a_x^k\leq a_x^+$ will be satisfied if the condition $$a_x^- - d_x \leq h_x^k \leq a_x^+ - d_x$$ holds for the corresponding set of causal chains $P_x$. This property can be derived by replacing $a_x^k$ with (\[eq:calcage\]) and subtracting $d_x$. The key aspect of this statement is that a requirement on the logical data age provides a constrained bound on the latency of the respective event-chain. A synchronicity constraint of the form $\zeta_{x_2,x_1}^- \leq \zeta_{x_2,x_1}^k \leq \zeta_{x_2,x_1}^+$ will be satisfied if the condition $$\zeta_{x_2,x_1}^- + d_{x_2} - d_{x_1} \leq h_{x_2}^k - h_{x_1}^k \leq \zeta_{x_2,x_1}^+ + d_{x_2} - d_{x_1}$$ holds for the corresponding causal chains $P_{x_1}, P_{x_2}$. We obtain this property by replacing $\zeta_{x_2,x_1}^k$ in the synchronicity constraint by (\[eq:sync\]) and subtracting the delays $d_{x_1}$ and $d_{x_2}$. Thus, to ensure that the data is synchronous according to the constraint, it is necessary to ensure that the relative latency of the corresponding event chains stays inside of certain bounds. In AUTOSAR, this can be addressed by a constraint on the synchronicity of event chains. A logical sampling rate constraint of the form $\Delta t_x^- \leq \Delta t_x^k \leq \Delta t_x^+$ will be satisfied if the condition $$\Delta t_x^- \leq \left(\hat{t}_x^k - \hat{t}_x^{k-1}\right) - \left(h_{{x}}^k - h_{{x}}^{k-1} \right) \leq \Delta t_x^+$$ holds for all events in the corresponding causal chain $P_{x}$. To obtain this expression we replace $\Delta t_x^k$ with (\[eq:calcsr\]). Note that the logical sampling rate constraint addresses a simultaneous requirement on the difference of the latencies and the difference of the tags of two consecutive events.
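The two transformations above amount to simple interval arithmetic on constant delays. The following toy sketch (function names invented here for illustration) makes this explicit:

```python
def latency_bounds_from_age(a_lo, a_hi, d):
    """Map a logical data-age constraint  a^- <= a_x^k <= a^+  with constant
    algorithmic delay d onto the latency bound  a^- - d <= h_x^k <= a^+ - d."""
    return a_lo - d, a_hi - d

def relative_latency_bounds_from_sync(z_lo, z_hi, d1, d2):
    """Map a synchronicity constraint on zeta_{x2,x1} onto bounds for the
    relative latency:  z^- + d2 - d1 <= h_{x2} - h_{x1} <= z^+ + d2 - d1."""
    return z_lo + d2 - d1, z_hi + d2 - d1
```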
A band limit constraint of the form $l_x^-\leq l_x^k\leq l_x^+$ can only be satisfied if the condition $$l_x^- \geq \Delta t_x^k \geq \Delta \hat{t}_x^k - \Delta h_{x}^k \label{eq:blreq}$$ is true. In order to enable a signal to have a specified band limit, it is necessary that the signal is sampled fast enough to represent this frequency. This is because frequencies cannot be represented below the sampling rate. Therefore, the sampling rate provides a lower bound of the band limit, which can be concluded from (\[eq:bandlimicomp\]). Hence we require that $\Delta t_x < l_x^-$ holds. Note that the sampling rate in itself cannot lower the band limit. This means that an upper bound can only be enforced by the cut-off frequencies of the filters in the signal path. Given a no-aliasing constraint on the interface $\mathbf{\underline{x}}$ and the respective signal flow $e_{x}$, let us assume that we can derive a subset from each signal path in $e_x$ of the form $B_x = (\mathbf{\underline{s}}, \mathbf{\underline{y}},\ldots, \mathbf{\underline{u}_{x}})$, which includes interfaces referenced to sampling and resampling behaviors and the specified interface itself. Then the no-aliasing constraint will be satisfied if for any pair $(\mathbf{\underline{y}},\mathbf{\underline{u}}) \in B_x$ the condition $$l_{y} \geq \Delta t_{u} \geq \Delta \hat{t}_{u}^k - \Delta h_{u}^k$$ holds for all events in the respective causal chain. According to (\[eq:alias\]) aliasing will occur if the constraint $l_{y} \geq \Delta t_{u}$ is not satisfied for any pair $\left(\mathbf{\underline{y}}, \mathbf{\underline{u}}\right)$ in a signal path. Generally the band limit of a signal can only be changed without aliasing by filtering. Also the maximum logical sampling rate can only increase along a signal path. Therefore, we only need to ensure that the components that filter the signals read their input values with a sampling rate that is not larger than the band limit of the last resampling operation.
Given this, the no-aliasing requirement can be converted into a constraint on the sampling rate for the respective event chains. Note that our approach requires that the band limit of the sampling interface can be determined.
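A minimal sketch of the resulting check, assuming the pairs of band limits and logical sampling periods along a signal path have already been extracted from the model (the data layout is illustrative, not from any real tool):

```python
# Sketch: the no-aliasing check reduces to comparing, for every
# (filtering, resampling) interface pair along a signal path, the
# band limit l_y against the logical sampling period dt_u.

def logical_period(t_k, t_prev, h_k, h_prev):
    """Logical sampling period of two consecutive events: the tag
    difference corrected by the latency difference."""
    return (t_k - t_prev) - (h_k - h_prev)

def no_aliasing(pairs):
    """pairs: iterable of (l_y, dt_u) tuples; the path is
    aliasing-free iff l_y >= dt_u holds for every pair."""
    return all(l_y >= dt_u for (l_y, dt_u) in pairs)
```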
--- abstract: 'Contrary to the general belief, there have recently been quite a few examples of unitary evolution of quantum cosmological models. The present work gives more examples, namely Bianchi type VI and type II. These examples are important as they involve varying spatial curvature, unlike the most talked-about homogeneous but anisotropic cosmological models like Bianchi I, V and IX. We either exhibit explicit examples of unitary solutions of the Wheeler-DeWitt equation, or at least show that a self-adjoint extension is possible.' --- [**Unitary evolution for anisotropic quantum cosmologies: models with variable spatial curvature** ]{}\ Sachin Pandey[^1],    Narayan Banerjee[^2] [*Department of Physical Sciences,  \ Indian Institute of Science Education and Research Kolkata,  \ Mohanpur Campus, Mohanpur, West Bengal-741246, India*]{}\ 1.0cm PACS numbers: 04.20.Cv., 04.20.Me. Keywords: quantum cosmology, unitary evolution, Bianchi models. Introduction ============ A quantum description of the universe should emerge from a quantum theory of gravity, which still eludes us in a generally accepted form. Quantum cosmology is a moderately ambitious programme where quantum mechanical principles are employed in a gravitational system in the absence of a more general quantum theory of gravity. Of course, quantum cosmology has its own motivations, such as looking for a resolution of the problem of singularity at the birth of the universe. The basic framework for quantum cosmology is provided by the Wheeler-DeWitt equation[@dewitt; @wheeler; @misner]. Amongst the infinitely many possible metrics, only a particular form is normally chosen by hand from considerations of symmetry. This is the usual minisuperspace, which reduces the degrees of freedom to a finite number and thus makes the problem tractable.
There are quite a few reviews which discuss the development of the subject and some of its conceptual problems[@wilt; @halli; @nelson1].\ One major problem is that the quantization of anisotropic models is believed to give rise to a non-unitary evolution of the wave function, resulting in a nonconservation of probability. It is interesting to note that this non-unitarity is often apt to be invisible in the absence of a properly oriented scalar time parameter in the scheme of quantization[@lidsey; @nelson2]. In a relativistic theory, time itself is a coordinate and fails to be the scalar parameter against which the evolution should be studied. In fact, the problem of the proper identification of time in quantum cosmology is a subject by itself and has been dealt with by many[@kuchar1; @isham; @rovelli; @anderson].\ A novel idea about the identification of time through the evolution of a fluid present in the model appeared to work very well. The method, where the fluid variables are endowed with dynamical degrees of freedom through some thermodynamic potentials[@schutz1; @schutz2], was suggested by Lapchinskii and Rubakov[@rubakov]. It has been shown that the time parameter that emerges out of the fluid evolution has the required monotonicity as well as the correct orientation[@sridip1]. This Schutz formalism is now very widely used in quantizing cosmological models[@sridip1; @alvarenga1; @alvarenga2; @alvarenga3; @barun; @almeida; @sridip2; @sridip3].\ Until very recently, the non-conservation of probability in anisotropic models had almost been generally accepted as a pathology, and had been ascribed to the hyperbolicity of the Hamiltonian[@alvarenga3]. Not that the anisotropic models are of utmost importance so far as the observed universe is concerned, but this feature of non-unitarity renders the quantization scheme vulnerable.
Also, the formation of structure in the universe indeed requires a small but finite anisotropy of $\frac{\Delta \rho}{\rho} \sim 10^{-5}$.\ There has now been a new turn in this picture. Majumder and Banerjee[@barun] showed that a suitable ordering of operators can lead to an alleviation of the problem, meaning that the probability is conserved except for a small period of time. Later it was clearly shown by Pal and Banerjee[@sridip1; @sridip2] that the said non-unitarity can actually be attributed to either an ordering of operators or to a bad choice of variables. With a suitable ordering, examples of unitary evolution were exhibited in Bianchi I, V and IX models. The degree of difficulty in integration allowed only a few choices of $\alpha$, which determines the equation of state ($ P = \alpha \rho$), for which the desired unitarity was established. However, even a few examples are good enough to indicate that the problem is not actually pathological and can be cured. Very recently an example of a unitary evolution for a Kantowski-Sachs model has been given by Pal and Banerjee[@sridip3]. It was also shown by Pal[@sridip4] that this unitarity is achieved not at the cost of anisotropy itself.\ Except for the Kantowski-Sachs cosmology, all other examples of the anisotropic Bianchi models stated have one unifying feature: they are all of constant spatial curvature. The motivation for the present work is to show that the possibility of a self-adjoint extension, and hence a unitary evolution, is not a characteristic of models with a constant spatial curvature; it is in fact more general and can be extended to models with variable spatial curvature as well. Two specific examples, namely Bianchi II and VI, are dealt with in the following sections. Section 2 deals with the formalism and takes up the example of the Bianchi VI model. Section 3 deals with the Bianchi II model.
The last section includes a summary and a discussion of the results obtained.\ The formalism and Bianchi VI models =================================== We start with the standard Einstein-Hilbert action for gravity along with a perfect fluid given by $$\label{action} {\mathcal A} = \int_M d^4x\sqrt{-g}R +\int_M d^4x\sqrt{-g}P,$$ where $R$ is the Ricci Scalar, $g$ is the determinant of the metric and $P$ is the pressure of the ideal fluid. The first term corresponds to the gravity sector and the second term is due to the matter sector. Here we have ignored the boundary contributions, as they would not contribute to the variation. The units are so chosen that $16\pi G =1$.\ A Bianchi VI model is given by the metric $$\begin{aligned} ds^2 = n^2(t)dt^2-a^2(t)dx^2-e^{-mx}b^2(t)dy^2-e^xc^2(t)dz^2, \label{metric-6}\end{aligned}$$ where the lapse function $n$ and $a, b, c$ are functions of time $t$ and $m$ is a constant.\ From the metric given above, we can write the Ricci Scalar as $$\begin{aligned} \label{ricci-6} \sqrt{-g}R= e^{\frac{(1-m)x}{2}} \bigg[\frac{d}{dt}[\frac{2}{n}(\dot{a}bc +\dot{b}ca+a\dot{c}b)] -\frac{2}{n}[\dot{a}\dot{b}c +\dot{b}\dot{c}a+\dot{c}\dot{a}b+\frac{n^2bc}{4a}(m^2-m+1)]\bigg].\end{aligned}$$ Using this, we can find the action for the gravity sector from equation (\[action\]), which is given as $$\label{action-grav} {\mathcal A}_g=\int dt \bigg[-\frac{2}{n}[\dot{a}\dot{b}c+\dot{b}\dot{c}a+\dot{c}\dot{a}b+\frac{n^2bc}{4a}(m^2-m+1)]\bigg],$$ where an overhead dot indicates a derivative with respect to time.\ Now we make a set of transformations of variables as $$\begin{aligned} a(t)=e^{\beta_0}, \\ b(t)=e^{\beta_0+\sqrt{3}(\beta_+-\beta_-)}, \\ c(t)=e^{\beta_0-\sqrt{3}(\beta_+-\beta_-)}.\end{aligned}$$ This introduces a constraint $a^2=bc$, but the model still remains Bianchi Type VI without any loss of its typical characteristics.
This type of transformation of variables has been extensively used in the literature[@sridip1; @barun; @alvarenga3]. One can now write the Lagrangian density of the gravity sector as $${\mathcal L}_g = -6\frac{e^{3\beta_0}}{n}[\dot{\beta_0^2}-(\dot{\beta_+}-\dot{\beta_-})^2 +\frac{e^{-2\beta_0}n^2(m^2-m+1)}{12}]. \label{7}$$ Here $\beta_0$, $\beta_+$ and $\beta_-$ are treated as coordinates, so the corresponding canonical momenta are $p_0$, $p_+$ and $p_-$, where $p_{i} = \frac{\partial {\mathcal L}_g}{\partial \dot{\beta_{i}}}$. It is easy to check that one has $p_+ =-p_-$. Hence we can write the corresponding Hamiltonian as $${\mathcal H}_g=-n e^{-3\beta_0}[\frac{1}{24}(p_0^2-p_+^2-12(m^2-m+1)e^{4\beta_0})]. \label{8}$$ With the widely used technique, developed by Lapchinskii and Rubakov[@rubakov] using the Schutz formalism of writing the fluid parameters in terms of thermodynamic variables[@schutz1; @schutz2], the action for the fluid sector can be written as $$\label{action-matter} {\mathcal A}_f =\int dt {\mathcal L}_{f}\\ = \int dt \left[n^{-\frac{1}{\alpha}}e^{3\beta_{0}}\frac{\alpha}{\left(1+\alpha\right)^{1+\frac{1}{\alpha}}}\left(\dot{\epsilon}+\theta\dot{S}\right)^{1+\frac{1}{\alpha}}e^{-\frac{S}{\alpha}}\right].$$ Here $\epsilon, \theta, S$ are thermodynamic potentials. A constant volume factor $V$ comes out of the integral in both of (\[action-grav\]) and (\[action-matter\]). This $V$ is inconsequential as it can be absorbed in the subsequent variational principle. With a canonically transformed set of variables $T,\epsilon^{\prime}$ in place of $S, \epsilon$, one can finally write down the Hamiltonian for the fluid sector as $${H}_f = n e^{-3\beta_0}e^{3(1-\alpha)\beta_0}p_T.
\label{43}$$ The canonical transformation is given by the set of equations $$\begin{aligned} \label{canonical} T&=&-p_{S}\exp(-S)p_{\epsilon}^{-\alpha -1},\\ p_{T}&=&p_{\epsilon}^{\alpha+1}\exp(S),\\ \epsilon^{\prime}&=&\epsilon+\left(\alpha+1\right)\frac{p_{S}}{p_{\epsilon}},\\ p_{\epsilon}^{\prime}&=&p_{\epsilon}.\end{aligned}$$ This method and the canonical nature of the transformation are comprehensively discussed in reference [@sridip1].\ The net, or super, Hamiltonian is $$H= H_g + H_f = -\frac{ne^{-3\beta_0}}{24}[p_0^2-p_+^2-12(m^2-m+1)e^{4\beta_0}-e^{3(1-\alpha)\beta_0}p_T] . \label{44}$$ Using the Hamiltonian constraint $H=0$, which can be obtained by varying the action ${\mathcal A}_{g}+ {\mathcal A}_{f}$ with respect to the lapse function $n$, one can write the Wheeler-DeWitt equation as $$[e^{3(\alpha-1)\beta_0}\frac{\partial^2}{\partial \beta_0^2}-e^{3(\alpha-1)\beta_0}\frac{\partial^2}{\partial \beta_+^2}+12(m^2 - m + 1)e^{(3\alpha+1)\beta_0}]\psi =24i\frac{\partial}{\partial T}\psi. \label{45}$$ This equation is obtained after we promote the momenta to the corresponding operators $p_{i}=-i\frac{\partial}{\partial {\beta}_{i}}$ in units where $\hbar=1$.\ It is interesting to note that for a particular value $m=m_0$, where $m_0$ is a root of the equation $m^2-m+1 = 0$, the spatial curvature vanishes and equation (\[45\]) reduces to the corresponding equation for a Bianchi Type I model[@sridip1]. We shall discuss the solution of the Wheeler-DeWitt equation in two different cases, namely $\alpha = 1$ and $\alpha \neq 1$.\ Stiff fluid: $\alpha = 1$ ------------------------- For a stiff fluid ($P=\rho$), equation (\[45\]) becomes simple and easily separable. It looks like $$\bigg[\frac{\partial^2}{\partial \beta_0^2}-\frac{\partial^2}{\partial \beta_+^2}+12(m^2-m+1)e^{4\beta_0}\bigg]\psi =24i\frac{\partial}{\partial T}\psi .
\label{19}$$ With the separation ansatz $$\psi = e^{i2 k_+\beta_+}\phi(\beta_0)e^{-iET}, \label{20}$$ one can write $$\frac{\partial^2 \phi}{\partial \beta_0^2}+(4k_+^2-24E+4N^2 e^{4\beta_0})\phi=0,$$ where $N^2=3(m^2-m+1)$. After making the change of variable $q = N e^{2\beta_0}$, the above equation can be written as $$q^2\frac{\partial^2 \phi}{\partial q^2}+q\frac{\partial \phi}{\partial q}+[q^2 - (6E-k_+^2)]\phi=0.$$ The solution of this equation can be written in terms of Bessel functions as $$\phi(q) = J_{\nu} (q) \label{phi_q},$$ where $\nu =\sqrt{6E-k_+^2}$. Now, for the construction of the wave packet, we need to fix $\nu$. If we take $\epsilon= -\nu^2 =k_+^2-6E$, then the wave packet has the following expression: $$\Psi = \Phi (q) \zeta(\beta_+) e^{i\epsilon T/6},$$ where $$\zeta(\beta_+)=\int dk_+ e^{-(k_+ -k_{+0})^2} e^{i (2k_+\beta_+ - \frac{k_+^2}{6} T)}.$$ The norm indeed comes out to be positive and finite (for the details of the calculation, we refer to the work of Pal and Banerjee [@sridip3]). Thus one indeed has a unitary time evolution.\ General perfect fluid: $\alpha \neq 1$ -------------------------------------- Now we shall take the more complicated case of $\alpha \neq 1$ and try to solve the Wheeler-DeWitt equation (\[45\]). We use a specific type of operator ordering, with which equation (\[45\]) takes the form $$\bigg[e^{\frac{3}{2} (\alpha-1)\beta_0}\frac{\partial}{\partial\beta_0}e^{\frac{3}{2} (\alpha-1)\beta_0}\frac{\partial}{\partial\beta_0}-e^{3(\alpha-1)\beta_0}\frac{\partial^2}{\partial\beta_+^2} + 12(m^2-m+1)e^{(3\alpha+1)\beta_0}\bigg]\Psi=24i\frac{\partial}{\partial T}\Psi.
\label{25}$$ Now, with the standard separation of variables $$\Psi(\beta_0,\beta_+ ,T) =\phi(\beta_0) e^{ik_+\beta_+}e^{-iET},$$ the equation for $\phi$ becomes $$\bigg[e^{\frac{3}{2} (\alpha-1)\beta_0}\frac{\partial}{\partial\beta_0}e^{\frac{3}{2} (\alpha-1)\beta_0}\frac{\partial}{\partial\beta_0}+e^{3(\alpha-1)\beta_0}k_+^2+12(m^2-m+1)e^{(3\alpha+1)\beta_0}-24E\bigg]\phi=0. \label{26}$$ For $\alpha \neq 1$ we make a transformation of variables $$\chi =e^{-\frac{3}{2} (\alpha-1)\beta_0},$$ and write equation (\[26\]) as $$\frac{9}{4}(1-\alpha)^2\frac{\partial^2\phi}{\partial \chi^2}+\frac{k_+^2}{\chi^2}\phi +12(m^2-m+1)\chi^{\frac{2(3\alpha+1)}{3(1-\alpha)}}\phi-24E\phi =0. \label{27}$$ We define the parameters $$\begin{aligned} \sigma =\frac{4k_+^2}{9(1-\alpha)^2}, \\ E' = \frac{32}{3(1-\alpha)^2}E,\\ M^2 =\frac{16(m^2-m+1)}{3(1-\alpha)^2}. \label{28}\end{aligned}$$ Equation (\[27\]) can now be written as $$-\frac{\partial^2\phi}{\partial \chi^2}-\frac{\sigma}{\chi^2}\phi -M^2\chi^{\frac{2(3\alpha+1)}{3(1-\alpha)}}\phi=-E'\phi. \label{29}$$ The above equation can be compared to $-{\mathcal H}_g=-\frac{d^2}{d\chi^2}+V(\chi)$ with $V(\chi)=-\frac{\sigma}{\chi^2} -M^2\chi^{\frac{2(3\alpha+1)}{3(1-\alpha)}}$, which is a continuous and real-valued function on the half line, and one can show that ${\mathcal H}_g$ admits a self-adjoint extension, as it has equal deficiency indices. For a systematic and detailed description of self-adjoint extensions, we refer to the text by Reed and Simon[@reed].\ So it can be said that for a perfect fluid with $\alpha \neq 1$, Bianchi VI quantum models do admit a unitary evolution.\ $\alpha=-\frac{1}{3}$ --------------------- We take a specific choice, where $\rho+3P =0$, as an example. This equation of state makes equation (\[29\]) much simpler. With $\alpha=-1/3$, the term $-M^2\chi^{\frac{2(3\alpha+1)}{3(1-\alpha)}}$ becomes a constant ($-M^{2}$).
Equation (\[29\]) becomes $$-\frac{\partial^2\phi}{\partial \chi^2}-\frac{\sigma}{\chi^2}\phi =-(E'-M^2)\phi, \label{30}$$ which is in fact the well-known Schrödinger equation of a particle with mass $m=1/2$ in an attractive inverse square potential. The solutions can be written as $$\begin{aligned} \phi_a(\chi)=\sqrt{\chi}[AH_{i\beta}^{(2)}(\lambda \chi)+BH_{i\beta}^{(1)}(\lambda \chi)], \\ \phi_b(\chi)=\sqrt{\chi}[AH_{\alpha}^{(2)}(\lambda \chi)+BH_{\alpha}^{(1)}(\lambda \chi)], \label{31}\end{aligned}$$ for $\sigma >1/4$ and $\sigma < 1/4$, with $\beta = \sqrt{\sigma-1/4}$ and $\alpha = \sqrt{1/4-\sigma}$ respectively. Here both $\alpha$ and $\beta$ are real numbers, and in both cases the energy spectrum is given by $$E'=M^2-\lambda^2.$$ The self-adjoint extension guarantees that $|B/A|$ takes a value so as to conserve probability and make the model unitary. The details of the calculations are omitted, as the analysis is similar to that described in reference [@sridip2]. Bianchi II models ================== The Bianchi Type II model is given by the line element $$ds^2=dt^2-a^2(t)dr^2-b^2(t)d\theta^2-[a^2(t) \theta^2 +b^2(t)]d\phi^2+2a^2(t)\theta dr d\phi, \label{51}$$ and the procedure is a bit more involved due to the presence of the non-diagonal terms in the metric.\ The Ricci scalar $R$ in this case is given by $$R = -\frac{a^2}{2 b^4} - \frac{4\dot{a}\dot{b}}{a b} -\frac{2{\dot{b}}^2}{b^2} -\frac{2\ddot{a}}{a} -\frac{4\ddot{b}}{b}.$$ If we define a new variable $\beta=a b$ as prescribed in [@alvarenga3], then the Lagrangian density for the gravity sector reads $$\begin{aligned} {\mathcal L}_g =\frac{2\beta^2\dot{a}^2}{a^3}-\frac{2\dot{\beta}^2}{a}-\frac{a^5}{2\beta^2}, \label{52} \end{aligned}$$ and the corresponding Hamiltonian density for the gravity sector can be written as $$H_g=\frac{a^3p_a^2}{8\beta^2}-\frac{a}{8}p_{\beta}^2+\frac{a^5}{2\beta^2}.
\label{53}$$ Using Schutz’s formalism and the proper identification of time as we did before, the Hamiltonian density for the fluid sector can be written as $$H_f = a^{\alpha}\beta^{-2\alpha}p_T. \label{54}$$ The super Hamiltonian can now be written in the following form: $$H = H_g + H_f = \frac{a^3p_a^2}{8\beta^2}-\frac{a}{8}p_{\beta}^2+\frac{a^5}{2\beta^2} +a^{\alpha}\beta^{-2\alpha}p_T . \label{55}$$ As an example we take up the case of a stiff fluid given by $\alpha=1$.\ After promoting the momenta to operators as usual, the Wheeler-DeWitt equation $H\Psi=0$ takes the following form: $$-\frac{a^2}{8}\frac{\partial^2 \Psi}{\partial a^2}+\frac{\beta^2}{8}\frac{\partial^2 \Psi}{\partial \beta^2}+\frac{a^4}{2}\Psi = i \frac{\partial \Psi}{\partial T}. \label{57}$$ Using a separation of variables $$\Psi =e^{-iET}\phi(a)\psi(\beta),$$ we get the following equations for $\psi$ and $\phi$ respectively: $$\begin{aligned} -\frac{d^2\psi}{d\beta^2}+\frac{8k}{\beta^2}\psi=0, \label{58}\\ a^2\frac{d^2\phi}{da^2}-4a^4\phi-8(k-E)\phi=0. \label{59}\end{aligned}$$ With $\phi=\frac{\phi_0}{\sqrt{a}}$ and $\chi=a^2$, the last equation can be written as $$-\frac{d^2\phi_0}{d\chi^2}-\frac{\sigma}{\chi^2}\phi_0=-\phi_0 ,\label{60}$$ where $\sigma=[\frac{3}{16}-2(k-E)].$\ Now equations (\[58\]) and (\[60\]) are the governing equations for Bianchi Type II with a stiff fluid.\ The equations for both $\psi$ and $\phi$ can be mapped to a Schrödinger equation for a particle in an inverse square potential. In order to get a solution, we have to ensure an attractive regime, which requires $k\leq 0$ and $E \leq k-3/32$. We see that both equations are those for inverse square potentials, and thus a self-adjoint extension is possible. This case is actually very similar to the Bianchi IX model as discussed in reference [@sridip2]. So we do not discuss this in detail.
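The change of variables $\phi=\phi_0/\sqrt{a}$, $\chi=a^2$ behind equation (\[60\]) is an identity: the left-hand side of (\[59\]) equals $4a^{7/2}$ times the left-hand side of (\[60\]) evaluated at $\chi=a^2$. This can be checked numerically with an arbitrary smooth test function (the values of $k$, $E$ and the test function below are purely illustrative):

```python
import math

def phi0(x):                    # arbitrary smooth test function
    return math.exp(-x) * math.sin(x)

def phi(a):                     # phi = phi0 / sqrt(a), chi = a^2
    return phi0(a * a) / math.sqrt(a)

def d2(f, x, h=1e-4):           # central second difference
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

k, E = -1.0, -2.0               # illustrative constants
sigma = 3.0 / 16 - 2 * (k - E)

a = 1.3
chi = a * a
lhs = a ** 2 * d2(phi, a) - 4 * a ** 4 * phi(a) - 8 * (k - E) * phi(a)
rhs = 4 * a ** 3.5 * (d2(phi0, chi) + sigma / chi ** 2 * phi0(chi)
                      - phi0(chi))
# lhs and rhs agree up to finite-difference error
```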
Discussion and conclusion ========================= The present work deals with two examples of anisotropic quantum cosmological models with varying spatial curvature, namely Bianchi VI and II. We show that there is indeed a possibility of finding unitary evolution of the system. The earlier work on anisotropic models with constant spatial curvature[@sridip1; @sridip2] disproved the belief that anisotropic quantum cosmologies generically suffer from a pathology of non-unitarity. The present work now strongly drives home the fact that this feature is not at all a characteristic of models with constant spatial curvature. It was also shown before that the unitarity is not achieved at the cost of anisotropy itself[@sridip4]. One can now indeed work with quantum cosmologies far more confidently, as there is actually no built-in generic non-conservation of probability in the models.\ Very recently it has been shown that in fact all homogeneous models, isotropic or anisotropic, quite generally have a self-adjoint extension[@sridip5]. The present work gives two more examples, and consolidates the result proved in reference [@sridip5]. The extension, however, is non-unique in anisotropic models.\ Thus the standard canonical quantization of cosmological models via the Wheeler-DeWitt equation still proves to be useful in the absence of a more general quantum theory of gravity. The more challenging work will now be the quantization of inhomogeneous cosmological models. 1.50cm [**Acknowledgment**]{} The authors thank Sridip Pal for stimulating discussions. SP thanks the CSIR (India) for financial support. 3.0cm [99]{} B.S. DeWitt, Phys. Rev. [**160**]{}, 1113 (1967). J.A. Wheeler, “Superspace and the nature of quantum geometrodynamics” in [*Battelle Rencontres*]{}, Benjamin, New York (1968). C.W. Misner, Phys. Rev. [**186**]{}, 1319 (1969). D. L. Wiltshire, arXiv:gr-qc/0101003. J. J. Halliwell, in [*Quantum Cosmology and Baby Universes*]{}, edited by S. Coleman, J.B. Hartle, T. Piran and S.
Weinberg (World Scientific, Singapore, 1991). N. Pinto-Neto and J.C. Fabris, Class. Quant. Grav. [**30**]{}, 143001 (2013). J.E. Lidsey, Phys. Lett. B [**352**]{}, 207 (1995). N. Pinto-Neto, A.F. Velasco and R. Collistete Jr, Phys. Lett. A [**277**]{}, 194 (2000). K.V. Kuchar, in [*Conceptual problems in quantum gravity*]{}, edited by A. Ashtekar and J. Stachel (Birkhäuser, Boston, 1991). C.J. Isham in [*Integrable Systems, Quantum Groups and Quantum Field Theory*]{}, edited by L.A. Ibort, M.A. Rodriguez (Kluwer, Dordrecht, 1993). C. Rovelli, arXiv:0903.3832 [gr-qc]. E. Anderson, arXiv:1009.2157 [gr-qc]. B.F. Schutz, Phys. Rev. D [**2**]{}, 2762 (1970). B.F. Schutz, Phys. Rev. D [**4**]{}, 3559 (1971). V.G. Lapchinskii and V.A. Rubakov, Theor. Math. Phys. [**33**]{}, 1076 (1977). S. Pal and N. Banerjee, Phys. Rev. D [**90**]{}, 104001 (2014). F.G. Alvarenga and N.A. Lemos, Gen. Relativ. Gravit. [**30**]{}, 681 (1998). F.G. Alvarenga, J.C. Fabris, N.A. Lemos and G.A. Monerat, Gen. Relativ. Gravit. [**34**]{}, 651 (2002). F.G. Alvarenga, A.B. Batista, J.C. Fabris, N.A. Lemos and S.V.B. Goncalves, Gen. Relativ. Gravit. [**35**]{}, 1639 (2003). B. Majumder and N. Banerjee, Gen. Relativ. Gravit. [**45**]{}, 1 (2013). C.R. Almeida, A.B. Batista, J.C. Fabris and P.R.L.V. Moniz, arXiv:1501.04170. S. Pal and N. Banerjee, Phys. Rev. D [**91**]{}, 044042 (2015). S. Pal and N. Banerjee, Class. Quant. Grav. [**32**]{}, 205005 (2015). S. Pal, Class. Quant. Grav. [**33**]{}, 045007 (2016). C. Bastos, O. Bertolami, N.C. Dias and J.N. Prata, Phys. Rev. D [**78**]{}, 023516 (2008). M. Reed and B. Simon, [*Methods of Modern Mathematical Physics*]{}, 2nd Edition, Volume 2 (Academic Press, Inc., 1975). S. Gopalakrishnan, [*Self-Adjointness and the Renormalization of Singular Potentials*]{}, BA (Hons) thesis, Amherst College, 2006. A. M. Essin and D. J. Griffiths, Am. J. Phys. [**74**]{}, 109 (2005). K. S. Gupta and S. G. Rajeev, Phys. Rev. D [**48**]{}, 5940 (1993). S. Pal and N.
Banerjee, arXiv:1601.00460. [^1]: E-mail: sp13ip016@iiserkol.ac.in [^2]: E-mail: narayan@iiserkol.ac.in
--- abstract: 'Energetics and quantized conductance in jellium-modeled nanowires are investigated using the local-density-functional-based shell correction method, extending our previous study of uniform-in-shape wires \[C. Yannouleas and U. Landman, J. Phys. Chem. B [**101**]{}, 5780 (1997)\] to wires containing a variable-shaped constricted region. The energetics of the wire (sodium) as a function of the length of the volume-conserving, adiabatically shaped constriction, or equivalently its minimum width, leads to formation of self-selecting magic wire configurations, i.e., a discrete configurational sequence of enhanced stability, originating from quantization of the electronic spectrum, namely, formation of transverse subbands due to the reduced lateral dimensions of the wire. These subbands are the analogs of shells in finite-size, zero-dimensional fermionic systems, such as metal clusters, atomic nuclei, and $^3$He clusters, where magic numbers are known to occur. These variations in the energy result in oscillations in the force required to elongate the wire and are directly correlated with the stepwise variations of the conductance of the nanowire in units of $2e^2/h$. The oscillatory patterns in the energetics and forces, and the correlated stepwise variation in the conductance are shown, numerically and through a semiclassical analysis, to be dominated by the quantized spectrum of the transverse states at the narrowmost part of the constriction in the wire.' address: ' School of Physics, Georgia Institute of Technology, Atlanta, Georgia 30332-0430 ' author: - 'Constantine Yannouleas, Eduard N. 
Bogachek, and Uzi Landman' date: 'Physical Review B [**57**]{}, 4872 \[1998\]' --- [Energetics, forces, and quantized conductance in jellium-modeled metallic nanowires]{}     \ Introduction ============ Understanding the physical origins and systematics underlying the variations of materials properties with size, form of aggregation, and dimensionality is one of the main challenges in modern materials research, of ever increasing importance in the face of the accelerated trend toward miniaturization of electronic and mechanical devices. [@mart1; @issp; @avou; @nano] Interestingly, it has emerged that concepts and methodologies developed in the context of isolated gas-phase clusters and atomic nuclei are often most useful for investigations of finite-size solid-state structures. In particular, it has been shown most recently [@land4; @barn] through first-principles molecular dynamics simulations that as metallic (sodium) nanowires are stretched to just a few atoms in diameter, the reduced dimensions, increased surface-to-volume ratio, and impoverished atomic environment lead to formation of structures, made of the metal atoms in the neck, which can be described in terms of those observed in small gas-phase sodium clusters; hence they were termed [@land4; @barn] supported [*cluster-derived structures (cds)*]{}. The above prediction of the occurrence of “magic-number” cds’s in nanowires, due to characteristics of electronic cohesion and atomic bonding in such structures of reduced dimensions, is directly correlated with the energetics of metal clusters, where magic-number sequences of cluster sizes, shapes and structural motifs due to electronic and/or geometric shell effects have been long predicted and observed. [@heer; @yann1; @mart] These results lead one directly to conclude that other properties of nanowires, derived from their energetics, may be described using methodologies developed previously in the context of clusters.
Indeed, in a previous letter, [@yann5] we showed that certain aspects of the mechanical response (i.e., elongation force) and electronic transport (e.g., quantized conductance) in metallic nanowires can be analyzed using the local-density-approximation (LDA) -based shell correction method (SCM), developed and applied previously in studies of metal clusters. [@yann1; @yann2] Specifically, we showed that in a jellium-modelled, volume-conserving, and uniform-in-shape nanowire, variations of the total energy (particularly terms associated with electronic subband corrections) upon elongation of the wire lead to [*self-selection*]{} of a sequence of stable “magic” wire configurations (MWC’s, specified by a sequence of the wire’s radii), with the force required to elongate the wire from one configuration to the next exhibiting an oscillatory behavior. Moreover, we showed that due to the quantized nature of electronic states in such wires, the electronic conductance varies in a quantized step-wise manner (in units of the conductance quantum $g_0=2e^2/h$), correlated with the transitions between MWC’s and the above-mentioned force oscillations. In this paper, we expand our LDA-based treatment to wires of variable shape, that is, allowing for a constricted region. From this investigation, we conclude that the above self-selection principles and the direct correlations between the oscillatory patterns in the energetic stability, forces, and stepwise variations of the quantized conductance are maintained for the variable-shaped wire as well, with the finding that underlying these oscillatory patterns and correlations are the contributions from the narrowmost region of the wire. Furthermore, this finding is analyzed and corroborated through a semiclassical analysis. Prior to introducing the model studied in this paper, it is appropriate to briefly describe certain previous theoretical and experimental investigations, which form the background and motivation for this study.
Atomistic descriptions, based on realistic interatomic interactions, and/or first-principles modelling and simulations played an essential role in discovering the formation of nanowires, [@land1] and in predicting and elucidating the microscopic mechanisms underlying their mechanical, spectral, electronic and transport properties. These predictions [@land1; @land2; @land3] \[particularly those pertaining to generation of nanowires through separation of the contact between two materials bodies; size-dependent evolution of the wire’s mechanical response to elongation, transforming from multiple slips for wider wires to a succession of stress accumulation and fast relief stages, leading to a sequence of structural instabilities and order-disorder transformations localized in the neck region when its diameter shrinks to about 15 Å; consequent oscillations of the elongation force and the calculated high value of the resolved yield stress ($\sim$ 4 GPa for Au nanowires, which is over an order of magnitude larger than that of the bulk), as well as anticipated electronic quantization effects on transport properties [@land1; @boga1]\] have been corroborated in a number of experiments using scanning tunneling and force microscopy, [@land1; @pasc1; @oles; @pasc2; @smith; @rubi; @stal] break junctions, [@krans] and pin-plate techniques [@land2; @costa] at ambient environments, as well as under ultrahigh vacuum and/or cryogenic conditions. Particularly pertinent to our current study are experimental observations of the oscillatory behavior of the elongation forces and the correlations between the changes in the conductance and the force oscillations; see especially the simultaneous measurements of force and conductance in gold nanowires in Ref.  , where in addition the predicted “ideal” value of the critical yield stress has also been measured (see also Ref. ).
The LDA-jellium-based model introduced in our previous paper [@yann5] and extended to generalized wire shapes herein, while providing an appropriate solution within the model’s assumptions (see section II), is devoid by construction of atomic crystallographic structure and does not address issues pertaining to nanowire formation methods, atomistic configurations, and mechanical response modes \[e.g., plastic deformation mechanisms, interplanar slip, ordering and disordering mechanisms (see detailed descriptions in Refs.  and , and a discussion of conductance dips in Ref. ), defects, mechanical reversibility, [@rubi; @land2] and roughening of the wire’s morphology during elongation [@land3]\], nor does it consider the effects of the above on the electron spectrum, transport properties, and dynamics. [@barn] Nevertheless, as shown below, the model offers a useful framework for linking investigations of solid-state structures of reduced dimensions (e.g., nanowires) with methodologies developed in cluster physics, as well as highlighting certain nanowire phenomena of mesoscopic origins and their analogies to clusters. In this context, we note that several other treatments related to certain of the issues in this paper, but employing free-electron models, have been pursued most recently. [@ruit; @staf] In both of these treatments an infinite confining potential on the surface of the wire is assumed and only the contribution from the kinetic energy of the electrons to the total energy is considered, neglecting the exchange-correlation and Hartree terms, and electrostatic interactions due to the positive ionic (jellium) background. A comprehensive discussion of the limitations of such free-electron models in the context of calculations of electronic structure and energetics (e.g., surface energies) of metal surfaces can be found in Ref.  .
In section II.A., we outline the LDA-based Shell Correction Method, describe the jellium model for variable-shaped nanowires, and derive expressions for the energetics of such nanowires (density of states, energy, and force). Numerical results pertaining to energetics, force, and electronic conductance, calculated as a function of elongation for variable-shaped sodium nanowires, are given in section II.B., including a discussion of the main finding that the contribution from the narrowmost part of the constriction underlies the properties of these quantities and the correlations between them. These correlations between the energetic and transport properties and their dependence on the narrowmost part of the nanowire are further analyzed in section III, using a semiclassical treatment. We summarize our results in section IV. Density Functional Description of Jellium Nanowire ================================================== Theory ------ ### Shape of Constriction Consider a jellium nanowire with circular symmetry about the axis of the wire ($z$ axis). The wire may contain a constricted region (see Fig. 1), that is, a section of length $L$ where the cross-sectional radius $a(z)$ varies along the axis as $$a(z) = a_0 + (R_0-a_0) f(z)\;,\;\ -L/2 \leq z \leq L/2~, \label{az}$$ with $f(-z)=f(z)$ (the $z=0$ plane passes through the middle of the wire) and $f(\pm L/2)=1$. $R_0=a(\pm L/2)$ is the uniform radius outside the constricted section, and $a_0 \equiv a(0)$. In this paper, we take a parabolic shape $f(z)=(2z/L)^2$ for the description of the constricted region \[a wire of uniform cross section throughout corresponds to $f(z)=1$\].
We also assume that elongation of the wire occurs in the constricted region while maintaining its volume constant (this is supported by MD simulations), namely by requiring that $$2 \int_0^{L/2} a^2(z) dz= R_0^2 L_0~, \label{vol}$$ for given values of $R_0$ and $L_0$ \[hereafter we will denote the pair of parameters ($R_0$,$L_0$) by ${\cal O}$; we further assume that $R_0 \ll L_0$\]. For the parabolic shape assumed in this paper, the smallest cross-sectional radius is determined for any given value of $L_0 \leq L \leq 5 L_0$ from Eqs. (\[az\]) and (\[vol\]) as $$a_0 = \frac{R_0}{4} \left[-1 + (30 \frac{L_0}{L} -5)^{1/2} \right]~, \label{a0}$$ i.e., $a_0=R_0$ for $L=L_0$, and $a_0=0$ (i.e., breakage of the wire) for $L=5 L_0$. ### Shell Correction Method The Shell Correction Method we employ is based on the LDA theory. In the Shell Correction Method [@yann1; @yann2; @yann3; @yann4] (SCM), the total LDA energy, $E_T(L,{\cal O})$, for any configuration of the wire (specified by $L$ and ${\cal O}$) is separated as, $$E_T(L,{\cal O}) = \widetilde{E}(L, {\cal O}) + \Delta E_{\text{sh}} (L,{\cal O})~, \label{etot}$$ where $\widetilde{E}(L, {\cal O})$ varies smoothly as a function of the system size ($L$) while $\Delta E_{\text{sh}} (L,{\cal O})$ varies in an oscillatory manner with $L$, as a result of the quantization of the electronic states. $\Delta E_{\text{sh}} (L,{\cal O})$ is usually called a shell correction in the nuclear [@stru; @bm] and cluster [@yann1; @yann2] literature; we continue to use here the same terminology with the understanding that the electronic levels in the nanowire form subbands, which are the analog of electronic shells in clusters where the size of the system is usually given by specifying the number of atoms $N$. 
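The closed form of Eq. (\[a0\]) is straightforward to check numerically. The sketch below (Python; the parameter values are those used later in section II.B) integrates $a^2(z)$ for the parabolic profile and verifies that the resulting $a_0$ conserves the constriction volume:

```python
import math

# Numerical check of Eq. (a0): for the parabolic profile
# a(z) = a0 + (R0 - a0)*(2z/L)^2, volume conservation,
# 2*Int_0^{L/2} a(z)^2 dz = R0^2 * L0, fixes a0 for each length L.
R0, L0 = 25.0, 80.0  # a.u., the values used in Sec. II.B

def a0_closed_form(L):
    # Eq. (a0): a0 = (R0/4) * (-1 + sqrt(30*L0/L - 5))
    return 0.25 * R0 * (math.sqrt(30.0 * L0 / L - 5.0) - 1.0)

def constriction_volume(a0, L, n=20000):
    # midpoint rule for 2*Int_0^{L/2} a(z)^2 dz
    h = 0.5 * L / n
    s = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        a = a0 + (R0 - a0) * (2.0 * z / L) ** 2
        s += a * a
    return 2.0 * s * h

# a0 shrinks from R0 at L = L0 to 0 at L = 5*L0; the volume stays R0^2*L0
for L in (L0, 2.0 * L0, 4.0 * L0):
    print(L, a0_closed_form(L), constriction_volume(a0_closed_form(L), L))
```

The two limits quoted above, $a_0 = R_0$ at $L=L_0$ and $a_0 = 0$ at $L = 5L_0$, come out exactly.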
The SCM, which has been shown to yield results in excellent agreement with experiments [@yann1; @yann3; @yann4] and self-consistent LDA calculations [@yann1; @yann2] for a number of cluster systems, is equivalent to a Harris functional ($E_{\text{harris}}$) approximation to the Kohn-Sham LDA with the input density, $\rho^{\text{in}}$, obtained through variational minimization of an extended Thomas-Fermi (ETF) energy functional, $E_{\text{ETF}}[\rho]$. The Harris functional is given by the following expression, $$\begin{aligned} E&&_{\text{harris}} [\rho^{\text{in}}] = E_I + \sum_{i=1}^{\text{occ}} \epsilon_i^{\text{out}} \nonumber \\ && - \int \! \left\{ \frac{1}{2} V_H [ \rho^{\text{in}} ({\bf r})] + V_{\text{xc}} [ \rho^{\text{in}} ({\bf r})] \right\} \rho^{\text{in}} ({\bf r}) d{\bf r} \nonumber \\ && + \int \! {\cal E}_{\text{xc}} [ \rho^{\text{in}} ({\bf r})] d{\bf r}~, \label{enhar}\end{aligned}$$ where $V_H$ is the Hartree (electronic) repulsive potential, $E_I$ is the repulsive electrostatic energy of the ions, and $E_{\text{xc}} \equiv \int {\cal E}_{\text{xc}} [\rho] d {\bf r}$ is the exchange-correlation (xc) functional [@gunn] \[the corresponding xc potential is given as $V_{\text{xc}}({\bf r}) \equiv \delta E_{\text{xc}} [\rho] / \delta \rho({\bf r})$\]. $\epsilon_i^{\text{out}}$ are the non-self-consistent eigenvalues of the single-particle Hamiltonian, $$\widehat{H} = - \frac{\hbar^2}{2m_e} \nabla^2 + V_{\text{in}}~, \label{hin}$$ with the mean-field potential given by $$V_{\text{in}} [\rho^{\text{in}} ({\bf r})] = V_H [\rho^{\text{in}} ({\bf r})] + V_{\text{xc}} [\rho^{\text{in}} ({\bf r})] + V_I ({\bf r})~, \label{mfpot}$$ $V_I ({\bf r})$ being the attractive potential between the electrons and ions. In electronic structure calculations where the corpuscular nature of the ions is included (i.e., all-electron or pseudo-potential calculations), $\rho^{\text{in}}$ may be taken as a superposition of atomic-site densities.
In the case of jellium calculations, we have shown [@yann1; @yann2] that an accurate approximation to the KS-LDA total energy is obtained by using the Harris functional with the input density, $\rho^{\text{in}}$, in Eq. (\[enhar\]) evaluated from a variational Extended-Thomas-Fermi (ETF)-LDA calculation. The ETF-LDA energy functional, $E_{\text{ETF}} [\rho]$, is obtained by replacing the kinetic energy term, $T[\rho]$, in the usual LDA functional, namely in the expression, $$\begin{aligned} E&&_{\text{LDA}}[\rho]=T[\rho] \nonumber \\ && + \int \left\{ \frac{1}{2} V_H [\rho({\bf r})] + V_I({\bf r}) \right\} \rho({\bf r}) d\/ {\bf r} \nonumber \\ && + \int {\cal E}_{\text{xc}} [\rho({\bf r})]d\/ {\bf r} + E_I~, \label{enlda}\end{aligned}$$ by the ETF kinetic energy, given to fourth order in the density gradients as follows, [@hodg] $$\begin{aligned} && \frac{2m_e}{\hbar^2} T_{\text{ETF}}[\rho] = \frac{2m_e}{\hbar^2} \int t_{\text{ETF}}[\rho] d {\bf r} = \nonumber \\ && = \int \! \left\{ \frac{3}{5} (3\pi^2)^{2/3} \rho^{5/3} + \frac{1}{36} \frac{(\nabla \rho)^2}{\rho} + \frac{1}{270} (3 \pi^2)^{-2/3} \rho^{1/3} \right. \nonumber \\ && \times \left. \left[ \frac{1}{3} \left( \frac{\nabla \rho} {\rho} \right)^4 - \frac{9}{8} \left( \frac{\nabla \rho} {\rho} \right)^2 \frac{\Delta \rho} {\rho} + \left( \frac{\Delta \rho} {\rho} \right)^2 \right] \right\} d {\bf r}~. \label{t4th}\end{aligned}$$ The optimal ETF-LDA total energy is then obtained by minimization of $E_{\text{ETF}} [\rho]$ with respect to the density. In our calculations, we use for the trial densities parametrized profiles $\rho ({\bf r};\; \{\gamma_i\})$ with $\{ \gamma_i \}$ as variational parameters (the ETF-LDA optimal density is denoted as $\widetilde{\rho}$). The single-particle eigenvalues, $\{\epsilon_i^{\text{out}}\}$, in Eq. (\[enhar\]) are then obtained as the solutions to the single-particle Hamiltonian of Eq. (\[hin\]) with $V_{\text{in}}$ replaced by $V_{\text{ETF}}$ \[given by Eq.
(\[mfpot\]) with $\rho^{\text{in}} ({\bf r})$ replaced by $\widetilde{\rho} ({\bf r})$\]. Hereafter, these single-particle eigenvalues will be denoted by $\{ \widetilde{\epsilon}_i \}$. In our approach, the smooth contribution in the separation (\[etot\]) of the total energy is given by $E_{\text{ETF}} [\widetilde{\rho}]$, while the shell correction, $\Delta E_{\text{sh}}$, is simply the difference $$\begin{aligned} \Delta E_{\text{sh}} && = E_{\text{harris}}[\widetilde{\rho}] - E_{\text{ETF}} [\widetilde{\rho}] \nonumber \\ && = \sum_{i=1}^{\text{occ}} \widetilde{\epsilon}_i - \int \! \widetilde{\rho}({\bf r}) V_{\text{ETF}} ({\bf r}) d\/ {\bf r} - T_{\text{ETF}} [\widetilde{\rho}]~. \label{dsh}\end{aligned}$$ ### Adiabatic Assumption The volume density of the positive background is given by $\rho^+_v = 3/(4\pi r_s^3)$, where $r_s$ is the Wigner-Seitz radius characteristic of the material, and thus the number of positive charges in the constriction is $$N^+ ({\cal O})=3R_0^2 L_0/(4 r_s^3)~. \label{numpos}$$ Since the nanowire contains a constricted region of variable cross-sectional radius $a(z)$ \[see Eq. (\[az\])\], we define a linear (i.e., density per unit length of the nanowire) background density $\rho^+_l (z;L,{\cal O}) = 3 a^2 (z)/(4 r_s^3)$, which, when integrated over the length of the constriction, yields $N^+ ({\cal O})$ \[see Eq. (\[numpos\])\]. Correspondingly, the variational electronic volume density is $\widetilde{\rho} ({\bf x}; L, {\cal O}) \equiv \widetilde{\rho} (r, z ; L, {\cal O})$; in our calculations it takes the form, $$\widetilde{\rho} (r,z; L, {\cal O}) = \frac{\widetilde{\rho}_0(z)}{ \left[ 1+ \exp \left(\frac{r-r_0(z)}{\alpha(z)}\right) \right]^{\gamma(z)}}~, \label{rho}$$ with $\widetilde{\rho}_0 (z)$, $\alpha(z)$, and $\gamma(z)$ as $z$-dependent variational parameters.
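To make the role of the profile of Eq. (\[rho\]) concrete, the sketch below fixes $r_0(z)$ by requiring local charge neutrality at a slice, $2\pi \int dr\, r \widetilde{\rho}(r,z) = \rho^+_l(z) = 3a^2(z)/(4r_s^3)$. The values of $\widetilde{\rho}_0$, $\alpha$, and $\gamma$ here are hypothetical placeholders, not the variationally optimized ETF-LDA parameters:

```python
import math

# Sketch: fix the parameter r0 of the Eq. (rho) profile by local charge
# neutrality at one slice of radius a(z).  rho0, alpha, gamma below are
# assumed placeholder values, not the optimized ETF-LDA parameters.
rs = 4.0                                  # Wigner-Seitz radius of Na (a.u.)
rho0 = 3.0 / (4.0 * math.pi * rs ** 3)    # bulk jellium density
alpha, gamma = 1.0, 1.0                   # assumed surface-profile parameters

def rho(r, r0):
    return rho0 / (1.0 + math.exp((r - r0) / alpha)) ** gamma

def integrated_charge(r0, rmax=80.0, n=8000):
    # 2*pi*Int_0^rmax r*rho(r) dr by the midpoint rule
    h = rmax / n
    s = sum((i + 0.5) * h * rho((i + 0.5) * h, r0) for i in range(n))
    return 2.0 * math.pi * s * h

def fix_r0(a):
    # bisection on r0 so that the electron charge per unit length
    # matches the background line density 3*a^2/(4*rs^3)
    target = 3.0 * a ** 2 / (4.0 * rs ** 3)
    lo, hi = 0.0, 60.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integrated_charge(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r0 = fix_r0(12.62)  # narrowmost radius a0 quoted in Sec. II.B
print(round(r0, 3))
```

As expected for a moderately sharp surface profile, $r_0$ comes out close to (slightly below) the jellium radius $a$ itself.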
In the ETF calculation, $\widetilde{\rho}$ is determined variationally at a given $z$ as the one associated with a uniform cylinder of radius $a(z)$ (adiabatic assumption), under the normalization condition for local charge neutrality, namely, $2 \pi \int dr [r \widetilde{\rho} (r,z; L, {\cal O})] = \rho^+_l (z; L, {\cal O})$ \[which fixes the fourth parameter $r_0 (z)$ in Eq. (\[rho\])\]. The optimized $\widetilde{\rho}$ then allows calculation of the smooth contribution for any length of the constriction, $\widetilde{E} (L, {\cal O}) \equiv E_{\text{ETF}} (L, {\cal O})$ in Eq. (\[etot\]). The calculation of the shell-correction term, $\Delta E_{\text{sh}} (L, {\cal O})$, in Eq. (\[etot\]) proceeds by first evaluating the density of states in the nanowire. Assuming an adiabatic separation of the “fast” transverse and the “slow” longitudinal variables, [@boga1; @glaz; @imry] the electronic wave functions in the classically allowed regions may be written as $$\begin{aligned} \Psi_{nm\epsilon} (r,\phi,z; && L, {\cal O}) \propto \psi_{nm} (r; z, L, {\cal O}) e^{i m \phi} \nonumber \\ && \times e^{i \int^z dz^\prime k^{nm}_\perp (z^\prime ; \epsilon, L, {\cal O})}~, \label{wvf}\end{aligned}$$ where $k_\perp^{nm}$ is the local wave number along the axial ($z$) direction of the nanowire $$k^{nm}_\perp (z, \epsilon; L, {\cal O}) = \left[ \frac{2m_e}{\hbar^2} [ \epsilon - \widetilde{\epsilon}_{nm} (z; L, {\cal O}) ] \right]^{1/2}~, \label{kperp}$$ and $\widetilde{\epsilon}_{nm}$ is the (transverse) local eigenvalue spectrum at $z$. To calculate this spectrum for a wire of a configuration specified by $(L, {\cal O})$ for any value of $z$, the eigenvalues of a cylindrical wire with a (uniform) radius $a(z)$ are calculated from the two-dimensional Schrödinger equation $$\begin{aligned} -\frac{\hbar^2}{2m_e} \left[ \frac{d^2~}{dr^2} \right. && \left.
+\frac{1}{r} \frac{d~}{dr} - \frac{m^2}{r^2} \right] \psi + V_{\text{ETF}}(r; z,L, {\cal O}) \psi \nonumber \\ && = \widetilde{\epsilon}_{nm} (z, L, {\cal O}) \psi~. \label{scheq}\end{aligned}$$ The linear (per unit length), one-dimensional density of states at $z$, $D_l (z, \epsilon; L, {\cal O})$, is given by $$\begin{aligned} D_l (z, \epsilon; L, {\cal O}) && = \frac{2}{\pi} \sum_{nm} \frac{\partial k_{\perp}^{nm} (z, \epsilon; L, {\cal O})} {\partial \epsilon} \nonumber \\ && \times \Theta[ \epsilon - \widetilde{\epsilon}_{nm}(z; L, {\cal O})]~, \label{elden}\end{aligned}$$ where spin degeneracy has been included, and $\Theta$ is the Heaviside step function. From Eq. (\[kperp\]), we obtain $$\begin{aligned} D_l (z, \epsilon; L, {\cal O}) && = \left( \frac{2m_e}{\pi^2 \hbar^2} \right)^{1/2} \sum_{nm} \left[ \epsilon - \widetilde{\epsilon}_{nm} (z; L, {\cal O}) \right]^{-1/2} \nonumber \\ && \times \Theta[ \epsilon - \widetilde{\epsilon}_{nm}(z; L, {\cal O}) ]~. \label{elden2}\end{aligned}$$ We may now define an integrated density of states in the constriction $$D (\epsilon; L, {\cal O}) = \int_{-L/2}^{L/2} dz D_l (z, \epsilon; L, {\cal O})~. \label{eldt}$$ The total number of states up to energy $\epsilon$ in the constricted region of the wire is given by $$\begin{aligned} N&&^{-} (\epsilon; L, {\cal O}) = \int_0^\epsilon d\epsilon^\prime D (\epsilon^\prime; L, {\cal O}) \nonumber \\ && = \frac{2}{\pi} \int_{-L/2}^{L/2} dz \sum_{nm} \sqrt{ \frac{2m_e}{\hbar^2} [\epsilon - \widetilde{\epsilon}_{nm} (z; L, {\cal O}) ] } \nonumber \\ && \times \Theta[ \epsilon - \widetilde{\epsilon}_{nm}(z; L, {\cal O}) ]~. \label{totnum}\end{aligned}$$ Since the total number of electrons in the constricted region is $N^+ ({\cal O})$ \[see Eq. (\[numpos\])\], the Fermi energy, $\epsilon_F (L, {\cal O})$, for a wire with a configuration specified by ($L, {\cal O}$) is given from Eq. (\[totnum\]), i.e., $$N^{-}(\epsilon_F; L, {\cal O}) = N^+ ({\cal O})~. 
\label{fermi}$$ Using the above and Eq. (\[dsh\]), the shell-correction term, $$\Delta E_{\text{sh}} (L, {\cal O}) \equiv E_{\text{harris}} [\widetilde{\rho}; L, {\cal O}] - E_{\text{ETF}} [\widetilde{\rho}; L, {\cal O}]~, \label{harr}$$ may be calculated as $$\begin{aligned} \Delta && E_{\text{sh}} (L, {\cal O}) = \int_0^{\epsilon_F (L, {\cal O})} d\epsilon [\epsilon D(\epsilon; L, {\cal O})] \nonumber \\ && -2 \pi \int_{-L/2}^{L/2} dz \int_0^\infty dr r \widetilde{\rho} (r,z; L, {\cal O}) V_{\text{ETF}} (r,z; L, {\cal O}) \nonumber \\ && -2 \pi \int_{-L/2}^{L/2} dz \int_0^\infty dr r t_{\text{ETF}} [\widetilde{\rho} (r,z; L, {\cal O})]~, \label{dshw}\end{aligned}$$ where $V_{\text{ETF}}$ is the ETF potential (Hartree, exchange-correlation, and electron attraction to the positive background) and $t_{\text{ETF}}$ is the volume density of the ETF kinetic-energy functional \[see Eq. (\[t4th\])\]. In actual calculations, we invert the order of integration in the first term of Eq. (\[dshw\]), which then takes the form $$\begin{aligned} &&\frac{2}{3\pi} \int_{-L/2}^{L/2} dz \sum_{nm} [ \epsilon_F + 2 \widetilde{\epsilon}_{nm} (z; L, {\cal O}) ] \nonumber \\ && \times \sqrt{ \frac{2m_e}{\hbar^2} [\epsilon_F - \widetilde{\epsilon}_{nm} (z; L, {\cal O}) ] } \; \Theta[ \epsilon_F - \widetilde{\epsilon}_{nm}(z; L, {\cal O}) ]~. \label{sumi}\end{aligned}$$ Note that Eq. (\[fermi\]) implies a common Fermi level for the whole constriction for a given $L$ (i.e., $\epsilon_F$ is not a local property). Therefore, Eq. (\[sumi\]) is not equivalent to integration of the corresponding uniform wire result derived by us in Ref.  over the $z$-coordinate, since there $\epsilon_F$ varies with the wire’s cross-sectional radius. 
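The per-subband identity behind this inversion of the order of integration is easy to verify; in simplified units with $2m_e/\hbar^2 = 1$ (an assumption made only for this check), the energy-weighted subband density of states integrates to the closed form appearing in Eq. (\[sumi\]):

```python
import math

# Check (in hypothetical units 2*m_e/hbar^2 = 1) of the per-subband
# identity behind Eq. (sumi):
#   Int_{et}^{eF} eps * (1/pi) * (eps - et)^(-1/2) d eps
#     = (2/(3*pi)) * (eF + 2*et) * sqrt(eF - et),
# where et is the transverse level and eF the Fermi energy.
def first_term_numeric(et, eF, n=20000):
    # substitute eps = et + t^2, which removes the integrable
    # inverse-square-root singularity: the integrand becomes (2/pi)*(et + t^2)
    tmax = math.sqrt(eF - et)
    h = tmax / n
    s = sum(et + ((i + 0.5) * h) ** 2 for i in range(n))
    return (2.0 / math.pi) * s * h

def first_term_closed(et, eF):
    return (2.0 / (3.0 * math.pi)) * (eF + 2.0 * et) * math.sqrt(eF - et)

print(first_term_numeric(1.0, 4.0), first_term_closed(1.0, 4.0))
```

Summing this closed form over subbands and integrating over $z$, with the $\Theta$ cutoff, reproduces Eq. (\[sumi\]).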
Having calculated the smooth and shell-correction contributions to the total energy as a function of $L$, we may evaluate the total elongation force as the derivative of the total energy with respect to $L$, i.e., $F_T = - dE_T/dL$, and the contributions to it from the smooth and shell-correction terms are given by $\widetilde{F} = - d{\widetilde{E}}/dL$, and $\Delta F_{\text{sh}}= - d \Delta E_{\text{sh}}/dL$. Results ------- In this section, we report results for the elongation of a sodium nanowire ($r_s=4$ a.u.), starting with an initial cylindrical constriction of length $L_0=80$ a.u. and radius $R_0=25$ a.u. In Fig. 2, we show electronic-potential profiles, $V_{\text{ETF}}[r; a(z), L, {\cal O}]$, for a particular constriction with $\Delta L /L_0 =1.125$ ($\Delta L=L-L_0$). We display here the potential profiles calculated at the narrowmost part of the constriction \[$a_0 \equiv a(0) = 12.62$ a.u.\] and at its end \[i.e., at $a(L/2)=R_0$\] (in this paper, all the subsequent numerical results we will discuss relate to constrictions with the same set of ${\cal O}$-parameters, namely, $L_0=80$ a.u. and $R_0=25$ a.u.). We found that for other values of $z$ (i.e., for $ 0 < |z| < L/2$), the potential assumes profiles intermediate between the two profiles shown here, namely, the depth of the potential well remains practically unaltered, while its width follows the enlargement of the jellium-background radius $a(z)$, from $a_0$ to $a(L/2)$. From the three components, $V_H$, $V_I$, and $V_{\text{xc}}$, which contribute to the total $V_{\text{ETF}}$ \[see Eq. (\[mfpot\])\], we found that for all these potential profiles, calculated for different values of $z$ along the constriction, the xc contribution is the dominant one, amounting to approximately $-5.4$ eV, while the total electrostatic contribution, $V_H + V_I$, is much smaller, resulting in a characteristic “winebottle” profile familiar from LDA studies of spherical clusters. [@wine] Fig.
2 also displays the transverse local eigenvalues $\widetilde{\epsilon}_{nm}$ associated with the two potential profiles. Naturally, a wider potential profile yields a larger number of such eigenvalues below the Fermi level. To illustrate the nature of the electronic spectrum in the nanowires, and its dependence on the characteristics of the wire, i.e., shape and length, we show in Fig. 3(a) densities of states, $D (\epsilon; L, {\cal O})$, calculated for a variable-shaped wire for two wire lengths (and consequently two minimal cross-sectional radii of the constricted region). The density of states for a uniform wire with a radius equal to that of the unconstricted region of the variable-shaped ones \[shown in Fig. 3(a)\] is displayed in Fig. 3(b). Two “classes” of features are noted for the variable-shaped wires: (i) those associated with the narrowmost constricted region (marked by numbers) whose radius, $a_0$, varies upon elongation, and (ii) those associated with the maximal radius of the constriction (and with the unconstricted part of the wire) which remains constant throughout the elongation of the wire. Identification of the latter class of features (several of which are marked by arrows) is facilitated through comparison with the density of states for the corresponding uniform wire \[Fig. 3(b)\]. We observe here that, for the broader (and thus shorter) wire \[lower curve in Fig. 3(a)\], six of the features (peaks) in the density of states coming from the spectrum of transverse energy levels at the narrowmost region of the constriction are located below the Fermi level, $\epsilon_F$ \[all the peaks in the density of states occur at the energies of the transverse levels; e.g., compare the location of the peaks in the lower curve in Fig. 3(a) with the corresponding spectrum on the left side of Fig. 2\]. On the other hand, for the much narrower (and thus longer) constricted wire, only one of these peaks is below $\epsilon_F$ \[see upper curve in Fig. 3(a)\].
When the density of states at $\epsilon_F$ is plotted versus the elongation (or, equivalently, the minimal radius of the constricted region), these variations lead to an oscillatory pattern, as peaks in the density of states are shifted above the Fermi level, one after the other as the wire is being elongated. These variations are also portrayed in the energetics of the wire (shown in Fig. 4), and in the stepwise behavior of the quantized conductance through the wire versus length (see Fig. 5 below). From Fig. 4, we observe that the magnitude of the smooth ETF contribution, $\widetilde{E}$, to the total energy, $E_T$, of the wire is dominant, with the shell-correction contribution, $\Delta E_{\text{sh}}$, exhibiting an oscillatory pattern, with local minima at a set of wire lengths (and correspondingly a set of minimal cross-sectional radii) which we term “magic wire configurations” (MWC’s), i.e., wire configurations with enhanced energetic stability. When added to the smooth contribution, these shell-correction features lead to local minima of the total energy toward the end of the elongation (and consequently, narrowing) process, while for thicker wires (i.e., $\Delta L/L_0 \leq 2.5$ in Fig. 4) they are expressed as inflection points of the total energy (in this context, see the total-force curve, $F_T$, in Fig. 5, where the local minimum in $E_T$ corresponds to the point with $F_T=0$ marked by an arrow). We note here that the occurrence of local minima in the total energy results from a balance between $\Delta E_{\text{sh}}$ and $\widetilde{E}$, with the latter increasing (that is, acquiring less negative values) as the constriction elongates due to the increasing contribution from the surface of the constriction. Comparison of the magnitudes of the shell corrections in a variable-shaped wire and in a uniform one \[i.e., one with $f(z) \equiv 1$ in Eq. (\[az\]), whose case was discussed in Ref.
\] shows that the amplitudes of the oscillations in the latter case are much larger (over an order of magnitude). The reason for this difference is that in the constant-radius wire the quantization into the transverse subbands is uniform along the wire, while in the variable-shaped case the subband spectrum is different in various parts of the constriction. While the oscillatory pattern is dominated by the spectrum at the narrowmost region (see also Section III below), the amplitudes are influenced by the transverse-mode spectra from other parts of the constriction. Consequently, the number of local minima in the total energy, $E_T$, (and thus the number of wire configurations, i.e., lengths, for which the total force, $F_T$, vanishes) is larger for a uniform wire than for a variable-shaped one. Additionally, we suggest that for materials with relatively smaller surface energies a larger number of local minima may occur. From the total energy, and the smooth and shell-correction contributions to it, we obtain the total “elongation force” (EF), $F_T$, and the corresponding components of it, $\widetilde{F}$ and $\Delta F_{\text{sh}}$. These results are displayed in Fig. 5, along with the conductance of the wire evaluated, in the adiabatic approximation (i.e., no mode mixing [@glaz]) and neglecting tunneling effects (assuming unit transmission coefficients for all the conducting modes), using the Landauer expression, [@landa; @imry2] $$G(L, {\cal O}) = g_0 \sum_{nm} \Theta [\epsilon_F - \widetilde{\epsilon}_{nm} (z=0; L, {\cal O})]~, \label{land}$$ where $g_0=2 e^2/h$, and the spectrum of the transverse modes is evaluated (for each constriction length) at the narrowmost part of the constriction, $z=0$. Tunneling contributions (see e.g., Ref. ), mode-mixing and non-adiabaticity may affect the sharpness of the conductance steps, and/or introduce some interference related features, particularly near the transitions between the conductance plateaus. 
These effects, which can be included in more elaborate evaluations of the conductance, [@lang; @brand; @todo] do not modify the conclusions of our study. Also included in this figure is a plot describing the variation of the minimal cross-sectional radius $a_0$ with the length of the constriction \[see Eq. (\[a0\])\]. As evident from Fig. 5, the oscillations in the force resulting from the shell-correction contributions are prominent. In $\Delta F_{\text{sh}}$, we observe that the locations of the zeroes of the force situated to the right of the force maxima occur for values of $\Delta L /L_0$ which coincide with the locations of local minima in the shell-correction contribution to the energy of the wire (i.e., for a sequence of minimal cross-sectional radii corresponding to MWC’s). In the total force, $F_T$, only one of these points (where $F_T=0$) remains \[i.e., the one corresponding to the local minimum in the total energy towards the end of the elongation process (see Fig. 4)\], for the reasons discussed above in connection with the energetics of the wire. Nevertheless, the oscillations in the total force correlate well with those in the total energy of the wire, which as discussed above originate from the subband spectrum at the narrowmost part of the constriction (see also section III). Also, the locations of the local maxima in the total force correlate with the stepwise variations in the conductance signifying the sequential decrease in the number of transverse subbands (calculated at the narrowmost section of the wire) below $\epsilon_F$ (i.e., conducting channels) as the constricted part of the wire elongates (and thus narrows). Additionally, we note that the magnitude of the total force is comparable to measured values (i.e., in the nanonewton range).
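The channel counting behind Eq. (\[land\]) can be sketched as follows. Instead of the $V_{\text{ETF}}$ transverse spectrum $\widetilde{\epsilon}_{nm}(z=0)$ used in the text, this hedged example counts hard-wall channels (the infinite-wall model of section III), for which a mode $(n,m)$ conducts when the Bessel zero $\gamma_{nm} < k_F a_0(L)$, with $k_F \approx 0.48$ a.u. for sodium ($r_s = 4$); the resulting counts are illustrative, not our LDA values:

```python
import math

# Hedged sketch of the Landauer staircase of Eq. (land): conducting
# channels are counted for a hard-wall wire (the model of Sec. III),
# not for the V_ETF spectrum used in the text.  Mode (n, m) conducts
# when the Bessel zero gamma_nm < k_F * a0(L); modes with m != 0 are
# doubly degenerate.  Standard tabulated first zeros of J_m
# (table truncated; adequate for k_F * a0 < 9):
BESSEL_ZEROS = {
    0: [2.4048, 5.5201, 8.6537, 11.7915],
    1: [3.8317, 7.0156, 10.1735],
    2: [5.1356, 8.4172, 11.6198],
    3: [6.3802, 9.7610],
    4: [7.5883, 11.0647],
    5: [8.7715, 12.3386],
}

def a0_of_L(L, R0=25.0, L0=80.0):
    # minimal cross-sectional radius from Eq. (a0)
    return 0.25 * R0 * (math.sqrt(30.0 * L0 / L - 5.0) - 1.0)

def conductance_channels(kF, a0):
    # G = g0 * channels in the unit-transmission adiabatic approximation
    channels = 0
    for m, zeros in BESSEL_ZEROS.items():
        below = sum(1 for g in zeros if g < kF * a0)
        channels += below if m == 0 else 2 * below
    return channels

kF = 0.48  # a.u.; roughly the Fermi wave vector of Na (r_s = 4)
for dL in (0.5, 1.5, 2.5):
    L = 80.0 * (1.0 + dL)
    print(dL, conductance_channels(kF, a0_of_L(L)))
```

The channel count drops stepwise as $\Delta L/L_0$ grows and $a_0$ shrinks, mirroring the conductance staircase of Fig. 5.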
The magnitude of the total force in sodium nanowires (not measured to date) is expected to be smaller than that found for gold nanowires, [@rubi; @stal] due mainly to differences in the electron densities and surface energies of the materials. Semiclassical Analysis ====================== As discussed above, the total energy of the wire is characterized by local minima and inflection points occurring for a set of wire lengths, or equivalently a set of minimal cross-sectional radii of the constriction, which are reflected in the oscillatory patterns of the elongation force. These features correspond to the oscillatory shell-correction contributions and originate from the spectrum of transverse modes at the narrowmost part of the constriction. Moreover, these patterns correlate with the locations of the quantized conductance steps, which are determined by the transverse-mode spectrum at the narrowmost region (i.e., the number of conducting modes below $\epsilon_F$, and their degeneracies). To further investigate the origins of these correlations, we present in this section a semiclassical analysis of the density of states, energetics, forces, and conductance in a free-electron nanowire modeled via an infinite confining potential on the surface of the wire. As in the above (see Fig. 1), we model the constricted region of the wire as a section with a slowly (adiabatically) varying shape. Dividing the constriction into thin cylindrical slices, the solution of the Schrödinger equation for each slice is of the form, $$\psi = {\cal A} J_m (\kappa r) e^{im \phi} e^{ip_\perp z}~, \label{bess}$$ where ${\cal A}$ is a normalization constant, $p_\perp$ is the electron momentum along the axis of the wire, $J_m (\kappa r)$ is the Bessel function of order $m$, and $\kappa = (2m_e \epsilon -p_\perp^2)^{1/2}/\hbar$. Consider first a uniform cylindrical wire with a constant cross-sectional radius $a$.
With the infinite wall boundary condition assumed here, the single-particle electronic energy levels in the wire are expressed in terms of the roots of the Bessel functions, $\gamma_{nm}$, as $$\epsilon_{nm,p_\perp} = \frac{\hbar^2 \gamma_{nm}^2}{2 m_e a^2} + \frac{p_\perp^2}{2m_e}~. \label{ebes}$$ Here we remark that in the semiclassical approximation the electron performs a complicated trajectory inside the wire. All the semiclassical trajectories are tangent to the caustic surfaces of a set of concentric cylinders inside the wire. [@kell] Quantization of the electronic states leads to selection of only a certain subset of trajectories associated with a certain set of radii $r_m$ of the caustic surfaces, corresponding to allowed values of the azimuthal quantum numbers, $m$, i.e., $\kappa r_m = m$; this description is closely related to the semiclassical periodic orbit theory. [@brac] In the course of developing semiclassical methods, Keller and Rubinow [@kell] have demonstrated that the Debye asymptotic expansion [@jahn] of the Bessel functions ($1 \ll m < \kappa r$) provides an accurate approximation to the eigenfunction $J_m (\kappa r)$, i.e., $$\begin{aligned} J_m (\kappa r) && \sim ( \frac{2}{\pi} )^{1/2} ( \kappa^2 r^2 - m^2)^{-1/4} \nonumber \\ \times && \sin \left[ (\kappa^2 r^2 - m^2)^{1/2} - m \arccos \left( \frac{m}{\kappa r} \right) + \frac{\pi}{4} \right]~. \label{bes2}\end{aligned}$$ This approximation is valid in the region between the caustic cylindrical surface and the boundary surface of the wire; in the region inside the caustic surface ($m > \kappa r$) the solution decays exponentially. In this approximation, the equation for the asymptotic values of the Bessel-function zeroes has the form, $$(\gamma_{nm}^2 - m^2)^{1/2} - m \arccos \left( \frac{m}{\gamma_{nm}} \right) = \pi \left( n - \frac{1}{4} \right)~.
\label{zero}$$ First, we calculate the density of states, whose evaluation involves, after integration over $p_\perp$, double sums over the quantum numbers $n$ and $m$; $n=1,\;2,$ ..., $m=0,\;\pm1,\; \pm2,$ ... \[see Eq. (\[elden2\])\]. Applying sequentially the Poisson summation formula to both sums and separating the oscillatory terms (note that in our semiclassical approximation $\kappa a \gg 1$) in complete analogy with Refs. , we obtain for the density of states (per unit length), $$\begin{aligned} && D_l^{\text{osc}} (\epsilon) = \frac{2}{\pi a \epsilon_a} \nonumber \\ && \times \! \sum_{M=2}^{\infty} \sum_{Q=1}^{M/2} \frac{1}{M} \sin \!\! \left( \frac{\pi Q}{M} \right) \cos \!\! \left[ 2 M K a \sin \!\! \left( \frac{\pi Q}{M} \right ) + \frac{\pi M}{2} \right] \nonumber \\ && + \frac{2 \sqrt{2} } {\pi a \epsilon_a^{3/4} \epsilon^{1/4} } \sum_{M=1}^{\infty} \frac{1}{M^{1/2}} \sin \left[ 2 \pi M K a \! + \! \frac{\pi}{4} \right] ~, \label{sdosc}\end{aligned}$$ where $\epsilon_a = \hbar^2/(2m_e a^2)$, and $K$ is the electron wave vector. The two terms in Eq. (\[sdosc\]) correspond to the contribution from the point where the phase is stationary and from the end-points in the sum (integral) over $m$ (see discussion in Ref. ). While the second oscillatory term in Eq. (\[sdosc\]) has a smaller amplitude than the first one \[by a factor of $(K a)^{1/2}$\], it corresponds to an important class of electronic states, with $m \approx K a$, localized near the surface of the wire (the so-called whispering gallery states [@boga]). Until now, we have discussed a uniform wire with a constant cross-sectional radius. In a wire with a variable shape, the cross-sectional radii depend on $z$, as discussed in connection with Eq. (\[az\]). Substituting the $z$-dependence of the radii in Eq. (\[sdosc\]), i.e., replacing $a$ by $a(z)$, we need to perform an integration over $z$ \[see Eq. (\[eldt\])\].
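The accuracy of the Debye zero condition, Eq. (\[zero\]), underlying this semiclassical treatment is easy to test against tabulated Bessel zeros; solving it by bisection reproduces $\gamma_{nm}$ to better than $1\%$ already for moderate $m$ and $n$:

```python
import math

# Bisection solution of the Debye zero condition, Eq. (zero):
#   sqrt(g^2 - m^2) - m*arccos(m/g) = pi*(n - 1/4),
# compared against standard tabulated zeros gamma_nm of J_m.
def debye_zero(m, n):
    target = math.pi * (n - 0.25)
    def f(g):
        # left-hand side of Eq. (zero); monotonically increasing in g > m
        return math.sqrt(g * g - m * m) - m * math.acos(m / g)
    lo, hi = m + 1e-9, m + 4.0 * (m + n) + 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# tabulated zeros of J_m for comparison
EXACT = {(0, 2): 5.52008, (0, 3): 8.65373, (1, 3): 10.17347, (2, 5): 17.95982}
for (m, n), ex in EXACT.items():
    print(m, n, round(debye_zero(m, n), 4), ex)
```

As expected, the relative error shrinks with increasing $m$ and $n$, consistent with the $1 \ll m < \kappa r$ validity range quoted above.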
This integration involves evaluation of integrals of the form, $$I = \Re \int_{-L/2}^{L/2} g(z) e^{i \alpha K a(z)} dz~, \label{inte}$$ where for the first term in Eq. (\[sdosc\]) $g(z)=a(z)$ and $\alpha = 2 M \sin (\pi Q /M)$, and for the second one $g(z)=\sqrt{a(z)}$ and $\alpha=2 \pi M$. The fast oscillatory character of the exponential factor \[i.e., $K a(z) \gg 1$ for all $z$\] relative to the slow variation of $g(z)$ allows us to use the standard stationary phase method, [@erde] obtaining $$\begin{aligned} I \approx && \left[ \frac{2 \pi} { \alpha K a^{\prime\prime} (0) } \right]^{1/2} g(0) \; \Re \{ \exp[ i \alpha K a(0) + i \pi/4 ] \} \nonumber \\ && + \frac{2} { \alpha K a^\prime (L/2) } \; g(L/2) \; \Re \left\{ -i \exp[i \alpha K a(L/2) ] \right\}~, \label{inte2}\end{aligned}$$ where $z=0$ is the stationary (extremum) point, the second term is the contribution from the end-points of the integral, and primes denote differentiation with respect to $z$. Using the above, and after simple algebraic manipulations, we obtain for the oscillatory part of the density of states, $$\begin{aligned} &&D^{\text{osc}} (\epsilon) \nonumber \\ && = \frac{2}{\pi} \sum_{M=2}^{\infty} \sum_{Q=1}^{M/2} \left \{ \frac{1}{ M^{3/2} } \left[ \sin \left( \frac{\pi Q}{M} \right) \right]^{1/2} \left[ \frac{\pi} { K a^{\prime\prime} (0) } \right]^{1/2} \right. \nonumber \\ && \times \frac{2 m_e a(0)}{\hbar^2} \cos \left[ 2M K a(0) \sin \left( \frac{\pi Q} {M} \right) + \frac{\pi}{2} \left( M+\frac{1}{2} \right) \right] \nonumber \\ && \left. + \frac{1}{M^2} \frac{2 m_e a(L/2)} {\hbar^2 K a^\prime (L/2)} \sin \!\! \left[ 2 M K a(L/2) \sin \!\! \left( \frac{\pi Q}{M} \right) + \frac{\pi M}{2} \right] \right\} \nonumber \\ && + \frac{2 \sqrt{2}}{\pi \epsilon^{1/4}} \sum_{M=1}^{\infty} \left\{ \frac{1}{M} \left[ \frac{1}{K a^{\prime\prime} (0) } \right]^{1/2} \frac{1}{a(0)} \right. 
\nonumber \\ && \times \left( \frac{2 m_e a^2(0)}{\hbar^2} \right)^{3/4} \cos [ 2 \pi M K a(0) ] \nonumber \\ && - \frac{1}{\pi M^{3/2}} \frac{1}{K a^\prime (L/2) a(L/2)} \left( \frac{2 m_e a^2 (L/2)}{\hbar^2} \right)^{3/4} \nonumber \\ && \times \left. \cos \left[ 2 \pi M K a(L/2) + \frac{\pi}{4} \right] \right\}. \label{sdosc2}\end{aligned}$$ The density of states of the wire contains oscillatory contributions from the narrowmost cross-section of the wire \[first and third terms in Eq. (\[sdosc2\])\] and from the wire’s-end cross-sections (second and fourth terms). The amplitudes of the latter oscillations are smaller. Having obtained the expression for the oscillatory part of the density of states, we can now calculate the semiclassical approximation to the grand-canonical thermodynamic potential $\Omega$ \[see Appendix A; at zero temperature, $\Omega = \int (\epsilon - \epsilon_F) D(\epsilon) d\epsilon$\]. Restricting ourselves for brevity to the largest contribution \[that is, to the first term in Eq. (\[sdosc2\]) corresponding to the main contribution from the narrowmost part of the wire\], we get for the oscillatory part of $\Omega$, $$\begin{aligned} && \Omega^{\text{osc}} \approx \nonumber \\ && \frac{2 \epsilon_F } { \sqrt{\pi k_F a^{\prime\prime}(0)} \; a(0) } \sum_{M=2}^{\infty} \sum_{Q=1}^{M/2} \frac{1}{M^{7/2}} \sin^{-3/2} (\pi Q/M) \nonumber \\ && \times \cos \!\! \left[ 2 M k_F a(0) \sin (\pi Q/M) + \frac{\pi}{2} \left( M + \frac{1}{2} \right) \right]~. \label{omosc}\end{aligned}$$ From this expression, the oscillating part of the force as a function of the length of the constricted region \[i.e., $a(0)$ in general depends on $L$, see e.g., Eq.
(\[a0\])\] is given by $$F^{\text{osc}} (L) = - \frac{\partial \Omega^{\text{osc}}}{\partial a(0)} \frac{\partial a(0)}{\partial L}~, \label{fdsem}$$ which upon substitution of (\[omosc\]) yields, $$\begin{aligned} && F^{\text{osc}} (L) \approx \nonumber \\ && \frac{4 \epsilon_F k_F^{1/2} [\partial a(0)/ \partial L]} { \sqrt{\pi a^{\prime\prime} (0) } \; a(0) } \sum_{M=2}^{\infty} \sum_{Q=1}^{M/2} \frac{1}{M^{5/2}} \sin^{-1/2} (\pi Q/M) \nonumber \\ && \times \cos \!\! \left[2 M k_F a(0) \sin (\pi Q/M) + \frac{\pi}{2} \left( M - \frac{1}{2} \right) \right]~. \label{fsem}\end{aligned}$$ The expression for the conductance of the wire following the Landauer formula involves evaluation of the number of transverse states in the narrowmost part of the wire. Following Ref. , $$\begin{aligned} && G \approx \left( \frac{2 e^2}{h} \right) \!\! \frac{ [k_F a(0)]^2} {4} \left \{ 1 - \frac{2}{k_F a(0)} + \frac{8}{\sqrt{\pi}} \frac{1}{ [k_F a(0)]^{3/2} } \right. \nonumber \\ && \times \sum_{M=2}^{\infty} \sum_{Q=1}^{M/2} \frac{1}{M^{3/2}} \sin^{1/2} (\pi Q/M) \nonumber \\ && \times \left. \cos \!\! \left[ 2M k_F a(0) \sin(\pi Q/M) + \frac{\pi}{2} \left( M - \frac{1}{2} \right) \right] \right\}~, \label{gsem}\end{aligned}$$ which can be expressed as a function of the length of the constriction \[see e.g., Eq. (\[a0\]); we remark here that our semiclassical treatment is valid for any adiabatic wire shape\]. The non-oscillating contribution (coming from the first two terms in the curly brackets) describes the Sharvin [@shar] conductance of the constriction and the Weyl [@weyl] semiclassical corrections, and the third term describes conductance quantum oscillations as a function of $a(0)$. From a comparison of the expression for the oscillatory contribution to the force \[Eq.(\[fsem\])\] with the oscillatory contribution to the conductance \[Eq. 
(\[gsem\])\], the direct correlation between the two is immediately evident, and both depend on the spectrum of transverse modes (conducting channels) at the narrowmost part of the wire. This is in agreement with the results shown in Fig. 5 obtained through the LDA-SCM method. Conclusions and Discussion ========================== In this paper, we extended our investigations [@yann5] of energetics, conductance, and mesoscopic forces in a jellium modelled nanowire (sodium) using the local-density-functional-based shell correction method to variable-shaped wires, i.e., containing a constricted region modeled here by a parabolic dependence of the cross-sectional radii in the constriction on $z$ (see Fig. 1). The results shown above, in particular the oscillations in the total energy of the wire as a function of the length of the variable-shaped constricted region (and correspondingly its narrowmost width), the consequent oscillations in the elongation force, the corresponding discrete sequence of magic wire configurations, and the direct correlation between these oscillations and the stepwise quantized conductance of the nanowires, originate from quantization of the electronic states (i.e., formation of subbands) due to the reduced lateral (transverse) dimension of the nanowires. These results are in correspondence with our earlier LDA-SCM investigation of jellium-modeled uniform nanowires. [@yann5] Moreover, in the current study of a wire with a variable (adiabatic) shaped constriction, we found that the oscillatory behavior of the energetic and transport properties is governed by the subband quantization spectrum (termed here electronic shells) at the narrowmost part of the constriction. This characteristic is supported and corroborated by our semiclassical analysis (section III).
We reiterate here that such oscillatory behavior, as well as the appearance of “magic numbers” and “magic configurations” of enhanced stability, is a general characteristic of finite-size fermionic systems and is in direct analogy with those found in simple-metal clusters (as well as in $^3$He clusters [@yann4] and atomic nuclei [@stru; @bm]), where electronic shell effects on the energetics [@heer; @yann1; @yann2; @yann3] (and most recently shape dynamics [@yann6] of jellium modelled clusters driven by forces obtained from shell-corrected energetics) have been studied for over a decade. While these calculations provide a useful and instructive framework, we remark that they are not a substitute for theories where the atomistic nature and specific atomic arrangements are included [@land1; @land2; @land3; @land4; @barn] in evaluation of the energetics (and dynamics) of these systems (see in particular Refs. , where first-principles molecular-dynamics simulations of electronic spectra, geometrical structure, atomic dynamics, electronic transport and fluctuations in sodium nanowires have been discussed). Indeed, the atomistic structural characteristics of nanowires [@land1; @land2; @land3] (including the occurrence of cluster-derived structures of particular geometries [@land4; @barn]), which may be observed through the use of high resolution microscopy, [@kizu] influence the electronic spectrum and transport characteristics, as well as the energetics of nanowires and their mechanical properties and response mechanisms. In particular, the mechanical response of materials involves structural changes through displacement and discrete rearrangement of the atoms.
The mechanisms, pathways, and rates of such structural transformations are dependent on the arrangements and coordinations of atoms, the magnitude of structural transformation barriers, and the local shape of the wire, as well as possible dependency on the history of the material and the conditions of the experiment (i.e., fast versus slow extensions). Further evidence for the discrete atomistic nature of the structural transformations is provided by the shape of the force variations (compare the calculated Fig. 3(b) in Ref.  and Fig. 3 in Ref.  with the measurements shown in Figs. 1 and 2 in Ref. ), and the interlayer spacing period of the force oscillations when the wire narrows. While such issues are not addressed by models which do not include the atomistic nature of the material, the mesoscopic (in a sense universal) phenomena described by our model are of interest, and may guide further research in the area of finite-size systems in the nanoscale regime. Such further investigations include the occurrence of magic configurations (i.e., sequences of enhanced stability specified by number of particles, size, thickness or shape) in clusters, dots, wires, and thin films of normal, as well as superconducting metals, and the effect of magnetic fields which can influence the energetics in such systems (e.g., leading to magnetostriction effects) through variations of the subband spectra, in analogy with magnetotransport phenomena in nanowires [@boga0]. Several directions for improving the model (while remaining within a jellium framework) are possible. These include: (i) consideration of more complex shapes. 
For example, in our current model the elongation is distributed over the entire constriction throughout the process, while a more realistic description should include a gradual concentration of the elongation, and consequent shape variation, to the narrower part of the constriction as found through molecular-dynamics simulations; [@land1; @land3] (ii) use of a stabilized-jellium description [@perd] of the energetics of the nanowire in order to give it certain elements of mechanical stability. In this context, note also that from the total energy shown in Fig. 4(c), and the corresponding total force \[Fig. 5(c)\], it is evident that in our current model, except for the region of large elongation close to the breaking point (i.e., $\Delta L/L_0 \geq 2.5$), the wire is unstable against spontaneous collapse (that is, shortening), i.e., there are no energetic barriers against such a process, while both experiments [@rubi] and MD simulations [@land2] show that compression of such wires requires the application of an external force. Improvements of the model in these directions are most desirable in light of the aforementioned experimental [@rubi] and MD-simulations [@land2] observations that the total oscillating forces for elongation and compression of nanowires are of opposite signs (i.e., negative and positive, respectively), while our current (equilibrium) model is limited to certain aspects of the tensile part of an elongation-compression cycle; (iii) inclusion of bias voltage effects in calculations of the energetics and conductance of nanowires. [@lang; @lang2] While such effects may be expected to have little influence (particularly on the energetics) at small voltages, they could be of significance at larger ones. Work in these directions is in progress in our laboratory. This research was supported by a grant from the U.S. Department of Energy (Grant No. FG05-86ER45234) and the AFOSR. Useful comments by W.D. Luedtke are gratefully acknowledged.
Calculations were performed at the Georgia Institute of Technology Center for Computational Materials Science. A {#a .unnumbered} = In this Appendix, we discuss briefly a semiclassical treatment of temperature effects on the oscillatory behavior of the force and conductance in nanowires. The grand-canonical thermodynamic potential at finite temperature, $T$, is given by $$\Omega = - k_B T \sum_i \ln \left[ 1 + \exp \left( \frac{\mu-\epsilon_i}{k_B T} \right) \right]~, \label{omeg}$$ where $i$ denotes $(n,m,p_{\perp})$, and $\mu$ is the chemical potential. From Eq. (\[omeg\]), the finite temperature expressions for $\Omega^{\text{osc}}$, $F^{\text{osc}}$, and $G^{\text{osc}}$ differ from those given for the zero-temperature limit in Eqs. (\[omosc\]), (\[fsem\]), and (\[gsem\]), respectively, by a multiplicative factor in the sums of these equations. This factor is given by [@note] \[tdep\] $$\Psi (X_{MQ}) = \frac{X_{MQ}}{\sinh ( X_{MQ} )}~, \label{psi}$$ where $$X_{MQ} = \frac{2 \pi M k_B T a(0) \sin( \pi Q / M) } {\hbar v_F}~, \label{xmq}$$ with $v_F$ being the Fermi velocity. For $T=0$, $\Psi(x)=1$. Note that the temperature dependence given in Eq. (\[tdep\]) is valid for systems with $k_F a(0) \gg 1$, and leads to reduction of the oscillation amplitudes when $2 \pi M k_B T \geq \Delta \epsilon$, where $\Delta \epsilon = \hbar v_F /[a(0) \sin (\pi Q/M)]$ is an effective energy-level spacing of the electrons contributing to the oscillatory parts of the thermodynamic potential, force, and conductance. see articles in [*Large Clusters of Atoms and Molecules*]{}, edited by T.P. Martin (Kluwer, Dordrecht, 1996). see articles in [*Proceedings of the 8th International Symposium on Small Particles and Inorganic Clusters, Copenhagen, 1996*]{}, edited by H.H. Anderson, Z. Phys. D [**40**]{}, 1-578 (1997). see articles in [*Atomic and Nanometer-Scale Modifications of Materials: Fundamentals and Applications*]{}, edited by Ph. Avouris (Kluwer, Dordrecht, 1993). 
see articles in [*Nanowires*]{}, edited by P.A. Serena and N. Garcia (Kluwer, Dordrecht, 1997). U. Landman, R.N. Barnett, and W.D. Luedtke, Z. Phys. D [**40**]{}, 282 (1997). R.N. Barnett, and U. Landman, Nature [**387**]{}, 788 (1997). W.A. De Heer, Rev. Mod. Phys. [**65**]{}, 611 (1993). C. Yannouleas and U. Landman, in Ref. , p. 131. T.P. Martin, Phys. Rep. [**273**]{}, 199 (1996). C. Yannouleas and U. Landman, J. Phys. Chem. [**101**]{}, 5780 (1997). C. Yannouleas and U. Landman, Phys. Rev. B [**48**]{}, 8376 (1993); Chem. Phys. Lett. [**210**]{}, 437 (1993). U. Landman, W.D. Luedtke, N. Burnham, and R.J. Colton, Science [**248**]{}, 454 (1990). U. Landman, W.D. Luedtke, B.E. Salisbury, and R.L. Whetten, Phys. Rev. Lett. [**77**]{}, 1362 (1996). U. Landman, W.D. Luedtke, and J. Gao, Langmuir [**12**]{}, 4514 (1996). E.N. Bogachek, A.M. Zagoskin, and I.O. Kulik, Fiz. Nizk. Temp. [**16**]{}, 1404 (1990) \[Sov. J. Low Temp. Phys. [**16**]{}, 796 (1990)\]. J.I. Pascual, J. Mendez, J. Gomez-Herrero, J.M. Baro, N. Garcia, and V.T. Binh, Phys. Rev. Lett. [**71**]{}, 1852 (1993). L. Olesen, E. Laegsgaard, I. Stensgaard, F. Besenbacher, J. Schiotz, P. Stoltze, K.W. Jacobsen, and J.N. Norskov, Phys. Rev. Lett. [**72**]{}, 2251 (1994). J.I. Pascual, J. Mendez, J. Gomez-Herrero, J.M. Baro, N. Garcia, U. Landman, W.D. Luedtke, E.N. Bogachek, and H.-P. Cheng, Science [**267**]{}, 1793 (1995). D.P.E. Smith, Science [**269**]{}, 371 (1995). G. Rubio, N. Agrait, S. Vieira, Phys. Rev. Lett. [**76**]{}, 2302 (1996). A. Stalder and U. Durig, Appl. Phys. Lett. [**68**]{}, 637 (1996). J.M. Krans, J.M. van Ruitenbeek, V.V. Fisun, I.K. Yanson, and L.J de Jongh, Nature [**375**]{}, 767 (1995). J.L. Costa-Kramer, N. Garcia, P. Garcia-Mochales, and P.A. Serena, Surface Science [**342**]{}, 11 144 (1995). J.M. van Ruitenbeek, M.H. Devoret, D. Esteve, and C. Urbina, to be published. 
In this study, a uniform wire (i.e., with the cross-section independent of the location on the symmetry axis) has been investigated. Electron-density spillout effects were approximated in an ad hoc manner. C.A. Stafford, D. Baeriswyl, and J. Bürki, to be published. In this study, a wire of non-uniform shape has been studied using a free-electron model and assuming a constant bulk value of the wire’s Fermi energy for all wire configurations. The last assumption implies neglect of the strong screening effects in metals as discussed in Ref. , where, within the context of a free-electron model, it is noted that this can lead to substantial deviations (that are largest for small cross sectional radii) of the electronic density from bulk values. We also note that the expression given in this paper for the surface energy is at variance (i.e., a factor of five too large) with the correct result for the infinite-barrier free-electron model given in Ref.  (see pp. 266-267; we thank Dr. N.D. Lang for pointing out this discrepancy). Ref.  includes also a critical discussion pertaining to limitations of the free-electron model. N.D. Lang, in [*Solid State Physics*]{}, edited by H. Ehrenreich, F. Seitz, and D. Turnbull (Academic Press, New York, 1973), Vol. 28, p. 225. C. Yannouleas and U. Landman, Phys. Rev. B [**51**]{}, 1902 (1995); Phys. Rev. Lett. [**78**]{}, 1424 (1997); J. Chem. Phys. [**107**]{}, 1032 (1997). C. Yannouleas and U. Landman, J. Chem. Phys. [**105**]{}, 8734 (1996); Phys. Rev. B [**54**]{}, 7690 (1996). V.M. Strutinsky, Nucl. Phys. A [**95**]{}, 420 (1967); [*ibid.*]{} [**122**]{}, 1 (1968); Å. Bohr and B.R. Mottelson, [*Nuclear Structure*]{}, Vol. II (Benjamin, Reading, MA, 1975). Here we use the Gunnarsson-Lundqvist xc functional \[see O. Gunnarsson and B.I. Lundqvist, Phys. Rev. B [**13**]{}, 4274 (1976)\]. C.H. Hodges, Can. J. Phys. [**51**]{}, 1428 (1973). L.I. Glazman, G.B. Lesovik, D.E. Khmel’nitskii, and R.I. Shekhter, Pisma Zh. Eksp. Teor. Fiz. 
[**48**]{}, 218 (1988) \[JETP Lett. [**48**]{}, 238 (1988)\]. A. Yacoby and Y. Imry, Phys. Rev. B [**41**]{}, 5341 (1990). C. Yannouleas and R.A. Broglia, Phys. Rev. B [**44**]{}, 5793 (1991); C. Yannouleas, R.A. Broglia, M. Brack, and P.-F. Bortignon, Phys. Rev. Lett. [**63**]{}, 255 (1989); M. Brack, Phys. Rev. B [**39**]{}, 3533 (1989); W. Ekardt, Phys. Rev. B [**29**]{}, 1558 (1984). R. Landauer, Philos. Mag. [**21**]{}, 863 (1970). Y. Imry, in [*Directions in Condensed Matter Physics*]{}, edited by G. Grinstein and G. Mazenko (World Scientific, Singapore, 1986), p. 101. E.N. Bogachek, A.G. Scherbakov, and U. Landman, Phys. Rev. B [**53**]{}, R13 246 (1996). N.D. Lang, Phys. Rev. B [**52**]{}, 5335 (1995). M. Brandbyge, K.W. Jacobsen, and J.K. Norskov, Phys. Rev. B [**55**]{}, 2637 (1997). T.N. Todorov and A.P. Sutton, Phys. Rev. Lett. [**70**]{}, 2138 (1993). J.B. Keller and S.I. Rubinow, Ann. of Phys. (N.Y.) [**9**]{}, 24 (1960); in this context, see also R. Landauer, [*Phase integral approximations in quantum mechanics*]{}, Ph. D. Thesis, Harvard (1950). M. Brack and R.K. Bhaduri, [*Semiclassical Physics*]{} (Addison-Wesley, Reading, MA, 1997). E. Jahnke, F. Emde, and F. Loesch, [*Tables of Higher Functions*]{} (McGraw-Hill, New York, 1960). R.B. Dingle, Proc. Roy. Soc. (London) Ser. A [**212**]{}, 47 (1952). E.N. Bogachek and G.A. Gogadze, Zh. Eksp. Teor. Fiz. [**63**]{}, 1839 (1972) \[Sov. Phys. JETP [**36**]{}, 973 (1973)\]. A. Erdelyi, [*Asymptotic Expansions*]{} (Dover, New York, 1956). E.N. Bogachek, M. Jonson, R.I. Shekhter, and T. Swahn, Phys. Rev. B [**50**]{}, 18 341 (1994); see in particular Eqs. (15) and (17), with $T_0=1$, $\alpha = 0$. Yu. V. Sharvin, Zh. Eksp. Teor. Fiz. [**48**]{}, 984 (1965) \[Sov. Phys. JETP [**21**]{}, 655 (1965)\]. V.I. Falko and G.B Lesovik, Solid State Commun. [**84**]{}, 835 (1992); J.A. Torres, J.I. Pascual, and J.J. Saenz, Phys. Rev. B [**49**]{}, 16 581 (1994); E.N. Bogachek, A.G. Scherbakov, and U. Landman, Phys. 
Rev. B [**56**]{}, 1065 (1997). C. Yannouleas and U. Landman, submitted for publication. T. Kizuka, K. Yamada, S. Degachi, M. Naruse, and N. Tanaka, Phys. Rev. B [**55**]{}, R7398 (1997). J.P. Perdew, H.Q. Tran, and E.D. Smith, Phys. Rev. B [**42**]{}, 11 627 (1990). N.D. Lang, Phys. Rev. B [**45**]{}, 13 599 (1992); [*ibid.*]{} [**49**]{}, 2067 (1994). Such functional dependence on the temperature of the amplitude of oscillations of thermodynamic and transport properties in metals is a typical one; see e.g., A.A. Abrikosov, [*Fundamentals of the Theory of Metals*]{} (Elsevier, Amsterdam, 1988).
--- abstract: 'Sources of X-rays such as active galactic nuclei and X-ray binaries are often variable by orders of magnitude in luminosity over timescales of years. During and after these flares the surrounding gas is out of chemical and thermal equilibrium. We introduce a new implementation of X-ray radiative transfer coupled to a time-dependent chemical network for use in 3D magnetohydrodynamical simulations. A static fractal molecular cloud is irradiated with X-rays of different intensity, and the chemical and thermal evolution of the cloud are studied. For a simulated $10^5\,\mathrm{M}_\odot$ fractal cloud, an X-ray flux $<0.01$ergcm$^{-2}$s$^{-1}$ allows the cloud to remain molecular, whereas most of the CO and H$_2$ are destroyed for a flux of $\geq1$ergcm$^{-2}$s$^{-1}$. The effects of an X-ray flare, which suddenly increases the X-ray flux by $10^5\times$ are then studied. A cloud exposed to a bright flare has 99% of its CO destroyed in 10-20 years, whereas it takes $>10^3$ years for 99% of the H$_2$ to be destroyed. CO is primarily destroyed by locally generated far-UV emission from collisions between non-thermal electrons and H$_2$; He$^+$ only becomes an important destruction agent when the CO abundance is already very small. After the flare is over, CO re-forms and approaches its equilibrium abundance after $10^3$-$10^5$ years. This implies that molecular clouds close to Sgr A$^\star$ in the Galactic Centre may still be out of chemical equilibrium, and we predict the existence of clouds near flaring X-ray sources in which CO has been mostly destroyed but H is fully molecular.' author: - | Jonathan Mackey,$^{1,2,3}$[^1] Stefanie Walch,$^{3}$ Daniel Seifried,$^{3}$ Simon C.O. Glover,$^{4}$\ $^1$Centre for AstroParticle Physics and Astrophysics, DIAS Dunsink Observatory, Dunsink Lane, Dublin 15, Ireland\ $^2$Dublin Institute for Advanced Studies, Astronomy & Astrophysics Section, 31 Fitzwilliam Place, Dublin 2, Ireland\ $^{3}$I. 
Physikalisches Institut, Universität Köln, Zülpicher Str. 77, 50937 Köln, Germany\ $^{4}$Zentrum für Astronomie, Institut für Theoretische Astrophysik, Universität Heidelberg, Albert-Ueberle-Str. 2, D-69120 Heidelberg, Germany\ $^5$Astronomical Institute, Czech Academy of Sciences, Boční II 1401, 141 00 Prague, Czech Republic\ $^{6}$Max-Planck-Institut für Kernphysik, P.O. Box 103980, D-69029 Heidelberg, Germany bibliography: - './refs.bib' date: 'Submitted 27 March 2018; accepted 25 March 2019' title: 'Non-Equilibrium Chemistry and Destruction of CO by X-ray Flares' --- \[firstpage\] astrochemistry – radiative transfer – methods: numerical – ISM: clouds – galaxies: ISM – X-rays: ISM – X-rays: general Introduction {#sec:introduction} ============ Heating and ionization by X-rays and cosmic rays is known to be a key process in setting the temperature and ionization state of interstellar gas [@SpiTom68; @ShuSte85; @DalMcC72]. X-rays with energy $>1$keV can propagate deeper into molecular clouds than ultraviolet (UV) or optical radiation because their interaction cross-section is smaller and decreases with increasing photon energy. The ionizations induced by X-rays that are absorbed in a molecular cloud can strongly affect the chemical balance of the cloud by heating it and increasing the electron fraction [@LepMcC83; @MalHolTie96]. The sources of X-rays, especially non-thermal sources related to X-ray binaries or active galactic nuclei (AGN), tend to be strongly variable on timescales from minutes to years depending on the size of the emitting region. Even mostly inactive black-hole sources such as Sgr A$^\star$ in the Galactic Centre occasionally have giant flares where the X-ray luminosity increases by a factor of $10^3-10^6$ for a few years at a time. @PonTerGol10 studied X-ray reflection from molecular clouds around the Galactic Centre in the iron K-shell lines. 
They find that the luminosity of Sgr A$^*$ has been at $L_\mathrm{x}\lesssim 10^{35}$ ergs$^{-1}$ for the past 60-90 years, but that a bright flare with $L_\mathrm{x}\approx 1.4\times10^{39}$ ergs$^{-1}$ occurred about 100 years ago, with a duration of at least 10 years [see also @Sunyaev1993; @KoyMaeSon96; @Sunyaev1998; @Churazov2017a]. The inferred luminosity is still far below the Eddington luminosity for Sgr A$^*$, but is $\gtrsim10^4$ times brighter than its current luminosity in X-rays. The scattering of X-rays in molecular clouds has been studied using Monte-Carlo radiative-transfer simulations [@OdaAhaWat11; @MolKhaSun16; @WalCheTer16] and shown to be a powerful diagnostic of the incident X-ray flux on a cloud. X-ray binaries are also powerful sources during their active periods [e.g. GRS 1915+105 with $L_\mathrm{x}\approx 10^{39}$ ergs$^{-1}$ for $\sim$ 10 years, see @Punsly2013]. This shows that molecular clouds close to black holes or luminous X-ray binaries are subject to occasional bright X-ray irradiation, which may affect their thermal and chemical state [@Krivonos2017; @Churazov2017c]. If these flares are frequent enough [@Churazov2017b], then the clouds could spend most of their time out of chemical and thermal equilibrium [@Moser2016]. Bright X-ray sources are also usually sites of efficient cosmic-ray (CR) production. For example, the link between CR production and supernova remnants is now well established [@Aha13], the Galactic Centre is a bright and diffuse source of $\gamma$-rays produced by CRs [@HESS17], and *FERMI* has detected hundreds of AGN at 0.1-100GeV energy [@AckAjeAll11]. Like X-rays, CRs propagate deep into molecular clouds, but their interaction with atoms produces $\gamma$-rays as a by-product of nuclear reactions. For both X-ray and CR interaction with matter the main ionizing and heating agents are so-called *secondary electrons*, produced when high-energy photons or cosmic rays ionize a heavy element.
These electrons have large kinetic energy, comparable to that of the ionizing photon, and so they ionize and heat molecules and atoms as they lose energy through collisional interactions [e.g. @MalHolTie96]. This means that the effects of an elevated CR energy density and of an elevated X-ray radiation field can be difficult to distinguish, and one must either look deeply into the abundances of rare chemical species or consider the different attenuation of CRs and X-rays with column density. X-rays propagate in straight lines at the speed of light and are simply attenuated, whereas CRs follow trajectories determined by the local magnetic field and on large enough scales their propagation follows a diffusion equation [@Girichidis2016; @PfrPakSch17]. Under the assumption that X-rays are unimportant for the chemistry, @CasWalTer98 showed that the electron fraction and CR ionization rate within a dense cloud can be inferred from abundance ratios of HCO$^+$, CO, and DCO$^+$ (the deuterated form of HCO$^+$). @VauHilCec14 studied a molecular cloud being impacted by the W28 supernova remnant, using the observed molecular lines to constrain the CR ionization rate to be $>100$ times the background Galactic rate. @ClaGloRag13 compared observations of the Galactic Centre cloud G0.253+0.016 with simulations using different CR ionization rates, finding that it too should have a CR energy density $>100$ times the background Galactic value. Investigating extreme environments, @BisPapVit15 studied how CO is destroyed in molecular clouds as the CR energy density increases, using chemical equilibrium calculations of photodissociation regions (PDR). They found that the number ratio of CO to H$_2$ decreases strongly with increasing CR energy density, because CO is effectively destroyed by He$^+$ ions created by CR ionization. This was followed up with 3D simulations of fractal clouds exposed to different CR energy densities [@BisVanPap17], confirming their previous results. 
@GonOstWol17 also studied PDR chemistry with elevated CR energy density, finding that grain-assisted recombination of He$^+$ limits the effectiveness of CO destruction by CRs. @MeiSpaIsr06 studied X-ray dominated regions (XDR) and PDRs including elevated CR ionization and heating rates. For a cloud exposed to high X-ray flux, the XDR is most of the cloud volume, the PDR traces the cloud surface, and CRs affect both the surface and interior of a cloud. They found that line ratios of HCN, CO and HCO$^+$ can be used, with high-J lines of CO, to distinguish between X-ray- and CR-irradiated clouds. Subsequently, @MeiSpaLoe11 found that OH, OH$^+$, H$_2$O, H$_2$O$^+$, and H$_3$O$^+$ can also be used to discriminate CR and X-ray irradiation. Most previous chemical studies of X-ray-irradiated molecular clouds assume chemical and/or thermal equilibrium, similar to PDR models [e.g. @MalHolTie96; @MeiSpa05; @HocSpa10]. The codes developed for these projects therefore cannot capture the time-dependent chemistry and thermodynamics that occur within a molecular cloud irradiated by a time-dependent X-ray radiation field. A recent departure from this is the study of @CleBerObe17, who investigated variable H$^{13}$CO$^+$ emission (observed in a protostellar disk) as a consequence of time-varying X-ray irradiation. So far there are no studies of the time-dependent chemistry arising from the recent flare in, for example, the molecular clouds near Sgr A$^\star$. Here we introduce a non-equilibrium code that couples X-ray irradiation to chemistry and thermodynamics (and potentially hydrodynamics) of molecular gas, using a simplified chemical network of 17 species. The treatment of X-ray radiation, the chemical network, and coupling to the <span style="font-variant:small-caps;">flash</span> code are described in Section \[sec:methods\]. Tests of the network using one-dimensional, constant-density slabs are presented in Section \[sec:tests\].
Section \[sec:fractal\] introduces the modelling of a fractal cloud in 3D using the <span style="font-variant:small-caps;">flash</span> code, embedded in a homogeneous and isotropic background radiation field. The equilibrium state of the gas for different X-ray radiation intensities is obtained, and the states compared with each other. In Section \[sec:flare\] the equilibrium state is disturbed by X-ray flares of duration 1 to 100 years, and we show the time-dependent effects of the flares on the chemical abundances and gas temperature during and after the flare event. Our results are discussed in Section \[sec:discussion\] and our conclusions presented in Section \[sec:conclusions\]. Algorithms and methods {#sec:methods} ====================== X-ray transport and absorption {#sec:xrays} ------------------------------ In previous works using the SILCC simulation framework [@WalGirNaa15; @Girichidis2016; @Gatto2017; @Peters2017] the X-ray flux was assumed to be constant and was simply scaled with the background interstellar UV radiation field (ISRF). Here, we develop a fully self-consistent X-ray absorption module and introduce the algorithms used for X-ray radiative transfer and absorption. We split the X-ray radiation field into $N_E$ energy bins, equally spaced in $\log E$, and calculate a mean cross-section for each bin, $\langle\sigma_i\rangle$. One-dimensional radiative transfer is very simple and requires little explanation. For 3D simulations in this paper we consider only an isotropic external radiation field to study the effects of X-rays on the chemistry of molecular clouds, similar to assuming an isotropic background interstellar UV radiation field [e.g. @Dra78]. We use the <span style="font-variant:small-caps;">TreeRay/Optical depth</span> algorithm [@WunWalDin18] for three-dimensional radiative transfer, implemented in the <span style="font-variant:small-caps;">flash</span> code [@Fryxell2000], described in more detail below. 
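The energy binning described above (bins equally spaced in $\log E$) can be sketched as follows; the bin count and the $0.1$-$10$keV range are illustrative choices, not values fixed by the text:

```python
import math

def log_energy_bins(e_min_kev, e_max_kev, n_bins):
    """Return n_bins + 1 bin edges equally spaced in log10(E), E in keV."""
    lo, hi = math.log10(e_min_kev), math.log10(e_max_kev)
    step = (hi - lo) / n_bins
    return [10.0 ** (lo + i * step) for i in range(n_bins + 1)]

# e.g. 4 bins spanning 0.1-10 keV; each bin then receives its own
# mean cross-section <sigma_i> via Eq. (eqn:xsec)
edges = log_energy_bins(0.1, 10.0, 4)
```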
Modifying <span style="font-variant:small-caps;">TreeRay/Optical depth</span> to handle anisotropic radiation fields is a relatively simple extension. The term “flux” can mean different things depending on context: when we say X-ray flux, denoted $F_X$, we mean (i) uni-directional energy flux of radiation for one-dimensional slab-symmetric calculations, and (ii) $4\pi J_X$ (where $J_X$ is the angle-averaged mean intensity) for three-dimensional simulations. In both cases it is the X-ray energy flux available to be absorbed at a point. We integrate over a given energy range, usually 0.1-10keV, and so the units are \[ergcm$^{-2}$s$^{-1}$\]. In the nomenclature of @RoeAbeBel07, the 1D simulations have uni-directional flux, and the 3D simulations isotropic flux. We also quote the X-ray energy density, $E_\mathrm{rad}$, for clarity and for ease of comparison with other potentially relevant energy densities, such as cosmic rays, FUV radiation, thermal energy, etc. For the one-dimensional flux, $E_\mathrm{rad} = F_X/c$ ($c$ is the speed of light), and for three-dimensional calculations $E_\mathrm{rad} = 4\pi J_X/c$. ### X-ray absorption cross-section {#sec:xray:xsec} X-rays are mainly absorbed by ions of heavy elements (especially iron) because their large cross section more than makes up for their trace abundance. However, calculating the absorption by each ion individually is computationally expensive, as it requires knowledge of the abundance and ionization stage of many heavy ions and is therefore only feasible for detailed PDR/XDR codes [e.g. @MeiSpa05; @FerPorVan13]. @PanCabPin12 used a mean cross-section that takes account of all of the heavy elements in a single analytic function, which we also use: $$\sigma_\mathrm{x} = 2.27\times10^{-22} E_\gamma^{-2.485} \;\mathrm{cm}^2 \label{eqn:sigma}$$ per H nucleus, where $E_\gamma$ is the photon energy in keV. 
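To see quantitatively how harder photons penetrate deeper into a cloud, one can compute from Eq. (\[eqn:sigma\]) alone the column density at which $\tau = N_\mathrm{H}\sigma_\mathrm{x} = 1$; this sketch assumes nothing beyond that formula:

```python
def sigma_x(e_kev):
    """Mean X-ray absorption cross-section per H nucleus, Eq. (eqn:sigma),
    for near-neutral solar-metallicity gas over ~0.1-10 keV [cm^2]."""
    return 2.27e-22 * e_kev ** (-2.485)

# Column density at which the optical depth reaches unity:
for e in (0.5, 1.0, 5.0):
    n_h = 1.0 / sigma_x(e)  # cm^-2; grows steeply (~E^2.485) with energy
    print(f"E = {e:3.1f} keV: tau = 1 at N_H ~ {n_h:.2e} cm^-2")
```

At 1keV the $\tau=1$ column is $\sim4\times10^{21}\,$cm$^{-2}$, rising by almost two orders of magnitude by 5keV, which is the sense in which keV X-rays probe cloud interiors inaccessible to UV photons.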
This cross-section was also used by @ShaGlaShu02 and is based approximately on results of @MorMcC83 for a gas of solar metallicity with the abundances in their table 1 [typically within 0.1 dex of updated values from @AspGreSau09]. It assumes that the temperature is low enough that heavy atoms are not significantly ionized, and so the dominant absorbers at high energies are those heavy atoms with K and L shells and corresponding large cross sections. The approximate formula does not capture resonances or sharp jumps in cross section at K or L shell edges [e.g. @DeABre12]. @MorMcC83 and @WilAllMcC00 show that H and He contribute significantly to the cross section up to the oxygen K-shell edge at $\sim0.5$keV, and that a power law with slope $\sim-2.5$ is a good approximation to the total cross section in the range $0.1-10$keV. Our cross section is therefore reliable as long as the electron fraction is small, and fails first at low energies ($\lesssim0.5$keV) as the ionization fraction increases. For highly ionized gas the approximate cross section becomes unreliable and a more accurate treatment would be required, but our aim here is to model molecular clouds and so this regime is not relevant. The cross section is only valid at or near solar metallicity and does not scale simply with metallicity because H and He contribute significantly for $E_\gamma\lesssim0.5$keV. @MorMcC83 show that the cross section changes only marginally even when most heavy elements are completely depleted onto grains. For an energy bin, $i$, in the energy range $E_a<E_\gamma<E_b$, with $E_m = 0.5(E_a+E_b)$, and defining $\sigma_m\equiv \sigma_\mathrm{x}(E_m)$, we define the mean cross section $\langle\sigma_i\rangle$ using the relation $$\exp \left(-\frac{\langle\sigma_i\rangle}{\sigma_m}\right) = \frac{1}{E_b-E_a}\int_{E_a}^{E_b} \exp \left(-\frac{\sigma_\mathrm{x}(E)}{\sigma_m}\right) dE \;.
\label{eqn:xsec}$$ This formula averages the attenuation factor over the energy bin, and this is used to obtain an appropriate $\langle\sigma_i\rangle$. This provides a better estimate of the energy absorbed than using a simple average of $\sigma_\mathrm{x}$. The constant $\sigma_m$ is chosen so that the exponent is of order unity over most of the integral, but in principle a different value could be used. A similar averaging was used by @MacLim10 to improve photon (and hence energy) conservation in photo-ionization calculations. We stress that computational requirements force us to minimize the number of bins, $N_E$, and so it is always the case that $\sigma_\mathrm{x}$ changes significantly within the energy bin because of its strong scaling with energy. There is no way to avoid some level of inaccuracy when choosing $\langle\sigma_i\rangle$ without making assumptions about the shape of the X-ray spectrum.

### One-dimensional radiative transfer

For uni-directional flux the equation of radiative transfer is very simple, having a source at infinity with flux entering the simulation domain, $F_{X,0}$, and only absorption everywhere else (i.e. scatterings are not considered). For an energy bin $i$, the X-ray flux, $F_{X,i}$, at a point $x$ is simply $$F_{X,i}(x) = F_{X,0} \exp\left\{-\tau_{i}(x)\right\} \;,$$ where $\tau_{i}(x)\equiv \int_{-\infty}^{x} n_\mathrm{H}(x^\prime) \langle\sigma_i\rangle dx^\prime$ is the optical depth along the ray to point $x$, and $n_\mathrm{H}$ is the local number density of H nuclei.

### Three-dimensional radiative transfer

In the 3D <span style="font-variant:small-caps;">flash</span> simulations we use the <span style="font-variant:small-caps;">TreeRay/Optical depth</span> algorithm [@WunWalDin18], which is similar to the [Treecol]{} method developed by @Clark2012.
The <span style="font-variant:small-caps;">TreeRay/Optical depth</span> algorithm computes the mean column density of any given species in every time step and for each cell of the computational domain using a <span style="font-variant:small-caps;">Healpix</span> tessellation [@Gorski2005] with $N_{\rm pix}$ pixels for each grid cell, using an Oct-tree method. We modified the tree solver such that it can be used to calculate the X-ray optical depth between each grid cell and the boundary of the computational domain. As a result, we obtain the columns and fluxes for every grid cell. Here we use $N_{\rm pix} = 48$ and a geometric opening angle criterion [@Barnes1986] with an opening angle of $\theta_{\rm lim} = 0.5$. We consider that the simulation domain is embedded in a uniform and isotropic external X-ray radiation field with mean intensity $J_\nu$, where $\nu$ is frequency. For an isotropic 3D radiation field the intensity, $I_\nu$, is equal to $J_\nu$, and so all rays entering the simulation domain satisfy this equality. For an X-ray energy bin, denoted $i$, the external mean intensity can be denoted $J_{0,i}$, and the fluxes $F_{0,i}\equiv4\pi J_{0,i}$ are input parameters to our calculations. The intensity along a ray, labelled $n$, from the edge of the simulation domain to a grid cell located at $\mathbf{r}$ can be obtained by solving the equation of radiative transfer with zero emissivity, as in the 1D case above: $$I_{X,i}^n(\mathbf{r}) = J_{0,i} \exp \left( -\tau_{i}^n \right) \;,$$ where $\tau_{i}^n\equiv \int n_\mathrm{H}(\mathbf{r}^\prime) \langle\sigma_i\rangle d\mathbf{r}^\prime$ is now the optical depth along the ray. 
For a given number of rays, $N$, uniformly covering $4\pi$ steradians, the mean intensity at $r$ is simply the average value of $I_{X,i}^n(\mathbf{r})$: $$J_{X,i}(\mathbf{r}) = \frac{1}{N}\sum_{n=1}^N I_{X,i}^n(\mathbf{r}) = \frac{J_{0,i}}{N} \sum_{n=1}^N \exp \left( -\tau_{i}^n \right)$$ The local attenuated flux at $\mathbf{r}$ is then $$F_{X,i}(\mathbf{r})\equiv 4\pi J_{X,i}(\mathbf{r}) = \frac{F_{0,i}}{N} \sum_{n=1}^N \exp \left( -\tau_{i}^n \right) \;. \label{eqn:xrayflux}$$ From this we can calculate a local rate of X-ray energy absorption, $H_\mathrm{x}$ (ergs$^{-1}$) per H nucleus using $$\label{EQ_HX} H_\mathrm{x} = \sum_{i=1}^{N_E} F_{X,i} \langle\sigma_i\rangle \;,$$ where the sum is over all energy bins. We use isolated boundary conditions for the <span style="font-variant:small-caps;">Optical depth</span> module, which means that the simulation domain is bathed in a uniform and isotropic (but potentially time-varying) X-ray radiation field. The X-ray optical depths are calculated between the target cell and the boundary of the simulation domain, so that every cell contributes to attenuating the radiation field seen at a given point. Such a setup is not always appropriate for X-ray radiation fields, which are often dominated by point sources [e.g. @Ponti2015], but it is an improvement on a 1D slab (see section \[sec:ms05\]) because it allows us to consider a more realistic density field. We also run our calculations in the limit of infinite speed of light. The column densities of total gas, CO and H$_2$ are necessary to compute the (self-) shielding of gas from the ISRF, whereas the X-ray attenuation factors, $\exp (-\tau_{i}^n)$, depend only on the total gas column density. We therefore calculate the attenuated X-ray flux for each of the $N_E$ X-ray energy bins arriving at every cell using Eq. \[eqn:xrayflux\] and use it as an input for the chemical network. 
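The per-cell bookkeeping of Eqs. \[eqn:xrayflux\] and \[EQ\_HX\] reduces to a few lines; a minimal sketch (function names are ours, not from the code):

```python
import math

def attenuated_flux(f0_i, taus):
    """Local flux in bin i (Eq. xrayflux): F_0,i / N * sum_n exp(-tau_i^n),
    given the optical depths tau_i^n along the N rays to this cell."""
    return f0_i * sum(math.exp(-t) for t in taus) / len(taus)

def absorption_rate(fluxes, mean_sigmas):
    """X-ray energy absorption rate per H nucleus (Eq. EQ_HX):
    H_x = sum_i F_X,i * <sigma_i>  [erg s^-1]."""
    return sum(f * s for f, s in zip(fluxes, mean_sigmas))
```

In the unattenuated limit (all $\tau_i^n=0$) the local flux equals the external flux, as expected.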
The radiative transfer is completed before the chemistry update in <span style="font-variant:small-caps;">flash</span>, and so we need to store the attenuation factors $$\frac{1}{N}\sum_{n=1}^N \exp (-\tau_{i}^n) \label{eqn:attenuation}$$ for each X-ray energy bin, $i$, at every grid cell. This is accomplished by adding $N_E$ scalar fields to the grid. Within the chemistry network, the local X-ray absorption rate is calculated using Eq. \[EQ\_HX\].

Chemical Network {#sec:chem}
----------------

We use a chemical network based largely on the NL99 network of @GloCla12, which combines a model for hydrogen chemistry taken from @GloMac07a [@GloMac07b] and a model for CO chemistry introduced by @NelLan99. We also include a number of modifications and updated reaction rates as suggested by more recent work [e.g. @GonOstWol17]. The X-ray reactions and rates are taken largely from @Yan97 and @MeiSpa05 [hereafter MS05]. The number fraction of species $\mathrm{Q}$ with respect to the total number of hydrogen nuclei is denoted $y(\mathrm{Q})$, and $Y_\mathrm{R}$ is the fractional abundance by number of nuclei of *element* $\mathrm{R}$ with respect to hydrogen. For example, $y(\mathrm{H}_2)\in[0,0.5]$ because $Y_\mathrm{H}\equiv1$, and $y(\mathrm{CO})\in[0,\mathrm{min}(Y_\mathrm{C},Y_\mathrm{O})]$. Note in particular that the electron fraction, $y(\mathrm{e^-})$, can be larger than unity with this definition.

  Species           Treatment
  ----------------- --------------------------
  H                 conservation eqn.
  H$^+$             ODE solve
  H$_2$             ODE solve
  OH$_\mathrm{x}$   ODE solve
  C                 conservation eqn.
  C$^+$             ODE solve
  CO                ODE solve
  CH$_\mathrm{x}$   ODE solve
  HCO$^+$           ODE solve
  He                conservation eqn.
  He$^+$            ODE solve
  M                 conservation eqn.
  M$^+$             ODE solve
  O                 equilibrium
  O$^+$             equilibrium
  H$_2^+$           instantly reacts further
  H$_3^+$           equilibrium
  e$^-$             conservation eqn.

  : Species calculated in our chemical network.[]{data-label="tab:species"}

The chemical species that we solve for are listed in Table \[tab:species\].
The non-equilibrium species solved for are H$_2$, H$^+$, CO, C$^+$, CH$_\mathrm{x}$, OH$_\mathrm{x}$, HCO$^+$, He$^+$, and M$^+$. Following @NelLan99, CH$_\mathrm{x}$ is a proxy species for simple hydrocarbons CH, CH$_2$, CH$_3$, etc., and similarly OH$_\mathrm{x}$ for OH, H$_2$O, etc. Intermediate molecular ions CH$^+$, CH$_2^+$, OH$^+$, etc., are also included in CH$_\mathrm{x}$ and OH$_\mathrm{x}$, as appropriate, as well as the neutral species. We assume that each CH$_\mathrm{x}$ and OH$_\mathrm{x}$ molecule only contains one H atom for accounting purposes, but this makes no difference because the abundance of the species is very low compared to hydrogen. M is a proxy element for metals (e.g. N, Mg, Si, S, Fe) that can be the primary source of electrons in molecular gas at large column density. We assume that M is a two-ionization-stage atom, tracking M$^+$ as a species, and neutral M with a conservation equation. The abundances of the neutral atomic species H, He, and C are also computed using conservation equations, and we assume that the abundance of doubly (and more highly) ionized species is negligible. Oxygen is also treated as a two-ionization-stage atom, and its ionization fraction is assumed to be the equilibrium value (after accounting for the fraction of O that is in OH$_\mathrm{x}$ and CO) because of the rapid charge exchange reactions with H and H$^+$ [@StaSchKim99]. The equilibrium abundance of H$_3^+$ is calculated from the local chemical abundances and temperature, and used in the network following @NelLan99. In total there are 9 species in the network that are solved by the ODE solver [@Brown1989], 5 species tracked by conservation equations (H, He, C, M, e$^-$) and 4 species (O, O$^+$, H$_2^+$, H$_3^+$) tracked by assuming equilibrium abundances or instantaneous further reaction. All of these contribute to gas heating and cooling.
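The conservation-equation bookkeeping can be illustrated as follows (a sketch with our own function names; the full network also budgets the trace hydrogen and oxygen held in CH$_\mathrm{x}$, OH$_\mathrm{x}$ and HCO$^+$, which we neglect here for clarity):

```python
def neutral_h(y_h2, y_hp):
    """Atomic H from conservation of H nuclei: Y_H = 1 = y(H) + 2 y(H2) + y(H+)
    (trace H bound in CHx, OHx, HCO+ neglected in this sketch)."""
    return 1.0 - 2.0 * y_h2 - y_hp

def electron_abundance(y_hp, y_hep, y_cp, y_mp, y_hcop=0.0, y_op=0.0):
    """Free electrons from charge conservation over the singly ionized species."""
    return y_hp + y_hep + y_cp + y_mp + y_hcop + y_op
```

Note that in fully ionized gas `electron_abundance` exceeds unity (e.g. $y(\mathrm{H^+})=1$, $y(\mathrm{He^+})=0.1$ gives $y(\mathrm{e^-})>1$), illustrating the remark above about the normalization to hydrogen nuclei.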
  Species   $Y_\mathrm{R}$
  --------- --------------------
  H         1.0
  He        0.1
  C         $1.4\times10^{-4}$
  O         $3.4\times10^{-4}$
  M         $1.0\times10^{-5}$

  : Elemental abundances in the gas phase by number with respect to hydrogen nuclei, $Y_\mathrm{R}$.[]{data-label="tab:elements"}

The elemental abundances are listed in Table \[tab:elements\]. The metal abundance can be set somewhat arbitrarily because it covers a number of different elements, although we take reaction rates appropriate for silicon throughout the paper. @MalHolTie96 considered Si, Fe, S, and Ni, with the most abundant being Si ($3.5\times10^{-6}$) and S ($1.0\times10^{-5}$). @NelLan99 used a rather low value of $Y_\mathrm{M}=2\times10^{-7}$, whereas @BisPapVit15 use $Y_\mathrm{M}=4\times10^{-5}$ as the sum of all relevant gas-phase metal abundances, and @GonOstWol17 used Si as a proxy for all metals with $Y_\mathrm{Si}=1.7\times10^{-6}$. The metal abundance is important at high column densities because it determines the electron fraction once C$^+$ has recombined. The collisional reactions are listed in Table \[tab:reactions\], and photo-reactions in Table \[tab:photoreactions\] in Appendix \[app:network\]. An analysis of the differences between results with and without the @GonOstWol17 additional reactions is also presented in App. \[app:network\]. A noteworthy addition is that we follow @GonOstWol17 in including grain recombination reactions for C$^+$, He$^+$, M$^+$, as well as H$^+$. In addition, in view of the potential importance of He$^{+}$ ions in the CO chemistry of X-ray-irradiated gas, it is worthwhile highlighting the difference in our treatment of He$^{+}$ recombination.
@GonOstWol17 use the case B radiative recombination rate from @HumSto98, while we attempt to account for the fact that in gas which is optically thick to ionizing photons, the actual radiative recombination rate lies between the case A and case B rates owing to absorption of helium recombination photons by atomic hydrogen [@Ost89]. In addition, we also account for dielectronic recombination of He$^{+}$, a process neglected by @GonOstWol17. At low temperatures, this process is unimportant, but in hot gas ($T \sim 10^{5}$K), it comes to dominate the total He$^{+}$ recombination rate.

### H$_2^+$ and H$_3^+$ abundance and reactions

There are four formation channels for H$_2^+$: cosmic ray ionization of H$_2$ (\#56 in Table \[tab:photoreactions\]), charge exchange between He$^+$ and H$_2$ (\#27 in Table \[tab:reactions\]), charge exchange between H$_2$ and H$^+$ (\#18 in Table \[tab:reactions\]) and X-ray ionization of H$_2$ (\#66 in Table \[tab:photoreactions\]). H$_2^+$ is considered to react immediately once it is formed and, following the discussion in MS05, it has three further reaction pathways: 1. dissociative recombination with an electron to 2H plus 10.9eV of heat (\#43 in Table \[tab:reactions\]); 2. charge exchange with H to produce H$_2$ and H$^+$ and 0.94eV of heating (\#17 in Table \[tab:reactions\]); and 3. further reaction with H$_2$ to produce H$_3^+$ and H (with subsequent recombination or reaction with other species), with net heating of 8.6eV per H$_3^+$ ion production (\#32 in Table \[tab:reactions\]). The creation rate of these products is given by the H$_2^+$ formation rate multiplied by the fraction of the H$_2^+$ ions that follow each pathway. H$_2^+$ can also be photodissociated by the interstellar radiation field, but this process is competitive with processes (ii) and (iii) above only when $n / G_{0} < 1$ [@Glo03].
Since $n / G_{0} \gg 1$ in typical molecular cloud conditions, we are justified in neglecting this process in the models presented in this paper. We assume that H$_3^+$ has its equilibrium abundance at all times. Its only significant creation channel[^2] is through H$_2^+$ (\#32 in Table \[tab:reactions\]), and it is destroyed by: 1. reaction with C to form CH$_\mathrm{x}$ (\#21 in Table \[tab:reactions\]); 2. reaction with O to form OH$_\mathrm{x}$ (\#22 in Table \[tab:reactions\]), and further with an electron to produce O + 3H (\#23 in Table \[tab:reactions\]); 3. reaction with CO to form HCO$^+$ and H$_2$ (\#24 in Table \[tab:reactions\]); 4. dissociative recombination with an electron (\#20 in Table \[tab:reactions\]); and 5. charge exchange with M to form H$_2$ + H + M$^+$ (\#19 in Table \[tab:reactions\]). The equilibrium abundance is obtained by balancing the creation rate with the destruction rates listed.

X-ray heating, ionization and dissociation {#sec:xraychem}
------------------------------------------

X-rays are absorbed by dust and gas, affecting both components through the following processes, most of which we include. They are described in more detail below: 1. Dust heating, following @Yan97. 2. Dust destruction and charging by X-rays. 3. Direct ionization of an atom/molecule by X-rays. This is generally only important for elements that have a K-shell, because these elements have much larger direct ionization cross-sections than lighter elements. For H, H$_2$, and He it is negligible [e.g. @DalYanLiu99]. 4. Secondary ionization of atoms/molecules through collisions with the fast (keV) electrons that are produced by a direct X-ray ionization. This is the main ionization channel for H, H$_2$, and He. 5. Secondary ionization/dissociation of atoms/molecules through FUV radiation that is locally generated by H$_2$ molecules, which are collisionally excited by fast electrons.
This provides important photodissociation channels for molecules (except H$_2$) and photoionization channels for atomic species with low ionization energy (e.g. C). 6. Coulomb heating of the gas arising from energy exchange between the fast electrons and other charged particles in the gas. 7. Heating through dissociation of molecules and ionization of atoms (these rates are typically already in the chemical model, and the X-rays only increase the heating rate). For the dust we consider only heating (i), ignoring ionization and dust destruction (ii). This is reasonable for the molecular clouds that we consider, but would not be suitable for strongly irradiated, hot gas. We also do not consider direct ionization/dissociation by X-rays (iii), but only secondary ionizations through collisional (iv) and FUV (v) processes. All of the other processes are included as described below.

### Dust heating

The dust temperature, $T_\mathrm{D}$, in an X-ray irradiated gas is calculated following @Yan97 and MS05 as $$T_\mathrm{D} = 1.5\times10^{2}\left(\frac{H_\mathrm{x}}{10^{-18}\,\mathrm{erg\,s}^{-1}}\right)^{0.2} \;\mathrm{K}.$$ We take the maximum of this temperature and the radiative equilibrium temperature resulting from FUV irradiation [which is calculated following @GloCla12]. There is evidence for dust temperatures between 125-150K in the circumnuclear disk of the Galactic Centre via detection of the J$=4-3$, v$_2=1$ vibrationally excited transition of HCN, which @Mills2013 argue is excited by local IR radiation from hot dust grains. In our 3D simulations described later the dust temperature ranges from 10 to 70K.

### Coulomb heating

Secondary electrons are produced when an X-ray photon is absorbed by a heavy element, resulting in ionization and the ejection of an electron with kinetic energy comparable to the photon energy.
The absorbed X-ray power per H nucleus, $H_\mathrm{x}$ (ergs$^{-1}$), is transferred to these hot electrons, and subsequently goes partly into heating the gas and partly into ionizations [@DalYanLiu99]. The fraction that goes into heating is determined in part by the electron abundance in the gas, because the heating arises from energy exchange through Coulomb interactions between the hot electron and the thermal electrons (and, to a lesser extent, thermal ions). For small electron fractions, most of the X-ray energy goes into ionizations, but the heating fraction increases towards unity as the electron fraction increases [@DalYanLiu99]. The heating fraction is also dependent on the energy of the hot electron (and hence the energy of the X-ray photon), because higher-energy electrons are much more likely to cause ionizations in a collisional interaction than lower-energy electrons. We follow MS05 in implementing the results of @Yan97 and @DalYanLiu99 to model these processes. The Coulomb heating rate by secondary electrons is obtained from the tables of @DalYanLiu99 using the local abundances of electrons, H, and H$_2$. The local heating rate, $\Gamma_\mathrm{x}$ (ergcm$^{-3}$s$^{-1}$) is given by $$\Gamma_\mathrm{x} = \eta n_\mathrm{H} H_\mathrm{x} \;,$$ where $\eta$ is a heating efficiency obtained from tables in @DalYanLiu99. The efficiency depends on $y(\mathrm{e^-})$, $y(\mathrm{H})$, $y(\mathrm{H}_2)$ and $y(\mathrm{He})$. Coulomb heating becomes more efficient as $y(\mathrm{e^-})$ increases, and the fit of @DalYanLiu99 becomes invalid for $y(\mathrm{e^-})>0.1$. 
We therefore assume that, for $y(\mathrm{e^-})>0.1$, the fraction of absorbed X-ray energy that goes to Coulomb heating, $\eta$, scales linearly with the electron fraction, starting from the @DalYanLiu99 value at $y(\mathrm{e^-})=0.1$ and reaching 100 per cent for $y(\mathrm{e^-})\geq1$, i.e., $$\eta[y(\mathrm{e^-})] = \eta(0.1) + \frac{1-\eta(0.1)}{0.9}\left(\min[1,y(\mathrm{e^-})]-0.1\right) \;,$$ where the minimum operator ensures $\eta \leq1$ even when $y(\mathrm{e^-})>1$. This interpolation is important for ensuring that the ODE solver converges in highly ionized gas.

### Secondary collisional ionization

The hot electrons ionize and dissociate, as well as heat, the gas. H is ionized with rate $\zeta(\mathrm{H})$ per H atom per second, and He with rate $\zeta(\mathrm{He})$ per He atom per second. Molecular hydrogen, H$_2$, is dissociated (with rate $\zeta_\mathrm{D}(\mathrm{H}_2)$ per H$_2$ molecule per second) or ionized to H$_2^+$ (with rate $\zeta(\mathrm{H}_2)$ per H$_2$ molecule per second). These collisional ionization and dissociation rates by secondary electrons are calculated by interpolating the tables of @DalYanLiu99 for $y(\mathrm{e^-})\leq0.1$. As for the heating rates above, for $y(\mathrm{e^-})>0.1$ we take the @DalYanLiu99 rates at $y(\mathrm{e^-})=0.1$ and make them proportional to the abundance of the neutral species being ionized (or dissociated) so that the rate has the correct limit as full ionization is approached, e.g. $$\zeta(\mathrm{H})y(\mathrm{H}) = \frac{H_\mathrm{x}}{W_{\mathrm{H}}(y(\mathrm{e^-})=0.1)}y(\mathrm{H})^{1 - (0.1/y(\mathrm{e^-}))^3} \;.$$ Here $W_{\mathrm{H}}$ is the mean energy per H ionization from @DalYanLiu99. This is an ad-hoc extrapolation of the @DalYanLiu99 tables but is not important for the results presented in this work because we are not studying highly ionized plasmas. It does, however, ensure that the ODE solver converges for all values of $y(\mathrm{e^-})$.
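The extrapolated secondary-ionization rate above can be coded directly (a minimal sketch; `w_h_01` stands in for the tabulated @DalYanLiu99 lookup at $y(\mathrm{e^-})=0.1$, and the function name is ours):

```python
def zeta_h_times_yh(h_x, w_h_01, y_h, y_e):
    """Returns zeta(H) * y(H) [s^-1 per H nucleus] for y(e-) > 0.1:
    zeta(H) y(H) = H_x / W_H(0.1) * y(H)^(1 - (0.1/y_e)^3),
    where h_x is the absorbed X-ray power per H nucleus [erg s^-1]
    and w_h_01 the mean energy per H ionization at y(e-) = 0.1 [erg]."""
    return h_x / w_h_01 * y_h ** (1.0 - (0.1 / y_e) ** 3)
```

As the text notes, the exponent drives the rate to zero together with $y(\mathrm{H})$ as full ionization is approached, while at $y(\mathrm{e^-})=0.1$ it reduces to the tabulated rate.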
The rates for reactions \#62, \#63, \#66 and \#67 from Table \[tab:photoreactions\] are calculated using this formula and the tables from @DalYanLiu99. C is ionized by secondary electrons, with a rate 3.92 times that of H according to appendix D3.2 of MS05. We generalise their equation to the following: $$\zeta(\mathrm{C})y(\mathrm{C}) = \frac{\zeta(\mathrm{H})y(\mathrm{H})+\zeta(\mathrm{H}_2)y(\mathrm{H}_2)}{y(\mathrm{H})+y(\mathrm{H}_2)}3.92 y(\mathrm{C}) \;.$$ This has the correct limiting values when H is fully atomic and fully molecular, and is the equation used for reactions \#64, \#65, \#68-71 in Table \[tab:photoreactions\]. Similarly, CO, CH$_\mathrm{x}$, OH$_\mathrm{x}$, and HCO$^+$ can be collisionally ionized and destroyed by secondary electrons. For CO, CH$_\mathrm{x}$, and HCO$^+$ we use the same scaling factor as for C (3.92), whereas for OH$_\mathrm{x}$ we use a scaling factor of 2.97 appropriate for oxygen (MS05). For M, we use the same scaling factor as for silicon, 6.67. For simplicity we assume that ionization of all the carbon-bearing molecules produces C$^+$, OH$_\mathrm{x}$ produces O and H$^+$, and HCO$^+$ produces C$^+$, H$^+$, and O. Ionization of M produces M$^+$. These factors of 3.92 for C, 2.97 for O, and 6.67 for Si were obtained by integrating the cross sections over the range 0.1$-$10keV to obtain an average value (see MS05), whereas in reality they should vary as a function of energy bin. In all of our calculations, however, these reactions are negligible compared with dissociation by the locally generated FUV field and so such an approximate treatment is acceptable. For future work that consistently includes the transition to highly ionized plasmas one would need to improve this aspect of our chemical model [cf. @DeAMigBre12], ideally considering the energy-dependent cross-section of each ion.
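The generalised heavy-element scaling applies the same algebra for every species, so it can be written once (a sketch; the function name is ours, and the scale factors are those quoted above):

```python
def zeta_heavy_times_y(zeta_h, y_h, zeta_h2, y_h2, y_x, scale):
    """Secondary-electron ionization rate of a heavy species X,
    returned as zeta(X) * y(X) [s^-1 per H nucleus].
    scale = 3.92 (C, CO, CHx, HCO+), 2.97 (OHx/O), 6.67 (M/Si);
    the H/H2 weighting interpolates between the fully atomic and
    fully molecular limits (MS05, appendix D3.2)."""
    return (zeta_h * y_h + zeta_h2 * y_h2) / (y_h + y_h2) * scale * y_x
```

In the fully atomic limit this reduces to $3.92\,\zeta(\mathrm{H})\,y(\mathrm{C})$, and in the fully molecular limit to $3.92\,\zeta(\mathrm{H_2})\,y(\mathrm{C})$, as stated in the text.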
### Secondary ionization by locally generated FUV radiation

A local FUV radiation field is generated by collisional excitation of H$_2$ and H by hot electrons [@PraTar83; @GreLepDal87; @MalHolTie96]. In our network, this contributes to the ionization of C and M [rates from @MalHolTie96; @Yan97], and to the dissociation of CH$_\mathrm{x}$, OH$_\mathrm{x}$, HCO$^+$, and CO [@Yan97]. The @GreLepDal87 rate for CO destruction per second is fitted with $$R^\mathrm{FUV}_\mathrm{CO}y(\mathrm{CO}) = 2.7 \sqrt{y(\mathrm{CO})\frac{T}{10^3\,\mathrm{K}}} \zeta(\mathrm{H_2}) y(\mathrm{H_2}) \;,$$ and this is often used [e.g. @MalHolTie96 MS05]. This does not scale linearly with $y(\mathrm{CO})$ as $y(\mathrm{CO})\rightarrow0$, which causes numerical problems for the ODE solver (the destruction timescale goes to zero as $y(\mathrm{CO})\rightarrow0$). The physical reason for this scaling is that the process is photon limited: photons are produced at a rate that depends on $\zeta(\mathrm{H_2})$ and $n(\mathrm{H_2})$, and are then primarily absorbed by CO. We instead use the UMIST12 [@McEWalMar12] rate for reaction \#74 in Table \[tab:photoreactions\] because it has a more numerically stable asymptotic behaviour, although it may be less accurate for $T>50$K than the @MalHolTie96 rate (T. Millar, private communication), and it probably underestimates the rate at which CO is destroyed as the CO abundance goes to zero: $$R^\mathrm{FUV}_\mathrm{CO}y(\mathrm{CO}) =210.0 \left(\frac{T}{300\,\mathrm{K}}\right)^{1.17} y(\mathrm{CO}) \zeta(\mathrm{H_2}) y(\mathrm{H_2}) \;.
\label{eqn:crphot}$$ For other species we follow previous authors [@MalHolTie96; @Yan97 MS05] using the following functional form for reactions \#72, \#73, \#75 and \#76 in Table \[tab:photoreactions\]: $$R^\mathrm{FUV}_\mathrm{x}y(\mathrm{x}) = \left[p_\mathrm{m} \zeta(\mathrm{H_2}) y(\mathrm{H_2}) + p_\mathrm{a} \zeta(\mathrm{H})y(\mathrm{H})\right] \frac{y(\mathrm{x})}{1-w} \;, \label{eqn:xrfuv}$$ where $p_\mathrm{m}$ relates to the cross-section of species $x$ for dissociation/ionization by Lyman-Werner photons, and $p_\mathrm{a}$ by Lyman-$\alpha$ photons. The values of $p_\mathrm{m}$ and $p_\mathrm{a}$ used are given in Table \[tab:crphot\]. The grain albedo, $w$, is taken to be 0.5 for all energies [@MalHolTie96; @PanCabPin12]. Cosmic rays also produce secondary electrons and a local FUV field in the same way, and so reactions \#59, \#60 and \#61 have the same form. @HeaBosVan17 have recently calculated updated rate coefficients for $p_\mathrm{m}$ (their table 20). Their new values are similar to what we use here. In particular their updated value for C is 520 (scaled to our normalisation) compared with our value of 510. This and the CO rate (Equation \[eqn:crphot\]), for which @HeaBosVan17 refer to [@GreLepDal87], are the key ones for our work. For the others, the rate for M is so large that it remains ionized to the largest column densities considered, and our treatment of CH$_\mathrm{x}$ and OH$_\mathrm{x}$ is very approximate and so a factor of $\sim2$ difference in $p_\mathrm{m}$ does not impact on our results.

  Species           $p_\mathrm{m}$   $p_\mathrm{a}$   Reference
  ----------------- ---------------- ---------------- -----------
  C                 510              0                1,5
  M                 4230             10500            2,4
  OH$_\mathrm{x}$   508              87.6             6,3
  CH$_\mathrm{x}$   730              35               6,4

  : Constants for destruction of species by FUV radiation generated by hot electrons exciting molecular ($p_\mathrm{m}$) and atomic ($p_\mathrm{a}$) hydrogen (see Eq. \[eqn:xrfuv\]).
Values for OH and CH are used for OH$_\mathrm{x}$ and CH$_\mathrm{x}$, respectively, and values for Si are used for M. Values for $p_\mathrm{a}$ are already multiplied by $\epsilon_\mathrm{L}=0.1$, following @LepDal96. Most $p_\mathrm{m}$ values are taken from the UMIST12 database [@McEWalMar12] and are multiplied by 2 because they are relative to a cosmic-ray/X-ray ionization rate per H$_2$ molecule, whereas we use an ionization rate per H nucleus. The $p_\mathrm{m}$ value for M is attributed to Rawlings (1992, private communication) in @McEWalMar12. In the fourth column, the first reference is for $p_\mathrm{m}$ and the second for $p_\mathrm{a}$. References: 1 @GreLepDal87; 2 @McEWalMar12; 3 @LepDal96; 4 @Yan97; 5 @MalHolTie96; 6 @GreLepDal89. []{data-label="tab:crphot"}

Time-dependent solution in the FLASH code {#sec:timeint}
-----------------------------------------

Chemistry and cooling are operator-split from the other parts of the <span style="font-variant:small-caps;">flash</span> code [@Fryxell2000], which compute e.g. the magneto-hydrodynamic evolution of the gas or the gas self-gravity. As in @WalGirNaa15, the chemistry and gas temperature are integrated simultaneously using the ODE solver <span style="font-variant:small-caps;">dvode</span> [@Brown1989]. We employ sub-timestepping if the chemical abundances or the internal energy are about to change significantly in a given cell. This ensures that the reaction and cooling rates are accurate even if the gas temperature changes by a large factor over a single timestep. The heating and cooling processes considered and a table of references for their implementation are given in Appendix \[sec:cooling\].
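The sub-timestepping logic can be sketched as follows (schematic only: the production code hands the stiff system to <span style="font-variant:small-caps;">dvode</span> with adaptive error control, whereas this stand-in uses an explicit Euler step and a simple fractional-change criterion):

```python
def integrate_cell(state, dt, rhs, max_frac=0.1):
    """Advance one cell's abundances/energy over dt with sub-timestepping:
    shrink the step whenever any quantity would change by more than
    max_frac of its current value in a single update."""
    t = 0.0
    while t < dt:
        derivs = rhs(state)
        h = dt - t  # largest remaining step
        for y, dy in zip(state, derivs):
            if dy != 0.0:
                h = min(h, max_frac * max(abs(y), 1e-10) / abs(dy))
        state = [y + h * dy for y, dy in zip(state, derivs)]
        t += h
    return state
```

This keeps reaction and cooling rates accurate even when a quantity changes by a large factor within one hydrodynamic timestep, at the cost of many small chemistry sub-steps.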
The inputs to the ODE solver and the chemical network are the total column density $N_\mathrm{H}$, the column densities of CO and H$_2$, the attenuated ISRF, the attenuated X-ray flux in each energy bin (see section \[sec:xrays\]), the gas density, internal energy and the chemical state at the beginning of a timestep. The ODE solver integrates the equations and returns the updated internal energy and chemical state at the end of each timestep. Therefore, chemistry and thermodynamics are mostly time-dependent, giving us an advantage over previous XDR calculations because we can study what happens when the X-ray radiation field varies on timescales shorter than the chemical or thermal timescale in full 3D geometry. There are some caveats to this statement: we do use a chemical network in which we assume (i) that the O/O$^+$ ratio has reached its equilibrium value based on the H$^+$ fraction; (ii) that H$_2^+$ reacts instantly to produce further products; (iii) that H$_3^+$ has its equilibrium abundance; and (iv) that the locally-generated UV radiation field is produced instantly by hot electrons in the molecular cloud. The first three approximations are made because these reactions are usually faster than others which are calculated in a fully time-dependent way. Regarding the fourth assumption, we note that the timescale on which the local UV field builds up is of the order of the stopping time of the hot photoelectrons (i.e. the time it takes for them to lose the bulk of their kinetic energy). At typical molecular cloud densities this is $\ll 1$ yr [@DalYanLiu99], much shorter than the timescales of interest in Section \[sec:flare\] and, therefore, for our purposes the approximation that the UV field appears instantly is reasonable.
Test problems {#sec:tests}
=============

![ UV flux (blue) and X-ray flux from Table \[tab:MS05bins\] for the 4 test problems considered by MS05 in section \[sec:ms05\] (left $y$-axis) and X-ray absorption cross-section (right $y$-axis). $E$ is the energy in keV and $F_E$ is the energy flux in units ergcm$^{-2}$s$^{-1}$keV$^{-1}$. For the cross-section, the continuous black line plots Eqn. \[eqn:sigma\] from @PanCabPin12, and the dashed black line the discrete cross-section used for each of the 10 energy bins. For these tests the UV flux is scaled to $G_0=10^{-6}$ to make it insignificant. \[fig:ms05\_spec\]](fig1.pdf){width="49.00000%"}

*(Figure grid: result panels for models with $F_X=1.6$ ergcm$^{-2}$s$^{-1}$ and $F_X=160$ ergcm$^{-2}$s$^{-1}$.)*

Our chemical network is much smaller than networks used by XDR calculations in the literature that assumed chemical equilibrium [e.g. @MeiSpa05 hereafter MS05]. This means that we have fewer potential coolants in the gas and fewer potential sources of electrons in highly shielded gas, although the inclusion of species M is intended to mimic the effects of a number of metals that are not explicitly incorporated. Furthermore, in some cases we are using different reaction rates and cooling rates from previous authors. These differences may be significant, so it is important to benchmark our results against other codes, and to try to understand any differences that may be present. We begin by considering the test problems studied by MS05, and then run calculations using a large range of densities and X-ray fluxes, to make sure that our model produces sensible results for all ISM conditions.
  Model   $n_\mathrm{H}$ (cm$^{-3}$)   $F_X$ (ergcm$^{-2}$s$^{-1}$)   $E_\mathrm{rad}$ (ergcm$^{-3}$)
  ------- ---------------------------- ------------------------------ ---------------------------------
  1       $10^3$                       1.6                            $5.34\times10^{-11}$
  2       $10^3$                       160                            $5.34\times10^{-9}$
  3       $10^{5.5}$                   1.6                            $5.34\times10^{-11}$
  4       $10^{5.5}$                   160                            $5.34\times10^{-9}$

  : Simulation parameters for the 4 test problems of MS05.[]{data-label="tab:ms05"}

Comparison with MS05 {#sec:ms05}
--------------------

We consider the four calculations by MS05 as test problems for our XDR chemistry module, and follow these authors by referring to them as models 1-4. They are one-dimensional XDR calculations of an infinite slab that is irradiated from one side by X-ray radiation, and follow closely the work of @Yan97. The gas density and X-ray fluxes for models 1-4 are given in Table \[tab:ms05\]. Models 1 and 2 have $n_\mathrm{H}=10^3$cm$^{-3}$ whereas Models 3 and 4 have a density about 300 times larger. Models 1 and 3 have a moderate total X-ray flux of $F_X=1.6$ ergcm$^{-2}$s$^{-1}$, and models 2 and 4 have a flux 100 times larger. MS05 considered an X-ray spectrum of the form $F\propto\exp(-E/10\,\mathrm{keV})$ (a typo in MS05 said 1keV in the exponential instead of 10keV; R. Meijerink, private communication), and they only considered X-rays in the range 1-10keV. We run the calculations with 10 energy bins, logarithmically spaced between 1 and 10 keV, shown in Table \[tab:MS05bins\]. The ISRF is set to $G_0=10^{-6}$, i.e. effectively no UV irradiation. This radiation field is plotted in Fig. \[fig:ms05\_spec\] together with the absorption cross-section used in each of the 10 bins. For consistency with previous work, the radiation field is assumed to be zero from the Lyman limit up to 1 keV. This can be justified because of the large interstellar absorption cross-section at these energies, although the abrupt switch-on of the X-rays at 1 keV is somewhat artificial.
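The per-bin fluxes and energy densities listed in Table \[tab:MS05bins\] follow directly from this spectral shape; a short sketch, assuming only that the energy flux is $F_E\propto\exp(-E/10\,\mathrm{keV})$ normalised to $F_X=1.6$ ergcm$^{-2}$s$^{-1}$ and that the energy density in each bin is $F/c$:

```python
import math

# Reproduce the per-bin fluxes of Table [tab:MS05bins]: energy flux
# F_E ∝ exp(-E / 10 keV), 10 logarithmically spaced bins from 1 to
# 10 keV, normalised so the total is F_X = 1.6 erg cm^-2 s^-1
# (models 1 and 3). E_rad = F / c is assumed for the energy density.
F_X = 1.6                 # erg cm^-2 s^-1 (total flux, models 1 and 3)
E0 = 10.0                 # keV, e-folding energy of the spectrum
c = 2.998e10              # cm s^-1
edges = [10.0 ** (i / 10.0) for i in range(11)]   # 1.000 ... 10.00 keV

def band_flux(a, b):
    """Integral of exp(-E/E0) dE from a to b (arbitrary normalisation)."""
    return E0 * (math.exp(-a / E0) - math.exp(-b / E0))

norm = F_X / band_flux(edges[0], edges[-1])
F_bins = [norm * band_flux(a, b) for a, b in zip(edges, edges[1:])]
u_bins = [f / c * 1e12 for f in F_bins]   # units of 10^-12 erg cm^-3
```

The computed values round to the tabulated ones, e.g. bin 0 gives 0.069 ergcm$^{-2}$s$^{-1}$ and $2.30\times10^{-12}$ergcm$^{-3}$, and bin 9 gives 0.250 ergcm$^{-2}$s$^{-1}$ and $8.35\times10^{-12}$ergcm$^{-3}$.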
  Bin, $i$   $E_\mathrm{min,i}$   $E_\mathrm{max,i}$   $\langle\sigma_i\rangle$   $F_{X,i}$   $E_{\mathrm{rad}, i}$
  ---------- -------------------- -------------------- -------------------------- ----------- -----------------------
  0          1.000                1.259                $1.686\times10^{-22}$      0.069       $2.30$
  1          1.259                1.585                $9.515\times10^{-23}$      0.084       $2.81$
  2          1.585                1.995                $5.369\times10^{-23}$      0.102       $3.41$
  3          1.995                2.512                $3.030\times10^{-23}$      0.123       $4.10$
  4          2.512                3.162                $1.710\times10^{-23}$      0.146       $4.87$
  5          3.162                3.981                $9.648\times10^{-24}$      0.171       $5.69$
  6          3.981                5.012                $5.444\times10^{-24}$      0.196       $6.54$
  7          5.012                6.310                $3.072\times10^{-24}$      0.220       $7.33$
  8          6.310                7.943                $1.734\times10^{-24}$      0.239       $7.97$
  9          7.943                10.00                $9.782\times10^{-25}$      0.250       $8.35$

  : Energy bins, mean absorption cross section, X-ray flux and energy density in each bin for MS05 test models 1 and 3. Models 2 and 4 are identical except that the flux in each bin is multiplied by 100. Bin energy limits $E_\mathrm{min}$ and $E_\mathrm{max}$ are in keV, mean cross section $\langle\sigma_i\rangle$ in cm$^{2}$, flux $F_X$ is in ergcm$^{-2}$s$^{-1}$ per bin, and energy density $E_\mathrm{rad}$ in units $10^{-12}$ergcm$^{-3}$ per bin. []{data-label="tab:MS05bins"}

We set up a one-dimensional grid with 200 logarithmically spaced grid-zones, without hydrodynamics and with constant gas density, and we set the grid-zones so that column densities from $N_\mathrm{H}=10^{16}$cm$^{-2}$ to $10^{26}$cm$^{-2}$ are calculated. The initial conditions are uniform, with sound speed 10kms$^{-1}$, and partially ionized with $y$(H$^+)=0.5$, $y$(He$^+)=0.05$, $y$(C$^+)=Y_\mathrm{C}$, $y$(M$^+)=Y_\mathrm{M}$, and molecular species set to have abundance $10^{-20}$. The column densities of H, H$_2$, and CO are trivially calculated at each timestep on such a grid, and these are used as an input to the chemistry solver. The chemical and thermodynamic properties are then integrated for each grid point over a timestep.
The initial timestep is $10^5$s, and this is doubled after each step. The MS05 calculations assume chemical and thermal equilibrium, so we integrate our chemical network for $10^9$ years to ensure that equilibrium conditions are obtained in all cases. Models 1 and 2 reach equilibrium in 5-10Myr, and models 3 and 4 take $<1$Myr because of their higher gas density. The results obtained at the end of the integration are shown in Fig. \[fig:ms05\_1234\]. The effects of attenuation are negligible for $N_\mathrm{H} \lesssim 10^{21}$cm$^{-2}$ (which corresponds to a visual extinction $A_V \sim 0.5$), and attenuation is basically complete by $N_\mathrm{H} \gtrsim 10^{25}$cm$^{-2}$; the abundances and temperature tend to constant values in these limits. Models 1 and 2 have a moderate gas density ($n_\mathrm{H}=10^3$cm$^{-3}$) and so weaker gas cooling (per unit volume) than the denser Models 3 and 4. As a result they have higher equilibrium temperatures at all $N_\mathrm{H}$. At low $N_\mathrm{H}$, Model 1 has $T\approx10^3$K, Model 2 has $T\approx10^4$K, Model 3 has $T\approx10^2$K, and Model 4 has $T\approx10^{3.6}$K. All models are characterised by decreasing temperature and electron fraction in the range $N_\mathrm{H} \in [10^{22}, 10^{25}]$cm$^{-2}$. Models 1, 2 and 4 have low molecular fractions at low column density, and increasing abundance with increasing column density. Model 3 is so dense that the moderate X-ray flux cannot destroy the molecules even at low column density, and so it is mostly molecular at all column densities. For all four calculations, the atomic-to-molecular transition happens at $T\sim100$K and when $y(\mathrm{e^-})\lesssim10^{-4}$, although the column density at which this occurs is strongly dependent on gas density and X-ray flux. The C to CO transition occurs at approximately the same column density as the H to H$_2$ transition. Our results can be directly compared with figs. 3 and 4 of MS05.
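The doubling timestep scheme above makes the cost of reaching equilibrium modest; a sketch of the solver-call count, assuming a Julian year of $3.156\times10^7$s and that the integration simply stops once $10^9$yr is exceeded (the exact stopping criterion is ours, not stated in the text):

```python
# Count the ODE-solver calls implied by an initial timestep of 10^5 s
# that doubles after every step, integrating until 10^9 yr is reached.
# (Assumption: the run stops as soon as the target time is exceeded.)
YEAR = 3.156e7                  # s, Julian year
TARGET = 1.0e9 * YEAR           # s
t, dt, steps = 0.0, 1.0e5, 0
while t < TARGET:
    t += dt
    dt *= 2.0
    steps += 1
```

Only 39 solver calls per zone are needed to span $10^9$yr, and the 5-10Myr equilibration time of models 1 and 2 is already covered after about 32 steps.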
Taking each model in turn, we discuss the similarities and differences between our results and those of MS05.

#### Model 1 (Fig. \[fig:ms05\_1234\], top-left panel): {#model-1-fig.figms05_1234-top-left-panel .unnumbered}

At small $N_\mathrm{H}$ we find larger $y(\mathrm{H}_2)$, larger $y(\mathrm{C}^+)$, smaller $T$ and $y(\mathrm{e^-})$ than found by MS05. At large $N_\mathrm{H}$ we cannot see the asymptotic values that the MS05 results will tend to, but the results appear comparable. At intermediate $N_\mathrm{H}$ some changes occur at smaller $N_\mathrm{H}$ in our calculations: the H to H$_2$ transition occurs at $N_\mathrm{H} \sim 10^{23.7}$cm$^{-2}$, at which point $T<100$K, $y(\mathrm{e^-})\sim10^{-4.5}$. These $T$ and $y(\mathrm{e^-})$ values are consistent with MS05, except that they find the transition at $N_\mathrm{H} \sim 10^{24.2}$cm$^{-2}$, about 0.5 dex larger than we do. MS05 also find that $y(\mathrm{C}^+)$ remains large until $N_\mathrm{H} \sim 10^{24.2}$cm$^{-2}$, whereas we find a significant decrease already at $N_\mathrm{H} \sim 10^{23}$cm$^{-2}$. Similarly to H$_2$, we find that the C to CO transition happens at about 0.5 dex smaller $N_\mathrm{H}$ than in MS05. Apart from the offset in $N_\mathrm{H}$ and the qualitative difference in $y(\mathrm{C}^+)$, the results are very comparable.

#### Model 2 (Fig. \[fig:ms05\_1234\], top-right panel): {#model-2-fig.figms05_1234-top-right-panel .unnumbered}

At small $N_\mathrm{H}$ we find very similar results, except that $y(\mathrm{H}_2)$ is smaller than in MS05. The reason for this close agreement is probably that the gas is partially ionized and $T\sim10^4$K, and this convergence of electron fraction and temperature means that most quantities are comparable. At large $N_\mathrm{H}$ we see the same trends as for Model 1, namely that the H to H$_2$ transition happens at smaller $N_\mathrm{H}$ in our calculations, by about 0.3 dex, and the same for the C to CO transition.
Model 2 has a weak discontinuity in $T$ and $y$(H$_2$) at $N_\mathrm{H} \approx 10^{23.5}$cm$^{-2}$, which was not found by MS05. This is one of the more striking features of Fig. \[fig:ms05\_1234\], and also appears in Model 4 at $N_\mathrm{H} \approx 10^{21.2}$cm$^{-2}$. Such discontinuities were also obtained by @Yan97 for gas with sub-solar metallicity, and arise from a chemothermal instability that is associated with a region in $n-T$ space where H$_2$ is the dominant coolant (see also <span style="font-variant:small-caps;">Cloudy</span> results in Section \[sec:cloudy\]). These discontinuities are superficially reminiscent of an ionization front, where the thermal and ionization properties of a medium change very rapidly. In that case, however, the cross-section for absorption of ionizing photons is so large that there is very strong deposition of energy in a thin layer separating neutral from ionized gas. In contrast, the X-ray heating rate as a function of column density is unaffected by the chemothermal instability and remains a smooth function of $N_\mathrm{H}$.

#### Model 3 (Fig. \[fig:ms05\_1234\], bottom-left panel): {#model-3-fig.figms05_1234-bottom-left-panel .unnumbered}

This shows the largest discrepancies between our results and MS05. We find that the gas is mostly molecular at all column densities, and at small $N_\mathrm{H}$ we find that $y(\mathrm{H})\approx0.08$ and $y(\mathrm{C})\approx3\times10^{-5}$, whereas MS05 found that H and H$_2$ should have comparable abundances and that C should be more abundant than CO. They also found a larger electron fraction but comparable temperature. The difference seems to arise from the treatment of C$^+$: MS05 find $y(\mathrm{C}^+)>10^{-5}$ up to $N_\mathrm{H} \approx 10^{24.5}$cm$^{-2}$, whereas we have $y(\mathrm{C}^+)\approx10^{-7}$ at small $N_\mathrm{H}$ and decreasing as $N_\mathrm{H}$ increases.
Consequently MS05 have a significantly larger electron fraction than we do, and this affects the chemical balance.

#### Model 4 (Fig. \[fig:ms05\_1234\], bottom-right panel): {#model-4-fig.figms05_1234-bottom-right-panel .unnumbered}

The asymptotic temperatures at low and high $N_\mathrm{H}$ are similar to MS05, and the run of $T$ with $N_\mathrm{H}$ is also similar, although not identical. As mentioned above, there is a temperature discontinuity at $N_\mathrm{H} \approx 10^{21.2}$cm$^{-2}$, associated with a chemo-thermal instability. This was not found by MS05, and it is the most striking difference between our results and theirs. We again find that the atomic-to-molecular transition occurs at smaller $N_\mathrm{H}$ than MS05 by about 0.5 dex, and the temperature and electron fraction also decrease more rapidly with $N_\mathrm{H}$. For example at $N_\mathrm{H} = 10^{23}$cm$^{-2}$, MS05 find $T\approx10^3$K and $y(\mathrm{e^-})\approx10^{-3}$, whereas we find $T=180$K and $y(\mathrm{e^-})=7\times10^{-5}$.

The broad agreement between our results and those of MS05 is encouraging, but there are systematic differences in the column density of the atomic-to-molecular transition, the abundance of $y(\mathrm{C}^+)$ and the occurrence of temperature discontinuities. This prompted a direct comparison with an XDR code that uses a much larger network, discussed in the next subsection. We also present a study of the effects of the new reactions added to the NL99 network following @GonOstWol17 in Appendix \[app:network\]. Regarding $y(\mathrm{C}^+)$, the appendices show that the addition of new reactions following @GonOstWol17 is driving the discrepancy, particularly the grain recombination reactions, without which we obtain similar C$^+$ abundances to MS05.
[Fig. \[fig:cloudy\] (panels): left column $F_X=1.6$ ergcm$^{-2}$s$^{-1}$, right column $F_X=160$ ergcm$^{-2}$s$^{-1}$.]

[Fig. \[fig:cloudy2\] (panels): left column $F_X=1.6$ ergcm$^{-2}$s$^{-1}$, right column $F_X=160$ ergcm$^{-2}$s$^{-1}$.]

Comparison with CLOUDY {#sec:cloudy}
----------------------

We also ran the same test problems with <span style="font-variant:small-caps;">Cloudy</span> [@FerPorVan13], which has a more detailed treatment of X-ray absorption and ionization processes than our module and also a much larger chemical network. The calculations were performed with version 17.00 of <span style="font-variant:small-caps;">Cloudy</span> as described by @FerPorVan13 and @FerChaGuz17. Note that even for the species that we have in common with the <span style="font-variant:small-caps;">Cloudy</span> network, the reaction and cooling rates used may not be the same. We use the standard [Cloudy]{} mix of silicate and graphitic dust grains with a ratio of total to selective extinction of $R_V=3.1$, which is typical for the ISM in the Milky Way in terms of abundance and size distribution [@MatRumNor77]. Polycyclic aromatic hydrocarbons (PAHs) are not included. As in previous work [@WalGirNaa15] we set the overall dust-to-gas mass ratio to 0.01. We started from the full ISM model for the gas phase abundances and reduced the included heavy elements to the most important ones, i.e. the ones we find to be necessary in order to reproduce our results reasonably well (see Table \[table\_cloudy\], left column). All other elements were switched off but were thoroughly checked to only result in minor changes when included with their standard ISM gas phase abundances from [Cloudy]{}.
We find that magnesium and iron are most important for setting the electron abundance. The abundances of the elements that we do include are shown in Table \[table\_cloudy\], middle column.

  Element     Abundance               Ionization potential
  ----------- ----------------------- ----------------------
  Sodium      $3.16\times 10^{-7}$    5.14 eV
  Magnesium   $1.26 \times 10^{-5}$   7.65 eV
  Iron        $6.31 \times 10^{-7}$   7.9 eV
  Silicon     $3.16 \times 10^{-6}$   8.15 eV
  Sulphur     $3.24 \times 10^{-5}$   10.36 eV
  Carbon      $1.40\times10^{-4}$     11.26 eV
  Oxygen      $3.40 \times 10^{-4}$   13.62 eV
  Nitrogen    $7.94 \times 10^{-5}$   14.53 eV
  Helium      $1.00 \times 10^{-1}$   24.59 eV

  : List of heavy elements included in the [Cloudy]{} models (first column) sorted by their respective ionization potential (last column). The relative abundances with respect to hydrogen are given in the middle column.[]{data-label="table_cloudy"}

In [Cloudy]{} the ISRF is modeled as a blackbody with temperature 30000 K in the energy range of 0.44 to 0.99 Rydberg as suggested by the [Cloudy]{} documentation. The total intensity of the ISRF is scaled to the same value as used in section \[sec:ms05\], i.e. corresponding to $G_0=10^{-6}$. All other initial conditions are also the same as in section \[sec:ms05\]. The results are plotted in Fig. \[fig:cloudy\], in a similar manner to Fig. \[fig:ms05\_1234\]. <span style="font-variant:small-caps;">Cloudy</span> also obtains the chemo-thermal instability for models 2 and 4. It occurs at the same $N_\mathrm{H}$ as we find for model 2, but the jump in $T$ and $y(\mathrm{H}_2)$ is larger. For model 4 <span style="font-variant:small-caps;">Cloudy</span> finds a weaker discontinuity that occurs at larger $N_\mathrm{H}$ than in our calculations. For hydrogen, the atomic-to-molecular transition happens at similar $N_\mathrm{H}$ for <span style="font-variant:small-caps;">Cloudy</span> and our code, and the H$_2$ abundance is comparable in both calculations for all models.
The biggest difference is for model 2, where $y(\mathrm{H}_2)$ increases more rapidly with $N_\mathrm{H}$ in the <span style="font-variant:small-caps;">Cloudy</span> calculation and the H$\rightarrow$H$_2$ transition occurs at smaller $N_\mathrm{H}$ (by $\sim0.5$ dex). This is the opposite of what we found comparing with MS05, where they found the transition at larger $N_\mathrm{H}$ than our results by $\sim0.5$ dex. The gas temperature from our calculations agrees well with <span style="font-variant:small-caps;">Cloudy</span> for models 1 and 2, but for models 3 and 4 <span style="font-variant:small-caps;">Cloudy</span> finds larger temperatures than our module in the range $21.5 \lesssim \log N_\mathrm{H}/\mathrm{cm}^{-2} \lesssim 24.5$. The electron fraction is also larger in this range. The temperature discrepancy is up to 0.5 dex for model 4. The results for carbon-bearing species are plotted in Fig. \[fig:cloudy2\]. <span style="font-variant:small-caps;">Cloudy</span> can include freeze-out of molecules onto grains, which is not in our network, so we switched this off for the comparison. The CO abundance agrees well for all calculations in Figs. \[fig:cloudy\] and \[fig:cloudy2\]. In the <span style="font-variant:small-caps;">Cloudy</span> results, the dip in CO abundance just below $N_\mathrm{H}\approx10^{24}$cm$^{-2}$ in model 1 (slightly larger $N_\mathrm{H}$ in model 2) is because of CS formation at this depth, which is not in our network. In Models 1 and 2 the <span style="font-variant:small-caps;">Cloudy</span> abundance of CO increases more rapidly with $N_\mathrm{H}$ than what we find, but the opposite is true in model 4. The abundances of atomic C and C$^+$ generally agree well between the two networks, but the limiting $y(\mathrm{C})$ at large $N_\mathrm{H}$ is much lower in the <span style="font-variant:small-caps;">Cloudy</span> results for models 3 and 4.
We find generally smooth and monotonic curves for C$^+$, C and CO, with at most a single maximum for $y($C$)$, whereas <span style="font-variant:small-caps;">Cloudy</span> has more pronounced maxima and other features. This is probably due to interaction with other carbon-bearing species that are not included in our network. Notably, the agreement with <span style="font-variant:small-caps;">Cloudy</span> is better than with MS05, suggesting that updated reaction and cooling rates over the past 13 years have a bigger impact on our results than the size of the chemical network. In summary, our results for the H$\rightarrow$H$_2$ and C$^+$$\rightarrow$C$\,\rightarrow$CO transitions agree well with results obtained from <span style="font-variant:small-caps;">Cloudy</span>, with small differences in the exact value of $N_\mathrm{H}$ for each transition. The temperature and electron fractions as a function of $N_\mathrm{H}$ also agree well with some caveats, notably the discrepancy in model 4. Less abundant species (CH$_\mathrm{x}$, OH$_\mathrm{x}$, HCO$^+$) are poorly predicted by our simple reaction network, probably because these are primarily included in the network in order to obtain the correct relative abundances of C$^+$, C, and CO. These trace species are not the focus of this work.

[Fig. \[fig:eres\] (panels): $F_X=10^{-3}$ ergcm$^{-2}$s$^{-1}$ (left) and $F_X=10^5$ ergcm$^{-2}$s$^{-1}$ (right).]

![image](./fig5a.pdf){height="6.5cm"}

Tests of energy resolution {#sec:eres}
--------------------------

We ran a large grid of 1D models with varying ISM density, X-ray flux, and X-ray spectrum. The density varies over $n_\mathrm{H}=0.1$-$10^6$cm$^{-3}$, the flux over $F_X=10^{-5}$-$10^5$ergcm$^{-2}$s$^{-1}$, and we use blackbody spectra with radiation temperatures $E_\mathrm{rad}=0.1$-$10$keV.
This was used to validate the code over a large range of different conditions, find any regions of parameter space where the ODE solver fails to converge, and test how many energy bins are required for different ISM conditions. A sample of results is shown in Fig. \[fig:eres\], for a fixed gas density ($n_\mathrm{H}=10^4$cm$^{-3}$), two different X-ray fluxes and two different radiation temperatures, $E_\mathrm{rad}$ (for a blackbody spectrum). All of these calculations have a UV radiation field of $G_0=1$, which is why the CO abundance is low at low column density. The models of MS05 (Section \[sec:ms05\]) had $G_0=10^{-6}$, and so the gas could be fully molecular at low column density for Model 3. Apart from this, the low-flux calculations in Fig. \[fig:eres\] have many similarities to Model 3. The high-flux calculations are most similar to Model 4, but the flux is significantly higher. In these extreme conditions the energy resolution plays a key role because the cross-section of the softest (hardest) energy bin increases (decreases) as the energy bin gets narrower. For all plotted calculations, using 2 energy bins (0.1-1 and 1-10keV, dotted lines) is a rather crude approximation, and some atomic-to-molecular transitions happen at quite different column densities for $F_X=10^5$ergcm$^{-2}$s$^{-1}$. Using 6 energy bins (dashed lines) is sufficient in all the low-flux calculations, and seems adequate but not ideal for the high-flux calculations. The transitions between different phases (ionized-to-atomic, atomic-to-molecular) happen at column densities differing by up to 0.1dex between 6 and 20 energy bins, whereas the difference can be up to 1dex between 2 and 20 bins. The tradeoff between the number of energy bins and the computational cost (memory and cpu cycles) means that we have to accept some level of error from using discrete energy bins. The worst case found on the grid of calculations was for $E_\mathrm{rad}=10$keV and $F_X=10^5$ergcm$^{-2}$s$^{-1}$, i.e.
gas irradiated very strongly by a hard X-ray field. In this case the location of the atomic-to-molecular transition differed by about 0.1dex between 6 and 20 energy bins. This is because there is a lot of flux in the highest-energy bin for such a hard spectrum, and so its cross-section is key to determining the column density at which X-ray heating becomes ineffective. For the calculations in the next sections we use a thermal spectrum with $T=1$keV, and so this problem is not so severe because there is very little flux in the highest energy bins.

Irradiation of a Fractal Cloud {#sec:fractal}
==============================

![ UV flux (blue) and X-ray flux from Table \[tab:sims\] for the 9 simulations in section \[sec:fractal\] (left $y$-axis) and X-ray absorption cross-section (right $y$-axis). The continuous flux is plotted in all cases, and the discrete flux for simulation F4 using the dashed brown line. $E$ is energy in keV and $F_E$ is energy flux in units ergcm$^{-2}$s$^{-1}$keV$^{-1}$. For the cross-section, the continuous black line plots Eqn. \[eqn:sigma\] from @PanCabPin12, and the dashed black line the discrete cross-section used for each of the six energy bins.
\[fig:spectrum\]](fig6.pdf){width="49.00000%"}

  Simulation   Flux (ergcm$^{-2}$s$^{-1}$)   $E_\mathrm{rad}$ (ergcm$^{-3}$)
  ------------ ----------------------------- ---------------------------------
  F0           $10^{-5}$                     $3.3\times10^{-16}$
  F1           $10^{-4}$                     $3.3\times10^{-15}$
  F2           $10^{-3}$                     $3.3\times10^{-14}$
  F3           $10^{-2}$                     $3.3\times10^{-13}$
  F4           $10^{-1}$                     $3.3\times10^{-12}$
  F5           $10^{0}$                      $3.3\times10^{-11}$
  F6           $10^{1}$                      $3.3\times10^{-10}$
  F7           $10^{2}$                      $3.3\times10^{-9}$
  F8           $10^{3}$                      $3.3\times10^{-8}$

  : X-ray fluxes and energy densities considered in each of the simulations in section \[sec:fractal\].[]{data-label="tab:sims"}

  Bin   $E_{\mathrm{min},i}$ (keV)   $E_{\mathrm{max},i}$ (keV)   $\langle\sigma_i\rangle$ (cm$^{2}$)   $4\pi J_{X,i}$ (erg cm$^{-2}$ s$^{-1}$)   $E_{\mathrm{rad},i}$ (erg cm$^{-3}$)
  ----- ---------------------------- ---------------------------- ------------------------------------- ----------------------------------------- --------------------------------------
  0     0.500                        0.881                        $5.84\times10^{-22}$                  $1.97\times10^{-2}$                       $6.57\times10^{-13}$
  1     0.881                        1.554                        $1.43\times10^{-22}$                  $7.85\times10^{-2}$                       $2.62\times10^{-12}$
  2     1.554                        2.739                        $3.49\times10^{-23}$                  $2.34\times10^{-1}$                       $7.81\times10^{-12}$
  3     2.739                        4.827                        $8.54\times10^{-24}$                  $3.98\times10^{-1}$                       $1.33\times10^{-11}$
  4     4.827                        8.510                        $2.09\times10^{-24}$                  $2.42\times10^{-1}$                       $8.09\times10^{-12}$
  5     8.510                        15.000                       $5.10\times10^{-25}$                  $2.76\times10^{-2}$                       $9.20\times10^{-13}$

  : Energy bins, mean absorption cross section, and unattenuated X-ray flux and energy density in each bin for simulation F5.[]{data-label="tab:bins"}

We added the new chemistry network to the <span style="font-variant:small-caps;">flash</span> code, as discussed in Section \[sec:methods\]; this was implemented in a similar way to how the NL97 network [@NelLan97; @GloCla12] has been used for the SILCC simulations [@WalGirNaa15]. Multiple chemical species are implemented using the <span style="font-variant:small-caps;">flash</span> Multispecies framework, and radiative transfer uses <span style="font-variant:small-caps;">TreeRay</span> [@WunWalDin18].
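The unattenuated bin fluxes listed in Table \[tab:bins\] for simulation F5 can be recovered from the spectral shape alone; the sketch below assumes a $kT=1$keV Planck energy-flux distribution, $F_E\propto E^3/(\exp(E/kT)-1)$, and simple Simpson quadrature (both `planck` and `simpson` are our illustrative helpers):

```python
import math

# Bin-integrated fluxes for a kT = 1 keV blackbody over six
# logarithmically spaced bins between 0.5 and 15 keV
# (Table [tab:bins], simulation F5, total flux 1 erg cm^-2 s^-1).
def planck(E, kT=1.0):
    """Blackbody energy flux density F_E ∝ E^3 / (exp(E/kT) - 1)."""
    return E ** 3 / math.expm1(E / kT)

def simpson(f, a, b, m=2000):
    """Composite Simpson's rule with m (even) sub-intervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3.0

edges = [0.5 * 30.0 ** (i / 6.0) for i in range(7)]     # 0.5 ... 15 keV
band = simpson(planck, edges[0], edges[-1])
F_bins = [simpson(planck, a, b) / band for a, b in zip(edges, edges[1:])]

# Fraction of the full Planck integral (pi^4 / 15) inside 0.5-15 keV:
coverage = band / (math.pi ** 4 / 15.0)
```

The fractions reproduce the tabulated $4\pi J_{X,i}$ values (e.g. bin 3 gives 0.397, tabulated as $3.98\times10^{-1}$), and the band coverage is about 99.5%, consistent with less than 1% of the blackbody emission falling outside this energy range.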
We follow @Shadmehri2011 and @Walch2012 to set up a fractal density field with a given fractal index $D_f$ and a lognormal density probability density function (PDF). The fractal density field is set up in Fourier space using a power-law distribution of the squared amplitudes, $A_{\rho}(k)^2 \propto k^{-n}$, on all modes ranging from 1 to 128. The power spectral index $n$ is related to $D_f$ through $D_f = 4-\frac{n}{2}$. Here we choose $D_f = 2.5$ and hence $n=3.0$, typical for molecular clouds in the Milky Way [@Stutzki1998]. The simulation box is a cube of side 25.6pc and we use a uniform grid with $256^3$ grid cells, so the grid cell-size is 0.1pc, sufficient to resolve the CO chemistry [@SeiWalGir17]. The total mass in the box is $10^5$M$_\odot$ and the maximum density, located at the origin of the computational domain, is $\rho_\mathrm{max}=1.6\times10^{-20}$gcm$^{-3}$. Nine different simulations were run without hydrodynamics, labelled F0-F8, each with a different X-ray flux irradiating the outer boundary, given in Table \[tab:sims\]. Recall that this flux is equal to $\sum_{i=1}^{N_E}4\pi J_{X,i}$ where $J_{X,i}$ is the mean intensity of the isotropic radiation field in energy bin $i$. The hydrodynamic boundary conditions are irrelevant for the calculation, and as noted above we use isolated boundaries for the <span style="font-variant:small-caps;">TreeRay</span> algorithm. We consider a thermal X-ray spectrum between 0.5 and 15 keV, with a temperature of 1keV. Six new scalar field variables are added to account for the attenuation of the six logarithmically spaced X-ray energy bins, with energy limits and mean cross sections in each bin given in Table \[tab:bins\]. The unattenuated X-ray flux and energy density in each energy bin is also quoted for Simulation F5 in Table \[tab:bins\]; for other simulations these values can be scaled, e.g., F0 is scaled down by $10^5$ and simulation F8 is scaled up by $10^3$.
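A minimal 1-D analogue of this field construction is sketched below; the actual field is 3-D on $256^3$ cells and would normally be built with FFTs, and the normalisation and random seed here are arbitrary choices of ours:

```python
import math, random

# 1-D sketch of the fractal-field construction: squared Fourier
# amplitudes A(k)^2 ∝ k^-n with n = 3.0 (so D_f = 4 - n/2 = 2.5),
# random phases, and the Gaussian field exponentiated to give a
# lognormal density PDF. A direct Fourier sum replaces the FFT.
random.seed(1)
N, n_index, kmax = 256, 3.0, 128
g = [0.0] * N
for k in range(1, kmax + 1):
    amp = k ** (-n_index / 2.0)            # A(k) ∝ k^(-n/2)
    phi = random.uniform(0.0, 2.0 * math.pi)
    for j in range(N):
        g[j] += amp * math.cos(2.0 * math.pi * k * j / N + phi)

# Normalise to zero mean and unit variance, then exponentiate so that
# the density PDF is lognormal by construction.
mu = sum(g) / N
sig = math.sqrt(sum((x - mu) ** 2 for x in g) / N)
rho = [math.exp((x - mu) / sig) for x in g]
```

The steep $k^{-3}$ spectrum puts most power in the largest modes, producing a few dense condensations embedded in a low-density background, qualitatively like the cloud used here.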
Table \[tab:bins\] shows that the energy range 0.5-15keV covers almost all of the emission for the 1keV blackbody that we consider; adding further energy bins above or below this range would add less than 1% to the total X-ray energy density. Fig. \[fig:spectrum\] plots the UV and X-ray flux for each of the 9 simulations, as well as the continuous and discrete cross section for X-ray absorption. For simulation F4 the discrete flux in each bin is also shown as the brown dashed line, converted to the appropriate units by multiplying the flux by the midpoint energy of the bin. The external UV radiation field is set to $G_0=1.7$ in units of the Habing field, corresponding to the @Dra78 field, and is not scaled with the X-ray field strength but rather kept constant. For each simulation, we start with constant temperature 1423K (sound speed of 3kms$^{-1}$) and uniform number fractions of $y($H$_2)=10^{-5}$, $y($H$^+)=0.1$ and $y($CO$)=10^{-8}$. We assume the rest of the carbon is in the form of C$^+$, that helium is neutral, and that the metal, M, is in the form of M$^+$. The simulation is then run so that it evolves chemically and thermally towards equilibrium for 4Myr. The dense regions have reached equilibrium by this time, but the lowest density gas is still evolving slowly.

Physical state of the gas {#sec:fractal:phase}
-------------------------

Fig. \[fig:rhotemp\] plots the location of the grid cells in the density–temperature plane for simulations F0-F8; effectively an unnormalised, volume-weighted, probability distribution function (PDF) in density and temperature. Brighter colours indicate regions with more cells. Similarly, Fig. \[fig:coltemp\] plots the same in the extinction–temperature plane. It is important to note that different cells in our 3D simulations experience different UV extinction factors and so the equilibrium temperature depends on both density and location.
Once chemical and thermal equilibrium has been reached, the cells all sit on a surface in the space of density, temperature and UV extinction, and Figs. \[fig:rhotemp\] and \[fig:coltemp\] are projections of this surface onto two different planes. The scatter in these plots arises from this projection and not from the gas being out of equilibrium. For larger X-ray flux the UV field has decreasing importance and so the effect of extinction on equilibrium temperature starts to drop out. The extinction, $A_V$, is calculated using equation \[eqn:attenuation\], but for the UV ISRF rather than the X-ray radiation field. This is
$$\langle A_V \rangle = -\frac{1}{2.5}\log \frac{1}{N_\mathrm{pix}}\sum_{i=1}^{N_\mathrm{pix}} \exp \left(-2.5 A_V^i \right) \;,
\label{eqn:av}$$
where $A_V^i$ is the visual extinction along ray $i$, and $N_\mathrm{pix}$ is the number of rays used to sample all directions in 3D space (here $N_\mathrm{pix}=48$, see section \[sec:xrays\]). Due to the non-linear nature of this equation, the resulting average $\langle A_V \rangle$ depends on the radiation energy at which the average is taken, i.e. on the numerical multiplier, here 2.5, appropriate for the UV ISRF. Using the attenuation from one of the X-ray energy bins, or indeed the visual attenuation (a numerical multiplier of unity), gives a different mean value. This shows the importance of 3D simulations: for a 1D calculation the extinction is a single number, but for 3D simulations the weighting of different rays is wavelength dependent, and so the mean UV or X-ray extinction is not necessarily consistent with what one expects given the mean optical extinction. There is very little difference between F0 and F1 in Fig. \[fig:rhotemp\], because the X-ray field is weak and cannot affect the chemistry or thermal state of the gas to any significant extent (a run with zero X-ray flux is almost identical to F0 in these plots).
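The angular average of Eqn. \[eqn:av\] is straightforward to implement; a sketch (the function name is ours):

```python
import math

# Exponentially weighted angular average of the visual extinction
# (Eqn. eqn:av). The multiplier gamma = 2.5 converts A_V into the UV
# attenuation exponent; sight-lines of low extinction dominate the
# average, mirroring how the radiation itself penetrates the cloud.
def mean_av(av_rays, gamma=2.5):
    w = sum(math.exp(-gamma * a) for a in av_rays) / len(av_rays)
    return -math.log(w) / gamma
```

For uniform extinction the average reduces to that value, while a single clear sight-line pulls the average far below the arithmetic mean; and because the result changes with `gamma`, the mean UV and X-ray extinctions of a cell need not be mutually consistent, as discussed above.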
Almost all of the gas is in the temperature range 7-100K, and there is a relatively weak correlation between temperature and density (multiple temperatures are found for gas at a given density). In contrast, there is a strong correlation between temperature and extinction, $A_V$, for these simulations (Fig. \[fig:coltemp\]), with temperature decreasing strongly with increasing extinction and most cells following a single curve in the plane. Simulation F2 is a transitional case, where the X-ray field has a noticeable effect on the gas temperature but where the temperature is still strongly correlated with $A_V$. The minimum temperature at large column density (where X-ray heating is effective) is increased to $>10$K with respect to F0 and F1, but the temperature at low column density (where UV heating is effective) is similar to F0 and F1. There is similar energy in both the UV and X-ray fields ($F_X\approx10^{-3}$ergcm$^{-2}$s$^{-1}$) and so both have similar levels of influence. The majority of the UV energy is deposited at $A_V<1$ near the cloud surface, whereas the X-ray energy penetrates beyond $A_V=10$ and so it acts on the whole cloud. The thermodynamics of the remaining simulations are all dominated by the X-ray radiation field. The mean temperatures of F3 and F5 are 100K and 8000K, respectively, with very little dependence on extinction (Fig. \[fig:coltemp\]). Simulation F4 has significant quantities of gas at all temperatures from 100K to 8000K, regardless of $A_V$. This is because the cloud is optically thin to X-rays in the higher energy bins ($>1$keV), and so the heating rate of a cell depends on the cell density to a much greater extent than the cell’s $A_V$. Fig. \[fig:rhotemp\] reflects this, showing very tight correlations between gas density and temperature for F4-F8. 
For F5 ($4\pi J_{X}=1$ergcm$^{-2}$s$^{-1}$) there are two regimes, where gas with $\rho\lesssim10^{-21}$gcm$^{-3}$ is at $T\sim10^4$K, whereas higher density gas has progressively lower temperature. For F4 the dividing line is $\rho\sim10^{-22}$gcm$^{-3}$, and for F3 it is about $\rho\sim10^{-23}$gcm$^{-3}$. This reflects the fact that the cooling rate increases dramatically at $T\sim10^4$K, with Lyman-$\alpha$ and forbidden-line cooling becoming very strong. The cooling rate scales with $n_\mathrm{H}^2$ whereas X-ray heating scales with $n_\mathrm{H}$, and so the density at which the Lyman-$\alpha$ and forbidden-line cooling equals the heating rate should scale with $4\pi J_{X}$. At higher densities the temperature decreases with increasing density. Simulations F6, F7 and F8 have sufficiently strong X-ray fields that the heating rate is stronger than the Lyman-$\alpha$ cooling rate, and so much of the gas becomes highly ionized with $T>10^4$K. With such high temperatures the molecules in these simulations are destroyed, and the chemistry network that we use is no longer well-suited to the physical conditions because we do not include higher ionization stages of important coolants such as C, N, O, Fe, etc. The empty region in the plots for F6-F8 at $4.6\lesssim \log T \lesssim4.8$ is an artifact of this limitation of the network. For $T\gg10^4$K we assume cooling appropriate for collisional ionization equilibrium [interpolated from a table; see @WalGirNaa15], which is not satisfied for X-ray irradiated gas, and so the cooling rate has an incorrect temperature dependence. For sufficiently large X-ray heating rates this leads to runaway heating, and so we set the net heating rate to zero for $T>10^5$K in these simulations, because we are not interested in the coronal gas that X-ray heating can produce. Simulation F6 also has a gap around $T\sim10^{3.5}$K, which is a manifestation of the chemo-thermal instability seen in MS05 models 2 and 4. 
The gap is also seen in the $T$-$y(\mathrm{H}_2)$ plane.

Chemical state of the gas {#sec:fractal:chem}
-------------------------

Fig. \[fig:co\_col\] plots the CO abundance, $y(\mathrm{CO})$, in each cell as a function of $A_V$ for simulations F0-F5 (F6-F8 have very little CO). Simulations F0 and F1 show what is typically found in PDR simulations, where the molecular fraction increases with column density, and increases dramatically once the column density is sufficient for self-shielding [see e.g. @TieHol85; @RoeAbeBel07]. In the remaining simulations (F2-F5) we see the increasingly strong effect of X-ray ionization and heating. As well as a general decrease in CO abundance at all column densities, the highly molecular gas at large column density progressively decreases with increasing flux, and disappears almost completely for F5. The CO abundance as a function of the H$_2$ abundance is plotted in Fig. \[fig:co\_h2\], again only for simulations F0-F5. As the X-ray flux increases, the correlation between $y(\mathrm{CO})$ and $y(\mathrm{H}_2)$ gets stronger, and the overall CO abundance decreases. The correlation of CO abundance with H$_2$ abundance is stronger than that with $A_V$, and this is again because the hard X-rays can penetrate to large $A_V$. They are not strongly attenuated by the cloud that we simulate here, and so the thermal and chemical properties of a cell are set much more by the gas density than by the extinction. The CO abundance increases with the square of the H$_2$ abundance.

Column density maps of CO and H$_2$ {#sec:fractal:colmaps}
-----------------------------------

In Fig. \[fig:frac\_F0\_F5\] we show the column density of H$_2$ and CO, and the column-density ratio of the two, for simulations F0, F3, F4 and F5.
Runs F1 and F2 are not shown because they are similar to F0, and F6-F8 are also not shown because they have very little CO (F7 has no cells with $y($CO$)>3\times10^{-8}$, F6 has only a handful with $y($CO$)>10^{-6}$). Visual inspection of these figures shows that CO and H$_2$ start to be depleted for $4\pi J_{X} \gtrsim 10^{-1}$ergcm$^{-2}$s$^{-1}$ (F4) and are mostly destroyed for $4\pi J_{X}\gtrsim1$ergcm$^{-2}$s$^{-1}$ (F5). CO also is destroyed more completely than H$_2$ for large X-ray fluxes: the mass ratio of CO to H$_2$ in the simulation box decreases from about $10^{-3}$ for F0-F4 to $3.7\times10^{-4}$ for F5, $1.1\times10^{-4}$ for F6, $1.4\times10^{-5}$ for F7, and F8 has no CO. In simulation F3 the densest regions still have large CO column densities and, counter-intuitively, the lowest column density regions at the edges of the simulation box have more CO in F3 than in F0. The effect of X-rays is to raise the gas and dust temperatures (speeding up most reactions) and to increase the abundance of electrons and ions that are required for the formation of CO. Fig. \[fig:3d\_ion\_mol\] shows the total mass fractions of various chemical species in the simulation domain for simulations F0-F8, again at $t=4$Myr, with the X-ray flux on the $x$-axis. For low fluxes, the CO mass fraction actually increases slightly with increasing X-ray flux (already seen in Fig. \[fig:frac\_F0\_F5\] and discussed above), along with CH$_\mathrm{x}$, OH$_\mathrm{x}$ and HCO$^+$. All molecular species are destroyed with increasing flux following similar trends and beginning at the same flux value: $4\pi J_{X}>10^{-2}$ergcm$^{-2}$s$^{-1}$ (F3). H$_2$ is more resistant for large X-ray fluxes than any other molecule, surviving at trace levels up to the highest X-ray fluxes, whereas CO and the other molecular species are completely destroyed for $4\pi J_{X}>10^{2}$ergcm$^{-2}$s$^{-1}$ (F7). 
The reason for this can be seen in the temperature panel, where the mean temperature approaches $10^4$K for $4\pi J_{X}>1$ergcm$^{-2}$s$^{-1}$, and the minimum temperature jumps from $\sim10^2$K to nearly $10^4$K between $4\pi J_{X}=10$ and $10^3$ergcm$^{-2}$s$^{-1}$. Most of the destroyed CO goes into increasing the C$^+$ abundance, but this has a small effect on the total electron abundance because most electrons are produced from H$^+$ and He$^+$ for $4\pi J_{X}>10^{-2}$ergcm$^{-2}$s$^{-1}$. Neutral carbon also decreases in abundance with increasing $4\pi J_{X}$, albeit with a much weaker dependence on $4\pi J_{X}$ than CO. ![ Change in the mass fraction of ionic species (top panel), molecular species (middle panel), and temperature evolution (bottom panel) as a function of the incident X-ray flux on a fractal molecular cloud. These are the mass-fractions of all gas in the simulation domain, after $4\times10^6$ years of evolution to chemical equilibrium. The volume-weighted ($\langle T\rangle_\mathrm{vol}$) and mass-weighted ($\langle T\rangle_\mathrm{mass}$) mean temperatures are plotted, together with the minimum gas temperature, $T_\mathrm{min}$, and volume-weighted mean dust temperature, $\langle T\rangle_\mathrm{d,vol}$.[]{data-label="fig:3d_ion_mol"}](./fig12a.pdf "fig:"){height="6.5cm"} ![ Change in the mass fraction of ionic species (top panel), molecular species (middle panel), and temperature evolution (bottom panel) as a function of the incident X-ray flux on a fractal molecular cloud. These are the mass-fractions of all gas in the simulation domain, after $4\times10^6$ years of evolution to chemical equilibrium.
The volume-weighted ($\langle T\rangle_\mathrm{vol}$) and mass-weighted ($\langle T\rangle_\mathrm{mass}$) mean temperatures are plotted, together with the minimum gas temperature, $T_\mathrm{min}$, and volume-weighted mean dust temperature, $\langle T\rangle_\mathrm{d,vol}$.[]{data-label="fig:3d_ion_mol"}](./fig12b.pdf "fig:"){height="6.5cm"} ![ Change in the mass fraction of ionic species (top panel), molecular species (middle panel), and temperature evolution (bottom panel) as a function of the incident X-ray flux on a fractal molecular cloud. These are the mass-fractions of all gas in the simulation domain, after $4\times10^6$ years of evolution to chemical equilibrium. The volume-weighted ($\langle T\rangle_\mathrm{vol}$) and mass-weighted ($\langle T\rangle_\mathrm{mass}$) mean temperatures are plotted, together with the minimum gas temperature, $T_\mathrm{min}$, and volume-weighted mean dust temperature, $\langle T\rangle_\mathrm{d,vol}$.[]{data-label="fig:3d_ion_mol"}](./fig12c.pdf "fig:"){height="6.5cm"} Flaring X-ray sources {#sec:flare} ===================== Effect of increasing the X-ray irradiation {#sec:flare_on} ------------------------------------------ Here we study the effects of a strong X-ray radiation field that is switched on for a given length of time and then switched off (i.e. a flare) to see how the chemistry of a molecular cloud responds. We take as initial conditions the cloud in simulation F2, where the chemistry and thermodynamics have been allowed to relax towards equilibrium for 4Myr. We then increase the X-ray flux instantaneously by a factor of $10^5$, from $4\pi J_{X}=10^{-3}$ ergcm$^{-2}$s$^{-1}$ to $10^{2}$ ergcm$^{-2}$s$^{-1}$. This large flux is similar to models 2 and 4 in MS05, who chose this value because it is typical of the cloud irradiation near AGN (it is also what is used in our simulation F7). 
Because the speed of light is considered to be infinite, this affects all parts of the simulation instantaneously, heating, ionizing atoms and dissociating molecules. Note that we find strong chemical and thermal effects on timescales shorter than the light-crossing-time of the simulation domain (i.e. 100 years). The actual thermal and chemical effects we see on the cloud are robust, but the time-lag would be slightly different if we had a greater level of realism in modelling the radiative transfer. ![ Evolution of the mass fraction of various ionic species (top panel), molecular species (middle panel), and temperature (bottom panel) over time, measured from when the X-ray flux is increased by a factor of $10^5$. The volume-weighted ($\langle T\rangle_\mathrm{vol}$) and mass-weighted ($\langle T\rangle_\mathrm{mass}$) mean temperatures are plotted in the bottom panel, together with the minimum gas temperature, $T_\mathrm{min}$, and volume-weighted mean dust temperature, $\langle T\rangle_\mathrm{d,vol}$. []{data-label="fig:flare_ion_mol"}](./fig13a.pdf "fig:"){width="49.00000%"}\ ![ Evolution of the mass fraction of various ionic species (top panel), molecular species (middle panel), and temperature (bottom panel) over time, measured from when the X-ray flux is increased by a factor of $10^5$. The volume-weighted ($\langle T\rangle_\mathrm{vol}$) and mass-weighted ($\langle T\rangle_\mathrm{mass}$) mean temperatures are plotted in the bottom panel, together with the minimum gas temperature, $T_\mathrm{min}$, and volume-weighted mean dust temperature, $\langle T\rangle_\mathrm{d,vol}$. []{data-label="fig:flare_ion_mol"}](./fig13b.pdf "fig:"){width="49.00000%"}\ ![ Evolution of the mass fraction of various ionic species (top panel), molecular species (middle panel), and temperature (bottom panel) over time, measured from when the X-ray flux is increased by a factor of $10^5$. 
The volume-weighted ($\langle T\rangle_\mathrm{vol}$) and mass-weighted ($\langle T\rangle_\mathrm{mass}$) mean temperatures are plotted in the bottom panel, together with the minimum gas temperature, $T_\mathrm{min}$, and volume-weighted mean dust temperature, $\langle T\rangle_\mathrm{d,vol}$. []{data-label="fig:flare_ion_mol"}](./fig13c.pdf "fig:"){width="49.00000%"} The evolution of the mass fractions of ions and molecules as a function of time, as well as the mean temperature, are plotted in Fig. \[fig:flare\_ion\_mol\]. The mass fractions of H$^+$ and He$^+$ increase rapidly because of the dramatically increased ionization rate until they saturate at their equilibrium values after about $3\times10^3$ years. Carbon goes from being partially ionized to almost fully ionized throughout the whole simulation after about 10 years, and the equilibrium mass fraction of C$^+$ at $t>10^3$ years is slightly larger than, but comparable to, that of neutral carbon. The metal (M) is almost fully ionized in the initial conditions, and so its ionization state doesn’t change much. The results for the molecules are more interesting and subtle. The middle panel of Fig. \[fig:flare\_ion\_mol\] shows that CO is very rapidly destroyed between 1 and 20 years after the X-ray flare switches on, and after about 20 years its rate of destruction decreases noticeably. HCO$^+$ follows the same trend, whereas CH$_\mathrm{x}$ and OH$_\mathrm{x}$ are destroyed more gradually. H$_2$ is almost unaffected for 100 years, and is significantly destroyed only after $10^3$ years. This means that an X-ray flare can destroy almost all of the CO in a molecular cloud, while leaving the H$_2$ unaffected if it is shorter than $\sim10^3$ years. This surprising result can be explained by looking at the temperature dependence of the various creation and destruction reactions for CO. 
The mass-weighted mean temperature shows a rapid rise from $\approx30$K initially to $\approx100$K after 1 year and to $\approx1000$K after 10 years. This increase in temperature affects the dominant creation and destruction reaction rates for CO in a different way to H$_2$, with the result that the CO abundance is much more sensitive to cloud heating than the H$_2$ abundance for $T\lesssim1000$K. At early times the main creation reaction is through HCO$^+$ + e$^-$ (Table \[tab:reactions\], \#38), and destruction is through H$_3^+$ (Table \[tab:reactions\], \#24). This pair of reactions is circular, however, and largely just converts CO to HCO$^+$ and back again, rather than reducing the overall quantity of CO. The H$_3^+$ destruction rate is constant, whereas the destruction through locally generated FUV by fast electrons (Table \[tab:photoreactions\], \#74) increases with temperature, so as the gas heats up the FUV destruction becomes dominant after 1 year. The creation rate (\#38) decreases as $T$ increases, so there is a phase of runaway CO destruction as long as these two (\#38 and \#74) are the dominant rates and $T$ is increasing with time. During this phase the HCO$^+$ abundance decreases because it is being converted to CO through reaction \#38 whereas the reverse reaction (\#24) is no longer effective. The abundances of CH$_\mathrm{x}$ and OH$_\mathrm{x}$ are not so dramatically affected because the FUV destruction reactions (\#75 and \#76 in Table \[tab:photoreactions\]) are independent of temperature, unlike the CO destruction rate. After about 10 years, the HCO$^+$ creation channel for CO (\#38) becomes too small, and the main creation channels are the constant rates from CH$_\mathrm{x}$ + O (\#36) and OH$_\mathrm{x}$ + C (\#37). This slows down the CO destruction because after 10 years $T$ remains relatively constant, and so the FUV destruction rate (\#74) scales with the decreasing CO abundance.
Only after $>100$ years does the CO + He$^+$ destruction reaction (\#34) become the main one, by which stage most CO is already destroyed. For CR ionization of molecular clouds, @BisPapVit15 found that He$^+$ is the main destruction agent of CO, which superficially appears in conflict with our result. The resolution to this seems to be that at late times in our flare simulation He$^+$ *is* the main destruction channel, but most of the CO has already been destroyed through other reaction channels by the time He$^+$ becomes important. This highlights an important difference between equilibrium and non-equilibrium chemistry. We do assume that the rotational temperature of CO molecules (which is what determines the UV dissociation rate) is the same as the kinetic temperature. In fact the rotational temperature lags behind rapid changes in the kinetic temperature, but the timescale is $\ll1$yr for the gas densities in the cloud that we simulate. The reason H$_2$ is so much more robust than other molecular species is that it is not destroyed by the FUV radiation that the non-thermal electrons excite. Indeed the excitation of H$_2$ molecules is the main source of this locally generated FUV field. Once the H$^+$ mass fraction increases to the point that the electron fraction reaches $\sim0.1$, most of the absorbed X-ray energy goes into Coulomb heating [@DalYanLiu99] and the gas temperature rises above $10^3$K in most of the cloud mass. The rate of collisional dissociation of H$_2$ from collisions with H atoms increases hugely from $T=1000$K to $T=5000$K, and this is what ultimately destroys the H$_2$. When the H$_2$ mass fraction decreases, this reduces the cooling rate and the temperature increases, further decreasing the H$_2$ fraction in a runaway process until a new equilibrium temperature is reached. 
Relaxation once the flare switches off {#sec:flare_off}
--------------------------------------

We now consider what happens if the increased X-ray irradiation switches off after a certain time; here we take 1, 10, 25 and 100 years as examples. We restart from the flare simulation of the previous subsection but decrease the X-ray irradiation to $4\pi J_{X}=10^{-3}$ ergcm$^{-2}$s$^{-1}$ (model F2). This decrease is again instantaneous, and takes effect everywhere in the domain because of the infinite-speed-of-light approximation. The gas then cools and molecules reform. The global evolution of the ions and molecules is plotted for the first three of these flare durations in Fig. \[fig:flare\_all\], where the top panel shows the results of the 1 year flare, the middle panel the 10 year flare, and the bottom panel the 25 year flare. If the duration of the flare is only 1 year, then the gas temperature has not increased dramatically and the molecular species have not been significantly affected by the X-rays (see also the bottom panel of Fig. \[fig:flare\_ion\_mol\]), and so not much changes after the flare is switched off. Fig. \[fig:flare\_ion\_mol\] shows that most of the CO and HCO$^+$ are already destroyed after 10 years, so for a flare duration of 10 years or longer we see significant evolution during and after the flare in Fig. \[fig:flare\_all\]. After the flare the ionic mass fractions decrease over $10^2$-$10^4$ years, and reach equilibrium in about $10^5$ years in all cases. The molecular evolution is somewhat more complicated, but the trend is that CO starts to reform immediately, and is approaching its equilibrium mass fraction after $10^5$ years. For shorter flares the recovery is faster: for a 10 year flare the CO mass fraction reaches half of its pre-flare equilibrium value after 1750 years; for a 25 year flare it takes 4000 years; and for the 100 year flare (not shown) 31000 years.
H$_2$ remains constant because it was not destroyed by the flare. This result raises the possibility that molecular clouds with negligible CO abundance may exist near X-ray sources simply because X-ray flares efficiently destroy CO but not H$_2$. Since it takes $10^3 - 10^5$ years to reform the CO, we expect that molecular clouds near centres of galaxies that are occasionally active, and clouds hosting young massive star clusters with X-ray binaries, can have out-of-equilibrium CO-to-H$_2$ ratios for much of their lifetime (see sec. \[sec:discussion\]).

![image](./fig14a.pdf){width="45.00000%"} ![image](./fig14b.pdf){width="45.00000%"}
![image](./fig14c.pdf){width="45.00000%"} ![image](./fig14d.pdf){width="45.00000%"}
![image](./fig14e.pdf){width="45.00000%"} ![image](./fig14f.pdf){width="45.00000%"}

Discussion {#sec:discussion}
==========

We have shown that a gas cloud exposed to an X-ray flare with radiation energy density of $E_\mathrm{rad}\sim3\times10^{-9}$ergcm$^{-3}$ will suffer catastrophic CO destruction for flares of duration 10 years or longer, and that the flare duration must be $\gtrsim1000$ years to significantly destroy the H$_2$. Also, gas clouds irradiated by a constant X-ray energy density $E_\mathrm{rad}\gtrsim3\times10^{-13}$ergcm$^{-3}$ (F3) show significant heating and chemical effects, and X-rays dominate over CRs as the main heating agent (assuming the CR flux does not scale with X-ray flux). If $E_\mathrm{rad}\gtrsim3\times10^{-12}$ergcm$^{-3}$ (F4) then X-rays begin to significantly destroy CO and H$_2$. It is useful to discuss where such conditions arise, ignoring for now the issue of attenuation and focusing purely on the dilution due to the inverse-square law.
The energy density at a distance $d$ from a point source with luminosity $L_\mathrm{x}$ is given by $$E_\mathrm{rad} = \frac{L_\mathrm{x}}{4\pi c d^2} = 2.8\times10^{-9} \frac{L_\mathrm{x}}{10^{40}\,\mathrm{erg\,s}^{-1}} \left(\frac{1\,\mathrm{pc}}{d}\right)^2 \;\mathrm{erg\,cm}^{-3}\;.$$ The Galactic Centre today has an X-ray luminosity of $L_\mathrm{x}\lesssim10^{35}$ergs$^{-1}$, implying that only clouds within a small fraction of a parsec have significant CO depletion from the current X-ray emission of Sgr A$^\star$. During the flare from 100 years ago, the luminosity was 4 orders of magnitude larger, but still only clouds within $\lesssim0.5$pc of Sgr A$^\star$ would have been affected as strongly as the cloud we simulate. Our results for the simulations with X-ray fields of differing strength show that clouds close to Sgr A$^\star$ (0.5-10pc) would have some CO destruction, with the effect decreasing with distance. For $d\gtrsim10$pc ($E_\mathrm{rad}\lesssim3\times10^{-12}$ergcm$^{-3}$, comparable to simulation F4 or weaker) the CO abundance should actually be enhanced because of the X-ray heating and production of free electrons. Our results imply that the clouds in the circumnuclear disk around Sgr A$^\star$ could have been significantly affected by X-rays, but the clouds in the 100-pc molecular ring would have remained largely unaffected, given the luminosity estimates of the flare obtained from X-ray reflection [@PonTerGol10]. Active Galactic Nuclei (AGN) can have $L_\mathrm{x}>10^{43}$ergs$^{-1}$, for which gas clouds up to 30pc (larger for higher $L_\mathrm{x}$) from the black hole should have their CO completely destroyed by X-ray radiation, unless they are optically thick to hard X-rays. CO should be depleted out to $d\gtrsim1000$pc, and for sources that emit with this luminosity for thousands of years the H$_2$ should also be depleted, again with stronger depletion closer to the source. 
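These dilution estimates can be checked with a few lines of arithmetic. The functions below are illustrative helpers, not part of the simulation code; they implement $E_\mathrm{rad} = L_\mathrm{x}/(4\pi c d^2)$, its inversion for distance, and the flux-to-energy-density conversion $E_\mathrm{rad} = 4\pi J_X/c$ used to characterize the runs:

```python
import math

C_CGS = 2.99792458e10   # speed of light [cm/s]
PC_CM = 3.0857e18       # one parsec [cm]

def e_rad(L_x, d_pc):
    """Unattenuated radiation energy density [erg/cm^3] at d_pc parsecs
    from a point source of luminosity L_x [erg/s]."""
    d = d_pc * PC_CM
    return L_x / (4.0 * math.pi * C_CGS * d**2)

def d_for(L_x, e_target):
    """Distance [pc] at which the energy density drops to e_target."""
    return math.sqrt(L_x / (4.0 * math.pi * C_CGS * e_target)) / PC_CM

def flux_to_energy_density(four_pi_J):
    """E_rad [erg/cm^3] corresponding to an incident flux
    4*pi*J [erg/cm^2/s]."""
    return four_pi_J / C_CGS

print(e_rad(1e40, 1.0))      # ~2.8e-9 erg/cm^3, the normalisation above
print(d_for(1e43, 3e-9))     # ~30 pc: complete CO destruction around an AGN
print(d_for(1e43, 3e-12))    # ~1000 pc: onset of significant CO depletion
print(flux_to_energy_density(1e-2))  # ~3e-13 erg/cm^3, i.e. run F3
```

The two `d_for` calls reproduce the AGN radii quoted in the text for complete CO destruction and for the onset of depletion, given $L_\mathrm{x}=10^{43}$ergs$^{-1}$.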
The class of ultraluminous X-ray sources (ULX) have $L_\mathrm{x}\sim10^{39}-10^{42}$ergs$^{-1}$ [@SwaGhoTen04], and it is thought that some of these are powered by pulsars, stellar-mass black holes, and possibly intermediate-mass black holes for the most luminous of them [@MezRobSut13; @EarRobHei16; @MezCivFab16; @Bac16]. The pulsars and stellar-mass black holes are associated with high-mass star formation, and hence with molecular clouds. This, together with the variable nature of ULX sources [@Bac16] suggests that we should see strong effects of X-rays on the chemistry and temperature of molecular clouds in the vicinity of ULX, out to tens of parsecs from the source. For the most luminous ULXs, this radius is 300-1000pc, a significant fraction of the volume of a dwarf galaxy. Fig. \[fig:3d\_ion\_mol\] shows that simulation F3, with $E_\mathrm{rad}\sim3\times10^{-13}$ergcm$^{-3}$ (Table \[tab:sims\]), divides the lower-flux simulations where X-rays have little effect, from the high-flux simulations where X-rays have a big impact on the chemistry and thermal state of the molecular cloud. This is a few times less than the energy density of the ISM in the Galactic plane in CRs, magnetic fields and turbulent kinetic energy [$\sim1$eVcm$^{-3}$; @Cox05]. Our results imply, therefore, that X-rays will dominate the chemistry/thermodynamics of molecular clouds if the X-ray energy density is comparable to or exceeds that of CRs. This claim is of course dependent on energy and environment, because the interaction cross-sections of both X-rays and CRs are strongly energy-dependent. Furthermore, sources of CRs are invariably also sources of X-rays, but the scaling of energy density with respect to distance from the source is not the same for CRs and X-rays, because CRs diffuse whereas X-rays stream freely until they are absorbed. 
Absorption cross-sections of CRs are very uncertain, but it should still be possible to use the code we have developed to constrain the conditions under which X-rays deposit more energy in molecular clouds than CRs, and vice versa. The local far-UV ISRF has $E_\mathrm{FUV}\sim5\times10^{-14}$ergcm$^{-3}$ [@Dra78] (or $4\pi J_\mathrm{FUV}\approx1.5\times10^{-3}$ergcm$^{-2}$s$^{-1}$) which is significantly smaller than the ISM energy density in CRs and the X-ray energy density in simulation F4. The ISRF can significantly affect ISM chemistry with a smaller energy density than X-rays (or CRs) because it has a larger absorption cross-section, and so a larger heating rate per unit energy density, but it consequently can only affect the outer (low-extinction) layers of a molecular cloud [cf. @MeiSpa05]. Fig. \[fig:coltemp\] shows that the low-extinction part of the cloud is only significantly affected by the X-rays for Simulations F3 and above, with $E_\mathrm{rad}\gtrsim3.3\times10^{-13}$ergcm$^{-3}$. This reflects that only a small fraction of the X-ray radiation is absorbed in the low-extinction part of the cloud, so the X-rays must have a significantly larger energy density than FUV in order to have a comparable effect at low column densities. In contrast, the high-extinction part of the cloud is already heated by X-rays for a flux 10 times lower (F2) because (i) here it is not competing with the FUV but only with CRs, and (ii) the majority of the X-ray radiation is deposited here. Our 1D test calculations in Section \[sec:ms05\] showed that H$_2$ is a significant coolant when dense clouds are strongly irradiated by X-rays, supported by the <span style="font-variant:small-caps;">Cloudy</span> calculations in Section \[sec:cloudy\]. The 3D simulations of an X-ray flare show (see Fig. \[fig:flare\_ion\_mol\]) that molecular gas is heated to $T\sim10^3$K in about 10 years, into the temperature regime where H$_2$ cooling becomes effective. 
We therefore expect that this hot H$_2$ gas would emit in the infrared and be observable with upcoming observatories such as the *James Webb Space Telescope* [@GarMatCla06; @Kal18]. Our simulations predict that CO is destroyed on a similar (10-20 year) timescale to gas heating, and so it should be possible to observe CO emission decreasing on the same timescale as H$_2$ emission switches on after a bright flare near a molecular cloud. @GloMac07b showed that turbulent motions in molecular clouds can significantly speed up the formation of H$_2$ and other molecules. We cannot address this with the static simulations presented here, but future calculations with a turbulent cloud will study whether CO can re-form more quickly than indicated by our results. Our results should also have application to protoplanetary disks, where @CleBerObe17 showed that time-dependent X-ray irradiation can modify the observable HCO$^+$ signature in the disk. Low-mass protostars typically have strong X-ray emission and variability on account of the strong surface magnetic fields, and this radiation field strongly affects the properties of protostellar disks [@GlaNajIge97]. The time-dependent effects of the X-ray irradiation have not yet been investigated in great detail. A limitation of our work is that we use the infinite speed-of-light approximation, whereas the chemical and thermal properties of the molecular cloud that we model are changing on a timescale less than the light travel time across the cloud for the model of an X-ray flare. If we tracked the photon front propagating through a cloud then the heating, dissociation and ionization would sweep through the cloud rather than happen simultaneously at all places. The same chemical and thermal evolution would still occur, but there would be time offsets between different parts of the cloud depending on when they were first exposed to the X-ray flare. 
How this would appear to an observer is very dependent on the angle between the photon propagation direction and the observer’s line of sight. If the photon front were propagating directly towards the observer then nothing would look different, whereas if it were propagating at right angles then we could potentially see different molecular and atomic transitions switch on and off in a wave moving across a cloud as more of the cloud gets heated by X-rays. The long-term evolution of the cloud, which is perhaps the most interesting result we have obtained, would not look any different because the timescales for recombination and for CO to re-form are much longer than the light-crossing time of a cloud. Conclusions {#sec:conclusions} =========== This paper presents a new implementation of hydrogen and carbon non-equilibrium chemistry when exposed to a (potentially time-varying) X-ray radiation field. The chemical network is relatively small, so that it can be integrated efficiently enough for use in 3D magnetohydrodynamic simulations of molecular clouds and the ISM. Comparison of 1D test calculations using the new network and more complex XDR/PDR codes such as <span style="font-variant:small-caps;">cloudy</span> shows that the gas temperature and abundances of the most abundant species agree satisfactorily. Species with typically low abundance, namely CH$_\mathrm{x}$, OH$_\mathrm{x}$, and HCO$^+$, show poor agreement with <span style="font-variant:small-caps;">cloudy</span>, probably reflecting their status in our network as *helper molecules* whose main purpose is to obtain the correct abundances of C$^+$, C, and CO. The chemical network is coupled to the <span style="font-variant:small-caps;">TreeRay/Optical depth</span> solver [@WunWalDin18] for radiative transfer of the far-UV ISRF, modified to include X-ray radiative transfer, and implemented in the simulation code <span style="font-variant:small-caps;">Flash</span>. 
The first application of the code was to study the equilibrium chemical and thermal state of a fractal molecular cloud exposed to X-ray radiation of different intensities. UV radiation acts only on the surface layers of a molecular cloud, but hard X-rays can penetrate deep into the whole volume of the simulated cloud, and so have a much stronger effect. X-ray energy densities of $3\times10^{-16}-3\times10^{-14}$ergcm$^{-3}$ had limited effects on the cloud other than a small increase in the minimum temperature and an increase in the CO to H$_2$ ratio (on account of the increased ion and electron abundances induced by the X-rays). A radiation field with $E_\mathrm{rad}=3\times10^{-13}$ergcm$^{-3}$ increased the mean cloud temperature to nearly 100K, and provided sufficient ionization that H$^+$ and He$^+$ became the main source of electrons (instead of C$^+$ and M$^+$, which have much lower overall abundance). The CO abundance for this X-ray radiation field is elevated compared with the zero-flux case because of the increased electron abundance. Still stronger radiation fields increased the mean temperature to $10^3-10^4$K or above, and the ionized fractions of H and He to 10% or more. For weak X-ray irradiation the gas temperature and molecular abundances are strongly correlated with the local extinction at a given point in the cloud because the UV radiation field is stronger than the X-ray field. For stronger irradiation this correlation disappears and the chemical and thermal properties of the gas depend almost entirely on the gas density. We studied the time-dependent response of the fractal cloud to a sudden increase in X-ray radiation intensity for a duration of 1 to 100 years, followed by a sudden decrease back to the original intensity. This is a crude model of an X-ray flare from a variable source, such as Sgr A$^\star$ in the Galactic Centre, or an AGN or ultra-luminous X-ray source.
In one year the mass-weighted-mean gas temperature increased from $\sim30$K to $\gtrsim10^2$K, and the ionization fraction of H and He increased by more than an order of magnitude. The abundances of molecular species do not change on this short timescale, however. After a flare of 10 years' duration, the gas temperature increased to $10^3$K, and the H$^+$ fraction to $\sim0.01$, and the molecular species start to be affected. The CO abundance decreases by more than an order of magnitude, whereas the H$_2$ abundance is unchanged. For a flare of 25 years' duration or more, the effects on the cloud are similar, with the temperature and H$^+$ fraction even larger and the CO almost completely destroyed, but H$_2$ again unaffected. The temperature increase means that H$_2$ may become a major coolant in the molecular cloud and should emit brightly in the infrared. It takes hundreds to thousands of years after the flare for the CO to re-form and reach a value close to its pre-flare abundance. The main agent of CO destruction is the locally generated FUV radiation field, produced by H atoms and H$_2$ molecules that are excited by collisions with high-energy, non-thermal secondary electrons. Only once the CO abundance is already very low does the He$^+$ destruction channel become important. As a function of time, the CO-to-H$_2$ abundance ratio decreases dramatically for flares of duration a few years or more. Our main result is that CO is destroyed almost 100 times more rapidly than H$_2$, because of the different destruction channels of these molecules. Our results show that some molecular clouds that have been exposed to recent intense X-ray radiation should still be out of chemical equilibrium, and we predict that some of these clouds will still have fully molecular hydrogen, but will contain very little CO. These CO-dark clouds should remain deficient in CO for about $10^3$ years after a flare (depending on gas density, shorter for higher-density gas).
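The factor-of-100 contrast in destruction rates, and the density dependence of the recovery, can be captured in one toy formula. The rates below are hypothetical placeholders chosen only to reproduce the orders of magnitude quoted above, not fitted values from the simulations.

```python
import math

# Toy flare response: exponential destruction during the flare, with CO
# destroyed ~100x faster than H2 (hypothetical rates in yr^-1), and an
# e-folding CO re-formation time that scales inversely with gas density.

K_CO = 0.1     # yr^-1, placeholder CO destruction rate during a flare
K_H2 = 1.0e-3  # yr^-1, placeholder H2 rate, ~100x slower

def surviving_fraction(k_destroy, t_flare):
    """Fraction of a species surviving a flare of duration t_flare (years)."""
    return math.exp(-k_destroy * t_flare)

def co_reformation_time(n, k_form=1.0e-6):
    """e-folding CO re-formation time in years at density n (cm^-3).
    k_form is a hypothetical normalization giving ~10^3 yr at n = 10^3."""
    return 1.0 / (k_form * n)
```

With these placeholder rates a 10-year flare leaves H$_2$ essentially intact while removing most of the CO, and denser gas recovers its CO sooner, mirroring the behaviour described above.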
Depending on the frequency and intensity of X-ray flares, a molecular cloud near a flaring source could be permanently deficient in CO but still be fully molecular as far as hydrogen is concerned. For Galactic Centre clouds at $\gtrsim10$pc from Sgr A$^\star$ the irradiation from the strong X-ray flare about 100 years ago was not sufficiently strong to destroy CO, and in fact we predict that the CO abundance may actually have been enhanced by the X-ray irradiation. Only for clouds within a parsec of Sgr A$^\star$ would significant CO destruction have occurred. Acknowledgements {#acknowledgements .unnumbered} ================ We are grateful to the referee for very useful comments and suggestions that have improved this paper. We thank B. Godard for a very useful discussion about modeling the X-ray absorption, T. Millar for discussions on the CO dissociation rate in UMIST12, R. Meijerink for discussions on comparison with his results, and E. Pellegrini for suggesting to use <span style="font-variant:small-caps;">Cloudy</span> for the comparison and providing preliminary results. JM acknowledges funding from a Royal Society-Science Foundation Ireland University Research Fellowship (14/RS-URF/3219). This research was funded by the ERC starting grant No. 679852 “RADFEEDBACK”. We further acknowledge support by the Deutsche Forschungsgemeinschaft via priority program 1573, Physics of the Interstellar Medium. SW thanks the Bonn-Cologne Graduate School, which is funded through the German Excellence Initiative. DS acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG) via the Collaborative Research Center SFB 956 “Conditions and Impact of Star Formation” (subproject C5).
SCOG acknowledges support from the Deutsche Forschungsgemeinschaft via SFB 881, “The Milky Way System” (sub-projects B1, B2 and B8), and from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007 - 2013) via the ERC Advanced Grant “STARLIGHT: Formation of the First Stars” (project number 339177). RW acknowledges support by the Albert Einstein Centre for Gravitation and Astrophysics via the Czech Science Foundation grant 14-37086G and by the institutional project RVO:67985815 of the Academy of Sciences of the Czech Republic. The authors acknowledge the DJEI/DES/SFI/HEA Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities and support. The software used in this work was in part developed by the DOE-supported ASC/Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago. This research has made use of NASA’s Astrophysics Data System. Chemical network {#app:network} ================ ID Reaction Type Note Reference ---- ------------------------------------------------------- ------------------------------ -------------------- ---------------------------------------- 1 H + e $\rightarrow$ H$^+$ + 2e Collisional ionization polynomial fit @AbeAnnZha97 2 He + e $\rightarrow$ He$^+$ + 2e Collisional ionization Not in GOW17 @AbeAnnZha97 3 C + e $\rightarrow$ C$^+$ + 2e Collisional ionization Not in GOW17 @Vor97 4 M + e $\rightarrow$ M$^+$ + 2e Collisional ionization Not in GOW17 @Vor97 5 H$_2$ + e $\rightarrow$ 2H + e Collisional dissociation Not in GOW17 @TreTen02 6 H$_2$ + H $\rightarrow$ 3H Collisional dissociation similar to GOW17 @LepShu83 [@MacShu86; @MarSchMan96] 7 H$_2$ + H$_2$ $\rightarrow$ H$_2$ + 2H Collisional dissociation similar to GOW17 @MarKeoMan98 [@ShaKan87; @PalSalSta83] 8 H$^+$ + e $\rightarrow$ H + $\gamma$ Radiative recomb. Same as GOW17 @FerPetHor92 9 He$^+$ + e $\rightarrow$ He + $\gamma$ Radiative+dielec. recomb. 
@Ost89 [@HumSto98; @Bad06] 10 C$^+$ + e $\rightarrow$ C + $\gamma$ Radiative recomb. Same as GOW17 @BadOMuSum03 [@Bad06] 11 M$^+$ + e $\rightarrow$ M + $\gamma$ Radiative recomb. Similar to GOW17 @NelLan99 12 H$^+$ + e $\rightarrow$ H Grain-assisted recomb. Same as GOW17 @WeiDra01 13 He$^+$ + e $\rightarrow$ He Grain-assisted recomb. Same as GOW17 @WeiDra01 14 C$^+$ + e $\rightarrow$ C Grain-assisted recomb. Same as GOW17 @WeiDra01 15 M$^+$ + e $\rightarrow$ M Grain-assisted recomb. Same as GOW17 @WeiDra01 16 H + H $\rightarrow$ H$_2$ Grain-assisted H$_2$ form. Similar to GOW17 @HolMcK79 17 H$_2^+$ + H $\rightarrow$ H$_2$ + H$^+$ Charge ex. Same as GOW17 @KarAniHun79 18 H$_2$ + H$^+$ $\rightarrow$ H$_2^+$ + H Charge ex. Not in GOW17 @SavKrsPre04 19 H$_3^+$ + M $\rightarrow$ M$^+$ + H$_2$ + H Dissociative charge ex. not in GOW17 @NelLan99 20 H$_3^+$ + e $\rightarrow$ H$_2$ + H Dissociative recomb. Same as GOW17 @McCHunSay04 [@WooAguMar07] 21 H$_3^+$ + C $\rightarrow$ CH$_\mathrm{x}$ + H$_2$ Formation of CH$_\mathrm{x}$ Same as GOW17 @VisBuzMil16 [@GonOstWol17] 22 H$_3^+$ + O $\rightarrow$ OH$_\mathrm{x}$ + H$_2$ Formation of OH$_\mathrm{x}$ Same as GOW17 @DeRMilOCo16 [@GonOstWol17] 23 H$_3^+$ + O + e $\rightarrow$ O + 3H Pseudo-reaction Same as GOW17 @DeRMilOCo16 [@GonOstWol17] 24 H$_3^+$ + CO $\rightarrow$ HCO$^+$ + H$_2$ Proton transfer Same as GOW17 @KimTheHun75 25 CH$_\mathrm{x}$ + H $\rightarrow$ H$_2$ + C Exchange reaction Same as GOW17 @WakSmiHer10 [@GonOstWol17] 26 He$^+$ + H$_2$ $\rightarrow$ He + H + H$^+$ Dissociative charge ex. Same as GOW17 @SchJefBar89 27 He$^+$ + H$_2$ $\rightarrow$ He + H$_2^+$ Charge ex. 
Same as GOW17 @Bar84 28 O$^+$ + H$_2$ $\rightarrow$ OH$_\mathrm{x}$ + H Formation of OH$_\mathrm{x}$ Same as GOW17 @GonOstWol17 29 O$^+$ + H$_2$ + e $\rightarrow$ O + 2H H$_2$ destruction Same as GOW17 @GonOstWol17 30 C$^+$ + H$_2$ $\rightarrow$ CH$_\mathrm{x}$ + H Formation of CH$_\mathrm{x}$ Same as GOW17 @WakSmiHer10 31 C$^+$ + H$_2$ + e $\rightarrow$ C + 2H H$_2$ Destruction Same as GOW17 @WakSmiHer10 32 H$_2$ + H$_2^+$ $\rightarrow$ H$_3^+$ + H Formation of H$_3^+$ Same as GOW17 @StaLepDal98 33 C + H$_2$ $\rightarrow$ CH$_\mathrm{x}$ Radiative association Not in GOW17 @PraHun80 34 He$^+$ + CO $\rightarrow$ He + C$^+$ + O Dissociative charge ex. GOW17 differs @PetDweAll89 35 C$^+$ + OH$_\mathrm{x}$ $\rightarrow$ HCO$^+$ HCO$^+$ formation Same as GOW17 @WakSmiHer10 36 O + CH$_\mathrm{x}$ $\rightarrow$ CO + H CO formation Same as GOW17 @WakSmiHer10 37 C + OH$_\mathrm{x}$ $\rightarrow$ CO + H CO formation Same as GOW17 @ZanBusJor09 [@WakSmiHer10] 38 HCO$^+$ + e $\rightarrow$ CO + H CO formation GOW17 rate similar @BriMit90 [@McEWalMar12] 39 OH$_\mathrm{x}$ + O $\rightarrow$ 2O + H OH$_\mathrm{x}$ destruction Same as GOW17 @CarGodKoh06 40 OH$_\mathrm{x}$ + He$^+$ $\rightarrow$ O$^+$ + He + H Dissociative charge ex. Same as GOW17 @WakSmiHer10 41 O$^+$ + H $\rightarrow$ O + H$^+$ Charge ex. Equilibrium @StaSchKim99 42 H$^+$ + O $\rightarrow$ H + O$^+$ Charge ex. Equilibrium @StaSchKim99 43 H$_2^+$ + e $\rightarrow$ 2H Dissociative recomb. Not in GOW17 @AbeAnnZha97. ID Reaction Type Note Reference ---- -------------------------------------------------- ---------------------------- -------------------------------------------------- ----------------------------------------------------- 44 H$_2$ + FUV $\rightarrow$ 2H Photodiss. same as GOW17 @HeaBosVan17 45 HCO$^+$ + FUV $\rightarrow$ CO + H$^+$ Photodiss. not in GOW17 @HeaBosVan17 46 CO + FUV $\rightarrow$ C + O Photodiss. same as GOW17 @HeaBosVan17 47 C + FUV $\rightarrow$ C$^+$ + e Photoioniz. 
same as GOW17 @HeaBosVan17 48 M + FUV $\rightarrow$ M$^+$ + e Photoioniz. same as GOW17 @HeaBosVan17 49 OH$_\mathrm{x}$ + FUV $\rightarrow$ O + H Photodiss. same as GOW17 @HeaBosVan17 50 CH$_\mathrm{x}$ + FUV $\rightarrow$ C + H Photodiss. same as GOW17 @HeaBosVan17 51 H + CR $\rightarrow$ H$^+$ + e Cosmic-ray ioniz. $\zeta_\mathrm{H}=3\times10^{-17}$s$^{-1}$ per H @WalGirNaa15 52 He + CR $\rightarrow$ He$^+$ + e Cosmic-ray ioniz. same as GOW17 @g10 53 C + CR $\rightarrow$ C$^+$ + e Cosmic-ray ioniz. within 1% of GOW17 @Lis03 54 H$_2$ + CR $\rightarrow$ H$^+$ + H + e Cosmic-ray ioniz. $0.037\zeta_\mathrm{H}$ per H$_2$ @MicGloFed12 55 H$_2$ + CR $\rightarrow$ 2H Cosmic-ray diss. $0.21\zeta_\mathrm{H}$ per H$_2$ @MicGloFed12 56 H$_2$ + CR $\rightarrow$ H$_2^+$ + e Cosmic-ray ioniz. as GOW17; $2\zeta_\mathrm{H}$ per H$_2$ @GonOstWol17 57 CO + CR (+H) $\rightarrow$ HCO$^+$ Pseudoreaction, via CO$^+$ same as GOW17 @g10 58 CO + CR $\rightarrow$ C + O Cosmic-ray diss. 10$\zeta_\mathrm{H}y$(CO) @WakLoiHer15 59 C + CRPHOT $\rightarrow$ C$^+$ + e Ioniz. by CR-induced FUV Similar to GOW17 @GreLepDal87 [@McEWalMar12] 60 CO + CRPHOT $\rightarrow$ C + O Diss. by CR-induced FUV GOW17 rate differs @GreLepDal87 [@McEWalMar12] 61 M + CRPHOT $\rightarrow$ M$^+$ + e Ioniz. by CR-induced FUV Similar to GOW17 @McEWalMar12 reference Rawlings (1992, priv. comm.) 62 H + XR $\rightarrow$ H$^+$ + e Secondary ioniz. Fitted from table @DalYanLiu99 63 He + XR $\rightarrow$ He$^+$ + e Secondary ioniz. Fitted from table @DalYanLiu99 64 C + XR $\rightarrow$ C$^+$ + e Secondary ioniz. 3.92$\times$ rate for H MS05 65 M + XR $\rightarrow$ M$^+$ + e Secondary ioniz. 6.67$\times$ rate for H MS05 66 H$_2$ + XR $\rightarrow$ H$_2^+$ + e Secondary ioniz. Fitted from table @DalYanLiu99 67 H$_2$ + XR $\rightarrow$ 2H Secondary diss. Fitted from table @DalYanLiu99 68 CO + XR $\rightarrow$ C$^+$ + O + e Secondary ioniz. 
3.92$\times$ rate for H MS05 69 CH$_\mathrm{x}$ + XR $\rightarrow$ C$^+$ + H + e Secondary ioniz. 3.92$\times$ rate for H MS05 70 OH$_\mathrm{x}$ + XR $\rightarrow$ O + H$^+$ + e Secondary ioniz. 2.97$\times$ rate for H MS05 71 HCO$^+$ + XR $\rightarrow$ C$^+$ + H$^+$ + O + e Secondary ioniz. 3.92$\times$ rate for H MS05 72 C + XRPHOT $\rightarrow$ C$^+$ + e Ioniz. by XR-induced FUV Eqn. \[eqn:xrfuv\]; Tab. \[tab:crphot\] See Tab. \[tab:crphot\] 73 M + XRPHOT $\rightarrow$ M$^+$ + e Ioniz. by XR-induced FUV Eqn. \[eqn:xrfuv\]; Tab. \[tab:crphot\] See Tab. \[tab:crphot\] 74 CO + XRPHOT $\rightarrow$ C + O Diss. by XR-induced FUV Eqn. \[eqn:crphot\] @GreLepDal87 [@McEWalMar12] 75 CH$_\mathrm{x}$ + XRPHOT $\rightarrow$ C + H Diss. by XR-induced FUV Eqn. \[eqn:xrfuv\]; Tab. \[tab:crphot\] See Tab. \[tab:crphot\] 76 OH$_\mathrm{x}$ + XRPHOT $\rightarrow$ O + H Diss. by XR-induced FUV Eqn. \[eqn:xrfuv\]; Tab. \[tab:crphot\] See Tab. \[tab:crphot\] The collisional reactions considered are listed in Table \[tab:reactions\] and photo/CR/X-ray reactions in Table \[tab:photoreactions\]. The reaction network is a superset of the NL99 @GloCla12 network, with most additions taken from @GonOstWol17. The extra reactions included are numbers \#13, \#14, \#15, \#18, \#25, \#28, \#29, \#31, \#39, \#40, \#41, \#56, \#60, plus the X-ray photoreactions \#62-76. The results of 1D simulations of the MS05 models 1-4, calculated with and without these additional reactions, are plotted in Figs. \[fig:gongAB\] and \[fig:gongC\]. The abundances of H$_2$, CO, H, electrons, and gas temperature are shown in Fig. \[fig:gongAB\], and abundances of carbon-bearing species in Fig. \[fig:gongC\]. The main difference apparent from Fig. \[fig:gongAB\] is that $y$(CO) has a very different relationship with column density for the two sets of reactions. The gas temperature is not strongly affected, except for models 2 and 4, which have a strong chemo-thermal instability for the original NL99 network.
This is weaker when using the updated network. Looking at the carbon chemistry in Fig. \[fig:gongC\], the updated network has consistently lower C$^+$ abundance for all calculations. The original NL99 network produces results much closer to those of MS05; in fact the C$^+$ abundance showed the largest discrepancy between our results and MS05 in section \[sec:ms05\]. The neutral C abundance is higher using the updated network except in the region of column density where C$^+$ and CO co-exist, for which the updated network typically has lower neutral C abundance. At very high column density, the neutral C abundance is much higher with the updated network. CO forms more rapidly with increasing column density using the updated network; this is in much better agreement with the <span style="font-variant:small-caps;">Cloudy</span> results in Section \[sec:cloudy\]. (Figure: panels for $4\pi J_{X}=1.6$ erg cm$^{-2}$ s$^{-1}$ and $4\pi J_{X}=160$ erg cm$^{-2}$ s$^{-1}$.) (Figure: panels for $4\pi J_{X}=1.6$ erg cm$^{-2}$ s$^{-1}$ and $4\pi J_{X}=160$ erg cm$^{-2}$ s$^{-1}$.) Heating and Cooling Rates {#sec:cooling} ========================= We model the thermal evolution of the gas in our simulations using a cooling function based largely on the one developed by @g10 and @GloCla12, but updated to account for the effects of X-ray heating, as detailed in Section 2.3 of the current paper. A full list of the processes included in the cooling function is given in Table \[cool\_model\], along with the sources for the rates used. For a few processes, we also give additional details below.
Process Reference(s) ---------------------------------- ------------------------------------------------------ [**Radiative cooling:**]{} C fine structure lines Atomic data – @sv02 Collisional rates (H) – @akd07 Collisional rates (H$_{2}$) – @sch91 Collisional rates (e$^{-}$) – @joh87 Collisional rates (H$^{+}$) – @rlb90 C$^{+}$ fine structure lines Atomic data – @sv02 Collisional rates (H$_{2}$) – @fl77 Collisional rates (H, $T < 2000 \: {\rm K}$) – @hm89 Collisional rates (H, $T > 2000 \: {\rm K}$) – @k86 Collisional rates (e$^{-}$) – @wb02 O fine structure lines Atomic data – @sv02 Collisional rates (H) – @akd07 Collisional rates (H$_{2}$) – see @gj07 Collisional rates (e$^{-}$) – @bbt98 Collisional rates (H$^{+}$) – @p90 [@p96] Si fine structure lines All data – @hm89 Si$^{+}$ fine structure lines Atomic data – @sv02 Collisional rates (H) – @r90 Collisional rates (e$^{-}$) – @dk91 H$_{2}$ rovibrational lines @ga08 CO rovibrational lines @nk93 [@nlm95] Gas-grain energy transfer @hm89 Atomic resonance lines Hydrogen – @black81 [@cen92] Helium and metals – @gf12 Atomic metastable transitions @hm89 [@bac15] Compton cooling @cen92 [**Chemical cooling:**]{} H collisional ionisation See Table A1 H$_{2}$ collisional dissociation See Table A1 H$^{+}$ recombination @FerPetHor92 [@w03] [**Heating:**]{} Photoelectric effect @bt94 [@w03] H$_{2}$ photoionisation @MeiSpa05 H$_{2}$ photodissociation @bd77 UV pumping of H$_{2}$ @bht90 H$_{2}$ formation on dust grains @hm89 X-ray Coulomb heating See Section 2.3.2 Cosmic ray ionisation @gl78 Fine structure cooling {#fine-structure-cooling .unnumbered} ---------------------- We model atomic fine structure cooling from neutral C, O and Si atoms and C$^{+}$ and Si$^{+}$ ions by directly solving for the fine structure level populations, with the assumption that the populations of any electronically-excited states are zero. 
This assumption allows us to model C$^{+}$ and Si$^{+}$ as two-level systems and C, O and Si as three-level systems, allowing us to write down analytical expressions for the cooling rate from each species in a relatively simple fashion. We do not account for any external sources of radiation other than the cosmic microwave background. The sources for the data used in the level population calculations are listed in Table \[cool\_model\], and a more detailed discussion of our approach can be found in @gj07. Note that we use the Si and Si$^{+}$ cooling rates as a proxy for the cooling coming from the species represented by M and M$^{+}$, which include not only Si but also other low ionization potential metals such as Mg or Fe. This simplification is somewhat inaccurate, but in practice this is unlikely to be important as the fine structure cooling is typically dominated by C$^{+}$ and O in regions with low A$_{\rm V}$ and by C in regions with high A$_{\rm V}$. CO rovibrational line cooling {#co-rovibrational-line-cooling .unnumbered} ----------------------------- We model CO cooling using the cooling tables given in @nk93 and @nlm95, which are based on a large velocity gradient (LVG) calculation of the CO level populations as a function of the H$_{2}$ number density, CO number density, temperature, and local velocity gradient. The lowest temperature included in these tables is 10 K, but to allow us to handle very cold molecular gas we have extended them down to 5 K using collisional data from @fl01 and @w06, as described in Appendix A of @GloCla12. The LVG calculation in @nk93 and @nlm95 assumes that CO is excited primarily by collisions with H$_{2}$. However, in our cooling function, we also account for collisions with atomic hydrogen and with electrons, using the procedure described in Section C.4 of @MeiSpa05.
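As a concrete illustration of the two-level treatment described above, the sketch below solves the statistical equilibrium of a generic two-level ion (collisional excitation and de-excitation against spontaneous decay) and returns the resulting cooling rate. The atomic numbers in the demonstration are round, illustrative values, not the data cited in Table \[cool\_model\].

```python
import math

K_B = 1.380649e-16  # Boltzmann constant in erg/K

def two_level_cooling(n_ion, n_col, T, E_ul, g_u, g_l, A_ul, q_ul):
    """Cooling rate (erg cm^-3 s^-1) of a two-level system in statistical
    equilibrium with one collision partner of density n_col (cm^-3).
    q_ul is the collisional de-excitation rate coefficient (cm^3 s^-1);
    the excitation coefficient follows from detailed balance."""
    q_lu = q_ul * (g_u / g_l) * math.exp(-E_ul / (K_B * T))
    c_lu = n_col * q_lu               # collisional excitations per ion per s
    c_ul = n_col * q_ul               # collisional de-excitations per ion per s
    f_u = c_lu / (c_lu + c_ul + A_ul)  # equilibrium upper-level fraction
    return n_ion * f_u * A_ul * E_ul   # each spontaneous decay removes E_ul
```

In the low-density limit every collisional excitation is followed by a radiative decay, so the cooling scales linearly with the collider density; at high density the levels thermalize and the cooling per ion saturates. This is precisely the behaviour the analytic expressions used in our cooling function capture.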
\[lastpage\] [^1]: E-mail: jmackey@cp.dias.ie (JM) [^2]: H$_{3}^{+}$ can also form via the radiative association of H$_{2}$ with H$^{+}$, but this process is slow (see e.g. the discussion in @GloSav09), and is only competitive with formation via H$_{2}^{+}$ in gas with a very low H$_{2}$ abundance. In these conditions, the H$_{3}^{+}$ abundance itself is very small and H$_{3}^{+}$ plays a negligible role in the gas chemistry.
--- abstract: 'Cellular automata are discrete dynamical systems and a model of computation. The limit set of a cellular automaton consists of the configurations having an infinite sequence of preimages. It is well known that these always contain a computable point and that any non-trivial property on them is undecidable. We go one step further in this article by giving a full characterization of the sets of Turing degrees of cellular automata: they are the same as the sets of Turing degrees of effectively closed sets containing a computable point.' author: - 'Alex Borello, Julien Cervelle, Pascal Vanier' bibliography: - 'books.bib' - 'biblio.bib' title: Turing degrees of limit sets of cellular automata --- Introduction ============ Cellular Automata (CAs for short) are both discrete dynamical systems and a model of computation. They were introduced in the late 1940s independently by John von Neumann and Stanislaw Ulam to study, respectively, self-replicating systems and the growth of quasi-crystals. A $d$-dimensional CA consists of cells aligned on $\ZZ^d$ that may be in a finite number of states, and are updated synchronously with a local rule, depending only on a finite neighborhood. All cells operate under the same local rule. The state of all cells at some time step is called a configuration. CAs are very well known for being simple systems that may exhibit complicated behavior. A $d$-dimensional subshift of finite type (SFT for short) is a set of colorings of $\ZZ^d$ by a finite number of colors containing no pattern from a finite family of forbidden patterns. Most proofs of undecidability concerning CAs involve the use of SFTs, so both topics are very intertwined [@Kar1990; @Kar1992; @Kar1994; @Mey2008; @Kar2011].
A recent trend in the study of SFTs has been to give computational characterizations of dynamical properties, which has been followed by the study of their computational structure and in particular the comparison with the computational structure of effectively closed sets, which are the subsets of $\cantor$ on which some Turing machine does not halt. It is quite easy to see that SFTs are such sets. In this paper, we follow this trend and study the limit set $\limitset{\ca A}$ of a CA $\ca A$, which consists of all the configurations of the CA that can occur after arbitrarily long computations. They were introduced by @CPY1989 in order to classify CAs. It has been proved by @Kar1994b [@GR2010] that non-trivial properties on these sets are undecidable for CAs of all dimensions. Limit sets of CAs are subshifts, and the question of which subshifts may be limit sets of CAs has been a thriving topic, see [@Hur1987; @Hur1990b; @Hur1990; @Maa1995; @FK2007; @DiLM2009; @BGK2011]. However, most of these results are on the language of the limit set or on simple limit sets. Our aim here is to study the configurations themselves. In dimension $1$, limit sets are effectively closed sets, so it is quite natural to compare them from a computational point of view. The natural measure of complexity for effectively closed sets is the Medvedev degree [@Sim2011a], which, informally, is a measure of the complexity of the simplest points of the set. As limit sets always contain a uniform configuration (wherein all cells are in the same state), they always contain a computable point and have Medvedev degree $\turdegzero$. Thus, if we want to study their computable structure, we need a finer measure; in this sense, the set of Turing degrees is appropriate. It turns out that for SFTs, there is a characterization of the sets of Turing degrees found by @JeandelV2013:turdeg, which states that one may construct SFTs with the same Turing degrees as any effectively closed set containing a computable point.
In the case of limit sets, such a characterization would be perfect, as limit sets always contain a computable point[^1]. This is exactly what we achieve in this article: \[mainthm\] For any effectively closed set $S$, there exists a cellular automaton $\ca A$ such that $$\turdeg\limitset{\ca A}=\turdeg{S}\cup\{\turdegzero\}\text{.}$$ On the way to proving this theorem, we introduce a new construction which gives us some control over the limit set. We hope that this construction will lead to other unrelated results on limit sets of CAs, as was the case for the construction in [@JeandelV2013:turdeg], see [@JeandelV2013]. The paper is organized as follows. In Section \[prelim\] we recall the usual definitions concerning CAs and Turing degrees. In Section \[requirements\] we give the reasons for each trait of the construction which allows us to prove Theorem \[mainthm\]. In Section \[construction\] we give the actual construction. We end the paper with a discussion, in Section \[CB\], on the Cantor-Bendixson ranks of the limit sets of CAs. The choice has been made to have colored figures, which are best viewed on screen. \[prelim\]Preliminary definitions ================================= A ($1$-dimensional) *cellular automaton* is a triple $\ca A = (Q, r, \delta)$, where $Q$ is the finite set of *states*, $r > 0$ is the *radius* and $\delta : Q^{2r + 1}\to Q$ the *local transition function*. An element $i\in\ZZ$ is called a *cell*, and the set $\inter{i - r}{i + r}$ is the *neighborhood* of $i$ (the elements of which are the *neighbors* of $i$). A *configuration* is a function $\cacf c : \ZZ\to Q$.
The local transition function induces a *global transition function* (that can be regarded as the automaton itself, hence the notation), which associates to any configuration $\cacf c$ its *successor*: $$\ca A(\cacf c) : \left\{\begin{array}{ccl} \ZZ &\to& Q\\ i &\mapsto& \delta(\cacf c(i - r), \dots, \cacf c(i - 1), \cacf c(i), \cacf c(i + 1), \dots, \cacf c(i + r))\text{.} \end{array}\right.$$ In other words, all cells are finite automata that update their states in parallel, according to the same local transition rule, transforming a configuration into its successor. If we draw some configuration as a horizontal bi-infinite line of cells, then add its successor above it, then the successor of the latter and so on, we obtain a *space-time diagram*, which is a two-dimensional representation of some computation performed by $\ca A$. A *site* $(i, t)\in\ZZ^2$ is a cell $i$ at a certain time step $t$ of the computation we consider (hereinafter there will never be any ambiguity on the automaton nor on the computation considered). The *limit set* of $\ca A$, denoted by $\limitset{\ca A}$, is the set of all the configurations that can appear after arbitrarily many computation steps: $$\limitset{\ca A} = \bigcap_{k\in\NN}\ca A^k(Q^\ZZ)\text{.}$$ For surjective CAs, the limit set is the set of all possible configurations $Q^\ZZ$, while for non-surjective CAs, it is the set of all configurations containing no orphan of any order, see [@Hur1990b]. An *orphan of order $n$* is a finite word $w$ which has no preimage by $\ca A^n_{|Q^{|w|}}$. An *effectively closed set*, or *$\Pi^0_1$ class*, is a subset $S$ of $\cantor$ for which there exists a Turing machine that, given any $x\in\cantor$, halts if and only if $x\not\in S$. Equivalently, a class $S\subseteq\cantor$ is $\Pi^0_1$ if there exists a computable set $L$ such that $x\in S$ if and only if no prefix of $x$ is in $L$.
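The definitions above translate directly into code. The sketch below (ours, not from the original construction) implements the global map of a $1$-dimensional CA on a finite cyclic configuration, a standard finite stand-in for the bi-infinite case, together with the orphan notion just defined: a word $w$ is an orphan of order $1$ when no word of length $|w| + 2r$ maps onto it under the local rule.

```python
from itertools import product

# A 1-D cellular automaton (Q, r, delta) as defined above, acting on a
# finite cyclic configuration (a finite stand-in for a map Z -> Q).

def step_cyclic(config, r, delta):
    """One synchronous application of the global transition function."""
    n = len(config)
    return tuple(delta(tuple(config[(i + k) % n] for k in range(-r, r + 1)))
                 for i in range(n))

def local_image(word, r, delta):
    """Image of a finite word under the local rule (r cells lost per side)."""
    return tuple(delta(tuple(word[i - r + k] for k in range(2 * r + 1)))
                 for i in range(r, len(word) - r))

def is_orphan(word, Q, r, delta):
    """Is `word` an orphan of order 1, i.e. with no preimage of length |w|+2r?"""
    return all(local_image(u, r, delta) != tuple(word)
               for u in product(Q, repeat=len(word) + 2 * r))

# Example rule: delta(a, b, c) = a & b & c over Q = {0, 1} with r = 1.
AND = lambda t: t[0] & t[1] & t[2]
```

For the AND rule, `is_orphan((1, 0, 1), (0, 1), 1, AND)` holds: producing the two outer $1$s forces all five cells of a candidate preimage to be $1$, which contradicts the middle $0$. This exhaustive search is exactly what makes orphan-checking computable, and hence the set of forbidden patterns of a limit set recursively enumerable.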
It is then quite easy to see that limit sets of CAs are $\Pi^0_1$ classes: for any limit set, the set of forbidden patterns is the set of all orphans of all orders, which forms a recursively enumerable set, since it is computable to check whether a finite word is an orphan. For $x, y\in\cantor$, we say that $x\turinf y$ if $x$ is computable by a Turing machine using $y$ as an oracle. If $x\turinf y$ and $x\tursup y$, $x$ and $y$ are said to be Turing-equivalent, which is noted $x\turequiv y$. The *Turing degree* of $x$, noted $\turdeg x$, is its equivalence class under relation $\turequiv$. The Turing degrees form an upper semi-lattice whose bottom is $\turdegzero$, the Turing degree of computable sequences. Effectively closed sets are quite well understood from a computational point of view, and there have been numerous contributions concerning their Turing degrees; see the book of @CR1998 for a survey. One of the most interesting results may be that there exist $\Pi^0_1$ classes whose members are pairwise Turing incomparable [@JS1972]. \[requirements\]Requirements of the construction ================================================ The idea to prove Theorem \[mainthm\] is to make a construction that embeds computations of a Turing machine that will check a read-only oracle tape containing a member of the class $S$ that will have to appear “non-deterministically”. The following constraints have to be addressed. - Since CAs are intrinsically deterministic, this non-determinism will have to come from the “past”, from the “limit” of the preimages. - The oracle tape, the element of $\cantor$ that needs to be checked, needs to appear entirely on at least one configuration of the limit set. - Each configuration of the limit set containing the oracle tape needs to have exactly one head of the Turing machine, in order to ensure that there really is a computation going on in the associated space-time diagram.
- The construction, without any computation, needs to have a very simple limit set: it needs to be computable, and in particular countable; this is to ensure that no complexity overhead will be added to any configuration containing the oracle tape, and that “unuseful” configurations of the limit set – the configurations that do not appear in a space-time diagram corresponding to a computation – will be computable. - The computation of the embedded Turing machine needs to go backwards; this is to ensure that we can have the non-determinism. And an error in the computation must ensure that there is no infinite sequence of preimages. - The computation needs to have a beginning (also to ensure the presence of a head), so the construction needs some marked beginning, and the representation of the oracle and work tapes in the construction need to disappear at this point, otherwise by compactness the part without any computation could be extended bi-infinitely to contain any member of $\cantor$, thus leading to the full set of Turing degrees. There are other constraints that we will discuss during the construction, as they arise. In order to make a construction complying with all these constraints, we reuse, with heavy modifications, an idea of @JeandelV2013:turdeg, which is to construct a sparse grid. However, their construction, being meant for subshifts, needs to be completely rethought in order to work for CAs. In particular, there was no determinism in this construction, and the oracle tape did not need to appear on a single column/row, since their result was on two-dimensional subshifts. \[construction\]The construction ================================ \[sparsegrid\]A self-vanishing sparse grid ------------------------------------------ In order to have space-time diagrams that constitute sparse grids, the idea is to have columns of squares, each of these columns containing fewer and fewer squares as we move to the left, see fig. \[butterfly:baselayer\].
The CA has three categories of states: - a *killer state*, which is a spreading state that erases anything on its path; - a *quiescent state*, represented in white in the figures; its sole purpose is to mark the spaces that are “outside” the construction; - some *construction states*, which will be constituted of signals and background colors. In order to ensure that just with the signals themselves it is not possible to encode anything non-computable in the limit set, all signals will need to have, at all points, at any time, different colors on their left and right, otherwise the local rule will have a killer state arise. Here are the main signals. - Vertical lines: serve as boundaries between columns of squares and form the left/right sides of the squares. - SW-NE and SE-NW diagonals: used to mark the corners of the squares, they are signals of respective speeds $1$ and $-1$. Each time they collide with a vertical line (except for the last square of the row), they bounce and start the converse diagonal of the next square. - Counting signal: will count the number of squares inside a column; every time it crosses the SW-NE diagonal of a square it will shift to the left. When it is superimposed to a vertical line, it means that the square is the last of its column, so when it crosses the next SE-NW diagonal, it vanishes and with it the vertical line. - Starting signals: used to start the next column to the left, at the bottom of one column. Here is how they work. - The bottommost signal, of speed $-\frac 14$, is at the boundary between the empty part of the space-time diagram and the construction. It is started $4$ time steps after the collision with the signal of speed $-\frac 13$. - The signal of speed $-\frac 13$ is started just after the vertical line sees the incoming SE-NW diagonal of the first square of the row on the right, at distance $3$[^2] (the diagonal will collide with the vertical line $2$ time steps after the start of that signal). 
- At the same time as the signal of speed $-\frac 13$ is created, a signal of speed $-\frac 12$ is generated. When this signal collides with the bottommost signal, it bounces into a signal of speed $\frac 14$ that will create the first SE-NW diagonal of the first square of the row of squares of the left, $4$ time steps after it collides with the vertical line. On top of the construction states, except on the vertical lines, we add a parity layer $\{0, 1\}$: on a configuration, two neighboring cells of the construction must have different parity bits, otherwise a killer state appears. On the left of a vertical line there has to be parity $1$ and on the right parity $0$, otherwise the killer state pops up again. This is to ensure that the columns will always contain an even number of squares. The following lemmas address which types of configurations may occur in the limit set of this CA. First note that any configuration wherein the construction states do not appear in the right order does not have a preimage. \[limitlem:squares\] The sequence of preimages of a segment ended by consecutive vertical lines (and containing none) is a slice of a column of squares of even side. Suppose a configuration contains two vertical-line symbols; then, to be in the limit set, in between these two symbols there need to be two diagonal symbols, one for the SE-NW diagonal and one for the SW-NE one, a symbol for the counting signal, and in between these signals there need to be the appropriate colors: there is only one possibility for each of them. If this is not the case, then the configuration has no preimage.
Also, the distance between the first vertical line and the SE-NW diagonal needs to be the same as the distance between the second vertical line and the SW-NE diagonal, otherwise the signals at the bottom – the ones starting a column, which are the only preimages of the first diagonals – would have, in one case, created a vertical line in between, and in the other case, not started at the same time on the right vertical. The side of the squares is even, otherwise the parity layer has no preimage. \[limitlem:distances\] A configuration of the limit set containing at least three vertical-line symbols must satisfy, for any three consecutive symbols, that if the distance between the first one and the second one is $k$, then the distance between the second one and the third one is $(k + 2)$. Let us take a configuration containing at least three vertical-line symbols, and take three consecutive ones. The states between them have to be of the right form, as we said above. Suppose the first of these symbols is at distance $k_1$ from the second one, which is at distance $k_2$ from the third one. This means that the first (resp. second) segment defines a column of squares of side $k_1$ (resp. $k_2$). It is clear that the second column of squares cannot end before the first one. Now let $i$ be the position of the counting signal of the first column and $j$ the distance between the SW-NE diagonal and the left vertical line. The preimage of the first segment ends $(k_1i + j)$ (resp. $(k_1(i - 1) + j)$) steps earlier if the counting signal is on the left (resp. right) of the SW-NE diagonal. Then, the preimages of the left and right vertical lines of this column are the creating signals. Before the signal created on the right bounces off the one of speed $-\frac 14$ created on the left, it collides with the one of speed $-\frac 13$, thus determining the height of the squares in the right column of squares. So $k_1 = k_2 - 2$.
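The lemma's arithmetic can be sanity-checked with a short script (a sketch under our own indexing, ignoring the one-cell width of the vertical lines; `column_side` and `left_vertical_position` are names we introduce, chosen to match the corner coordinates given in the next subsection):

```python
# Sketch (our notation): column i, counted from the left starting at i = 0,
# is made of squares of side 2*(i + 1), so the distance between consecutive
# vertical lines grows by 2 from each column to the next, as the lemma states.
def column_side(i):
    return 2 * (i + 1)

def left_vertical_position(i):
    # x-position of the left vertical line of column i: the total width of
    # all columns to its left; this telescopes to i*(i + 1).
    return sum(column_side(m) for m in range(i))

for i in range(1, 8):
    assert column_side(i) == column_side(i - 1) + 2
    assert left_vertical_position(i) == i * (i + 1)
```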
\[limitlem\] A configuration of the limit set containing two vertical-line symbols must satisfy one of the following statements. - It consists of a finite number of vertical lines. - It appears in the space-time diagram of fig. \[butterfly:baselayer\]. - It consists of an infinite number of vertical lines; in that case, starting from some position it is equal on the right to some (shifted) line of fig. \[butterfly:baselayer\]. We place ourselves in the case of a configuration of the limit set. Because of lemma \[limitlem:squares\], two consecutive vertical lines at distance $k$ from each other define a column of squares. In any space-time diagram they belong to, there necessarily is another column of squares on their left, because of the starting signal generated at the beginning of the left vertical line, except when $k = 3$, in which case there is nothing on the left. In this column, the vertical lines are at distance $(k - 2)$, see lemma \[limitlem:distances\]. So, if there is an infinite number of vertical lines, either the configuration is of the form of fig. \[butterfly:baselayer\], or there is some killer state coming from infinity on the left and “eating” the construction. \[computationingrid\]Backward computation inside the grid --------------------------------------------------------- We now wish to embed the computation of a reversible Turing machine inside the aforementioned sparse grid, which for this purpose is better seen as a lattice. The fact that the TM is reversible allows us to embed it backwards in the CA. We will below denote by *TM time* (resp. *CA time*) the time going forward for the Turing machine (resp. the CA); on a space-time diagram, TM time goes from top to bottom, while CA time goes from bottom to top (arrows in fig. \[computation:inbutterfly\]). That way, the beginning of the computation of the TM will occur in the first (topmost) square of the first (leftmost) column of squares.
We have to ensure that any computation of the TM is possible, and in particular ensure that such a computation is consistent over time; the idea is that at the first TM time step, the moment the sparse grid disappears, the tape is on each of the vertical line symbols, but since these all disappear a finite number of CA steps before, we have to compel all tape cells to shift to the right regularly as TM time increases. Moreover, we want to force the presence of exactly one head (there could be none if it were, for instance, infinitely far right). To do that, the grid is divided into three parts that must appear in this order (from left to right): the left of the head, the right of the head (together referred to as the computation zone) and the unreachable zone (where no computation can ever be performed), resp. in blue, yellow and green in fig. \[computation:inbutterfly\]. The vertices of our lattice are the top left corners of the squares, each one marked by the rebound of a SE-NW diagonal on a vertical line, while the top right corners will just serve as intermediate points for signals. More precisely, if we choose (arbitrarily) the top left corner of the first square of the first column to appear at site $(0, 0)$, then for any $i, j\in\NN$, the respective sites for the top left and top right corners of $s_{i, j}$, the $(j + 1)$-th square of the $(i + 1)$-th column, are the following (cf. fig. \[computation:inbutterfly\]): $$\left\{\hspace{-1mm} \begin{array}{ l@{\;}l@{\;}l} s^\ell_{i, j} &=& (i(i + 1), -2(i + 1)j)\\ s^r_{i, j} &=& ((i + 1)(i + 2), -2(i + 1)j)\text{.} \end{array}\right.$$ Fig. \[computation:mt\] illustrates a computation by the TM, with the three aforementioned zones, as it would be embedded the usual way (but with reverse time) into a CA, with site $(i, -t)$ corresponding to the content of the tape at $i\in\NN$ and TM time $t\in\NN$. Fig. 
\[computation:inter\] represents another, still simple, embedding, which is a distortion of the previous one: the head moves every even time step within a tape that is shifted every odd time step, so that instead of site $(i, -t)$, we have two sites, $(i + t, -2t)$ and $(i + t, -2t - 1)$, resp. the *computation site* (big circle on fig. \[computation:inter\]) and the *shifting site* (small circle on fig. \[computation:inter\]). The head only reads the content of the tape when it lies on a computation site. This type of embedding can easily be realized forwards or backwards (provided the TM is reversible). Our embedding, derived from the latter, is drawn on fig. \[computation:inbutterfly\]. The “only” difference is the replacement of sites $(i + t, -2t)$ and $(i + t, -2t - 1)$ by sites $s^\ell_{i, t}$ and $s^\ell_{i, t + 1}$. Notice that as the number of squares in a column is always finite, each square can “know” whether its top left corner is a computation or a shifting site with a parity bit. More precisely, the $j$-th square (from bottom to top) of a column has a computation site on its top left if and only if $j$ is even. Let $s_{i, j}$ be a square of our construction. $s^\ell_{i, j}$ is either a computation site or a shifting site. In the latter case, it is supposed to receive the content of a cell of the TM tape with an incoming signal of speed $-1$. All it has to do is to send it to $s^\ell_{i, j - 1}$ (at speed $0$), which is a computation site. In the former case, however, things are slightly more complicated. The content of the tape has to be transmitted to $s^\ell_{i - 1, j - 1}$ (which is a shifting site). To do that, a signal of speed $0$ is sent and waits for site $s^r_{i - 1, j}$, which sends the content to $s^\ell_{i - 1, j - 1}$ with a signal of speed $-1$ along the SE-NW diagonal. The problem is to recognize which $s^r$ site is the correct one.
Fortunately, there are only two possibilities: it is either the first or the second $s^r$ site to appear after (in CA time, of course) $s^\ell_{i, j}$ on the vertical line. The first case corresponds exactly to the unreachable zone (where $j\leq i$), hence the result if the three zones are marked. The lack of other cases is due to the number of $s_i$ squares, which is only $2(i + 1)$. Another issue is the superposition of such signals. Here again, there are only two cases: in the unreachable zone there is none, whereas in the computation zone a signal of speed $0$ from a computation site can be superimposed on the signal of speed $0$ sent by the shifting site just above it. As said above, there is no other case because of the limited number of $s_i$ squares. Thus, there is no problem in keeping the number of states of the CA finite, since the number of signals going through a same cell is limited to two at the same time. While the two parts of the computation zone are separated by the presence of a head, the unreachable zone is at the right of a signal that is sent from any computation site that has two diagonals (one from the left and one from the right) below it (indicated as circles on fig. \[butterfly:baselayer\]), goes at speed $0$ until the next $s^r$ site, then at speed $1$ (along SW-NE diagonals) to the second next shifting site, and finally at speed $0$ again, to the next computation site (cf. fig. \[computation:inbutterfly\]), which also has two diagonals below it if the grid contains no error. Another way to detect the unreachable zone is to detect that the counting signal crosses the SW-NE diagonal exactly two CA time steps after it has crossed the SE-NW diagonal. This means that the unreachable zone is structurally coded in the construction. Now only the movements of the head remain to be described (in black on fig. \[computation:inbutterfly\]). Let $s^\ell_{i, j}$ be a computation site containing the head.
- If the previous move of the head (previous because we are in CA time, that is, in reverse TM time) was to the left, the next computation site is the one just above, that is, $s^\ell_{i, j - 2}$. The head is thus transferred by a simple signal of speed $0$. - If the previous move was to stand still, the next computation site is $s^\ell_{i - 1, j - 2}$. It can be reached by a signal of speed $0$ until the second next $s^r$ site, from which a signal of speed $-1$ (along a SE-NW diagonal) is launched, to be replaced by another signal of speed $0$ from $s^\ell_{i - 1, j - 1}$ on. - If the previous move was to the right, the next computation site is $s^\ell_{i - 2, j - 2}$. It can be reached by a signal of speed $0$ until the second next $s^r$ site, from which a signal of speed $-1$ (along a SE-NW diagonal) is launched, to be replaced by another signal of speed $0$ from $s^\ell_{i - 1, j - 1}$ on, which itself waits for the next $s^r$ site (which is $s^r_{i - 2, j}$) to start another signal of speed $1$ (along a SW-NE diagonal) that is finally succeeded by a last signal of speed $0$ from $s^\ell_{i - 2, j - 1}$ on. \ \[hooper\]The computation itself -------------------------------- As we said before, the computation will take place on the computation sites, which will contain two kinds of tape cells: one for the oracle and one for the work tape. In the unreachable zone there are only oracle cells, which do not change over time except for the shifting. Now we want to eliminate all space-time diagrams corresponding to rejecting computations of some Turing machine $M$. @Ben1973 has proved that for any Turing machine, we can construct a reversible one computing the same function.
So a first idea would just be to encode this reversible Turing machine in the sparse grid; however, there is no way to guarantee that the work tape that was non-deterministically inherited from the past corresponds to a valid configuration, and by the time the Turing machine “realizes” this it will be too late: there will already exist configurations containing some oracle that we would otherwise have rejected. The solution to this problem is to use a robust Turing machine in the sense of @Hoo1966, that is to say a Turing machine that regularly rechecks its whole computation. @KO2008 have constructed such reversible machines. In these constructions the machines worked on a bi-infinite tape, which had the drawback that some infinite side of the tape might not be checked; here this is not the case, hence we can modify the machine so that on an infinite computation it visits all cells of the tape (we omit the details for brevity’s sake). In terms of limit sets, this means that if some oracle is rejected by the machine, then it must have been rejected an infinite number of times in the past (CA time). So, only oracles pertaining to the desired class may appear in the limit set. Furthermore, even if some killer state coming from the right eats the grid, at some point in the past of the CA it will be in the unreachable zone, and stay there forever, so the computation from that moment on still ensures that the oracle computed is correct. This does not matter much anyway, because in this case the configurations of the corresponding space-time diagram that are in the limit set are uniform both on the right and on the left except for a finite part in the middle, and are hence computable. \[CB\]Cantor-Bendixson rank of limit sets ========================================= The *Cantor-Bendixson derivative* of some set $S\subseteq \Sigma^\ZZ$, with $\Sigma$ finite, is denoted by $\CBd{S}$ and consists of all configurations of $S$ except the isolated ones.
A configuration $\cacf c$ is said to be *isolated* if there exists a pattern $P$ such that $\cacf c$ is the only configuration of $S$ containing $P$ (up to a shift). For any ordinal $\lambda$ we can define $\CBdn{S}{\lambda}$, the Cantor-Bendixson derivative of rank $\lambda$, inductively: $$\begin{array}{l@{\;\;}c@{\;\;\;}l} \CBdn{S}{0} &=& S\\ \CBdn{S}{\lambda + 1} &=& \CBd{\CBdn{S}{\lambda}}\\ \CBdn{S}{\lambda} &=& \displaystyle{\bigcap_{\gamma<\lambda}}\CBdn{S}{\gamma}\quad\text{for $\lambda$ a limit ordinal.} \end{array}$$ The *Cantor-Bendixson rank* of $S$, denoted by $\CB{S}$, is defined as the first ordinal $\lambda$ such that $\CBdn{S}{\lambda + 1} = \CBdn{S}{\lambda}$. In particular, when $S$ is countable, $\CBdn{S}{\CB{S}}$ is empty. An element $s$ is of rank $\lambda$ in $S$ if $\lambda$ is the least ordinal such that $s\notin\CBdn{S}{\lambda}$. For more information about the Cantor-Bendixson rank, see [@Kechris]. The Cantor-Bendixson rank corresponds to the height of a configuration with respect to a preorder on patterns, as noted by @BDJ2008. Thus, it gives some information on the way the limit set is structured pattern-wise. A straightforward corollary of the construction above is the following. \[CBrank\] There exists a constant $c\leq 10$ such that for any class $S$, there exists a CA $\ca A$ such that $$\CB{\limitset{\ca A}} = \CB{S}+c\text{.}$$ Here the constant corresponds to the pattern overhead brought by the sparse-grid construction. Acknowledgments {#sec:Acknowledgments .unnumbered} =============== This work was sponsored by grants EQINOCS ANR 11 BS02 004 03 and TARMAC ANR 12 BS02 007 01. The authors would like to thank Nicolas Ollinger and Bastien Le Gloannec for some useful discussions. [^1]: Note that this is not the case for subshifts: there exist non-empty subshifts containing only non-computable points. [^2]: That can be done, provided the radius of the CA is large enough.
--- abstract: | We report the first measurement of the [(e,e’p) $(e,e'p)$ ]{}three-body breakup reaction cross sections in Helium-3 ([\^3[He]{} $^3$He]{}) and Tritium ([\^3[H]{} $^3$H]{}) at large momentum transfer ($\langle Q^2 \rangle \approx 1.9$ (GeV/c)$^2$) and $x_B>1$ kinematics, covering a missing momentum range of $40 \le {\ifmmode p_{miss} \else $p_{miss}$\fi}\le \SI{500}{\mega\eVperc}$. The measured cross sections are compared with different plane-wave impulse approximation (PWIA) calculations, as well as a generalized Eikonal-Approximation-based calculation that includes the final-state interaction (FSI) of the struck nucleon. Overall good agreement is observed between data and Faddeev-formulation-based PWIA calculations for the full [p\_[miss]{} $p_{miss}$]{} range for [\^3[H]{} $^3$H]{} and for $150 \le {\ifmmode p_{miss} \else $p_{miss}$\fi}\le \SI{350}{\mega\eVperc}$ for [\^3[He]{} $^3$He]{}. This is a significant improvement over previous studies at lower $Q^2$ and $x_B \sim 1$ kinematics where PWIA calculations differ from the data by up to 400%. For ${\ifmmode p_{miss} \else $p_{miss}$\fi}\ge 250$ MeV/c, the inclusion of FSI makes the calculation agree with the data to within about $10\%$. For both nuclei PWIA calculations that are based on off-shell electron-nucleon cross-sections and exact three-body spectral functions overestimate the cross-section by about $60\%$ but well reproduce its [p\_[miss]{} $p_{miss}$]{} dependence. These data are a crucial benchmark for few-body nuclear theory and are an essential test of theoretical calculations used in the study of heavier nuclear systems. author: - 'R. Cruz-Torres' - 'D. Nguyen' - 'F. Hauenstein' - 'A. Schmidt' - 'S. Li' - 'D. Abrams' - 'H. Albataineh' - 'S. Alsalmi' - 'D. Androic' - 'K. Aniol' - 'W. Armstrong' - 'J. Arrington' - 'H. Atac' - 'T. Averett' - 'C. Ayerbe Gayoso' - 'X. Bai' - 'J. Bane' - 'S. Barcus' - 'A. Beck' - 'V. Bellini' - 'F. Benmokhtar' - 'H. Bhatt' - 'D. Bhetuwal' - 'D. 
Biswas' - 'D. Blyth' - 'W. Boeglin' - 'D. Bulumulla' - 'A. Camsonne' - 'J. Castellanos' - 'J-P. Chen' - 'E. O. Cohen' - 'S. Covrig' - 'K. Craycraft' - 'B. Dongwi' - 'M. Duer' - 'B. Duran' - 'D. Dutta' - 'E. Fuchey' - 'C. Gal' - 'T. N. Gautam' - 'S. Gilad' - 'K. Gnanvo' - 'T. Gogami' - 'J. Golak' - 'J. Gomez' - 'C. Gu' - 'A. Habarakada' - 'T. Hague' - 'O. Hansen' - 'M. Hattawy' - 'O. Hen' - 'D. W. Higinbotham' - 'E. Hughes' - 'C. Hyde' - 'H. Ibrahim' - 'S. Jian' - 'S. Joosten' - 'H. Kamada' - 'A. Karki' - 'B. Karki' - 'A. T. Katramatou' - 'C. Keppel' - 'M. Khachatryan' - 'V. Khachatryan' - 'A. Khanal' - 'D. King' - 'P. King' - 'I. Korover' - 'T. Kutz' - 'N. Lashley-Colthirst' - 'G. Laskaris' - 'W. Li' - 'H. Liu' - 'N. Liyanage' - 'P. Markowitz' - 'R. E. McClellan' - 'D. Meekins' - 'S. Mey-Tal Beck' - 'Z-E. Meziani' - 'R. Michaels' - 'M. Mihovilovič' - 'V. Nelyubin' - 'N. Nuruzzaman' - 'M. Nycz' - 'R. Obrecht' - 'M. Olson' - 'L. Ou' - 'V. Owen' - 'B. Pandey' - 'V. Pandey' - 'A. Papadopoulou' - 'S. Park' - 'M. Patsyuk' - 'S. Paul' - 'G. G. Petratos' - 'E. Piasetzky' - 'R. Pomatsalyuk' - 'S. Premathilake' - 'A. J. R. Puckett' - 'V. Punjabi' - 'R. Ransome' - 'M. N. H. Rashad' - 'P. E. Reimer' - 'S. Riordan' - 'J. Roche' - 'M. Sargsian' - 'N. Santiesteban' - 'B. Sawatzky' - 'E. P. Segarra' - 'B. Schmookler' - 'A. Shahinyan' - 'S. Širca' - 'R. Skibiński' - 'N. Sparveris' - 'T. Su' - 'R. Suleiman' - 'H. Szumila-Vance' - 'A. S. Tadepalli' - 'L. Tang' - 'W. Tireman' - 'K. Topolnicki' - 'F. Tortorici' - 'G. Urciuoli' - 'L.B. Weinstein' - 'H. Wita[ł]{}a' - 'B. Wojtsekhowski' - 'S. Wood' - 'Z. H. Ye' - 'Z. Y. Ye' - 'J. 
Zhang' bibliography: - 'TritiumBib2.bib' date: today title: 'Probing few-body nuclear dynamics via $^3$H and $^3$He pn cross-section measurements' --- [^1] [^2] Understanding the structure and properties of nuclear systems is a formidable challenge with implications ranging from the formation of elements in the universe to their application in laboratory measurements of fundamental interactions. Due to the complexity of the strong nuclear interaction, nuclear systems are often described using effective models that are based on various levels of approximations. Testing and benchmarking such approximations is a high priority of modern nuclear physics research. The three nucleon system plays a special role in this endeavor as its ground state is complex but still exactly calculable. Therefore studies of Helium-3 ([\^3[He]{} $^3$He]{}) and Tritium ([\^3[H]{} $^3$H]{}) nuclei, especially using electron-scattering reactions, serve as a precision test of modern nuclear theory [@Golak:2005iy]. While there is a lot of electron scattering data on [\^3[He]{} $^3$He]{} [@Sick:2001rh; @Benmokhtar:2004fs; @Rvachev:2004yr; @Long:2019iig; @Mihovilovic:2014gdi; @Mihovilovic:2018fux; @Zhang:2015kna; @Camsonne:2016ged; @Riordan:2010id], [\^3[H]{} $^3$H]{} data are very sparse due to the safety limitations associated with placing a radioactive gas target in a high-current electron beam. In the early 60’s the Stanford Linear Accelerator Center (SLAC) measured [\^3[He]{} $^3$He]{} and [\^3[H]{} $^3$H]{} $(e,e')$ and $(e,e'p)$ to extract their elastic form factors and to test theoretical models of the three-nucleon wave functions [@Collard:1963zza; @Schiff:1963zzb; @PhysRev.136.B1030; @RevModPhys.37.402]. In the late 80’s MIT-Bates and Saclay extended the $(e,e')$ measurements to higher momentum transfer with improved accuracy [@Dow:1986auc; @Beck:1986rhj; @Beck:1987zz; @Dow:1988rk; @Juster:1985sd; @Amroun:1994qj]. 
However, despite significant theoretical advances, no new electron scattering data on [\^3[H]{} $^3$H]{} have been published in over 30 years. Here we study the distributions of protons in [\^3[He]{} $^3$He]{} and in [\^3[H]{} $^3$H]{} using high-energy quasi-elastic (QE) electron scattering. The simultaneous measurement of both [\^3[He]{} $^3$He]{} and [\^3[H]{} $^3$H]{}[(e,e’p) $(e,e'p)$ ]{} cross sections places stringent constraints on the possible contribution of non-QE reaction mechanisms to our measurement, thereby increasing its sensitivity to the properties of the [\^3[He]{} $^3$He]{} and [\^3[H]{} $^3$H]{} ground states. This work follows a recent extraction of the [\^3[He]{} $^3$He]{}[(e,e’p) $(e,e'p)$ ]{}to [\^3[H]{} $^3$H]{}[(e,e’p) $(e,e'p)$ ]{}cross-section ratio [@Cruz-Torres:2019bqw]. The measured cross-section ratio was expected to be largely insensitive to non-QE reaction mechanisms and thereby to test calculations of the ratio of proton momentum distributions in the measured nuclei. The results agreed with theoretical calculations for reconstructed initial proton momenta below $250$ MeV/c. However, the theoretical calculations underpredicted the measured ratio by 20% - 50% for momenta between $250$ and $550$ MeV/c. Therefore, the individual [\^3[He]{} $^3$He]{} and [\^3[H]{} $^3$H]{}[(e,e’p) $(e,e'p)$ ]{}cross-sections were needed to understand whether the observed disagreement arose from contributions of non-QE reaction mechanisms that do not cancel in the measured ratio or from deficiencies in the wave function calculations of either nucleus. The results of this study are reported herein. We find that our cross sections are better described by PWIA calculations than in previous works, that [\^3[H]{} $^3$H]{} is better described than [\^3[He]{} $^3$He]{}, and that including leading-nucleon rescattering further improves the agreement with theory.
The remaining difference between data and theory is opposite for [\^3[He]{} $^3$He]{} and [\^3[H]{} $^3$H]{}, leading to the previously observed large discrepancy in [\^3[He]{} $^3$He]{}/[\^3[H]{} $^3$H]{} cross-section ratio that might be explained by charge exchange processes. The experiment took place in 2018 at Hall A of the Thomas Jefferson National Accelerator Facility (JLab). It used the two high-resolution spectrometers (HRSs) [@Alcorn:2004sb] and a 20 $\mu A$ electron beam at 4.326 GeV incident on one of four identical 25-cm long gas target cells filled with Hydrogen ($70.8 \pm 0.4$ mg/cm$^2$), Deuterium ($142.2 \pm 0.8$ mg/cm$^2$), Helium-3 ($53.4 \pm 0.6$ mg/cm$^2$), and Tritium ($85.1 \pm 0.8$ mg/cm$^2$) [@Santiesteban:2018qwi]. Each HRS consisted of three quadrupole magnets for focusing and one dipole magnet for momentum analysis [@Alcorn:2004sb; @HRSupdates]. These magnets were followed by a detector package, slightly updated with respect to the one in Ref. [@Alcorn:2004sb], consisting of a pair of vertical drift chambers used for tracking, and two scintillation counter planes that provided timing and trigger signals. A CO$_{2}$ Cherenkov detector placed between the scintillators and a lead-glass calorimeter placed after them were used for particle identification. Scattered electrons were detected in the left-HRS, positioned at central momentum and angle of $\vec p_e {\!'} = 3.543$ GeV/c and $\theta_e = 20.88^{\circ}$, giving a central four-momentum transfer $Q^2 =\vec q\thinspace^2 -\omega^2 = 2.0$ (GeV/c)$^2$ (where the momentum transfer is $\vec q = \vec p_e - \vec p_e {\!'}$), energy transfer $\omega = 0.78$ GeV, and $x_{B} \equiv \frac{Q^2}{2m_p\omega} = 1.4$ (where $m_p$ is the proton mass). 
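These central values are mutually consistent; a short script (ours, neglecting the electron mass) reproduces the quoted $Q^2$, $\omega$, and $x_B$ from the beam energy and the scattered-electron momentum and angle:

```python
import math

# Quick consistency check (ours) of the quoted central kinematics.
E_beam = 4.326        # GeV, beam energy
p_e_prime = 3.543     # GeV/c, central left-HRS momentum
theta_e = math.radians(20.88)
m_p = 0.9383          # GeV/c^2, proton mass

omega = E_beam - p_e_prime                           # energy transfer
q2 = E_beam**2 + p_e_prime**2 - 2 * E_beam * p_e_prime * math.cos(theta_e)
Q2 = q2 - omega**2                                   # (GeV/c)^2
xB = Q2 / (2 * m_p * omega)

assert abs(Q2 - 2.0) < 0.05      # quoted central Q^2 = 2.0 (GeV/c)^2
assert abs(omega - 0.78) < 0.01  # quoted omega = 0.78 GeV
assert abs(xB - 1.4) < 0.05      # quoted x_B = 1.4
```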
Knocked-out protons were detected in the right-HRS at two central kinematical settings of $(\theta_p, p_p)$ = ($48.82^{\circ}$, 1.481 GeV/c), and ($58.50^{\circ}$, 1.246 GeV/c) corresponding to low-[p\_[miss]{} $p_{miss}$]{} ($40 \le {\ifmmode p_{miss} \else $p_{miss}$\fi}\le 250$ MeV/c) and high-[p\_[miss]{} $p_{miss}$]{} ($250 \le {\ifmmode p_{miss} \else $p_{miss}$\fi}\le 500$ MeV/c), respectively, where $\vec p_{miss} = {\vec p}_p - \vec q$. The exact electron kinematics for each [p\_[miss]{} $p_{miss}$]{}bin varied within the spectrometer acceptance, see supplementary materials Tables III-VI for details. In the Plane-Wave Impulse Approximation (PWIA) for QE scattering, where a single exchanged photon is absorbed on a single proton and the knocked-out proton does not re-interact as it leaves the nucleus, the missing momentum and energy equal the initial momentum and separation energy of the knocked-out nucleon: $\vec{p}_i = \vec{p}_{miss}$, $E_i = E_{miss}$, where $E_{miss} = \omega - T_p - T_{A-1}$, $T_{A-1} = (\omega + m_A - E_p) - \sqrt{(\omega + m_A - E_p)^2-|\vec{p}_{miss}|^2}$ is the reconstructed kinetic energy of the residual $A-1$ system. $T_p$ and $E_p$ are the measured kinetic and total energies of the outgoing proton. Non-QE reaction mechanisms that lead to the same measured final state also contribute to the cross section, complicating this simple picture. Such mechanisms include rescattering of the struck nucleon (final-state interactions or FSI), meson-exchange currents (MEC), and exciting isobar configurations (IC). In addition, relativistic effects can be significant [@gao00; @udias99; @AlvarezRodriguez:2010nb]. The kinematics of our measurement were chosen to reduce contributions from such non-QE reaction mechanisms. 
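To make these definitions concrete, the sketch below (ours; masses in GeV taken from standard tables) feeds a synthetic $^3$He two-body breakup event, in which the residual deuteron recoils with momentum $-\vec p_{miss}$, through the reconstruction. The recovered $E_{miss}$ sits at $m_p + m_d - m_{^3\mathrm{He}} \approx 5.5$ MeV for any $p_{miss}$, i.e. at the two-body breakup peak:

```python
import math

# Sketch (ours): reconstruct E_miss for a synthetic 3He two-body breakup event.
m_p, m_d, m_He3 = 0.938272, 1.875613, 2.808391  # GeV/c^2

def reconstruct_E_miss(omega, T_p, p_miss, m_A):
    # E_miss = omega - T_p - T_{A-1}, with T_{A-1} as defined in the text
    E_p = T_p + m_p
    E_res = omega + m_A - E_p
    T_res = E_res - math.sqrt(E_res**2 - p_miss**2)
    return omega - T_p - T_res

omega = 0.78  # GeV, central energy transfer
for p_miss in (0.05, 0.20, 0.45):  # GeV/c
    # energy conservation, omega + m_He3 = E_p + E_d, fixes the proton energy
    E_d = math.sqrt(m_d**2 + p_miss**2)
    T_p = (omega + m_He3 - E_d) - m_p
    E_miss = reconstruct_E_miss(omega, T_p, p_miss, m_He3)
    # lands on the two-body breakup peak, independent of p_miss
    assert abs(E_miss - (m_p + m_d - m_He3)) < 1e-12
```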
For high-$Q^2$ reactions, the effects of FSI were shown to be reduced by choosing kinematics where the angle between $\vec{p}_{recoil}=-\vec{p}_{miss}$ and $\vec{q}$ is $\theta_{rq} \lesssim 40^{\circ}$, which also corresponds to $x_B \ge 1$  [@Boeglin:2011mt; @Sargsian:2001ax; @Frankfurt:1996xx; @Jeschonnek:2008zg; @Laget:2004sm; @Sargsian:2009hf; @Hen:2014gna]. Additionally, MEC and IC were shown to be suppressed for $Q^2 > 1.5$ (GeV/c)$^2$ and $x_B > 1$ [@Sargsian:2001ax; @Sargsian:2002wc]. ![The number of [\^3[He]{} $^3$He]{}[(e,e’p) $(e,e'p)$ ]{}events as a function of [E\_[miss]{} $E_{miss}$]{} vs [p\_[miss]{} $p_{miss}$]{}. The solid purple line separates the high- and low-[p\_[miss]{} $p_{miss}$]{} kinematics. The dashed horizontal line labeled ‘2-body’ marks the 5-MeV two-body breakup peak and the dashed line labeled ‘Standing pair’ shows the expected [E\_[miss]{} $E_{miss}$]{}-[p\_[miss]{} $p_{miss}$]{} correlation for scattering off a standing SRC pair. []{data-label="fig:kinematical"}](EmPmKinematics.pdf) ![image](XSection.pdf) The raw data analysis follows that previously reported in Ref. [@Cruz-Torres:2019bqw] for the [\^3[He]{} $^3$He]{}/[\^3[H]{} $^3$H]{} $(e,e'p)$ cross-section ratio extraction. We selected electrons by requiring that the particle deposits more than half of its energy in the calorimeter: $E_{cal} / |\vec{p}| > 0.5$. We selected [(e,e’p) $(e,e'p)$ ]{}coincidence events by placing $\pm3\sigma$ cuts around the relative electron and proton event times and the relative electron and proton reconstructed target vertices (corresponding to a $\pm 1.2$ cm cut). Due to the low experimental luminosity, the random coincidence event rate was negligible. We discarded a small number of runs with anomalous event rates. Measured electrons were required to originate within the central $\pm 9$ cm of the gas target to exclude events originating from the target walls.
By measuring scattering from an empty-cell-like target we determined that the target cell wall contribution to the measured [(e,e’p) $(e,e'p)$ ]{}event yield was negligible ($\ll 1\%$). To avoid the acceptance edges of the spectrometer, we only analyzed events that were detected within $\pm 4\%$ of the central spectrometer momentum, and $\pm \SI{27.5}{\milli\radian}$ in in-plane angle and $\pm \SI{55.0}{\milli\radian}$ in out-of-plane angle relative to the center of the spectrometer acceptance. We further restricted the measurement phase-space by requiring $\theta_{rq} < 37.5^{\circ}$ to minimize the effect of FSI and, in the high-[p\_[miss]{} $p_{miss}$]{} kinematics, $x_B > 1.3$ to further suppress non-QE events. The spectrometers were calibrated using sieve slit measurements to define scattering angles and by measuring the kinematically over-constrained exclusive $^1$H[(e,e’p) $(e,e'p)$ ]{}and $^2$H$(e,e'p)n$ reactions. The $^1$H[(e,e’p) $(e,e'p)$ ]{}reaction [p\_[miss]{} $p_{miss}$]{} resolution was better than 9 MeV/$c$. We verified the absolute luminosity normalization by comparing the measured elastic $^1$H[(e,e’) $(e,e')$ ]{}yield to a parametrization of the world data [@Lomon:2006xb]. We also found excellent agreement between the elastic $^1$H[(e,e’p) $(e,e'p)$ ]{}and $^1$H[(e,e’) $(e,e')$ ]{}rates, confirming that the coincidence trigger performed efficiently. One significant difference between [\^3[He]{} $^3$He]{}[(e,e’p) $(e,e'p)$ ]{} and [\^3[H]{} $^3$H]{}[(e,e’p) $(e,e'p)$ ]{} stems from their possible final states. The [\^3[H]{} $^3$H]{}[(e,e’p) $(e,e'p)$ ]{} reaction can only result in a three-body $pnn$ continuum state, while [\^3[He]{} $^3$He]{} can breakup into either a two-body $pd$ state or a three-body $ppn$ continuum state. 
To allow for a more detailed comparison of the two nuclei we only considered three-body breakup reactions by requiring ${\ifmmode E_{miss} \else $E_{miss}$\fi}> 8$ MeV (i.e., above the [\^3[He]{} $^3$He]{} two-body breakup peak). See online supplementary materials for details. Figure \[fig:kinematical\] shows the measured distribution of [\^3[He]{} $^3$He]{} [(e,e’p) $(e,e'p)$ ]{} events as a function of [E\_[miss]{} $E_{miss}$]{} and [p\_[miss]{} $p_{miss}$]{}. The [\^3[H]{} $^3$H]{} and [\^3[He]{} $^3$He]{} distributions are similar with the exception that [\^3[He]{} $^3$He]{} has more strength at low [E\_[miss]{} $E_{miss}$]{} due to the two-body breakup channel. At high-[p\_[miss]{} $p_{miss}$]{} ($ \gtrsim 250$ MeV/$c$) nucleons are expected to be predominantly in the form of high relative-momentum two-nucleon Short-Range Correlated (SRC) pairs [@Hen:2016kwk; @Atti:2015eda; @piasetzky06; @subedi08; @korover14; @Hen:2014nza; @Cohen:2018gzh; @Duer:2018sby; @Duer:2018sxh]. Neglecting pair center-of-mass motion, the missing energy of such SRC pairs should be determined by their momentum such that $E_{miss} \approx m_p - m_A + \sqrt{ \Big(m_A-m_d+\sqrt{p_{miss}^2 + m_p^2} \ \Big)^2 - p_{miss}^2}$. This correlation is shown in Fig. \[fig:kinematical\] by the dashed line labeled ‘Standing pair’. Our kinematics are largely centered around this curve. 
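The ‘Standing pair’ curve follows directly from this expression; a short sketch (ours, with the $^3$He mass for $m_A$) shows that it starts at $2m_p - m_d \approx 0.9$ MeV at $p_{miss} = 0$ and rises monotonically across the measured $p_{miss}$ range:

```python
import math

# Sketch (ours) of the 'Standing pair' E_miss(p_miss) curve; masses in GeV.
m_p, m_d, m_A = 0.938272, 1.875613, 2.808391  # m_A: 3He mass

def E_miss_standing_pair(p_miss):
    E_N = math.sqrt(p_miss**2 + m_p**2)
    return m_p - m_A + math.sqrt((m_A - m_d + E_N)**2 - p_miss**2)

# evaluate over the measured range, 0 to 500 MeV/c
curve = [E_miss_standing_pair(p / 1000.0) for p in range(0, 501, 50)]
assert abs(curve[0] - (2 * m_p - m_d)) < 1e-12  # reduces to 2 m_p - m_d at rest
assert all(a < b for a, b in zip(curve, curve[1:]))  # monotonically rising
```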
The cross-section was calculated from the $(e,e'p)$ event yield in a given $(p_{miss}, E_{miss})$ bin as: $$ \frac{d^{6}\sigma(p_{miss}, E_{miss})}{dE_{e}dE_{p}d\Omega_{e}d\Omega_p} = \frac{Yield(p_{miss}, E_{miss})}{C \cdot t \cdot (\rho/A) \cdot b \cdot V_B \cdot C_{Rad} \cdot C_{BM}}, \label{Eq:xsection}$$ where $C$ is the total accumulated beam charge, $t$ is the live-time fraction in which the detectors are able to collect data, $A=3$ is the target atomic mass, $\rho$ is the nominal areal density of the gas in the target cell, and $b$ is a correction factor that accounts for changes in the target density caused by local beam heating. $b$ was determined by measuring the beam-current dependence of the inclusive event yield [@Santiesteban:2018qwi]. $V_B$ is a factor that accounts for the detection phase space and acceptance correction for the given $(p_{miss}, E_{miss})$ bin, and $C_{Rad}$ and $C_{BM}$ are the radiative and bin-migration corrections, respectively. The $^3$H event yield was also corrected for the radioactive decay of $2.78 \pm 0.18\%$ of the target $^3$H nuclei to $^3$He in the six months since the target was filled. See online supplementary materials for details. We used the SIMC [@Simc] spectrometer simulation package to simulate our experiment, to calculate the $V_B$, $C_{Rad}$ and $C_{BM}$ terms in Eq. \[Eq:xsection\], and to compare the measured cross-section with theoretical calculations. SIMC generates $(e,e'p)$ events, including radiation effects, over a wide phase space, propagates the generated events through a spectrometer model to account for acceptance and resolution effects, and then weights each accepted event by a model cross-section evaluated at the original kinematics of that specific event. The weighted events are subsequently analyzed in the same way as the data and can be used to compare the data with different model cross-section predictions. 
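The normalization in Eq. \[Eq:xsection\] is a simple product of factors; a minimal sketch (function and argument names are hypothetical, and the optional decay factor stands in for the $^3$H $\to$ $^3$He correction described above):

```python
def xsection_bin(yield_counts, charge, live_time, rho_over_A,
                 b, V_B, C_rad, C_bm, f_decay=1.0):
    """Normalize the (e,e'p) event yield of one (p_miss, E_miss) bin,
    following the structure of Eq. (1). `f_decay` is an optional
    multiplicative correction, e.g. for target 3H decay."""
    return yield_counts * f_decay / (
        charge * live_time * rho_over_A * b * V_B * C_rad * C_bm)
```

All correction factors enter multiplicatively, so their relative uncertainties combine in quadrature in the final cross-section uncertainty.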
We considered two PWIA cross-section models: (1) Faddeev-formulation-based calculations by J. Golak et al. [@CARASCO200341; @BERMUTH2003199; @Golak:2005iy] that either include or exclude the continuum interaction between the two spectator nucleons (FSI$_{23}$), labeled Cracow and Cracow-PW respectively, and (2) a factorized calculation using the $^3$He spectral function of C. Ciofi degli Atti and L. P. Kaptari including FSI$_{23}$ [@CiofidegliAtti:2004jg] and the $\sigma_{cc1}$ electron off-shell nucleon cross-section [@DeForest:1983ahx], labeled CK+$CC1$. Due to the lack of $^3$H proton spectral functions, we assumed isospin symmetry and used the $^3$He neutron spectral function for the $^3$H$(e,e'p)$ simulation. In addition, the Cracow calculation used the CD-Bonn nucleon-nucleon potential [@Machleidt:2000ge] while CK used AV18 [@Wiringa:1994wb]. To make consistent comparisons within this work, we rescaled the CK calculation for each nucleus by the ratio of the proton momentum distribution obtained with CD-Bonn relative to that obtained with AV18, based on calculations in Ref. [@Marcucci:2018llz]. See online supplementary materials for details. We corrected the $^3$He and $^3$H cross-sections for radiation and bin-migration effects using SIMC and the CK+$CC1$ cross-section model. Due to the excellent resolution of the HRS, bin-migration effects were very small. Radiation effects were also small for $^3$H ($\lesssim 20\%$), but significant for $^3$He at low $p_{miss}$ due to two-body breakup events that reconstructed to $E_{miss} > 8$ MeV because of radiation. 
Since the cross section at high $E_{miss}$ is dominated by radiative effects, we required $E_{miss} < 50$ and $80$ MeV for the low- and high-$p_{miss}$ kinematics, respectively. See online supplementary materials for details. We then integrated the two-dimensional experimental and theoretical cross sections, $\sigma(p_{miss}, E_{miss})$, over $E_{miss}$ to obtain the cross sections as a function of $p_{miss}$. To facilitate comparison with future theoretical calculations, we bin-centered the resulting cross-sections using the ratio of the point theoretical cross section to the acceptance-averaged theoretical cross section. We calculated the point theoretical cross section by summing the cross section, evaluated at the central ($\langle Q^2 \rangle, \langle x_B \rangle$) values, over the seven $E_{miss}$ bins for that $p_{miss}$, as follows: $$\sigma_{point}(p_{miss}) = \sum_{j=1}^{N}\sigma(\langle Q^2\rangle ^j, \langle x_B\rangle ^j, p_{miss}, E_{miss}^j) \times \Delta E_{miss}^{j}, \label{eq:pointsigma}$$ where $j$ labels the $E_{miss}$ bin and $\Delta E_{miss}^{j}$ is the bin width. We used both the Cracow and CK+$CC1$ cross-section models for this calculation, taking their average as the correction factor and their difference as a measure of its uncertainty. Future calculations can be compared directly with our data by evaluating the cross section at a small number of points and using Eq. \[eq:pointsigma\], rather than by computationally intensive integration over the spectrometer acceptances. See online supplementary materials for details. 
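The point cross section and the bin-centering factor derived from it can be sketched as follows (a simplified illustration; `sigma` stands for any model cross section, and the names are hypothetical):

```python
def point_xsection(sigma, q2, xb, p_miss, emiss_bins):
    """Point theoretical cross section at one p_miss value, following
    Eq. (2): sum the model cross section sigma(Q2, xB, p_miss, E_miss)
    over the E_miss bins, given as (bin_center, bin_width) pairs."""
    return sum(sigma(q2, xb, p_miss, e) * de for (e, de) in emiss_bins)

def bin_centering_factor(sigma, q2, xb, p_miss, emiss_bins, sigma_acc_avg):
    """Ratio of the point cross section to the acceptance-averaged
    cross section, used to bin-center the measured values."""
    return point_xsection(sigma, q2, xb, p_miss, emiss_bins) / sigma_acc_avg
```

A future calculation would only need to evaluate `sigma` at the seven bin centers per $p_{miss}$ point, rather than integrate over the full acceptance.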
The point-to-point systematic uncertainties due to the event-selection criteria (momentum and angular acceptances, and the $\theta_{rq}$ and $x_B$ limits) were determined by repeating the analysis 100 times, selecting each criterion randomly within reasonable limits for each iteration. The systematic uncertainty was taken to be the standard deviation of the resulting cross-section distribution. These uncertainties range from 1% to 8% and are typically much smaller than the statistical uncertainties. Additional point-to-point systematic uncertainties are due to bin-migration, bin-centering and radiative corrections and range between 0.5% and 3.5%. See online supplementary materials Tables VIII and IX for details. The overall normalization uncertainty of our measurement equals $2\%$ and is due to uncertainties in the target density (1.5%), beam-charge measurement run-by-run stability ($1\%$), the tritium decay correction (0.15%), and the spectrometer detection and trigger efficiencies (1%). For completeness we also used SIMC to calculate the acceptance-averaged cross sections using both the Cracow and CK+$CC1$ cross-section models and compared them to our measured data before any bin-centering corrections. Both models reproduce the shape of the measured $E_{miss}$ and $p_{miss}$ event distributions well. The ratio of the acceptance-averaged experimental to theoretical cross-section is similar to the bin-centered ratios shown here. See online supplementary materials Tables III-VI and Figs. 9 and 10 for details. ![The ratio of the experimental cross sections to the calculation of Sargsian that includes FSI of the leading nucleon for $^3$He (red squares) and $^3$H (black circles). The shaded regions show $10\%$ and $20\%$ agreement intervals.[]{data-label="fig:resultsFSI"}](XSection.pdf) Fig. 
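The cut-variation procedure described above can be sketched generically (a simplified illustration; `analyze` stands for the full analysis chain re-run with a given set of cuts, and all names are hypothetical):

```python
import random
import statistics

def cut_variation_systematic(analyze, nominal, jitter, n_iter=100, seed=1):
    """Point-to-point systematic from event-selection cuts: re-run the
    analysis n_iter times with each cut drawn uniformly within
    +/- jitter of its nominal value, and take the standard deviation
    of the resulting values."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_iter):
        cuts = {name: val + rng.uniform(-jitter[name], jitter[name])
                for name, val in nominal.items()}
        results.append(analyze(cuts))
    return statistics.stdev(results)
```

An analysis that is insensitive to the cuts yields a vanishing spread, while a strong cut dependence shows up directly as a large standard deviation.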
\[fig:resultsPWIA\] shows the experimental, bin-centered $^3$He and $^3$H cross-sections divided by the different PWIA calculations as a function of $p_{miss}$, integrated over $E_{miss}$ from $8$ to $50$ or $80$ MeV for the low- and high-$p_{miss}$ kinematics, respectively. For $^3$H, the Cracow calculation agrees with the data to about $20\%$. For $^3$He, the two agree for $150 \le p_{miss} \le \SI{350}{\mega\eVperc}$ but disagree by up to a factor of two at larger and smaller $p_{miss}$. For both nuclei the CK+$CC1$ calculation is higher than the data by about $60\%$. The most recent high-$Q^2$ measurements of the $^3$He$(e,e'p)$ three-body breakup cross-sections were done at $Q^2 = 1.5$ (GeV/c)$^2$ and $x_B = 1$ [@Benmokhtar:2004fs], near the expected maximum of struck-proton rescattering. The measured cross-sections were lower than PWIA calculations by a factor of $\sim 2$ for $p_{miss} < 250$ MeV/c and higher by a factor of $\sim 3$ for $400 < p_{miss} < 500$ MeV/c (see Fig. \[fig:resultsPWIA\]). These deviations were described by calculations that included the contribution of non-QE reaction mechanisms, primarily FSI [@CiofidegliAtti:2005qt; @Laget:2004sm; @Frankfurt:2008zv; @Alvioli:2009zy]. The large contribution of such non-QE reaction mechanisms to the measured $(e,e'p)$ cross-sections limited their ability to constrain the nucleon distributions at high momenta. These non-QE effects are much smaller in the current measurement due to our choice of kinematics. In order to estimate the effects of struck-proton rescattering, we also considered a cross-section calculation by M. 
Sargsian [@SargsPRivate] that accounts for the FSI of the struck nucleon using the generalized Eikonal approximation [@misak05a; @misak05b]. This calculation does not include the continuum interaction between the two spectator nucleons, FSI$_{23}$, and is therefore only applicable where those effects are small. To assess this effect we compared the available calculations with and without FSI$_{23}$ and found that its effects are very large at low $p_{miss}$ but small at $p_{miss} > 250$ MeV/c (see online supplementary materials Fig. 11). We therefore use the Sargsian FSI calculation only at $p_{miss} \ge 250$ MeV/c. We further verified that using this model for bin centering does not result in significantly different correction factors. Fig. \[fig:resultsFSI\] shows the ratio of the experimental, bin-centered cross-section to the Sargsian FSI calculation for $p_{miss} > 250$ MeV/c. The FSI calculation agrees with the data overall. The general trend of the ratio appears to be opposite for $^3$He and $^3$H, with the former rising above unity and the latter falling below it. In an SRC-dominance model, where the electron scatters primarily off nucleons in $np$-SRC pairs, this trend might be caused by single-charge exchange with the spectator nucleon, which would increase the $^3$He$(e,e'p)$ cross-section because the spectator is a proton, but decrease the $^3$H$(e,e'p)$ cross-section because the spectator is a neutron. This hypothesis is supported by the observation that the total $A=3$ cross-section (i.e., $^3$He + $^3$H) is well reproduced by the calculation (see supplementary materials Fig. 12). Future calculations are needed to properly quantify this effect. 
To conclude, $^3$He and $^3$H$(e,e'p)$ cross-sections were measured for the first time in over 30 years. The measurement was done in high-$Q^2$ and $x_B > 1$ kinematics covering $40 \le p_{miss} \le 500$ MeV/c. We required that the momentum direction of the recoil nucleus be within 37.5$^\circ$ of $\vec q$ to reduce the effects of leading-nucleon rescattering. The measured cross-sections are compared with state-of-the-art PWIA and FSI cross-section calculations. The agreement between data and theory for $^3$He is significantly better than that of previous work at lower $Q^2$ and $x_B \sim 1$ kinematics. Overall good agreement is observed between the $^3$H data and theory for all $p_{miss}$. The same is not true for $^3$He at high and low $p_{miss}$. Including FSI of the leading nucleon in the calculation improves its agreement with the data at high $p_{miss}$. These data are a crucial benchmark for few-body nuclear theory and an essential test of theoretical calculations used in the study of heavier nuclear systems. We acknowledge the contribution of the Jefferson Lab target group and technical staff for the design and construction of the tritium target and for their support in running this experiment. We thank C. Ciofi degli Atti and L. Kaptari for the $^3$He spectral function calculations and M. Sargsian for the FSI calculations. We also thank M. Strikman for many valuable discussions. This work was supported by the U.S. Department of Energy (DOE) grant DE-AC05-06OR23177 under which Jefferson Science Associates, LLC, operates the Thomas Jefferson National Accelerator Facility, the U.S. National Science Foundation, the Pazi foundation, and the Israel Science Foundation. The Kent State University contribution is supported under the PHY-1714809 grant from the U.S. National Science Foundation. 
The University of Tennessee contribution is supported by the DE-SC0013615 grant. The work of the ANL group members is supported by DOE grant DE-AC02-06CH11357. The contribution of the Cracow group was supported by the Polish National Science Centre under Grants No. 2016/22/M/ST2/00173 and No. 2016/21/D/ST2/01120. The numerical calculations were partially performed on the supercomputer cluster of the JSC, Jülich, Germany. The Temple University group is supported by DOE award DE-SC0016577. [^1]: Equal Contribution [^2]: Equal Contribution
--- author: - 'Robson B. Rodrigues' - 'Paulo A. Maia Neto' - Astrid Lambrecht - Serge Reynaud title: 'Reply to “Comment on “Lateral Casimir Force beyond the Proximity Force Approximation” ”' --- Our letter [@letter] is devoted to the presentation of a novel theoretical approach to the lateral Casimir force beyond the regime of validity of the “Proximity Force Approximation” (PFA). The approach relies on scattering theory used in a perturbative expansion [@scat_pert], valid when the corrugation amplitudes $a_1, a_2$ are smaller than the three other length scales: the mean separation distance $L$, the corrugation period $\lambda _\C$ and the plasma wavelength $\lambda _\P$. This restriction is stressed repeatedly in the abstract and the body of [@letter], and it is also the main topic of the comment [@comment]. We agree with the statements in the comment that constitute yet another warning that the calculations presented in [@letter] are valid “provided that the corrugation amplitude is smaller than the other length scales” (last sentence of [@letter]). But we strongly disagree with the idea that the approach of Ref. [@letter] is not appropriate for making statements on the accuracy of the PFA. It was natural to illustrate the results of the new approach by applying them to a comparison with the experiment reported in Refs. [@exp]. As the corrugation amplitudes in the experiment are smaller, but not much smaller, than the other length scales, the comparison could unfortunately not be direct, as explained as fairly as possible in [@letter]. The results of [@letter] are however of clear interest for the experiment, and can be summed up as follows, assuming that $L,\lambda_\C,\lambda_\P$ are chosen in accordance with the experimental numbers [@exp]: i) the perturbative calculation beyond the PFA [@letter] gives a force approximately 40% smaller than the perturbative calculation within the PFA; ii) as the calculation of Refs. 
[@exp] takes into account higher-order powers in $a_1a_2$ (which is easy within the PFA), we extracted the perturbative result (proportional to $a_1a_2$) by discarding the higher-order contributions; this procedure produced a discrepancy of approximately 30% between the two methods. This number points to a potential concern for the theory-experiment comparison, which is nevertheless not so severe, as the experimental results ($0.32 \pm 0.077\,$pN according to [@comment]) correspond to a relative accuracy of only $\pm 24\%$. The focus of the comment [@comment] is an argument about our estimation of this discrepancy. The comment sidesteps the issue by comparing two numbers which are [*not*]{} to be compared (and which we did [*not*]{} compare), namely the perturbative result beyond the PFA and the non-perturbative result within the PFA. It thus fabricates a large discrepancy (nearly 60%) which would make the concern appear more severe. We certainly do not approve of this way of comparing, since there is no reason to ignore the effect of higher-order corrections in one calculation while taking it into account in the other. More work is needed in order to settle the issue of the theory-experiment comparison. Progress on this question could be achieved by calculating higher-order corrections for metallic mirrors beyond the PFA. These corrections are expected to affect the numbers, but they will hardly compensate exactly the deviation from the PFA demonstrated in the perturbative regime. Let us underline at this point that the second paragraph of the comment [@comment], aimed at raising doubts about the predictions of [@letter], is based on a mistake. The factor $\rho$, which measures the deviation from the PFA, is a function of the [*three*]{} length scales $L,\lambda _\C,\lambda _\P$, which is calculated in [@letter] for metallic mirrors. 
The case of perfectly reflecting mirrors is recovered in the limit $\lambda _\P\rightarrow 0$ but, in contrast with what is stated in [@comment], the general function cannot be reconstructed from this particular limit. Progress could alternatively come from experiments using smaller corrugation amplitudes while achieving a better experimental accuracy. The first condition would aim at reaching the regime $a_1,a_2\ll L,\lambda _\C,\lambda _\P$ which delineates the range of validity of the theoretical predictions of [@letter]. As emphasized in the conclusion of our letter, this would make possible “an accurate comparison between theory and experiment in a configuration where geometry plays a non trivial role, [*i.e.*]{} beyond the PFA.” Meanwhile, an improved accuracy would allow a comparison of the experiment with the different theoretical predictions. [9]{} R.B. Rodrigues, P.A. Maia Neto, A. Lambrecht and S. Reynaud, Phys. Rev. Lett. **96**, 100402 (2006). P.A. Maia Neto, A. Lambrecht and S. Reynaud, Europhys. Lett. **69**, 924 (2005); Phys. Rev. **A 72**, 012115 (2005). F. Chen, U. Mohideen, G.L. Klimchitskaya and V.M. Mostepanenko, “Comment on [@letter]”, submitted to Phys. Rev. Lett. F. Chen, U. Mohideen, G.L. Klimchitskaya and V.M. Mostepanenko, Phys. Rev. Lett. **88**, 101801 (2002); Phys. Rev. **A 66**, 032113 (2002).
--- abstract: | We describe an approach to modelling and reasoning about data-centric business processes and present a form of general model checking. Our technique extends existing approaches, which explore systems only from concrete initial states. Specifically, we model business processes in terms of smaller fragments, whose possible interactions are constrained by first-order logic formulae. In turn, process fragments are connected graphs annotated with instructions to modify data. Correctness properties concerning the evolution of data with respect to processes can be stated in a first-order branching-time logic over built-in theories, such as linear integer arithmetic, records and arrays. Solving general model checking problems over this logic is considerably harder than model checking when a concrete initial state is given. To this end, we present a tableau procedure that reduces these model checking problems to first-order logic over arithmetic. The resulting proof obligations are passed on to appropriate “off-the-shelf” theorem provers. We also detail our modelling approach, describe the reasoning components and report on first experiments. author: - Andreas Bauer - Peter Baumgartner - Michael Norrish bibliography: - 'bibliography.bib' title: 'Reasoning with Data-Centric Business Processes' --- Introduction ============ Data is becoming increasingly important to large organisations, both private enterprises and large government departments. Recent headlines on “big data” (e.g., [@NYT120212]) suggest that many organisations manage unprecedented amounts of structured data, and that, worldwide, the volume of information processed by machines and humans doubles approximately every two years. Organisations need to be able to organise and process data according to their defined business processes, and according to business rules that may further specify properties of the processed data. 
Unfortunately, most approaches to business process modelling do not adequately support the analysis of the complex interactions and dependencies that exist between an organisation’s processes and data. Although they may support process analysis, helping users find and remove errors in their models, most fall short when the processes are closely tied to structured data. The reasons for this are specific to the concrete formalism used for the analyses, but can normally be traced back to the fact that classical propositional logic or discrete Petri-nets are used. Neither of these can adequately represent structured data and the operations on it. In other words, these tools’ analyses make coarse abstractions of the data, and instead focus mostly on the correctness of workflows. The business artifact approach, initially outlined in [@DBLP:journals/ibmsj/NigamC03], was one of the first to tackle this issue. It systematically elevates data to be a “first-class citizen”, while still offering automated support for process analysis. Its cornerstones are *artifacts*, which are records of data values that can change over time due to the modifications performed by *services*, which are formalised using first-order logic. Process analysis is provided, essentially, by means of model checking. That is, the following question is answered automatically: given some artifact model, a database providing initial values, and a correctness property in terms of a first-order linear-time temporal logic formula (called LTL-FO), do all possible artifact changes over time satisfy the correctness property? For the constraints given in [@DBLP:conf/bpm/DamaggioDHV11], this problem is always decidable. In this paper, we present an approach to modelling and reasoning about data-centric business processes, which is similar to this work, but which offers reasoning support that goes beyond that work’s “concrete model checking”. 
Our approach is based on *process fragments* that describe specific tasks of a larger process, as well as *constraints* for limiting the interactions between the fragments. As such it is also inspired by what is known as *declarative business process modelling* [@DBLP:conf/bpm/PesicA06], meaning that users do not have to create a single, large transition system containing all possible task interleavings. Instead, users can create many small process fragments whose interconnections are governed by rules that determine which executions are permitted. In our framework, those rules are given by first-order temporal logic. Unlike [@DBLP:conf/bpm/DamaggioDHV11], we choose to extend a branching-time logic rather than LTL, since process fragments are essentially annotated graphs and branching-time logic is, arguably, an appropriate formalism for expressing their properties (see [@clarke_em-etal:1999a]). Our database is given in terms of JSON objects [@JSON], enriched by a custom, static type system which models and preserves the type information of any input data. Process fragments may modify data, and one can easily state and answer the concrete model checking problem as outlined above. However, our approach also works if one does not start with an initial concrete database; that is, we intend not only to check whether it is possible to reach a bad state (a set of data for which no process fragment is applicable) from some given state (the initial set of data), but also to determine whether for *any* set of data a bad state can be reached. In other words, we support what we call *generic model checking*. As the domains of many data items are infinite (e.g., any item of type integer), this problem is considerably harder; in fact, it is generally undecidable. Informally, the two reasoning problems we are interested in are: Concrete data model checking problem: : Given a specification $\calS$, a database $s_0$, and a formula $\Phi$. Does $(s_0,\calS) \models \Phi$ hold? 
Unrestricted model checking problem: : Given a specification $\calS$ and a formula $\Phi$. For every database $s_0$, does $(s_0,\calS) \models \Phi$ hold? As will become clear below, a *specification* comprises a process model, logical definitions, and constraints to combine process fragments. The relation $(s_0,\calS) \models \Phi$ means that the pair $(s_0,\calS)$ satisfies the query $\Phi$. See Section \[sec:process\] for the precise semantics. Without any further restrictions, both problems are not even semi-decidable. This can be seen, e.g., by reduction from the domain-emptiness problem of 2-register machines. Hence, practical approaches need to work with restrictions to recover more pleasant complexity properties. The rest of the paper is structured as follows. In Section \[sec:example\] we present a running example. In Section \[sec:data\] we explain the way we handle the rich data of our models: with JSON values, a special type system for those values, and a sorted first-order logic for further constraining and describing those values. This much covers business *rules*; in Section \[sec:process\], we describe how we can model *processes*. When processes (actually process *fragments*) combine with rules, we get what we call *specifications*. In Section \[sec:ctlsfo-tableaux\], we describe the tableau-based model checking algorithm that is used to decide user queries of the two sorts identified above. Section \[sec:experiments\] discusses how we have implemented our technology, and describes some experimental results. Finally, we conclude in Section \[sec:conclusion\]. 
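For intuition, the concrete data model checking problem for a simple reachability query ($\pqE\,\toF\,prop$) can be sketched as a bounded exploration of the databases reachable from the initial one. This is only an illustrative sketch, not the tableau procedure of the paper; all names are hypothetical.

```python
from collections import deque
import json

def concrete_check_ef(s0, successors, prop, depth_limit=50):
    """Bounded concrete model checking of the reachability query
    'EF prop': breadth-first search over the databases reachable from
    the initial database s0. `successors(db)` enumerates the databases
    produced by applicable fragment transitions. Returns True iff some
    reachable state within the depth limit satisfies `prop`."""
    seen = set()
    queue = deque([(s0, 0)])
    while queue:
        db, depth = queue.popleft()
        if prop(db):
            return True
        key = json.dumps(db, sort_keys=True)  # canonical state key
        if key in seen or depth >= depth_limit:
            continue
        seen.add(key)
        for nxt in successors(db):
            queue.append((nxt, depth + 1))
    return False
```

The unrestricted problem quantifies over *all* initial databases, which is why it cannot be solved by such enumeration and requires the symbolic tableau procedure instead.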
A Running Example: Purchase Order {#sec:example} ================================= **Process model:** [Fig. \[fig:purchase\]: the main fragment consists of Init $\to$ Pack ($e_1$), Init $\to$ Declined ($e_2$), the loop Pack $\rightleftarrows$ Stocktake ($e_4$, $e_3$), Pack $\to$ Packed ($e_5$) and Packed $\to$ Invoice ($e_6$); beside it are the atomic fragments Paid ($e_7$), Shipped ($e_8$) and Completed ($e_9$). Annotations: Paid and Shipped have entry = “$true$”, exit = “$true$”; Completed has entry = “$true$”, final = “$true$”; edge $e_7$ carries guard = “`db.status.paid <> true`”, script = “`db.status.paid = true`”; edge $e_2$ carries guard = “$\neg\mathtt{acceptable(db)}$”, script = “`db.status.final = true`”.] **Definitions:** completed: $\forall \mathtt{s}\text{:}\mathtt{Status}\ .\ (\mathtt{completed}(\mathtt{s}) \Leftrightarrow ( \mathtt{s}.\mathtt{paid} = \mathtt{true} \wedge \mathtt{s}.\mathtt{shipped} = \mathtt{true} ))$ acceptable: $\forall \mathtt{db}\text{:}\mathtt{DB}\ .\ (\mathtt{acceptable}(\mathtt{db}) \Leftrightarrow (\neg \mathtt{isEmpty}(\mathtt{db}.\mathtt{order})))$ readyToShip: $\forall \mathtt{s}\text{:}\mathtt{Status}\ .\ (\mathtt{readyToShip}(\mathtt{s}) \Leftrightarrow ( \mathtt{isEmpty}(\mathtt{s}.\mathtt{open})))$ …\ **Constraints:** nongold: $(\mathtt{db}.\mathtt{gold} = \mathtt{false} \Rightarrow (\mathtt{db}.\mathtt{status}.\mathtt{shipped} = \mathtt{false}\, \toWU\, \mathtt{db}.\mathtt{status}.\mathtt{paid} = \mathtt{true}))$ In this section, we introduce a simplified model of a purchase order system using process fragments. The purpose of the modelled system is to accept incoming purchase orders and process them further (packing, shipping, etc.), or to decline them straight away if there are problems. The whole model is depicted as a graph in Fig. \[fig:purchase\], where the biggest process fragment is on the left, with further atomic fragments beside it (labelled Paid, Shipped, and Completed, respectively). Both process tasks, represented as nodes in the graph, and connections are typically annotated with extra information. Node annotations determine whether or not a node is an initial and/or a final node, an entry and/or an exit node. This information is used to constrain the ways in which fragments can connect. Edges can carry a guard, given as a formula, and a simple program written in the programming language Groovy. The purpose of the program (given in the field “script”) is to modify the underlying database, which is referred to by the variable `db`. The depicted system model has one initial node, Init, where it waits for a purchase order to arrive. Then, the system can either start to pack (i.e., enter node Pack), or decline the order (i.e., enter node Declined). An order can be declined if the guard ($\neg \mathit{acceptable}(\db)$) in the annotation of edge $e_2$ is satisfied. The predicate $\mathit{acceptable}$ is defined in the Definitions section of our input specification. 
In a nutshell, the sections Definitions and Constraints contain domain knowledge, encoded as logical rules. (The constraint named “nongold” states that non-gold customers must pay before shipment; $\toWU$ is the “weak until” operator.) If the order is not declined, an attempt will be made to pack its constituents. If all are in stock, the process will continue to the node Packed. However, if one or more items are missing, they need to be ordered in, which is expressed in the loop between the nodes Pack and Stocktake. Informally, process fragments are linked together as follows. Starting from a state comprised of an initial node and a given initial database, an outgoing transition from the current state can only be executed if the current state satisfies the transition’s guard. If it is satisfied, the associated program is executed to determine the new value of the database, and the edge’s target node becomes the new current state. The entry and exit annotations impose implicit constraints on how fragments can be combined: the execution of a new process fragment must always start with its entry node[^1], coming from an exit node. In other words, there are implicit transitions between all exit and all entry nodes. However, if a guard is associated to an *entry node*, this guard sits on all its implicit incoming transitions. The computation stops if no successor can be reached from the current state, either because there is no outgoing edge, because the guards of all outgoing edges are not satisfied by the current state, or because a depth limit has been reached. In our example, two possible sequences are Init $\rightarrow$ Declined, or Init $\rightarrow$ Pack $\rightarrow$ …$\rightarrow$ Invoice $\rightarrow$ Paid $\rightarrow$ Shipped $\rightarrow$ Completed. It is not required to cover all fragments, as illustrated by the first run. The database, which can be modified by the programs given in the “script” annotations, is represented as a JSON object. See, for example, the left-hand side of Fig. \[fig:data\]. 
(The right hand side contains type definitions for the JSON data.) The program annotated on edge $e_2$, which leads into node Declined, simply sets the field `final` inside `status` to `true`. Crucial for our example is the list of open items, under `status`, which has to be empty to be able to ship a purchase order. If it is not, constituents of the order are missing and need to be ordered until the list is empty. As a first sample query, consider the formula $ \neg (\pqE\, \toF\, \db.\mathit{status}.\mathit{final}=\mathit{true})$, which can be seen as a *planning* goal. The runs on the model above that *falsify* it lead to a database $\db$ that has reached a “final” state, with $\mathit{status}.\mathit{final}$ being set to $\mathit{true}$. Planning queries are useful, e.g., for flexible process configuration from fragments during runtime. Another interesting query is $\pqA\, \toG\, (\forall s\text{:}Stock\, . \, (s \in db.stock \Rightarrow s.available \ge 0))$. It is a safety property, saying that at all stages in the process run, and for all possible stock items, the number of available items is non-negative. Such queries are typical during design time, and pose an unrestricted model checking problem.
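To make the Definitions section of the example concrete, here is a minimal Python sketch of the two predicates and the example database. This is purely illustrative (the actual system evaluates such definitions as first-order formulas, not Python); the dictionary layout follows Fig. \[fig:data\].

```python
# Illustrative sketch only: the Definitions of the running example,
# read over a JSON database represented as a Python dict.

def acceptable(db):
    # acceptable(db) <=> not isEmpty(db.order)
    return len(db["order"]) > 0

def ready_to_ship(status):
    # readyToShip(s) <=> isEmpty(s.open)
    return len(status["open"]) == 0

# A fragment of the example database of Fig. [fig:data].
db = {
    "order": [1],
    "gold": True,
    "status": {"open": [], "paid": False, "shipped": False, "final": False},
}
```

On this database the guard $\neg \mathit{acceptable}(\db)$ of edge $e_2$ is false, so the order cannot be declined, while the empty `open` list makes it ready to ship.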
{ "order" : [1], "gold" : true,
  "stock" : [ { "ident" : "Mouse", "price" : 10, "available" : 0 },
              { "ident" : "Monitor", "price" : 200, "available" : 2 },
              { "ident" : "Computer", "price" : 1000, "available" : 4 } ],
  "status" : { "open" : [], "value" : 0, "shipping" : 0,
               "paid" : false, "shipped" : false, "final" : false } }

DB = { order: List[Integer], gold: Bool, stock: List[Stock], status: Status }
Stock = { ident: String, price: Integer, available: Integer }
Status = { open: List[Integer], value: Integer, shipping: Integer, paid: Bool, shipped: Bool, final: Bool }

Modelling Data With JSON Logic {#sec:data}
==============================

Faithful modelling of business processes requires being able to model the objects (or *data*) manipulated by the processes and, of course, their evolution over time. In this section we focus on data modelling, which is based on JSON extended with a type system. JSON [@JSON] is a simple, standardised, textual data representation format. In addition to a standard set of atomic values such as integers and strings, JSON supports two structuring techniques: sequencing (“arrays”) and arbitrarily nested hierarchies (through “objects”). Our choice of JSON (rather than XML, say) is based on the ease with which it can be written and understood by humans. JSON is sufficiently rich to be a plausible format for representing the data used in business processes, and its human ease-of-use is extremely helpful. Other than simply being the medium in which data is represented, there are two important functions that JSON must support. Firstly, it must be possible to manipulate JSON values in the course of executing a specification. This functionality is realised through the use of the Groovy programming notation. Secondly, it must be possible to express *logical* predicates over JSON values, both to guard process transitions and to pick out certain forms of value that are of interest.
In particular, if a specification is to achieve a particular end-goal, with a database being in a particular configuration, we need to be able to describe how the various values in that database inter-relate. It is this that motivates our choice of the logically expressive capabilities of first-order logic, together with sorts such as lists and numbers. In addition to first-order predicates, we also use a simple type system over JSON values. This provides a simple mapping into the sorts of our underlying first-order logic. We note that the type system is indispensable for unrestricted model checking, in order to derive from it logical axioms for object and array manipulations. A Type System for JSON ---------------------- First we briefly summarise the syntax that is fully described in the IETF RFC [@JSON]: JSON values can be numbers, booleans (`true` and `false`), strings (written between double-quotes, e.g., `"a string"`) and a special value `null`. JSON’s *arrays* are written as comma-separated values between square-brackets, e.g., `[1, "string", [true]]`. JSON *objects* are similar to records or structures in languages such as Pascal and C. They are written as lists of field-name/value pairs between braces. Both forms are illustrated in Figure \[fig:data\]. Sibling field-names within an object should be unique, and are considered unordered. Therefore, an object can be thought of as a finite map from field-names to further JSON values. Following this conception, we write ${\texttt{Obj}\lb\mathit{vf}\rb}$ to denote an object whose field names are the domain of finite map $\mathit{vf}$, with field $s$’s value being $\mathit{vf}(s)$. JSON does not impose any restrictions on the structure of values. For example, a list may contain both strings and integers. However, we choose to restrict this freedom with a simple type system comparable to those in third-generation languages such as C.
Let JSON types be denoted by $\tau$, $\tau'$, $\tau_1$, etc.; then [$$\begin{array}{rcl} \tau &\quad::=\quad & {\texttt{Integer}}\;\;|\;\; {\texttt{Bool}}\;\;|\;\; {\texttt{String}}\;\;|\;\; {{\texttt{List}}[\tau]} \;\;|\;\; {{\texttt{Option}}[\tau]} \;\;|\;\; {{\texttt{ObjTy}}\lb\mathit{tf}\rb} \;\;|\;\; {{\texttt{EnumTy}}[\mathit{sl}]} \end{array}$$ ]{} where $\mathit{tf}$ is a finite map from strings to types, and $\mathit{sl}$ is a list of strings. The [`Option`]{} and [`EnumTy`]{} types are the only ones that do not have an obvious connection back to a set of JSON values. The [`Option`]{} type is used to allow for values that are not necessarily always initialised, but which come to acquire values as a process progresses. We do not expect to see the option-constructor occur with multiple nestings, e.g., a type such as ${{\texttt{Option}}[{{\texttt{Option}}[{\texttt{String}}]}]}$. The [`EnumTy`]{} type is used to model finite enumerated types, where each value is represented by one of the strings in the provided list. This flexibility in the type system allows for more natural modelling. Values are assigned types with the following inductive relation, where we write $v : \tau$ to indicate that JSON value $v$ has type $\tau$, where the meta-variables $i$ and $s$ correspond to all possible integer and string values respectively, and where we use $e \in \ell$ to mean that element $e$ is a member of list $\ell$: [$$\begin{array}{c} \infer{\texttt{true} : {\texttt{Bool}}}{} \qquad \infer{\texttt{false} : {\texttt{Bool}}}{} \qquad \infer{i : {\texttt{Integer}}}{} \qquad \infer{s : {\texttt{String}}}{} \\[3mm] \infer{s : {{\texttt{EnumTy}}[\mathit{sl}]}}{s \in \mathit{sl}} \qquad \infer{\texttt{null} : {{\texttt{Option}}[\tau]}}{} \qquad \infer{v : {{\texttt{Option}}[\tau]}}{v : \tau} \\[3mm] \infer{[ \mathit{els} ] : {{\texttt{List}}[\tau]}}{ \forall v \in \mathit{els}.
\;v : \tau} \\[3mm] \infer{{\texttt{Obj}\lb\mathit{vf}\rb} : {{\texttt{ObjTy}}\lb\mathit{tf}\rb}}{ {\mathrm{dom}}(\mathit{vf}) = {\mathrm{dom}}(\mathit{tf}) & \forall s \in {\mathrm{dom}}(\mathit{vf}). \;\mathit{vf}(s) : \mathit{tf}(s) } \end{array}$$]{} This type system is simple and designed to be pragmatic. Meta-theoretically, it is not particularly elegant. In particular, values may have multiple types: if a value $v$ is of type $\tau$, then it is also of type ${{\texttt{Option}}[\tau]}$; string values are not just of type ${\texttt{String}}$, but also have an arbitrary number of possible enumeration types. From JSON to First-Order Logic ------------------------------ When a user develops a business specification, we expect them to name the various types of interest with the type system above. When concrete initial values are given for a concrete model-checking problem, we use that type system to check that these values really do have the appropriate type. The same system is used to ensure that logical guards and goal-conditions are sensible, as discussed below. It also plays a pivotal role in our reasoning procedure for the unrestricted model checking problem, which requires us to reflect the semantics of a JSON type model in many-sorted first-order logic. We are going to describe that now. We fix a non-empty set $S$ of *sorts* and a first-order logic signature $\Sigma$ comprised of function and predicate symbols of given arities over $S$. We assume infinite supplies of variables, one for every sort in $S$. A *constant* is a 0-ary function symbol. The (well-sorted $\Sigma$-)terms and atoms are defined as usual. We assume $\Sigma$ contains a predicate symbol $\approx_s$ (equality) of arity $s \times s$, for every sort $s \in S$. Equational atoms, or just *equations*, are written infix, usually without the subscript $s$, as in $1+1\approx 2$.
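Returning to the typing rules above, they can be read directly as a recursive check. The following Python sketch is illustrative only (the encoding of types as tags is an assumption of this sketch, not the system's actual representation); note that, as discussed, a value may satisfy several types.

```python
# Illustrative sketch: the inductive typing relation v : tau as a checker.
# Types are encoded as "Integer", "Bool", "String", ("List", t),
# ("Option", t), ("ObjTy", {field: t}), or ("EnumTy", [s1, s2, ...]).

def has_type(v, t):
    if t == "Bool":
        return isinstance(v, bool)
    if t == "Integer":
        # exclude bool, which is a subtype of int in Python
        return isinstance(v, int) and not isinstance(v, bool)
    if t == "String":
        return isinstance(v, str)
    if not (isinstance(t, tuple) and len(t) == 2):
        return False
    tag, arg = t
    if tag == "EnumTy":
        return isinstance(v, str) and v in arg
    if tag == "Option":
        # null : Option[t], and v : t implies v : Option[t]
        return v is None or has_type(v, arg)
    if tag == "List":
        return isinstance(v, list) and all(has_type(e, arg) for e in v)
    if tag == "ObjTy":
        return (isinstance(v, dict) and v.keys() == arg.keys()
                and all(has_type(v[s], arg[s]) for s in arg))
    return False

# A cut-down version of the Status type from the example.
Status = ("ObjTy", {"open": ("List", "Integer"), "paid": "Bool"})
```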
We write $\phi[x]$ to indicate that every free variable in the formula $\phi$ is among the list $x$ of variables, and we write $\phi[t]$ for the formula obtained from $\phi[x]$ by replacing all its free variables $x$ by the corresponding terms in the list $t$. We assume a sufficiently rich set of Boolean connectives (such as $\{\neg{}, {}\land{} \}$) and the quantifiers $\forall$ and $\exists$. The *well-sorted $\Sigma$-formulas*, or just *(FO) formulas* are defined as usual. We are particularly interested in signatures containing (linear) integer arithmetic. For that, we reserve the sort symbol $\mathbb{Z}$, the constants $0, \pm 1, \pm 2, \ldots$, the function symbols $+$ and $-$, and the predicate symbol $>$, each of the expected arity over $\mathbb{Z}$. The semantics of our logic is the usual one: a *$\Sigma$-interpretation* $I$ consists of non-empty, disjoint sets, called *domains*, one for each sort in $S$. We require that the domain for $\mathbb{Z}$ is the set of integers, and that every arithmetic function and predicate symbol is mapped to its obvious function over the integers. A *(variable) assignment $\alpha$* is a mapping from the variables into their corresponding domains. Given a formula $\Phi$ and a pair $(I,\alpha)$ we say that *$(I,\alpha)$ satisfies $\Phi$*, and write $(I,\alpha) \models \Phi$, iff $\Phi$ evaluates to true under $I$ and $\alpha$ in the usual sense (the component $\alpha$ is needed to evaluate the free variables in $\Phi$). If $\Phi$ is closed then $\alpha$ is irrelevant and we can write $I \models \Phi$ instead of $(I,\alpha) \models \Phi$. We say that a closed sentence $\Phi$ is *valid* (*satisfiable*) iff $I \models \Phi$ for all (some) interpretations $I$. In order to map our JSON modelling framework to FOL we let the sorts $S$ contain all the defined type names in the JSON type model of the given specification. In the example in Section \[sec:example\] these are $\mathtt{DB}$, $\mathtt{Stock}$ and $\mathtt{Status}$. 
Without loss of generality we assume that the top-level type in a JSON type model is always called $\mathtt{DB}$.[^2] We call any JSON term of type $\mathtt{DB}$ a *database*. See again Section \[sec:example\] for an example. We fix a dedicated variable $\db$ of sort $\DB$. Informally, $\db$ will be used to hold the database at the current time point. Furthermore, we must provide mappings into FOL from terms that are specific to JSON. In some sense, both JSON’s arrays and its objects are generic “arrays”, values that can be seen as collections of independently addressable components. The JSON syntax for that is the usual one: `a[i]` denotes the value of the `i`$^{\mathrm{th}}$ element of array `a`; and `obj.fld` denotes the value of `obj`’s field called `fld`. These are the *accessor* operations. Their FOL representation (as terms) is $\mathit{index}(a,i)$ and $\mathit{fld}(\mathit{obj})$, respectively. This mapping allows us to formulate predicates on JSON data in FOL. For example, the guard `db.status.paid <> true` in the running example maps to the formula $\mathit{paid}(\mathit{status}(\db)) \neq \mathit{true}$. We also support *updator* operations for both arrays and objects. For arrays, we have $\mathit{update}(a,i,v)$, which denotes an array that is everywhere the same as $a$ except that at index $i$ it has value $v$. For objects, we have analogous updator functions per field. If an object type had fields `fld1`, `fld2`, and so on, we would then have the term $\mathit{upd\_fld1}(\mathit{obj},v)$, denoting an object everywhere the same as $\mathit{obj}$ except with value $v$ for its field `fld1`. We note that these mappings can be automated without effort. With field and array updators to hand, we can translate a model’s scripts (Groovy fragments on graph-edges) into a logical form. This translation is to a term of one free variable $\db$, denoting the effect of that script on $\db$.
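The accessor translation and the per-field read-over-write facts that follow from the updator semantics can be sketched as follows. This is a hypothetical string-level illustration of the mapping, not the system's implementation; the naming convention (`fld(...)`, `upd_fld(...)`) follows the text.

```python
# Illustrative sketch: translating JSON accessor chains into FOL terms,
# e.g. db.status.paid  ~>  paid(status(db)).

def path_to_term(path):
    # path is a dotted accessor string rooted at the database variable db
    parts = path.split(".")
    term = parts[0]
    for p in parts[1:]:
        term = f"{p}({term})"
    return term

def object_axioms(fields):
    # Read-over-write facts for an object type with the given fields:
    #   fld(upd_fld(obj, v)) = v          (same field)
    #   fld2(upd_fld1(obj, v)) = fld2(obj) (other fields unchanged)
    axioms = []
    for written in fields:
        for read in fields:
            if read == written:
                axioms.append(f"{read}(upd_{written}(obj, v)) = v")
            else:
                axioms.append(f"{read}(upd_{written}(obj, v)) = {read}(obj)")
    return axioms
```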
Because standard FOL theorem provers do not natively support the theory of arrays and objects, we generate suitable FOL axioms from the given JSON type model. For arrays, the appropriate axioms are well-known and for objects, there are analogous axioms. For example, $\mathit{fld1}(\mathit{upd\_fld1}(\mathit{obj},v)) = v$, and $\mathit{fld2}(\mathit{upd\_fld1}(\mathit{obj},v)) = \mathit{fld2}(\mathit{obj})$. In addition, we have concrete syntax for writing complete values (e.g., `[2,4,6]` for a list of three elements), though this is actually just syntactic sugar for a chain of updates over some underlying base object. In particular, any database has a (FOL) term representation, called “database as a term” below. Moreover, this same term language allows us to give *partial* specifications of filled databases. For example, the term $\mathit{upd\_gold}(\sfdb, true)$ stands for any database, represented by the constant $\sfdb$, whose `gold` field holds the value `true`, with the other fields arbitrary. Indeed, analysing such partially filled databases is one of the main goals of our research agenda. Modelling Processes {#sec:process} =================== In this section we describe our framework for modelling processes. As said earlier, it is centered around the notion of *process fragments* that manipulate databases over time. The cooperation of the fragments is described by *(temporal) constraints*. All constraints and guards in state transitions may refer to user-specified predicates on (components of) the database, which we call *(logical) definitions* here. We will introduce these components now. Process Fragments ----------------- A *guard* $\mu$ is a FOL formula with free variables at most $\{\db\}$; an *update term* $u$ is a FOL term with free variables at most $\{\db\}$. By $\Guard$ ($\Update$) we denote the set of all guards (update terms); $\GProg$ is the set of all Groovy programs.
Without further formalization we assume the Groovy programs are “sensible” and describe database updates that can be characterized as update terms. A *process fragment* $F$ is a directed labeled graph $(N,E,\lambda^\mathsf{N}, \lambda^\mathsf{E})$, where $N$ is a set of *nodes*, $E \subseteq N \times N$ is a set of *edges*, $\lambda^\mathsf{E}: E \to \Guard \times \GProg \times \Update$ is an *edge labeling function*, and $\lambda^\mathsf{N}: N \to 2^{\{ \init, \entry, \exit \} \cup \Guard}$ is a *node labeling function*. The informal semantics of process fragments has been given in Section \[sec:example\] already. The precise semantics of a set of process fragments is given by first translating it into one single *process model* $\calP$ and then defining the semantics of $\calP$ in terms of its runs. More formally, a *process (model)* $\calP$ is a quadruple $(N, n_0, E, \lambda^\mathsf{E})$ where $N$, $E$ and $\lambda^\mathsf{E}$ are as above and $n_0 \in N$ is the *initial node*. Suppose as given a set $\calF = \{ F_1,\ldots,F_k\}$ of process fragments, for some $k \ge 1$, where $F_i = (N_i,E_i,\lambda^\mathsf{N}_i, \lambda^\mathsf{E}_i)$ and $N_i$ and $N_j$ are disjoint, for all $i\neq j$. Suppose further, without loss of generality, that exactly one node in $\bigcup_{1\le i \le k} N_i$ is labeled as an $\init$ node. Let $n_0$ be that node.
The *process model $\calP = (N, n_0, E, \lambda^\mathsf{E})$ associated to $\calF$* is defined as follows: $$\begin{aligned} N &= \textstyle \bigcup_{1\le i \le k} N_i & E &= (\textstyle \bigcup_{1\le i \le k} E_i) \cup E^+ & \lambda^\mathsf{E} &= (\textstyle \bigcup_{1\le i \le k} \lambda^\mathsf{E}_i) \cup \lambda^+\end{aligned}$$ where ($\epsilon$ denotes the empty Groovy program) $$\begin{aligned} E^+ &= \{ (m,n) \mid m \in N_i \text{, } n \in N_j \text {, } \exit \in \lambda^\mathsf{N_i}(m) \text { and } \entry \in \lambda^\mathsf{N_j}(n) \text{, for some } 1 \le i,j \le k \}\\[-0.3ex] \lambda^+ &= \{ (m,n) \mapsto (\gamma,\epsilon,\db) \mid (m,n) \in E^+\text{ and } \{\entry, \gamma\} \subseteq \lambda^\mathsf{N_j}(n) \text{, for some } 1 \le j \le k \}\end{aligned}$$ For the above construction to be well-defined we require that every $\entry$ node in every fragment $F_i$ is also labeled with a guard $\gamma$ (which could be $\top$). Definitions and Constraints --------------------------- Definitions are logical abbreviations. As such, they are not semantically necessary. Nonetheless, just as in mathematics, they are a crucial aid in the construction and comprehensibility of useful models. Formally, a *definition (for $p$)* is a closed formula of the form $\forall x\text{:}s\ .\ p(x) \Leftrightarrow \phi[x]$ where $x$ is a list of variables of sorts $s \subseteq S$, $p$ is a predicate symbol of the proper arity, and $\phi$ is a formula. Constraints specify how process fragments can be combined. The idea has been pursued before, e.g., in the Declare system [@DBLP:conf/bpm/PesicA06], which uses *propositional* (linear) temporal logic for that. In order to take data into account, we work with a fragment of $\mathrm{CTL}^{*}$ over first-order logic, which we refer to as $\CTLsFO$.
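The construction of the implicit edges $E^+$ and their labels $\lambda^+$ above can be sketched as follows. The Python below is a simplified illustration under assumed data representations (fragments as tuples, node labels as sets of strings, guards encoded as `"guard:..."` labels), not the tool's code.

```python
# Illustrative sketch: combining fragments into one process model.
# Implicit edges run from every exit node to every entry node and carry
# the entry node's guard, the empty program "", and the identity
# update term "db".

def combine(fragments):
    # each fragment: (nodes, edges, node_labels, edge_labels)
    nodes, edges, elab = set(), set(), {}
    for ns, es, nlab, el in fragments:
        nodes |= set(ns)
        edges |= set(es)
        elab.update(el)
    exits = {n for ns, _, nlab, _ in fragments
             for n in ns if "exit" in nlab[n]}
    entries = {}     # entry node -> its guard (required to be present)
    for ns, _, nlab, _ in fragments:
        for n in ns:
            if "entry" in nlab[n]:
                guard = next(l for l in nlab[n] if l.startswith("guard:"))
                entries[n] = guard[len("guard:"):]
    for m in exits:                     # E+ and lambda+
        for n, g in entries.items():
            edges.add((m, n))
            elab[(m, n)] = (g, "", "db")
    return nodes, edges, elab
```

The requirement that every entry node carries a guard (possibly $\top$) shows up here as the `next(...)` lookup, which fails on an unguarded entry node.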
The syntax of our state formulae is given by $ \Phi ::= \zeta \mid \neg\Phi \mid \Phi \wedge \Phi \mid \pqA\psi \mid \pqE\psi, $ where $\zeta$ is a FO formula with free variables at most $\{\db\}$, and $\psi$ a path formula defined via $ \psi ::= \Phi \mid \neg \psi \mid \psi \wedge \psi \mid \ltlX \psi \mid \ltlWX \psi \mid \psi \ltlU \psi \mid \psi \ltlR \psi $. (The operators $\ltlWX$ and $\ltlR$ are “weak next” and “release”, respectively.) A *constraint* then is simply a state formula. Notice that because constraints may contain the free variable $\db$, our logic is *not* obtained from propositional $\mathrm{CTL}^{*}$ by replacing propositional variables by closed formulas. Figure \[fig:purchase\] contains some examples of definitions and constraints. Specifications and Semantics ---------------------------- The modelling components described so far are combined into *specifications*. Formally, a *specification* $\calS$ is a tuple $(\calP, \calD, \calC)$ where $\calP$ is a process, $\calD$ is a set of definitions and $\calC$ is a set of constraints. An *instance $\calI$ (of $\calS$)* is a pair $ (s_0,\calS)$, where $s_0$ is a database (as a term) and $\calS$ is a specification. We are now in the position to provide a formal definition for the model checking problems stated in the introduction. Let $\calS = (\calP, \calD, \calC)$ be as above, where $\calP$ is of the form $(N, n_0, E, \lambda^\mathsf{E})$, and let $\phi$ be a state formula with free variables at most $\{\db\}$, the *query*. As a first step to define the satisfaction relation $(s_0,\calS) \models \phi$ between an instance and a query we make the constraints $\calC$ part of the query. Assume $\phi$ is given in negation normal form (this is always possible) and that it starts with a path quantifier ($\pqE$ or $\pqA$). The *expanded query $\phi_\calC $* is the formula $\pqA\, (\calC \Rightarrow \psi)$ if $\phi = \pqA\, \psi$, for some formula $\psi$, and it is $\pqE\, (\calC \wedge \psi)$ if $\phi = \pqE\, \psi$. Here, $\calC$ is read as a conjunction of its elements.
(The rationale for this definition is that the desired treatment of constraints is indicated by the path quantifier in the query.) Notice that with $\phi$ also $\phi_\calC$ is a query. Now define $(s_0,\calS) \models \phi$ iff $(s_0,\calP, \calD) \models \phi_\calC$, i.e., the triple $(s_0,\calP, \calD)$ *satisfies* $\phi_\calC$. It remains to define the latter satisfaction relation, which we turn to now. As a convenience, we say that *$\calP$ contains a transition $m \stackrel{\gamma, u}{\longrightarrow} n$* if $(m,n) \in E$ and $\lambda^\mathsf{E}(m,n) = (\gamma,\pi,u)$, for some guard $\gamma$ and Groovy program $\pi$ with associated update term $u$. A *run $r$ (of $(\calP, \calD)$) from $s_0$* is a possibly infinite sequence $(n_0,s_0) (n_1,s_1) (n_2,s_2) \cdots$ of pairs of nodes and databases, also called *states*, such that (i) $\calP$ contains transitions of the form $(n_i \stackrel{\gamma_i, u_i}{\longrightarrow} n_{i+1})$, (ii) ${}\models \calD \Rightarrow \gamma_i[s_i]$ and (iii) $s_{i+1} = u_i[s_i]$. In item (i), in case $i = 0$, the node $n_0$ is the initial node $n_0$ of $\calP$. Notice that in item (ii) the definitions $\calD$ play the role of axioms from which the instantiated guard $\gamma_i[s_i]$ is to follow. Occasionally the nodes in a run are not important, and we identify a run with its projection onto the states $s_0 s_1 s_2\cdots$ . For a run $r = (n_0,s_0)(n_1,s_1)(n_2,s_2) \cdots$ and $i \ge 0$ we define $r[i] = (n_i,s_i)$, sometimes also $r[i] = s_i$. By $r^i$ we denote the truncated run $(n_i, s_i)(n_{i+1}, s_{i+1})\cdots$, by $|r|$ the number of elements in the run or $\infty$, if $r$ is, in fact, infinite. Obviously, $r^0 = r$.
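The run semantics above can be sketched operationally: from a state $(n,s)$, a transition may only fire if its guard holds on $s$, and the update determines the successor database. The following Python is a bounded enumeration sketch under simplifying assumptions (guards and updates as Python callables standing in for FOL guards and update terms); it is not the system's evaluation mechanism.

```python
# Illustrative sketch: enumerate the maximal runs of a process model,
# cut off at a depth bound (mirroring the bounded model checking setting).

def runs(transitions, n0, s0, depth):
    # transitions: list of (source, guard, update, target)
    def extend(run):
        n, s = run[-1]
        if len(run) > depth:          # depth limit reached: stop here
            yield run
            return
        stuck = True
        for (m, g, u, n2) in transitions:
            if m == n and g(s):       # guard must hold on the current db
                stuck = False
                yield from extend(run + [(n2, u(s))])
        if stuck:
            yield run                 # no enabled successor: run ends
    yield from extend([(n0, s0)])
```

A two-transition toy model in the spirit of the example (decline when the order list is empty) is used in the checks below.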
For any formula $\phi \in \CTLsFO$ with free variables at most $\{\db\}$ we define $(s_0, \calP, \calD) \models \phi$ as follows: $$\begin{array}{lcl} (s_0, \calP, \calD) \models \zeta & \hbox{iff} & {}\models (\calD \Rightarrow \zeta[s_0])\\ (s_0, \calP, \calD) \models \neg\psi & \hbox{iff} & (s_0, \calP, \calD) \models \psi \hbox{ is not true}\\ (s_0, \calP, \calD) \models \psi_1 \wedge \psi_2 & \hbox{iff} & (s_0, \calP, \calD) \models \psi_1 \hbox{ and } (s_0, \calP, \calD) \models \psi_2\\ (s_0, \calP, \calD) \models \pqA\psi & \hbox{iff} & (\calP, \calD, r) \models \psi \hbox{ for all runs } r \hbox{ starting in } n_0\\ (s_0, \calP, \calD) \models \pqE\psi & \hbox{iff} & (\calP, \calD, r) \models \psi \hbox{ for some run } r \hbox{ starting in } n_0, \end{array}$$ where the relation $(\calP, \calD, r) \models \psi$ is defined as $$\begin{array}{lcl} (\calP, \calD, r) \models \Phi & \hbox{iff} & (r[0], \calP, \calD) \models \Phi \\ (\calP, \calD, r) \models \neg \psi' & \hbox{iff} & (\calP, \calD, r) \models \psi' \hbox{ is not true}\\ (\calP, \calD, r) \models \psi'_1 \wedge \psi'_2 & \hbox{iff} & (\calP, \calD, r) \models \psi'_1 \hbox{ and } (\calP, \calD, r) \models \psi'_2\\ (\calP, \calD, r) \models \ltlX\psi' & \hbox{iff} & |r| > 1 \hbox{ and } (\calP, \calD, r^1) \models \psi'\\ (\calP, \calD, r) \models \ltlWX\psi' & \hbox{iff} & |r| \leq 1, \hbox{ or } |r| > 1 \hbox{ and } (\calP, \calD, r^1) \models \psi'\\ (\calP, \calD, r) \models \psi'_1 \ltlU \psi'_2 & \hbox{iff} & \hbox{there exists a $j \geq 0$, such that } |r| > j \hbox{ and } (\calP, \calD, r^j) \models \psi'_2,\\ & & \hbox{ and } (\calP, \calD, r^i) \models \psi'_1 \hbox{ for all } 0 \leq i < j\\ (\calP, \calD, r) \models \psi'_1 \ltlR \psi'_2 & \hbox{iff} & (\calP, \calD, r^i) \models \psi'_2 \hbox{ for all } 0 \leq i < |r|, \hbox{ or there exists a } j \geq 0, \hbox{ such that } \\ & & |r| > j, (\calP, \calD, r^j) \models \psi'_1 \hbox{ and } (\calP, \calD, r^i) \models \psi'_2 \hbox{ for all }
0 \leq i \leq j. \end{array}$$ We further assume the usual “syntactic sugar”, such as $\vee$, $\Rightarrow$ (implies), $\ltlG$ (always), $\ltlF$ (eventually), or $\ltlWU$ (weak until) operators, which can easily be defined in terms of the above set of operators in the expected way. Note that we distinguish a strong next operator, $\ltlX$, from a weak next operator, $\ltlWX$, as described in [@bauer:leucker:schallhart:jlc10]. This gives rise to the following equivalences: $\psi \ltlR \Phi = \Phi \wedge (\psi \vee \ltlWX\,(\psi \ltlR \Phi))$ and $\psi \ltlU \Phi = \Phi \vee (\psi \wedge \ltlX\,(\psi \ltlU \Phi))$ as one can easily verify by using the above semantics. This choice is motivated by our bounded model checking algorithm, which has to evaluate formulae over finite traces as opposed to infinite ones. For example, when evaluating a safety formula, such as $\ltlG\psi$, we want a trace of length $n$ that satisfies $\psi$ in all positions $i \leq n$ to be a model of said formula. On the other hand, if there is no position $i \leq n$, such that $\psi'$ is satisfied, we don’t want this trace to be a model for $\ltlF\psi'$. This is achieved in our logic as $\ltlG\psi = \psi \wedge \ltlWX\ltlG\psi$ and $\ltlF\psi = \psi \vee \ltlX\ltlF\psi$ hold. Note also that $\neg\ltlX\psi \neq \ltlX\neg\psi$, but $\neg\ltlX \psi = \ltlWX\neg\psi$. Reasoning with Tableaux for $\CTLsFO$ {#sec:ctlsfo-tableaux} ============================ Tableau calculi for temporal logics have been considered for a long time [@gore-tableau-methods e.g.] as an appropriate and natural reasoning procedure. There is also a version for propositional $\mathrm{CTL}^{*}$ [@DBLP:conf/fm/Reynolds09]. However, we are not aware of a first-order logic tableaux calculus that accommodates our requirements, hence we devise one, see below. We note that we circumvent the difficult problem of loop detection by working in a *bounded* model checking setting, where runs are artificially terminated when they become too long.
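The finite-trace reading of the next operators discussed above can be made concrete with a small Python sketch (illustrative only; formulas are encoded as predicates on a trace and a position, which is an assumption of this sketch).

```python
# Illustrative sketch: strong vs. weak next on finite traces.
# X phi requires a successor position; WX phi also holds at the last one.

def X(phi):
    return lambda tr, i: i + 1 < len(tr) and phi(tr, i + 1)

def WX(phi):
    return lambda tr, i: i + 1 >= len(tr) or phi(tr, i + 1)

# G and F unfolded over the whole finite trace; on finite traces this
# agrees with the fixpoint equalities G p = p and WX G p, F p = p or X F p.
def G(phi):
    return lambda tr, i: all(phi(tr, j) for j in range(i, len(tr)))

def F(phi):
    return lambda tr, i: any(phi(tr, j) for j in range(i, len(tr)))

pos = lambda tr, i: tr[i] >= 0
```

On the trace `[1, 2, 3]` this exhibits the intended behaviour: the safety formula `G(pos)` holds, `F(> 5)` does not, and at the last position `WX` holds while `X` fails.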
Suppose we want to solve an unrestricted model checking problem, i.e., to show that $(s_0,\calP, \calD) \models \phi_\calC$ holds, for every database $s_0$. As usual with tableau calculi, this is done by attempting to construct a model for the negation of this statement. The universally quantified database $s_0$ then becomes a Skolem constant, say, $\sfdb$, representing an (unknown) initial database. A *state* then is a pair of the form $(n,u[\sfdb])$ where $n \in N$ and $u[\sfdb]$ is an update term instantiated with that initial database. We find it convenient to formulate the calculus’ inference rules as operators on (sets of) sequents. A *sequent* is an expression of the form $s \vdash_Q \Phi$ where $s$ is a state, $Q \in \{\pqE, \pqA\}$ is a path quantifier, and $\Phi[\db]$ is a (possibly empty) set of formulas in negation normal form with free variables at most $\{ \db \}$. When we write $s \vdash_Q \phi,\Phi$ we mean $s \vdash_Q \{\phi\} \cup\Phi$. The informal semantics of a sequent $(n,u[\sfdb]) \vdash_Q \Phi[\db]$ is “some run of the instance $ (\sfdb, \calP, \calD)$ has reached the state $(n,u[\sfdb])$ and $(n,u[\sfdb]) \models Q\, \Phi[u[\sfdb]]$”. Being a *tableau* calculus, the calculus below derives trees that represent disjunctions of conjunctions of formulas. More precisely, the nodes are labeled with sets of sequents that are read conjunctively, and sibling nodes are connected disjunctively. The purpose of the calculus’ inference rules is to analyse a given sequent by breaking up the formulas in the sequent according to their boolean operators, path quantifiers and temporal operators. An additional implicit and/or structure is given by reading the formulas $\Phi$ in $s \vdash_\pqE \Phi$ conjunctively, and reading the formulas $\Phi$ in $s \vdash_\pqA \Phi$ disjunctively. The reason is that $\pqA$ does not distribute over “or” and $\pqE$ does not distribute over “and”. We need some more definitions to formulate the calculus.
A formula is *classical* iff it contains no path quantifier and no temporal operator. A formula is a *modal atom* iff its top-level operator is a path quantifier or a temporal operator. A sequent $s \vdash_Q \Phi$ is *classical* if all formulas in $\Phi$ are classical. A *tableau node* is a (possibly empty) set of sequents, denoted by the letter $\Sigma$. We often write $\sigma ;\Sigma$ instead of $\{\sigma \} \cup \Sigma$. We simply speak of “nodes” instead of “tableau nodes” if confusion with the nodes in graphs is unlikely. Let $\phi_\calC$ be a given expanded query and $\calS$ a specification as introduced before. The *initial sequent* is the sequent $s_0 \vdash_\pqE \neg \phi_\calC$, where $s_0 = (n_0, \sfdb)$ is the *initial state*, for some fresh constant $\sfdb$. Notice that the expanded query is negated, corresponding to the intuition of attempting to compute a model for the negation of the expanded query. Because we are adopting a standard notion of tableau derivations it suffices to define the inference rules. (The root node contains the initial sequent only.) The components $\calP$ and $\calD$ are left implicit below. #### Boolean rules. The implicit reading of $\Phi$ as disjunctions/conjunctions in a $\vdash_\pqA$/$\vdash_\pqE$ sequent sanctions the following rules. $$\begin{matrix} \infrule[$\pqE$-$\land$]{ s \vdash_\pqE \phi \land \psi , \Phi ; \Sigma }{ s \vdash_\pqE \phi , \psi , \Phi ; \Sigma } & \qquad \infrule[$\pqE$-$\lor$]{ s \vdash_\pqE \phi \lor \psi , \Phi ; \Sigma }{ s \vdash_\pqE \phi , \Phi ; \Sigma\quad s \vdash_\pqE \psi , \Phi ; \Sigma } \\[3ex] \infrule[$\pqA$-$\lor$]{ s \vdash_\pqA \phi \lor \psi , \Phi ; \Sigma }{ s \vdash_\pqA \phi , \psi , \Phi ; \Sigma } & \qquad \infrule[$\pqA$-$\land$]{ s \vdash_\pqA \phi \land \psi , \Phi ; \Sigma }{ s \vdash_\pqA \phi,\Phi ; s \vdash_\pqA \psi , \Phi ; \Sigma } \end{matrix}$$ if $\phi$ is not classical or $\psi$ is not classical (no need to break classical formulas apart).
#### Rules to separate classical sequents. The following rules separate away the classical formulas from the modal atoms in $\Phi$. Every classical sequent can be passed on to a first-order theorem prover; if the result is “unsatisfiable” then the node is closed. $$\begin{matrix} \infrule[$\pqE$-Split]{ s \vdash_\pqE \Phi ; \Sigma }{ s \vdash_\pqE \Gamma[u[\sfdb]] ; s \vdash_\pqE \Phi \backslash \Gamma ; \Sigma } & \qquad \infrule[$\pqA$-Split]{ s \vdash_\pqA \Phi ; \Sigma }{ s \vdash_\pqA \Gamma[u[\sfdb]] ; \Sigma \qquad s \vdash_\pqA \Phi \backslash \Gamma ; \Sigma } \end{matrix}$$ if $s = (n,u[\sfdb])$ for some $n$, $\Gamma$ consists of all classical formulas in $\Phi$, $\Gamma[u[\sfdb]]$ is obtained from $\Gamma$ by replacing every free occurrence of the variable $\db$ in all its formulas by $u[\sfdb]$, and $\Gamma \neq \emptyset$ and $\Gamma[u[\sfdb]] \neq \Phi$. The left rule exploits the equivalence $\pqE(\phi \land \Phi ) \equiv \pqE \phi \land \pqE \Phi$ if $\phi$ is classical, and the right rule exploits the equivalence $\pqA(\phi \lor \Phi ) \equiv \pqA \phi \lor \pqA \Phi$ if $\phi$ is classical. #### Rules for path quantifiers. The next rules eliminate path quantifiers, where $Q \in \{\pqE, \pqA\}$. $$\begin{matrix} \infrule[$\pqE$-Elim]{ s \vdash_\pqE Q\,\phi , \Phi ; \Sigma }{ s \vdash_Q \phi ; s \vdash_\pqE \Phi ; \Sigma } & \qquad \infrule[$\pqA$-Elim]{ s \vdash_\pqA Q\,\phi , \Phi ; \Sigma }{ s \vdash_Q \phi ; \Sigma\qquad s \vdash_\pqA \Phi ; \Sigma } \end{matrix}$$ The soundness of the left rule follows from the equivalences $\pqE\,(Q\,\phi \land \Phi ) \equiv \pqE\, Q\,\phi \land \pqE\,\Phi \equiv Q\,\phi \land \pqE\,\Phi$, and the soundness of the right rule follows from the equivalences $\pqA\,(Q\,\phi \lor \Phi ) \equiv \pqA\, Q\,\phi \lor \pqA\,\Phi \equiv Q\,\phi \lor \pqA\,\Phi$. The above rules apply also if $\Phi$ is empty.
Notice that in this case $\Phi$ represents the empty conjunction in $s \vdash_\pqE\, \Phi$, which is equivalent to $\top$, and the empty disjunction in $s \vdash_\pqA\, \Phi$, which is equivalent to $\bot$. When applied exhaustively, the rules so far lead to sequents that all have the form $s \vdash_Q \Phi$ such that (a) $\Phi$ consists of classical formulas only, or (b) $\Phi$ consists of modal atoms only with top-level operators from $\{ \toU, \toR, \toX, \toWX \}$. #### Rules to expand $\toU$ and $\toR$ formulas. The following rules perform one-step expansions of modal atoms with $\toU$ and $\toR$ operators. $$\begin{matrix}\small \small\infrule[$\toU$-Exp]{ s \vdash_Q (\phi\, \toU\, \psi ), \Phi ; \Sigma }{ s \vdash_Q \psi \lor (\phi \land \toX\,(\phi\, \toU\, \psi )), \Phi ; \Sigma } &\qquad \small\infrule[$\toR$-Exp]{ s \vdash_Q (\phi\, \toR\, \psi ), \Phi ; \Sigma }{ s \vdash_Q (\psi \land (\phi \lor \toWX\,(\phi\, \toR\, \psi ))), \Phi ; \Sigma } \end{matrix}$$ When applied exhaustively, the rules so far lead to sequents that all have the form $s \vdash_Q \Phi$ such that (a) $\Phi$ consists of classical formulas only, or (b) $\Phi$ consists of modal atoms only with top-level operators from $\{ \toX, \toWX \}$. #### Rules to simplify $\toX$ and $\toWX$ formulas. Below we define inference rules for one-step expansions of sequents of the form $s \vdash_Q \toX\, \phi$ and $s \vdash_Q \toWX\, \phi$. The following inference rules prepare their application. $$\begin{matrix} \infrule[$\pqE$-$\toX$-Simp]{ s \vdash_\pqE \toX\,\phi _1 , \ldots , \toX\,\phi _n, \toWX\,\psi _1 , \ldots , \toWX\,\psi _m ; \Sigma }{ s \vdash_\pqE Y\,(\phi _1 \land \cdots \land \phi _n \land \psi _1 \land \cdots \land \psi _m) ; \Sigma } \end{matrix}$$ if $n+m>0$, where $Y = \toWX$ if $n = 0$ else $Y = \toX$. Intuitively, if just one of the modal atoms in the premise is an $\toX$-formula then a successor state must exist to satisfy it, hence the $\toX$-formula in the conclusion.
Similarly: $$\begin{matrix} \infrule[$\pqA$-$\toX$-Simp]{ s \vdash_\pqA \toX\,\phi _1 , \ldots , \toX\,\phi _n, \toWX\,\psi _1 , \ldots , \toWX\,\psi _m ; \Sigma }{ s \vdash_\pqA Y(\phi _1 \lor \cdots \lor \phi _n \lor \psi _1 \lor \cdots \lor \psi _m) ; \Sigma }\end{matrix}$$ if $n+m>0$, where $Y = \toX$ if $m = 0$ else $Y = \toWX$. The correctness of this rule follows from the equivalences $\pqA\,(\toX\,\phi \lor \,\toWX\,\psi ) \equiv \pqA\,(\toWX\,\phi \lor \toWX\,\psi ) \equiv \pqA\,\toWX\,(\phi \lor \psi )$. To summarize, with the rules so far, all sequents can be brought into one of the following forms: (a) $s \vdash_Q \Gamma$, where $\Gamma$ consists of classical formulas only, (b) $s \vdash_Q \toX\,\phi$, or (c) $s \vdash_Q \toWX\,\phi$.

#### Rule to close branches.

The following rule derives no conclusions and thereby indicates that a branch in a tableau is “closed”. $$\begin{matrix} \infrule[Unsat]{ s_1 \vdash_{Q_1} \Phi_1 ; \cdots ; s_n \vdash_{Q_n} \Phi_n }{ \mbox{} } \end{matrix}$$ if every $\Phi_i$ consists of closed classical formulas, and $\bigwedge (\calD \cup \Phi _1 \cup \cdots \cup \Phi _n)$ is unsatisfiable.

#### Rules to expand $\toX$ and $\toWX$ formulas.

$$\begin{matrix} \small\infrule[$\pqE$-$\toWX$-Exp]{ (m,t) \vdash_\pqE \toWX\,\phi ; \Sigma }{ (n_1, u_1[t]) \vdash_\pqE \gamma_1[t] \land \phi ; \Sigma \quad \cdots \quad (n_k, u_k[t]) \vdash_\pqE \gamma_k[t] \land \phi ; \Sigma \quad (m,t) \vdash_\pqE \lnot \gamma_1[t] \land \cdots \land \lnot \gamma_k[t] ; \Sigma } \end{matrix}$$ if there is a $k \ge 0$ such that $m \stackrel{\gamma_i,u_i}{\longrightarrow} n_i$ are all transitions in $\calP$ emerging from $m$, where $1 \le i \le k$. This rule binds the variable $\db$ in the guards to the term $t$, which represents the current database, while it leaves the formula $\phi$ untouched. The variable $\db$ in $\toWX\,\phi$ refers to the databases in the successor states, i.e., the databases $u_i[t]$.
The rules to separate classical sequents above will bind $\db$ in $\phi$ correctly. There is also a rule whose premise sequent is made with the $\toX$ operator instead of $\toWX$. It differs only by omitting the rightmost conclusion. We do not display it here for space reasons. We note that both rules are defined also if $k=0$. $$\begin{matrix} \infrule[$\pqA$-$\toX$-Exp]{ (m,t) \vdash_\pqA \toX\,\phi ; \Sigma }{ (n_1, u_1[t]) \vdash_\pqA \lnot \gamma_1[t] \vee \phi ; \cdots (n_k, u_k[t]) \vdash_\pqA \lnot \gamma_k[t] \vee \phi ; (m,t) \vdash_\pqE \gamma_1[t] \vee \cdots \vee \gamma_k[t] ; \Sigma } \end{matrix}$$ if there is a $k \ge 0$ such that $m \stackrel{\gamma_i,u_i}{\longrightarrow} n_i$ are all transitions in $\calP$ emerging from $m$, where $1 \le i \le k$. For each conclusion sequent, this rule leads to a case distinction (via branching) on whether the guard of a transition is true or not. Only if the guard is true must the transition be taken. The conclusion sequent $(m,t) \vdash_\pqE \gamma_1[t] \vee \cdots \vee \gamma_k[t]$ forces that at least one guard is true. Analogously to above, there is also a rule for the $\toWX$ case, which does not include this sequent. This reflects that $\toWX$ formulas are true in states without successors. Both rules also work as expected if $k=0$: for the $\toX$ case the formula in the sequent $(m,t) \vdash_\pqE \gamma_1[t] \vee \cdots \vee \gamma_k[t]$ is equivalent to $\bot$ (false); for the $\toWX$ case the premise sequent is deleted. If additionally $\Sigma$ is empty then the result is a node with the empty set of sequents. This does not indicate branch closure: branch closure is indicated by deriving *no* conclusions, not by a unit conclusion, even if that conclusion is empty. This concludes the presentation of the tableau calculus. As said above, we enforce derivations to be finite by imposing a user-specified bound on the number of state transitions a derivation executes.
This is realized as a check in the rules to expand $\toX$ and $\toWX$ formulas by pretending that $k=0$ transitions emerge from the node of the considered state whenever the run to that state becomes too long. (This is not formalized above.) For this bounded model checking setting we obtain a formal soundness and completeness result for the (hence, bounded) unrestricted model checking problem. More precisely, given a specification $\calS = (\calP, \calD, \calC)$, $(s_0,\calS) \models \Phi$ holds for every database $s_0$ relative to all runs shorter than a given finite length $l$ if and only if the fully expanded tableau with initial node $(n_0, \sfdb) \vdash_\pqE \phi_\calC$ is closed. (A tableau is closed if each of its leaves is closed, as determined by the closing rules above.) The Unsat tableau rule requires a call to a (sound) first-order theorem prover. Depending on the underlying syntactic fragment of FOL these calls may not always terminate. However, if a classical sequent is provably *satisfiable* then it is possible to extract from the tableau branch a run that constitutes a counterexample to the given problem. Moreover, the satisfiable formula will often represent *general* conditions on the initial database $s_0$ under which the query $\Phi$ is not satisfied by $(s_0,\calS)$ and this way provide more valuable feedback than a fully concrete database.

Practice and Experiments {#sec:experiments}
========================

In this section, we provide some notes on the implementations of the theory presented in the preceding sections.

#### Satisfiability Checking on the Nodes.

Before we can model-check the truth of formulas over the graph structure of a full specification, we must be able to evaluate first-order formulas with respect to nodes within that graph.
When performing checking with a concrete initial state, all subsequent states will be concrete as well, and evaluating quantified formulas is straightforward as long as quantification is over finite domains, as is typical. On the other hand, if the initial state is only characterised by a formula, then checking satisfiability of formulas with respect to that node and all its successors becomes a full-blown theorem-proving problem. We solve this problem by translating to the standard TPTP format [@tptp2009], which has recently been extended to include arithmetic [@Sutcliffe:etal:TPTP-TFFA:LPAR:2012], and then using off-the-shelf first-order provers. Our current backend is SPASS+T [@SPASST2006], which has good support for arithmetic in addition to sorted first-order logic.

#### Model Checking.

For concrete model checking, we assume that there are no two definitions for the same predicate symbol, that definitions are not recursive, and that all quantifications inside the bodies $\phi$ range over concrete data items. With these assumptions, definitions can be expanded as necessary, and we can efficiently decide if formulas (edges’ guards and the classical sub-formulas of the model checking problem) are satisfied with respect to concrete database values. In theory, SPASS+T should do the same, but we have found that our own custom guard evaluator performs better, and is also guaranteed to terminate. When performing concrete model checking, we can also execute scripts directly as Groovy programs rather than needing to manipulate them as first-order terms. We have fully implemented the preceding section’s generic tableau system for concrete model checking, giving us an efficient procedure that is guaranteed to terminate on problems given a depth bound. In our practical experiments on the example in Section \[sec:example\] we could (dis)prove queries like the ones mentioned there in very short time.
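To illustrate how the $\toU$-expansion law and the depth bound interact in the concrete setting, here is a hypothetical Python sketch. Our actual implementation works on Groovy/JSON specifications; all names below (`check_EU`, the toy counter system) are illustrative only.

```python
# Hypothetical sketch: bounded concrete model checking of E(phi U psi)
# via the expansion law  phi U psi == psi or (phi and X(phi U psi)),
# cutting off at a user-specified depth bound (k = 0 successors pretended).

def check_EU(state, phi, psi, successors, bound):
    """Does some run from `state` of length <= bound satisfy `phi U psi`?

    `phi`/`psi` are predicates on concrete states; `successors` maps a
    state to its guard-enabled successor states.
    """
    if psi(state):                      # right disjunct of the expansion
        return True
    if bound == 0 or not phi(state):    # depth bound: pretend no successors
        return False
    return any(check_EU(s, phi, psi, successors, bound - 1)
               for s in successors(state))

# Toy transition system: a counter that increments modulo 4.
succ = lambda n: [(n + 1) % 4]
print(check_EU(0, lambda n: n < 3, lambda n: n == 3, succ, bound=5))  # True
print(check_EU(0, lambda n: n < 3, lambda n: n == 3, succ, bound=2))  # False
```

With `bound=2` the target state is three transitions away, so the bounded check reports `False` even though the property holds on the unbounded system — exactly the conservative behaviour induced by pretending $k=0$ at the cutoff.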
Our implementation is also capable of generating proof obligations in the TPTP format for unbounded model checking. It also emits the necessary axioms to reflect the semantics of objects and arrays, as explained in Section \[sec:data\]. We have experimented with smaller examples and found that SPASS+T is capable of handling them. At the current stage, however, the implementation is not yet mature enough, so our experiments are too preliminary to report on. We also plan to consider alternatives to SPASS+T by implementing the calculus in [@Baumgartner:Tinelli:MEET:CADE:2011] and by linking in SMT solvers.

Conclusions and Future Work {#sec:conclusion}
===========================

We described a novel approach to modelling and reasoning about data-centric business processes. Our modelling language treats data, process fragments, constraints and logical definitions of business rules on a par. Our research plan focuses on providing strong analytical capabilities on the corresponding models by taking all these components into account. The main ambition is to go beyond model checking from concrete initial states. To this end we have devised a novel tableau calculus that reduces what we called unrestricted model checking problems to first-order logic over arithmetic. Our main contributions so far are conceptual in nature. Our main theoretical result is the soundness and completeness of the tableau calculus, as explained at the end of Section \[sec:process\]. Our implementation is already fully functional for concrete model checking. Much remains to be done, at various levels. The tableau implementation needs to be completed and improved for efficiency, and more experiments need to be carried out. The main motivation for using JSON and Groovy is their widespread acceptance in practice and available tool support, which we exploit in our implementation.
For the same reason we want to extend our modelling language by front-ends for established business process modelling techniques, in particular BPMN. This also raises some non-trivial theoretical issues, for example, how to map BPMN’s parallel-And construct into our framework. We expect that by using process *fragments* and constraints on them an isomorphic mapping is possible.

[^1]: For simplicity we assume every fragment contains exactly one entry node.

[^2]: We need additional sorts, e.g., for truth values and integers, as mentioned. The sorts in $S$ are written in italics, as in $\DB$.
---
abstract: 'In the numerical modelling of cascaded mid-infrared (IR) supercontinuum generation (SCG) we have studied how an ensemble of spectrally and temporally distributed solitons from the long-wavelength part of an SC evolves and interacts when coupled into the normal dispersion regime of a highly nonlinear chalcogenide fiber. This has revealed a novel fundamental phenomenon – the generation of a temporally and spectrally delocalized high energy rogue wave in the normal dispersion regime in the form of a strongly self-phase-modulation (SPM) broadened pulse. Along the local SPM shape the rogue wave is localized both temporally and spectrally. We demonstrate that this novel form of rogue wave is generated by inter-pulse Raman amplification between the SPM lobes of the many pulses causing the initially most delayed pulse to swallow the energy of all the other pulses. We further demonstrate that this novel type of rogue wave generation is a key effect in efficient long-wavelength mid-IR SCG based on the cascading of SC spectra and demonstrate how the mid-IR SC spectrum can be shaped by manipulating the rogue wave.'
author:
- Rasmus Eilkœr Hansen
- Rasmus Dybbro Engelsholm
- Christian Rosenberg Petersen
- Ole Bang
bibliography:
- 'main.bib'
title: Delocalized SPM rogue waves in normal dispersion cascaded supercontinuum generation
---

Introduction
============

Ever since the discovery of temporally and/or spatially localized solitons and their particle-like behavior [@scott2003nonlinear], their interaction has been the subject of intense research in physics and applied mathematics [@Interaction]. The outcome of soliton collisions has been shown to be highly dependent on the relative phase and amplitude of the two solitons, where in-phase solitons attract each other and out-of-phase solitons repel each other [@Mitschke].
In physical systems described by an integrable model, solitons have the beautiful property that they are able to collide and pass through each other without changing their shape or energy [@Integrable_Collision], an effect that is often used as the definition of integrability of a nonlinear dynamical model, such as the nonlinear Schrödinger (NLS) equation describing optical fibers and a vast range of other nonlinear optical systems [@Zakharov]. Non-integrable perturbations from effects, such as higher-order dispersion or Raman scattering, will drastically change the soliton collision properties by allowing energy transfer between solitons [@Chi]. This might not seem so drastic at first if one considers only one collision between two solitons, but if one considers the outcome of a series of collisions between a large number of different solitons under the influence of a perturbation that leads to an on average preferential energy transfer, e.g., from low amplitude to high amplitude solitons, then the outcome can be an extremely localized soliton with an extremely high amplitude, also known as a rogue wave [@Islam:89; @Bang].\
Rogue waves appear in two fundamentally different forms: One type appears out-of-nowhere and disappears just as fast, such as the both spatially and temporally localized Peregrine soliton solution of the NLS equation, which locally at the point of maximum compression has an intensity 9 times higher than the background [@Kibler]. Here we focus on the second type of rogue wave that is the result of many collisions [@Islam:89], which has a relatively long lifetime, making it also biologically relevant in, for example, DNA denaturation [@Bang].\
One particularly useful testing ground for the study and observation of rogue waves has for a long time been supercontinuum generation (SCG) in optical fibers pumped in the anomalous dispersion regime with long pulses. The first work in 1989 of Islam et al.
[@Islam:89] (here termed narrow, high-intensity solitons) dates back to before the work in 1993-95 on biological rogue waves by Peyrard et al. [@Dauxois; @Bang] (here termed large amplitude breathers or high-energy localized vibrational modes). In this case SCG is initiated by modulational instability (MI) growing from noise in frequency bands symmetrically displaced by $\Omega_m \approx (2 \gamma P_0/\abs{\beta_2}) ^{1/2}$ from the pump frequency and breaking up the pump pulse into a number of solitons over an MI gain length of $1/(\gamma P_0)$, where $P_0$ is the pump peak power, $\gamma$ is the fiber nonlinearity, and $\beta_2$ is the group-velocity dispersion [@agrawal2013nonlinear]. The pulse duration of the generated solitons is roughly $\pi/\Omega_m$, and thus both the pulse length of the solitons and the distance over which they are generated depend on the pump peak power. The result will be a distributed spectrum of different solitons, since the high peak power in the center generates short solitons early in the fiber, while the low peak power wings will generate longer solitons after a longer propagation distance, as demonstrated by Islam et al. [@Islam:89].\
The rather controlled generation of a distributed soliton spectrum through MI is very important for the generation of rogue waves. The other important mechanism is Raman scattering, which represents a perturbation to the integrable NLS equation that leads to two key effects: (1) intra-pulse stimulated Raman scattering (SRS) causing a continuous red-shift of a soliton, also known as the soliton self-frequency shift (SSFS), which scales inversely with pulse duration [@Gordon] and (2) a preferential transfer of energy from low- to high-amplitude solitons in a collision, mediated by the phase-insensitive inter-pulse SRS as demonstrated by Luan et al. in 2006 [@Luan].
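The inverse scaling of the SSFS with pulse duration in effect (1) can be made concrete with the standard perturbative estimate $\mathrm{d}\omega_0/\mathrm{d}z = -8\abs{\beta_2}T_R/(15 T_0^4)$ (the textbook form, see e.g. [@agrawal2013nonlinear]); the parameter values in the following sketch are assumed purely for illustration.

```python
# Illustrative estimate of the soliton self-frequency shift (SSFS) rate,
# using the standard perturbative result d(omega0)/dz = -8*|beta2|*T_R/(15*T0^4).
# All parameter values below are assumed for illustration only.

def ssfs_rate(beta2_abs, T_R, T0):
    """Rate of change of the soliton carrier frequency [rad/s per m]."""
    return -8.0 * beta2_abs * T_R / (15.0 * T0**4)

beta2_abs = 10e-27   # |beta2| = 10 ps^2/km, in s^2/m (assumed)
T_R = 3e-15          # Raman response time ~3 fs (typical silica-like value)

r_long  = ssfs_rate(beta2_abs, T_R, T0=100e-15)  # 100 fs soliton
r_short = ssfs_rate(beta2_abs, T_R, T0=50e-15)   # 50 fs soliton

# Halving the duration speeds up the red-shift by 2**4 = 16:
print(r_short / r_long)  # -> 16.0
```

The $T_0^{-4}$ dependence is why the shortest (highest-amplitude) solitons race ahead in frequency and sweep through the rest of the distributed spectrum.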
Since solitons of different pulse durations red-shift by different rates due to SSFS, a distributed spectrum will inevitably lead to collisions of solitons that are different. For increasing pulse energy SCG will involve the generation of an increasing number of solitons, and thus the number of collisions will also increase. The Kerr effect will lead to an energy transfer that depends sensitively on the relative phase and amplitude of the colliding solitons, but inter-pulse SRS adds to this with a preferential transfer of energy from the most blue-shifted to the most red-shifted soliton, which is typically also the largest due to SSFS [@Luan]. This means that potentially a rare event could occur, in which the conditions are just right for one soliton to gain energy from many collisions and thereby obtain a very high amplitude and narrow width. Due to the Raman-induced SSFS in optical fibers, these high-amplitude short solitons would red-shift strongly and inevitably be located in the most red part of the SC spectrum. This was already observed numerically by Islam et al. in 1989, who also experimentally proved that different red parts of the spectrum were indeed not present in all pulses [@Islam:89]. In 2006 Frosz et al. numerically investigated CW-pumped SCG and demonstrated also the generation of a very short and high-amplitude soliton with a huge red-shift [@Frosz]. This set the stage for the work of Solli et al., who in 2007 measured the L-shaped statistics of these rare high-energy waves for the first time and gave them the name optical rogue waves, in analogy with oceanic rogue waves [@Solli]. It was later demonstrated that in fact the Raman effect is not necessary to generate optical rogue waves; even third-order dispersion alone provides a sufficient non-integrable perturbation to induce an on-average preferential energy transfer in soliton collisions and generate rogue waves [@Genty].
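For orientation, the MI scales quoted earlier ($\Omega_m$, the gain length $1/(\gamma P_0)$, and the soliton duration $\pi/\Omega_m$) are straightforward to evaluate. The fiber and pump values in the following sketch are assumed purely for illustration and are not taken from the simulations in this paper.

```python
import math

# Modulational-instability scales from the text:
#   Omega_m ~ sqrt(2*gamma*P0/|beta2|), gain length ~ 1/(gamma*P0),
#   soliton duration ~ pi/Omega_m.
# The fiber/pump values below are assumed for illustration only.

gamma = 2e-3        # fiber nonlinearity [1/(W*m)] (assumed)
P0    = 1000.0      # pump peak power [W] (assumed)
beta2 = -5e-27      # anomalous GVD [s^2/m] (assumed)

omega_m = math.sqrt(2.0 * gamma * P0 / abs(beta2))  # [rad/s]
L_mi    = 1.0 / (gamma * P0)                        # [m]
T_sol   = math.pi / omega_m                         # [s]

print(f"Omega_m = {omega_m:.2e} rad/s")   # ~2.8e13 rad/s
print(f"L_MI    = {L_mi:.2f} m")          # 0.50 m
print(f"T_sol   = {T_sol*1e15:.0f} fs")   # ~111 fs
```

Doubling $P_0$ halves the gain length and shrinks the soliton duration by $\sqrt{2}$, which is exactly the peak-power dependence invoked above to explain the distributed soliton spectrum.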
Since then the field of optical rogue waves has become an important and very rich area of research with strong parallels to oceanography and hydrodynamics because of the common dynamical model – the NLS equation [@review].\
So far, fiber-optical rogue waves have only been demonstrated as highly localized high peak power solitons existing in the anomalous dispersion region because of the self-focusing nonlinearity of the fibers. In other words, they are strongly linked to the initial generation of a distributed spectrum of localized solitons, which requires either MI (of long pump pulses) or soliton fission (of short pump pulses). In fact, even the spatially (along the direction of propagation) and temporally localized Peregrine rogue wave requires anomalous dispersion [@Kibler].\
*Here we demonstrate the first high-energy rogue wave, which is generated in the normal dispersion region, where no solitons exist, and which is both temporally and spectrally delocalized. This normal dispersion rogue wave takes the shape of a very high energy self-phase modulation (SPM) wave.*\
We observe the generation of the novel SPM rogue wave in the numerical modelling of cascaded mid-infrared (mid-IR) SCG in which a large ensemble of spectrally and temporally distributed solitons from the long-wavelength part of an SC generated in one fiber (here a ZBLAN fiber) is coupled into the normal dispersion regime of another fiber (here a highly nonlinear chalcogenide fiber).
We demonstrate that the SPM rogue wave is generated by inter-pulse Raman amplification between the SPM lobes of the many pulses, causing the initially most delayed pulse to gradually swallow the energy of all the other SPM broadened pulses as they begin to overlap temporally due to dispersion, while also being within the Raman gain band (extending to about 10 THz in the chalcogenide fiber).\
Cascaded SCG is currently one of the most promising routes for a practical and high brightness light source covering the important mid-IR spectral region from $2-12 \mu \mathrm{m}$, with applications including chemical detection [@Chemical_Detection], tissue microspectroscopy [@Tissue_Imaging; @ChemicalMapping], and optical coherence tomography [@OCT]. We consider the specific cascaded mid-IR SC laser shown in Fig. \[fig:Cascade\] and demonstrate that the SPM rogue wave generation is a key effect in efficient long-wavelength mid-IR SCG based on the cascading of SC spectra. Thereby, we demonstrate how the mid-IR SC spectrum can be shaped by manipulating the SPM rogue wave. In particular, we demonstrate that the SPM rogue wave and the left-over parts of the other SPM lobes together act as a spectrally localized collective structure (here about 500-800 nm broad) whose center wavelength is slowly red-shifting towards the zero-dispersion wavelength (ZDW) through the inter-pulse Raman amplification between the SPM lobes, finally stopping at a certain distance before the ZDW where the dispersion is so weak that effectively no further temporal overlap is possible. This slowly red-shifting collective structure was recently reported in a Master’s project [@RasmusMSC] and in a similar study of cascaded mid-IR SCG by coupling a ZBLAN fiber SC into a chalcogenide fiber with normal dispersion [@Venck].
In [@Venck] the authors explain the red-shift as being due to intra-pulse Raman scattering, but here our detailed investigations, based on a series of spectrograms, clarify that the slow red-shift is actually due to collective inter-pulse Raman amplification and the generation of an SPM rogue wave.

The numerical model and fiber parameters
========================================

To model the fiber cascade shown in Fig. \[fig:Cascade\] the propagation of the pulse envelope of a single scalar mode $G(t,z)$ (or $\Tilde{G}(\Omega,z)$ in the frequency domain) is simulated through the generalized NLS equation (GNLSE) $$\begin{split} & \frac{\partial}{\partial z}\left[\exp\left( i \beta_{\mathrm{eff}}(\Omega, \Omega_P, \Omega_0) z \right)^* \right] \Tilde{G}(\Omega,z) \\ & = i \gamma(\Omega) K(\Omega,\Omega_0) \exp\left( i \beta_{\mathrm{eff}}(\Omega, \Omega_P, \Omega_0) z \right)^* \label{eq:GNLSE} \\ & \mathcal{F} \left\{ H \mathcal{F}^{-1} \left[ \Tilde{R}(\Omega) \mathcal{F} \Big[ H^* H \Big] \right] \right\} \end{split}$$ as introduced in [@Dybbro_Noise], where $\mathcal{F}\{...\}$ and $\mathcal{F}^{-1}\{...\}$ are the Fourier transform and its inverse, respectively, superscript $*$ is the complex conjugate, and $$\begin{split} K(\Omega, \Omega_0) & = \left[ \frac{\mathrm{A_{eff}}(\Omega_0)}{\mathrm{A_{eff}}(\Omega)} \right]^{\frac{1}{4}}\\ H &= H(t,z) = \mathcal{F}^{-1}\left[\Tilde{G}(\Omega,z) K(\Omega,\Omega_0) \right] \end{split}$$ The introduction of $K(\Omega,\Omega_0)$ treats mode profile dispersion, as described in [@MFD_Jesper]. The effective propagation constant $\beta_{\mathrm{eff}}=\beta_{\mathrm{eff}}(\Omega,\Omega_p,\Omega_0)$ is defined as $$\beta_{\mathrm{eff}}(\Omega,\Omega_p,\Omega_0) \equiv \beta(\Omega) - \beta_0(\Omega_0) - \beta_1(\Omega_p)[\Omega - \Omega_0]$$ where the propagation constant $\beta(\Omega)$ is related to the effective index by $\beta(\Omega) = \Omega n_{\mathrm{eff}}(\Omega)/c$.
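As a much-reduced illustration of the kind of propagation this equation describes — and explicitly not the scheme used in this paper — the following is a minimal fixed-step split-step Fourier sketch for the plain NLSE limit, i.e. with the Raman term, loss, and mode-profile dispersion all stripped away. In normalized soliton units a fundamental soliton should then propagate with its peak power unchanged; all values are illustrative.

```python
import numpy as np

# Minimal split-step Fourier sketch for the plain NLSE
#   dA/dz = -i*(beta2/2)*d^2A/dT^2 + i*gamma*|A|^2*A,
# i.e. the GNLSE above stripped of Raman scattering, loss and
# mode-profile dispersion. Soliton units: beta2 = -1, gamma = 1, T0 = 1.

N, T_win = 1024, 40.0
dT = T_win / N
T = (np.arange(N) - N // 2) * dT                # time grid containing T = 0
omega = 2 * np.pi * np.fft.fftfreq(N, d=dT)     # angular frequency grid

beta2, gamma = -1.0, 1.0
A = 1.0 / np.cosh(T)    # fundamental soliton, P0 = |beta2|/(gamma*T0^2) = 1

dz, steps = 1e-3, 1000  # propagate z = 1 (dispersion lengths)
half_disp = np.exp(0.25j * beta2 * omega**2 * dz)     # half linear step
for _ in range(steps):
    A = np.fft.ifft(half_disp * np.fft.fft(A))        # dispersion, dz/2
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)    # Kerr nonlinearity
    A = np.fft.ifft(half_disp * np.fft.fft(A))        # dispersion, dz/2

peak = float(np.max(np.abs(A)**2))
print(round(peak, 3))   # stays ~1: the soliton propagates unchanged
```

Both sub-steps are unit-modulus multiplications in their respective domains, so pulse energy is conserved to numerical precision — a useful sanity check for any propagation code of this type.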
As such the full frequency-dependent propagation constant is used, where the evaluation of the inverse group velocity $\beta_1(\Omega_p)$ at the pump frequency, $\Omega_P$, ensures that the pump wavelength is stationary in the time domain. $\Omega$ is the physical frequency, and $\Omega_0$ is the center of the frequency grid. Thus, the pulse envelope $\tilde{G}(\Omega,z)$ is related to the real pulse envelope $\tilde{A}(\Omega,z)$, used by, e.g., Agrawal [@agrawal2013nonlinear], by a simple phase shift as $\tilde{G}(\Omega,z) = \exp(i \beta_{\mathrm{eff}} z) \tilde{A}(\Omega,z)$.\
In Fig. \[fig:Disp\_N\_Loss\] the attenuation and dispersion $D = - \frac{\lambda}{c} \pdv[2]{n_{\mathrm{eff}}}{\lambda}$ of the ZBLAN and $\mathrm{As_2Se_3}$ fibers are shown. The ZBLAN fiber has two ZDWs at approximately $1.5 \mu \mathrm{m}$ and $4.2 \mu \mathrm{m}$, with an anomalous dispersion regime in between them, allowing soliton propagation. The $\mathrm{As_2Se_3}$ fiber has only one ZDW at approximately $6 \mu \mathrm{m}$, which means that the output of the ZBLAN fiber will be coupled into the normal dispersion regime. The background loss of the $\mathrm{As_2Se_3}$ fiber is orders of magnitude higher than that of the ZBLAN fiber; however, it guides much further into the mid-IR, and its nonlinearity is much higher. The frequency-dependent nonlinear parameter $\gamma(\Omega)$ is given by $$\gamma(\Omega) = \frac{n_2 n_0^2 \Omega}{c n_{eff}(\Omega)^2 A_{\mathrm{eff}}(\Omega_0)}$$ where $n_0$ is the refractive index at $\Omega_0$, and $n_2$ is the nonlinear refractive index. The usual definition of the effective area is used: $$A_{\mathrm{eff}}(\Omega) = \frac{\qty(\int \int \abs{\mathbfcal{E}(\Omega,x,y)}^2\mathrm{d}x \mathrm{d} y)^{2}}{\int \int \abs{\mathbfcal{E}(\Omega,x,y)}^4 \mathrm{d}x\mathrm{d}y}$$ In this implementation the transverse fields $\mathbfcal{E}(\Omega,x,y)$ have been normalized such that the unit of the envelope in the time domain is $\sqrt{\mathrm{W}}$.
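For orientation, evaluating $\gamma(\Omega)$ with the $n_2$ of $\mathrm{As_2Se_3}$ quoted in the fiber parameter table, and with assumed illustrative values for the refractive index and effective area (neither is given explicitly in the text), yields a nonlinearity of order $0.5\,\mathrm{(W\,m)^{-1}}$ at 4 µm:

```python
import math

# Evaluate gamma(Omega) = n2*n0^2*Omega / (c*n_eff(Omega)^2*A_eff(Omega_0))
# for the As2Se3 fiber. n2 is the tabulated value; n0, n_eff and A_eff are
# assumed illustrative numbers (taking n0 ~ n_eff makes the index ratio ~1).

c = 299792458.0                 # speed of light [m/s]
n2 = 1500e-20                   # nonlinear index of As2Se3 [m^2/W] (tabulated)
n0 = n_eff = 2.8                # refractive index (assumed, ratio ~1)
A_eff = 5e-11                   # effective area ~50 um^2 (assumed)

lam = 4e-6                      # wavelength 4 um
Omega = 2 * math.pi * c / lam   # angular frequency [rad/s]

gamma = n2 * n0**2 * Omega / (c * n_eff**2 * A_eff)
print(f"gamma(4 um) ~ {gamma:.2f} 1/(W m)")
```

This is roughly three orders of magnitude above typical silica-fiber values, which is the point of using a chalcogenide fiber for the second cascade stage.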
Finally, the nonlinear response function of the material is given by $$\Tilde{R}(\Omega) = 1-f_\mathrm{R} + f_\mathrm{R}\Tilde{h}_r(\Omega)$$ where the term $1-f_{\rm R}$ is the instantaneous Kerr response, $\Tilde{h}_r(\Omega)$ is the delayed Raman response function, and $f_R$ is the fractional Raman contribution. The Raman response functions are shown in Fig. \[fig:Raman\] and the key fiber parameters are given in Table \[tab:fiber\_Param\].

Table \[tab:fiber\_Param\]: Constant fiber parameters in the simulations, where $a$ is the fiber core radius and NA is the numerical aperture.

  fiber                 $a$ $[\mu \mathrm{m}]$   NA      $n_2 \cdot 10^{20}$ $[\mathrm{m^2/W}]$   $f_R$    Manufacturer
  --------------------- ----------------------- ------- ------------------------------------------ -------- --------------
  ZBLAN                 3.5                     0.265   2.1                                        0.0969   FiberLabs
  $\mathrm{As_2Se_3}$   6                       0.76    1500                                       0.0103   IRflex

In all our simulations we use $N_t = 2^{19}$ grid points, a temporal resolution of $\mathrm{d}t = 1.5 \mathrm{fs}$, and a central angular frequency of $\Omega_0 = 2 \pi \cdot 350 \cdot 10^{12} \mathrm{rad \cdot s}^{-1}$. The narrowest solitons observed are approximately 20 fs, which means the temporal resolution should be adequate. This gives a wavelength range that spans from $438.7 \mathrm{nm}$ to $17989 \mathrm{nm}$. We use a variable step-size controller, based on an embedded fourth- and fifth-order Runge-Kutta stepper.
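As a quick consistency check, the quoted wavelength span follows directly from these grid parameters: the frequency grid extends $\pm 1/(2\,\mathrm{d}t)$ around the center frequency. (The ~1-2 nm difference at the long-wavelength edge presumably comes from the exact alignment of the discrete frequency bins, which the simple estimate below ignores.)

```python
# Reproduce the quoted simulation wavelength span from the grid parameters:
# center frequency f0 = 350 THz and temporal resolution dt = 1.5 fs give a
# frequency grid covering f0 +/- 1/(2*dt).

c = 299792458.0          # speed of light [m/s]
f0 = 350e12              # grid center frequency [Hz]
dt = 1.5e-15             # temporal resolution [s]

f_half = 1.0 / (2.0 * dt)       # Nyquist half-width of the frequency grid
f_min, f_max = f0 - f_half, f0 + f_half

lam_min_nm = c / f_max * 1e9    # shortest wavelength on the grid
lam_max_nm = c / f_min * 1e9    # longest wavelength on the grid

print(round(lam_min_nm, 1))     # ~438.7 nm, matching the quoted value
print(round(lam_max_nm))        # ~17988 nm, close to the quoted 17989 nm
```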
Due to the high nonlinear refractive index of $\mathrm{As_2Se_3}$, proper convergence necessitates step-sizes as small as $0.1 \mu \mathrm{m}$.\
In all the presented figures illustrating the evolution of the power spectral density (PSD) as a function of propagation distance $z$ a running spectral average has been applied with a step size of $3 \mathrm{nm}$ and a bandwidth of $15 \mathrm{nm}$. All spectrograms are calculated using a Hamming window with a width of $150 \mathrm{fs}$.

Cascaded supercontinuum generation
==================================

In the following we focus on the $\mathrm{As_2Se_3}$ fiber stage of the cascaded SC source shown in Fig. \[fig:Cascade\]. The input to the $\mathrm{As_2Se_3}$ fiber is the output of the ZBLAN fiber, which is obtained by propagating noise-seeded Gaussian-shaped 40ps pulses from a directly modulated 1560nm seed diode through a 5m long Er-doped silica fiber amplifier (EDFA), followed by a 1.5m long Tm-doped fiber amplifier (TDFA), similar to the one described in [@ZBLAN_Dispersion], and finally the 7m long ZBLAN fiber. During amplification in the EDFA, the input pulse undergoes MI and breaks up into a number of solitons to generate an in-amplifier SC [@In_Amp_SCG; @Kyei], resulting in an SC with a spectral edge around 2.3$\mu \mathrm{m}$ and 800mW of average power at $1\mathrm{MHz}$ repetition rate. In the TDFA the spectral edge is extended to 2.8$\mu$m before being coupled into the 7m ZBLAN fiber to generate the ensemble averaged output spectrum shown in the top of Fig. \[fig:Full\_Evo\] (averaged over 10 noise seeds), which reaches 4.5$\mu$m and has 220mW average power at 1MHz repetition rate. Essentially, we now consider how 10 different versions of the single shot (one noise seed) ZBLAN output spectrogram shown in Fig. \[fig:ZBLAN\_output\](a) evolve in the $\mathrm{As_2Se_3}$ fiber.
All detailed parameters of all fibers and amplifiers not given here can be found in the supplementary material (see also [@RasmusMSC]).\
The result of propagating the full output of the ZBLAN fiber in 2.5m $\mathrm{As_2Se_3}$ fiber is shown in Fig. \[fig:Full\_Evo\], with the input and output spectra shown above. The spectrum was averaged over an ensemble of 10 noise seeds, with both one-photon-per-mode noise and 1% relative intensity noise. A part of the power is seen to immediately shift across the ZDW at 6$\mu$m out to about 12$\mu$m, but as the pulse propagates along the fiber, the mid-IR power above 6$\mu$m attenuates rather quickly again. However, an approximately 500-800nm broad localized excitation carrying a major part of the power is seen to slowly red-shift from $\sim 3 \mu \mathrm{m}$ to $\sim 5 \mu \mathrm{m}$, while also pushing power across the ZDW and far into the mid-IR. This red-shifting collective excitation was also recently observed in [@RasmusMSC; @Venck]. The main peak in the spectrum ends up at approximately $5 \mu \mathrm{m}$ after 1.2m of propagation. As a result the mid-IR power above 6$\mu$m peaks after 2m of propagation. At the output, the spectrum reaches 11$\mu \mathrm{m}$ and at 1MHz repetition rate the total power is 77mW, with 12mW above the ZDW at 6$\mu \mathrm{m}$.\
Understanding the physics behind the slowly red-shifting high-PSD localized structure is the focus of this article. Using detailed spectrograms we will show that it is due to a collective effect of inter-pulse Raman amplification between a large number of pulses undergoing SPM, which finally leads to the generation of a novel type of high-energy SPM rogue wave.\
In Fig. \[fig:ZBLAN\_output\] we show spectrograms of the power distribution $\abs{G(z,t)}^2=|A(z,t)|^2$ of the ZBLAN fiber output and how it looks after having propagated 0.6m and 1.2m in the $\mathrm{As_2Se_3}$ fiber. At the ZBLAN fiber output, seen in Fig.
\[fig:ZBLAN\_output\](a), a large number of solitons have been created in the anomalous dispersion regime between the ZDWs at 1.5$\mu$m and 4.2$\mu$m (dashed black lines), while in the normal dispersion regimes dispersive waves have been generated by the solitons, giving a spectrum extending from $1 \mu \mathrm{m}$ to about $4.5 \mu \mathrm{m}$. The inset shows a close-up of the area marked by the red dashed rectangle, in which several intense and distinct solitons are clearly visible. In the following section the three numbered solitons 1-3 will be investigated separately to clearly illustrate the fundamental physics behind the observed spectral evolution. The pulse parameters of the three solitons (amplitude fitted to a sech profile) are given in Table \[tab:PulseParameters\].

  \# | $P_0$ $[\mathrm{kW}]$ | $T_{\rm FWHM}$ $[\mathrm{fs}]$ | $\lambda_0$ $[\mu \mathrm{m}]$ | $\gamma$ $[\mathrm{1/(Wm)}]$ | $\beta_2$ $[\mathrm{ps^2/m}]$ | $z_{\mathrm{OWB}}$ $[\mathrm{mm}]$
  ---|---|---|---|---|---|---
  1 | 42.8 | 50 | 2.55 | 0.58 | 523 | 1.3
  2 | 78.6 | 30 | 2.72 | 0.54 | 476 | 0.6
  3 | 44.2 | 90 | 2.68 | 0.55 | 487 | 2.5

  : Parameters for solitons 1-3 marked in Fig. \[fig:ZBLAN\_output\](a): peak power $P_0$, temporal full width at half maximum $T_{\rm FWHM}$ (measured in power), center wavelength $\lambda_0$, fiber nonlinearity $\gamma$, group velocity dispersion $\beta_2$, and OWB distance $z_{\mathrm{OWB}}$ calculated from the analytical formula given in [@Anderson_Wave_Breaking], all evaluated at $\lambda_0$.[]{data-label="tab:PulseParameters"}

After 0.6m a considerable amount of power has been moved above the ZDW in the $\mathrm{As_2Se_3}$ fiber, but the bulk of the energy is now concentrated in a narrow band around $4 \mu \mathrm{m}$, as also clearly visible in Fig. \[fig:Full\_Evo\]. Several temporally elongated SPM profiles are visible, and from the inset it is clear that the most delayed SPM profile has a much higher PSD than the others. After 1.2m the localized band has red-shifted to an even narrower band closer to the ZDW, at approximately $5 \mu \mathrm{m}$, as also seen in Fig. \[fig:Full\_Evo\]. Fewer SPM profiles are now visible, and the most delayed is still the one with the highest PSD. A new type of rogue wave with an SPM shape appears to have been formed. To further investigate this behaviour, we now focus on the interaction of the three solitons marked in Fig. \[fig:ZBLAN\_output\](a).

Inter-pulse Raman amplification and the SPM rogue wave
======================================================

The spectral evolution of only the three solitons selected from the ZBLAN output spectrum \[see Fig. \[fig:ZBLAN\_output\](a)\] is shown in Fig. \[fig:3Solitons\]. From now on we refer to them as pulses when discussing their evolution inside the $\mathrm{As_2Se_3}$ fiber.
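As a rough consistency check of the scales in Table \[tab:PulseParameters\], the standard characteristic lengths of a sech pulse can be computed from the tabulated quantities. The sketch below is illustrative only: the function and its input values (of the order of soliton 1, in assumed SI units) are ours, and the actual $z_{\mathrm{OWB}}$ values in the table come from the analytical formula of [@Anderson_Wave_Breaking], which we do not reproduce here.

```python
import math

def pulse_scales(P0_W, T_fwhm_s, beta2_s2_per_m, gamma_per_W_m):
    """Dispersion length, nonlinear length, and soliton order N
    for a sech pulse (illustrative helper, not from the paper)."""
    T0 = T_fwhm_s / (2.0 * math.log(1.0 + math.sqrt(2.0)))  # sech: T_FWHM ~ 1.763 T0
    L_D = T0 ** 2 / abs(beta2_s2_per_m)   # dispersion length
    L_NL = 1.0 / (gamma_per_W_m * P0_W)   # nonlinear length
    N = math.sqrt(L_D / L_NL)             # soliton order
    return L_D, L_NL, N

# Representative numbers of the order of soliton 1 (values assumed, not the table's)
L_D, L_NL, N = pulse_scales(P0_W=42.8e3, T_fwhm_s=50e-15,
                            beta2_s2_per_m=0.5e-24, gamma_per_W_m=0.5)
```

With $N$ well above 1 in the strongly normal dispersion regime, OWB is expected after a small fraction of $L_D$, consistent with the mm-scale $z_{\mathrm{OWB}}$ values in the table.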
Initially, very strong spectral broadening due to SPM happens almost immediately because of the extremely high nonlinearity of the fiber, which shifts energy across the ZDW at 6$\mu \mathrm{m}$, just as in the evolution of the full spectrum with hundreds of solitons in Fig. \[fig:Full\_Evo\]. Even with only these 3 pulses starting relatively far apart, a localized high-PSD structure again becomes visible at around 0.3m and continuously red-shifts towards the ZDW. The localized structure is marked SRS, but at this point we have not shown whether it is inter-pulse or intra-pulse SRS that generates it. The fact that the structure appears later than for the full spectrum, which contains many more and temporally much closer solitons, seems to indicate that it is an inter-pulse interaction effect.\ To look even closer into the dynamics we use spectrograms. In Fig. \[fig:3Sol\_Spectrogram\] we focus on the initial dynamics out to z=2cm, where the 3 pulses evolve individually without significant overlap. In Fig. \[fig:3Sol\_Spectrogram\](b) we see the clear signature of initial SPM for all pulses at z=0.2mm. As expected, SPM happens faster for pulse 2 because it has the highest peak power (see Table \[tab:PulseParameters\]). At z=2mm dispersion has started to shift the SPM sidelobes temporally, and optical wave breaking (OWB) has started to set in for pulses 1 and 2, initially at the tails where the dispersion is the highest. At z=1cm pulse 2 has spectrally broadened to the ZDW and started to push energy across it, while pulse 3 is now also clearly undergoing OWB. At z=2cm pulse 2 has shed off what appears as a soliton in the anomalous dispersion regime above the ZDW, and what is left in the normal dispersion regime is three elongated SPM pulses, still without overlap.\ In Fig. \[fig:3Sol\_Spectrogram\_Last\] we focus on the normal dispersion region and the evolution between z=20cm and z=60cm, in which the SPM pulses begin to overlap enough due to dispersion to start interacting through SRS.
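The spectrograms used throughout are short-time Fourier transforms of the field with a sliding gate. A minimal sketch is given below; this is our own implementation with a Gaussian gate and freely chosen resolution parameters, not the paper's actual gating function, which is not specified here.

```python
import numpy as np

def spectrogram(t, A, gate_fwhm, n_delay=128):
    """|STFT|^2 of a complex field A(t) using a sliding Gaussian gate."""
    sigma = gate_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    delays = np.linspace(t[0], t[-1], n_delay)
    S = np.empty((n_delay, len(t)))
    for k, tau in enumerate(delays):
        gated = A * np.exp(-0.5 * ((t - tau) / sigma) ** 2)
        S[k] = np.abs(np.fft.fftshift(np.fft.fft(gated))) ** 2
    return delays, S
```

Plotting `S` against delay and (shifted) frequency, converted to wavelength, gives delay-wavelength maps of the kind shown in the figures.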
The interaction will be through inter-pulse Raman amplification, which will be effective for frequency separations up to approximately 10THz and strongest at about 7THz, as shown in Fig. \[fig:Raman\]. In the spectrograms we mark the 10THz separation with a black indicator at the wavelength where it first connects two SPM lobes, to show when the SPM lobes can transfer power between each other and when not.\ In Fig. \[fig:3Sol\_Spectrogram\_Last\](a) for z=20cm we see that the tail of pulse 2 can now transfer energy to the most delayed pulse 3 at wavelengths below approximately 2.8$\mu$m, while energy can be transferred from pulse 1 to 2 at wavelengths below 2.7$\mu$m. This is illustrated by the 10THz Raman indicator on pulse 2 (pulse 1) exactly reaching pulse 3 (pulse 2) at 2.8$\mu \mathrm{m}$ (2.7$\mu \mathrm{m}$). The energy transfer between the SPM lobes is clearly visible in Fig. \[fig:3Sol\_Spectrogram\_Last\](b) for z=28cm, where the trailing edge of pulse 2 below 2.3$\mu \mathrm{m}$ (delayed more than 35ps) has now been completely swallowed by pulse 3. The cut-off wavelength for energy transfer from pulse 2 (pulse 1) to pulse 3 (pulse 2) has now increased to 3.0$\mu \mathrm{m}$ (2.8$\mu \mathrm{m}$), as marked by the 10THz Raman indicators. In Fig. \[fig:3Sol\_Spectrogram\_Last\](c) for z=36cm pulse 2 has now been completely swallowed by pulse 3 at wavelengths shorter than 2.8$\mu$m (delayed more than about 18ps). The Raman indicator between pulses 1 and 3 further shows that pulse 3 can now also swallow energy directly from pulse 1 at wavelengths below 2.6$\mu \mathrm{m}$. In Figs. \[fig:3Sol\_Spectrogram\_Last\](d-e) for z=54cm and 70cm the most delayed pulse 3 is swallowing more and more energy from the other two, and at 70cm both pulses 1 and 2 have disappeared at wavelengths below 3.5$\mu \mathrm{m}$, leaving pulse 3 almost alone as a high energy SPM rogue wave containing more than 89% of the total energy, whereas it initially contained 47% of the energy.
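The 10THz Raman indicators amount to a one-line frequency conversion: given a pump lobe at wavelength $\lambda$, the longest wavelength it can still amplify lies 10THz below the pump frequency. A small helper (our own, assuming the $\sim$10THz window quoted above):

```python
C = 299792458.0  # speed of light [m/s]

def raman_partner_wavelength(lam_pump_m, shift_THz=10.0):
    """Wavelength lying `shift_THz` below the pump frequency, i.e. the
    outer edge of the assumed inter-pulse Raman gain window."""
    nu_pump = C / lam_pump_m
    return C / (nu_pump - shift_THz * 1e12)
```

For a pump lobe at 2.8$\mu$m this gives roughly 3.09$\mu$m, of the same order as the cut-off wavelengths read off the spectrograms.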
No further noticeable energy transfer is expected because of the weak dispersion close to the ZDW, which will prevent further temporal overlap between the pulses.\ The SPM rogue wave is clearly being generated through inter-pulse Raman amplification. To demonstrate the dominant role of SRS we turn off the Raman effect, by setting $f_R=0$, and repeat the simulation out to z=70cm. The result is shown in Fig. \[fig:3Sol\_Spectrogram\_Last\](f), for a direct comparison with Fig. \[fig:3Sol\_Spectrogram\_Last\](e) with the Raman effect present. Without the Raman effect the three SPM pulses are seen to not transfer energy between each other, but remain completely intact despite a strong temporal overlap, and no SPM rogue wave is generated. The full spectral evolution over 1.5m with the Raman effect turned off is given in the supplementary Fig. S5, which confirms the absence of a collective localized SRS structure when the Raman effect is absent.\ When the Raman effect is turned off by setting $f_R$=0, obviously both inter-pulse and intra-pulse SRS are turned off. In the supplementary Fig. S6 we therefore show the spectral evolution of a single soliton 3 over 0.7m with the Raman term present, and in the available movie we show the evolution of the spectral-temporal structure as spectrograms. Both clearly show no noticeable intra-pulse SRS, confirming that the high energy SPM rogue wave is generated by inter-pulse Raman amplification.\ The demonstration of the generation of the high energy SPM rogue wave was performed with the 3 different solitons 1-3 taken from an actual SC obtained in a realistic cascaded mid-IR SCG configuration, in which the rogue wave started as the most delayed soliton, which in this case also had the highest energy already at the input.
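The $f_R$ toggle acts on the standard nonlinear response $R(t)=(1-f_R)\,\delta(t)+f_R\,h_R(t)$, so setting $f_R=0$ removes all delayed (Raman) nonlinearity while leaving the instantaneous Kerr part. A sketch of the delayed part in the common damped-oscillator form is below; the time constants are the textbook silica values, used only as placeholders ($\mathrm{As_2Se_3}$ has its own response function, not reproduced here).

```python
import numpy as np

def raman_hR(t, tau1=12.2e-15, tau2=32.0e-15):
    """Delayed Raman response h_R(t) in the damped-oscillator form,
    normalized so its time integral is 1 (placeholder silica constants)."""
    hR = np.zeros_like(t)
    m = t >= 0.0  # causality: h_R(t) = 0 for t < 0
    hR[m] = ((tau1 ** 2 + tau2 ** 2) / (tau1 * tau2 ** 2)
             * np.exp(-t[m] / tau2) * np.sin(t[m] / tau1))
    return hR
```

Because $\int h_R(t)\,\mathrm{d}t = 1$, sliding $f_R$ between 0 and its material value reweights instantaneous versus delayed nonlinearity without changing the total nonlinear strength.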
To illustrate the generation even more clearly, we simulated the evolution of 10 identical copies of soliton 3 separated by 20ps, and set the fiber loss to zero in order to avoid spectral signatures originating, for example, from loss peaks. The spectral evolution over 1.5m in the lossless $\mathrm{As_2Se_3}$ fiber, shown in Fig. \[fig:10Solitons\], demonstrates the excitation and slow red-shift of the localized SRS structure even more distinctly. More importantly, the SPM rogue wave is still generated, as seen in the spectrogram at 1m in Fig. \[fig:10Solitons\], where it has swallowed most of the energy of the other 9 pulses and now contains over 55% of the total energy, whereas it initially had only 10% of the energy. The full spectrogram series of this lossless 10-soliton case is shown in Figs. S8-S10 in the supplementary material.

Manipulating the localized SRS structure
========================================

Our modelling has clearly shown that the red-shifting localized SRS structure is due to inter-pulse SRS, by which energy is transferred between SPM pulses towards the most delayed pulse (the SPM rogue wave), and that it is mediated by dispersion continuously causing longer and longer wavelength parts of the trailing edge of an SPM lobe to begin to overlap with the neighbouring, slightly more delayed and red-shifted SPM lobe. This energy transfer between the SPM lobes is also what creates the localized SRS structure and makes it red-shift. If the dispersion is very weak there will be no new energy transfer, since all spectral components travel at the same speed.\ We thus anticipate that the red-shift of the localized SRS structure will stop close to the ZDW when the dispersion becomes too weak, and that it will not be able to cross the ZDW. We further anticipate that the larger the slope of the dispersion at the ZDW ($\beta_3$) is, the closer to the ZDW the localized SRS structure can come and the narrower it will be spectrally when it stops.
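The anticipated role of the dispersion slope can be made quantitative with a first-order Taylor expansion of the dispersion around the ZDW, $\beta_2(\omega)\approx\beta_3\,(\omega-\omega_{\mathrm{ZDW}})$. The helper below is our own back-of-the-envelope sketch (the simulations of course use the full dispersion curves):

```python
import math

def beta2_near_zdw(lam_m, lam_zdw_m, beta3_s3_per_m):
    """First-order estimate beta2 ~ beta3*(omega - omega_ZDW) near the ZDW."""
    c = 299792458.0
    d_omega = 2.0 * math.pi * c * (1.0 / lam_m - 1.0 / lam_zdw_m)
    return beta3_s3_per_m * d_omega
```

For example, with $\beta_3 = 2.1\,\mathrm{ps^3/km}$ ($2.1\times 10^{-39}\,\mathrm{s^3/m}$) at a 6$\mu$m ZDW, the estimate gives $\beta_2 \approx 0.13\,\mathrm{ps^2/m}$ at 5$\mu$m, illustrating how weak the dispersion becomes as the localized structure approaches the ZDW.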
This is exactly what we observe in Fig. \[fig:Parallel\], where we propagate the output spectrum of the ZBLAN fiber shown in Fig. \[fig:ZBLAN\_output\](a) in 1m of the original $\mathrm{As_2Se_3}$ fiber (a) with ZDW=6.0$\mu$m and in 1m of two other $\mathrm{As_2Se_3}$ fibers with increasingly smaller ZDW of (b) 5.5$\mu$m (with NA increased to 1 by decreasing the cladding index) and (c) 5.1$\mu$m (NA=1 and core radius decreased to a=4$\mu$m). The localized structure never crosses the ZDW. In addition, the localized structure comes closest to the ZDW in fiber (b), which has the largest dispersion slope at the ZDW of $\beta_3$=2.4ps$^3$/km, whereas fibers (a) and (c) have $\beta_3$=2.1ps$^3$/km and $\beta_3$=1.9ps$^3$/km, respectively.\ The ZDW and the slope of the dispersion at the ZDW can thus be used to control and manipulate the SPM rogue wave generation process and the center wavelength and bandwidth of the high PSD localized SRS structure. For example, if a strong signal is required at $7 \mu \mathrm{m}$ for an imaging system similar to the one reported in [@Tissue_Imaging], it would be beneficial to increase the ZDW to approximately $8 \mu \mathrm{m}$ by, for example, choosing a fiber with $a = 6 \mu \mathrm{m}$ and NA$=0.56$.

Conclusion
==========

We have through rigorous numerical modelling demonstrated a novel fundamental physical phenomenon: the generation of high energy optical SPM rogue waves in the normal dispersion regime, in the form of a strongly SPM-broadened pulse containing most of the energy originally spread out over many pulses. Along the local SPM shape the rogue wave is localized both temporally and spectrally, but seen as a whole entity it is both temporally and spectrally delocalized.
This is in sharp contrast to the well-known optical rogue waves generated in the anomalous dispersion regime where solitons and MI exist, which take either the form of the temporally localized high peak power fundamental bright solitons or that of the both spatially and temporally localized Peregrine soliton. We have demonstrated that this SPM rogue wave is naturally generated from an ensemble of originally separated input pulses that undergo spectral broadening through SPM and temporal broadening through dispersion, so that they temporally overlap and transfer energy between each other through inter-pulse SRS. The inter-pulse SRS transfers energy from one SPM lobe to the neighbouring, slightly red-shifted and delayed SPM lobe, so that most of the energy finally becomes localized in the most delayed SPM-shaped pulse, the SPM rogue wave.\ From a technical point of view, we have demonstrated the generality of the SPM rogue wave by generating it from different ensembles of pulses and, in particular, by showing how it appears naturally in fiber-based cascaded SCG, in which the distributed soliton spectrum of an SC generated in the anomalous dispersion regime of one fiber is coupled into the normal dispersion regime of a subsequent fiber. Cascaded SCG is the most promising technique behind future table-top and low-cost mid-IR fiber-based SC sources, which are spatially coherent and have a brightness two orders of magnitude higher than synchrotrons [@Syncrotron]. Understanding and being able to control and manipulate the SPM rogue wave is therefore not just of fundamental but also of technical importance. We have shown how the SPM rogue wave generation can be seen spectrally as a localized collective high PSD structure that slowly red-shifts due to the energy transfer by inter-pulse SRS, until finally being stopped by the ZDW.
We have demonstrated how the SPM rogue wave and this localized high PSD structure can be manipulated through the dispersion to tailor the spectrum of a mid-IR SC source towards applications in for example imaging and spectroscopy.

Funding Information {#funding-information .unnumbered}
===================

We acknowledge the financial support from Innovation Fund Denmark through UVSUPER Grant No. 8090-00060A.

Acknowledgements {#acknowledgements .unnumbered}
================

We would like to thank Kyei Kwarkye for measuring the ZBLAN loss curve, and Mikkel Jensen for fruitful discussions.

Disclosures {#disclosures .unnumbered}
===========

The authors declare no conflicts of interest.

Supporting content {#supporting-content .unnumbered}
==================

See supplement 1 for supporting content.

Fiber parameters
================

To allow reproduction we here present the necessary fiber parameters for the entire cascade. In the following we use Si:Er for the initial stage in the fiber cascade, which is an Erbium/Ytterbium doped silica fiber, and Si:Tm for the second stage, which is a Thulium doped silica fiber. The constant fiber parameters for the two fiber amplifiers are given in Table \[tab:fiber\_Param\_sup\].

  fiber | a $[\mu \mathrm{m}]$ | NA | $n_2 \cdot 10^{20}$ $[\mathrm{m^2/W}]$ | $f_R$
  ---|---|---|---|---
  Si:Er | 6 | 0.2 | 3.2 | 0.18
  Si:Tm | 5 | 0.22 | 4.23 | 0.13

  : Constant fiber parameters for the silica gain fibers, where a is the fiber core radius and NA is the numerical aperture. In a SEM analysis conducted in , it was found that the core of the Thulium doped silica fiber amplifier contains large amounts of Ge. This is the reason why $n_2$ and $f_\mathrm{R}$ differ between the two fiber amplifiers.[]{data-label="tab:fiber_Param_sup"}

The effective index can be derived from the fiber parameters given in Table 1 of the manuscript, using either analytical results for step-index fibers or numerical tools, like FEM analysis. The dispersion of both the ZBLAN and the $\mathrm{As_2Se_3}$ fibers is shown in Figure 2 of the main document.\ The effective area can be derived using the same tools. However, as the effective areas used in the simulations are not included in the main document, we show them here. A parametrisation of the silica loss curve is given in for both of the fiber amplifiers. The parametrisation is given by: $$\begin{split} &\alpha_{dB/m}(\lambda) = \frac{A_{Ray}}{\lambda^4} + A_{ip} + A_{uv} \exp\qty(\frac{\lambda_{uv}}{\lambda}) + A_{ir} \exp\qty(\frac{-\lambda_{ir}}{\lambda}) \\ &\alpha_{1/m} = \frac{\ln(10)}{10} \alpha_{dB/m} \end{split} \label{eq:si_loss}$$ where $$\begin{split} &A_{Ray} = 1.3 \times 10^{-3} \frac{\mathrm{dB} \mu \mathrm{m^4}}{\mathrm{m}} \quad A_{ip} = 10^{-3} \mathrm{\frac{dB}{m}} \quad A_{uv} = 10^{-6} \mathrm{\frac{dB}{m}} \quad A_{ir} = 6 \times 10^{8} \mathrm{\frac{dB}{m}} \\ & \lambda_{uv} = 4.67 \mu \mathrm{m} \quad \lambda_{ir} = 47.8 \mu \mathrm{m} \end{split}$$ For the Si:Er fiber the gain cross-section is found in . The gain cross-section for the Si:Tm fiber is given in , except that the high wavelength gain is given in ; the sum of these two curves is the reason for the kink seen around 2100nm.
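For completeness, the loss parametrisation in Eq. \[eq:si\_loss\] translates directly into code ($\lambda$ in micrometres; constants exactly as given above, variable names our own):

```python
import math

A_RAY = 1.3e-3   # dB um^4 / m  (Rayleigh scattering)
A_IP = 1.0e-3    # dB / m       (impurity floor)
A_UV = 1.0e-6    # dB / m       (UV edge amplitude)
A_IR = 6.0e8     # dB / m       (IR edge amplitude)
LAM_UV = 4.67    # um
LAM_IR = 47.8    # um

def silica_loss_dB_per_m(lam_um):
    """Silica loss parametrisation: Rayleigh + impurity + UV + IR edges."""
    return (A_RAY / lam_um ** 4 + A_IP
            + A_UV * math.exp(LAM_UV / lam_um)
            + A_IR * math.exp(-LAM_IR / lam_um))

def silica_loss_per_m(lam_um):
    """Convert the loss from dB/m to 1/m."""
    return math.log(10.0) / 10.0 * silica_loss_dB_per_m(lam_um)
```

With these constants the IR edge term dominates beyond roughly 2$\mu$m, which is what confines the silica amplifier stages to the near-IR part of the cascade.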
The gain curves have been rescaled by an average (along the length of the fiber) excited dopant concentration, such that the output spectra of the simulations of the fiber amplifiers matched experimentally measured data as well as possible. As such, a constant gain is used in the fibers, which is admittedly a strong assumption.

Three solitons with Raman interaction turned off
================================================

A simulation with the Raman effect turned off ($f_R = 0$) was run to clearly show which effects are due to Raman and which are purely Kerr interaction.

Simulation with a single input soliton
======================================

To underline that the red-shift is indeed a collective effect, we show the dynamics of a single soliton (soliton 3) in Fig. \[fig:Single\_Sol\].\

Ten similar solitons, with no loss
==================================

As an additional illustrative example of SRS we present the dynamics of 10 spectrally identical solitons at $\lambda_0 = 2.68 \mu \mathrm{m}$, initially shifted by 20ps, in Fig. \[fig:10Solitons\_Dynamics\]. This is the same simulation as shown in Figure 9 in the main document, but here we show it in more detail. The fiber parameters are all the same as the ones used in the main paper. The spectral dynamics are very similar to the test example with solitons 1, 2, and 3, except that the SRS structure is seen more clearly. In the following we show spectrograms breaking down both the temporal and spectral dynamics of the 10 solitons.\

Fig. \[fig:FirstSpectrograms\] (panels: 10Solitons\_0m.png, 10Solitons\_0001m.png, 10Solitons\_0005m.png): The initial dynamics shown in spectrograms. The red line in the plots is where the maximum PSD is found. In the bottom two plots only two of the pulses are shown in order to clearly illustrate optical wave breaking. In the bottom-most plot the pulse centres are almost depleted, and the main PSD is localised in two new bands.

Fig. \[fig:MiddleSpectrograms\] (panels: 10Solitons\_05m.png, 10Solitons\_06m.png, 10Solitons\_07m.png): The onset of SRS shown in spectrograms. The red line in the plots is where the maximum PSD is found. At 0.5m the dispersion has not yet ensured the needed overlap for SRS. At 0.6m the red-shift has initiated. This is mostly observable from the depletion of the tail of the leading pulse, and from the fact that the maximum PSD is now found at a longer wavelength. At 0.7m the trailing pulse has absorbed most of the short wavelength tail of the neighbouring pulse.

Fig. \[fig:LastSpectrograms\] (panels: 10Solitons\_08m.png, 10Solitons\_09m.png, 10Solitons\_1m.png): The evolution of SRS shown in spectrograms. The red line in the plots is where the maximum PSD is found. The maximum PSD is continuously shifted towards longer wavelengths. At 0.9m the vacuum noise floor has gained on the pulses, giving a much more noisy spectrogram. This is further enhanced at 1m. It is important to note, once again, that loss was not included in this simulation; as such, the extent of the noise is exaggerated.
--- abstract: 'Non-standard distributional approximations have received considerable attention in recent years. They often provide more accurate approximations in small samples, and theoretical improvements in some cases. This paper shows that the seemingly unrelated many instruments asymptotics and small bandwidth asymptotics share a common structure, where the object determining the limiting distribution is a V-statistic with a remainder that is an asymptotically normal degenerate U-statistic. We illustrate how this general structure can be used to derive new results by obtaining a new asymptotic distribution of a series estimator of the partially linear model when the number of terms in the series approximation possibly grows as fast as the sample size, which we call many terms asymptotics.' author: - 'Matias D. Cattaneo[^1]' - 'Michael Jansson[^2]' - 'Whitney K. Newey[^3]' bibliography: - 'altasym\_19Dec2014.bib' title: 'Alternative Asymptotics and the Partially Linear Model with Many Regressors[^4]' --- **JEL classification***:* C13, C31. **Keywords**: non-standard asymptotics, partially linear model, many terms, adjusted variance. Introduction\[section:intro\] ============================= Many instrument asymptotics, where the number of instruments grows as fast as the sample size, has proven useful for instrumental variables (IV) estimators. [@Kunitomo_1980_JASA] and [@Morimune_1983_ECMA] derived asymptotic variances that are larger than the usual formulae when the number of instruments and sample size grow at the same rate, and [@Bekker_1994_ECMA] and others provided consistent estimators of these larger variances. [@Hansen-Hausman-Newey_2008_JBES] showed that using many instrument standard errors provides a theoretical improvement for a range of number of instruments and a practical improvement for estimating the returns to schooling. 
Thus, many instrument asymptotics and the associated standard errors have been demonstrated to be a useful alternative to the usual asymptotics for instrumental variables. Instrumental variable estimators implicitly depend on a nonparametric series estimator. Many instrument asymptotics has the number of series terms growing so fast that the series estimator is not consistent. Analogous asymptotics for kernel-based density-weighted average derivative estimators has been considered by @Cattaneo-Crump-Jansson_2010_JASA [@Cattaneo-Crump-Jansson_2014a_ET]. They show that when the bandwidth shrinks faster than needed for consistency of the kernel estimator, the variance of the estimator is larger than the usual formula. They also find that correcting the variance provides an improvement over standard asymptotics for a range of bandwidths. The purpose of this paper is to show that these results share a common structure, and to illustrate how this structure can be used to derive new results. The common structure is that the object determining the limiting distribution is a V-statistic, which can be decomposed into a bias term, a sample average, and a remainder that is an asymptotically normal degenerate U-statistic. Asymptotic normality of the remainder distinguishes this setting from other ones involving V-statistics. Here the asymptotically normal remainder comes from the number of series terms going to infinity or bandwidth shrinking to zero, while the behavior of a degenerate U-statistic tends to be more complicated in other settings. When the number of terms grows as fast as the sample size, or the bandwidth shrinks to zero at an appropriate rate, the remainder has the same magnitude as the leading term, resulting in an asymptotic variance larger than just the variance of the leading term. The many instrument and small bandwidth results share this structure. 
In keeping with this common structure, we will henceforth refer to such results under the general heading of alternative asymptotics. The alternative asymptotics that we discuss in this paper applies to statistics that take a specific V-statistic representation, or may be approximated by it sufficiently accurately, and therefore it does not apply broadly to all possible semiparametric settings. Nonetheless, as we illustrate below, this structure arises naturally in several interesting problems in Economics and Statistics. In particular, we show formally that applying this common structure to a series estimator of the partially linear model leads to new results. These results allow the number of terms in the series approximation to grow as fast as the sample size. The asymptotic distribution of the estimator is derived and it is shown to have a larger asymptotic variance than the usual formula, which is in fact a natural and generic consequence of the specific structure that we highlight in this paper. We also find that under homoskedasticity, the classical degrees-of-freedom adjusted homoskedastic standard error estimator from linear models is consistent even when the number of terms is large relative to the sample size. This result offers a large sample, distribution-free justification for the degrees-of-freedom correction when many series terms are employed. Constructing an automatic consistent standard error estimator under (conditional) heteroskedasticity of unknown form in this setting turns out to be quite challenging. In [@Cattaneo-Jansson-Newey_2015_HCSEManyCov], we present a detailed discussion of heteroskedasticity-robust standard errors for general linear models with increasing dimension, which covers the partially linear model with many terms studied herein as a special case. The rest of the paper is organized as follows.
Section \[section:CommonStructure\] describes the common structure of many instrument and small bandwidth asymptotics, and also shows how the structure leads to new results for the partially linear model. Section \[section:ManyReg\] formalizes the new distributional approximation for the partially linear model. Section \[section:simuls\] reports results from a small simulation study aimed to illustrate our results in small samples. Section \[section:conclusion\] concludes. The appendix collects the proofs of our results. A Common Structure\[section:CommonStructure\] ============================================= To describe the common structure of many instrument and small bandwidth asymptotics, let $W_{1},\ldots,W_{n}$ denote independent random vectors. We consider an estimator $\hat{\beta}$ of a generic parameter of interest $\beta_{0}\in\mathbb{R}^{d}$ satisfying$$\sqrt{n}(\hat{\beta}-\beta_{0})=\hat{\Gamma}_{n}^{-1}S_{n},\text{\qquad}S_{n}=\sum_{1\leq i,j\leq n}u_{ij}^{n}(W_{i},W_{j}),\label{eq:vstat}$$ where $u_{ij}^{n}(\cdot)$ is a function that can depend on $i$, $j$, and $n$. We allow $u$ to depend on $n$ to account for number of terms or bandwidths that change with the sample size. Also, we allow $u$ to vary with $i$ and $j$ to account for dependence on variables that are being conditioned on in the asymptotics, and so treated as nonrandom. We assume throughout this section that there exists a sequence of non-random matrices $\Gamma_{n}$ satisfying $\Gamma_{n}^{-1}\hat{\Gamma}_{n}\rightarrow_{p}I_{d}$ for $I_{d}$ the $d\times d$ identity matrix, and hence we focus on the V-statistic $S_{n}$. (All limits are taken as $n\rightarrow \infty$ unless explicitly stated otherwise.) This V-statistic has a well known (Hoeffding-type) decomposition that we describe here because it is an essential feature of the common structure. 
For notational simplicity we will drop the $W_{i}$ and $W_{j}$ arguments and set $u_{ij}^{n}=u_{ij}^{n}(W_{i},W_{j})$ and $\tilde{u}_{ij}^{n}=u_{ij}^{n}+u_{ji}^{n}-\mathbb{E}[u_{ij}^{n}+u_{ji}^{n}]$. Letting $\Vert\cdot\Vert$ denote the Euclidean norm, if $\mathbb{E}[\Vert u_{ij}^{n}\Vert]<\infty$ for all $i,j,n$, then$$S_{n}=B_{n}+\Psi_{n}+U_{n},\label{eq:decomp}$$ where$$B_{n}=\mathbb{E}[S_{n}],\text{\qquad}\Psi_{n}=\sum_{1\leq i\leq n}\psi_{i}^{n}(W_{i}),\text{\qquad}U_{n}=\sum_{2\leq i\leq n}D_{i}^{n}(W_{i},...,W_{1}),$$$$\psi_{i}^{n}(W_{i})=u_{ii}^{n}-\mathbb{E}[u_{ii}^{n}]+\sum_{1\leq j\leq n,j\neq i}\mathbb{E}[\tilde{u}_{ij}^{n}|W_{i}],$$$$D_{i}^{n}(W_{i},...,W_{1})=\sum_{1\leq j\leq n,j<i}(\tilde{u}_{ij}^{n}-\mathbb{E}[\tilde{u}_{ij}^{n}|W_{i}]-\mathbb{E}[\tilde{u}_{ij}^{n}|W_{j}]),$$ so that $\mathbb{E}[\psi_{i}^{n}(W_{i})]=0$, $\mathbb{E}[D_{i}^{n}(W_{i},...,W_{1})|W_{i-1},...,W_{1}]=0$, and $\mathbb{E}[\Psi_{n}U_{n}]=0$. This decomposition of a V-statistic is well known (e.g., [@vanderVaart_1998_Book Chapter 11]), and shows that $S_{n}$ can be decomposed into a sum $\Psi_{n}$ of independent terms, a U-statistic remainder $U_{n}$ that is a martingale difference sum and uncorrelated with $\Psi_{n}$, and a pure bias term $B_{n}$.[^5] The decomposition is important in many of the proofs of asymptotic normality of semiparametric estimators, including [@Powell-Stock-Stoker_1989_ECMA], with the limiting distribution being determined by $\Psi_{n}$, and $U_{n}$ being treated as a remainder that is of smaller order under a particular restriction on the tuning parameter sequence (e.g., when the bandwidth shrinks slowly enough).
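The decomposition (\[eq:decomp\]) is an exact algebraic identity once the conditional expectations are available. It can be checked numerically on a toy kernel for which they have closed form; the sketch below (our own illustration, not from the paper) uses $u_{ij}^{n}=W_{i}W_{j}$ with $W_{i}$ i.i.d. standard normal, so that $B_{n}=n$, $\psi_{i}^{n}=W_{i}^{2}-1$, and $U_{n}=2\sum_{i<j}W_{i}W_{j}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
W = rng.standard_normal(n)

# Toy kernel u_ij = W_i W_j; E[W]=0 and E[W^2]=1 give closed-form projections.
S = W.sum() ** 2                    # V-statistic: sum over all i,j of W_i W_j
B = float(n)                        # bias term: sum_i E[u_ii]
Psi = float(np.sum(W ** 2 - 1.0))   # projection: sum_i (u_ii - E[u_ii])
U = float(sum(2.0 * W[i] * W[j] for i in range(n) for j in range(i)))

assert np.isclose(S, B + Psi + U)   # S_n = B_n + Psi_n + U_n holds exactly
```

Note that for this kernel, which does not vary with $n$, the standardized $U_{n}$ has a (shifted) chi-square rather than a normal limit, in line with the discussion of $n$-varying kernels below.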
To be specific, under regularity conditions and appropriate tuning parameter sequences that we make precise below, it turns out that$$\left[ \begin{array} [c]{c}\mathbb{V}[\Psi_{n}]^{-1/2}\Psi_{n}\\ \mathbb{V}[U_{n}]^{-1/2}U_{n}\end{array} \right] \rightarrow_{d}\mathcal{N}(0,I_{2d}).$$ In other settings, where the underlying kernel of the U-statistic does not vary with the sample size, the asymptotic behavior of $U_{n}$ is usually more complicated: because it is a degenerate U-statistic, it would converge to a weighted sum of independent chi-square random variables (e.g., [@vanderVaart_1998_Book Chapter 12]). However, in semiparametric-type settings such as those considered here, the kernel of the underlying U-statistic forming $U_{n}$ changes with the sample size and hence, under particular tuning parameter configurations, the individual contributions $D_{i}^{n}(W_{i},...,W_{1})$ to $U_{n}$ can be made small enough to satisfy a Lindeberg-Feller condition and thus obtain a Gaussian limiting distribution (usually employing the martingale property of $U_{n}$). For an interesting discussion of this phenomenon, see [@deJong_1987_PTRF]. The asymptotic normality property of $U_{n}$ has been shown for certain classes of both series and kernel based estimators, as further explained below. Alternative asymptotics occurs when the number of series terms grows or the bandwidth shrinks fast enough so that $\mathbb{V}[\Psi_{n}]$ and $\mathbb{V}[U_{n}]$ have the same magnitude in the limit. Because of the uncorrelatedness of $\Psi_{n}$ and $U_{n}$, the asymptotic variance will be larger than the usual formula, which is $\lim_{n\rightarrow\infty}\mathbb{V}[\Psi_{n}]$ (assuming the limit exists). As a consequence, consistent variance estimation under alternative asymptotics requires accounting for the contribution of $U_{n}$ to the (asymptotic) sampling variability of the statistic.
Accounting for the presence of $U_{n}$ should also yield improvements when numbers of series terms and bandwidths do not satisfy the knife-edge conditions of alternative asymptotics, since $U_{n}$ is part of the semiparametric statistic. For instance, if the number of series terms grows just slightly slower than the sample size then accounting for the presence of $U_{n}$ should still give a better large sample approximation. [@Hansen-Hausman-Newey_2008_JBES] show such an improvement for many instrument asymptotics. It would be good to consider such improved approximations more generally, though it is beyond the scope of this paper to do so. Distribution theory under alternative asymptotics may be seen as a generalization of the conventional large sample distributional approximation approach in the sense that under conventional sequences of tuning parameters the asymptotic variances emerging from both approaches coincide. But the alternative asymptotic approximation also allows for other tuning parameter sequences and, in this case, the limiting asymptotic variance is seen to be larger than usual. Thus, in general, there is no reason to expect that the usual standard error formulas derived under conventional asymptotics will remain valid more generally. From this perspective, alternative asymptotics are useful to provide theoretical justification for new standard error formulas that are consistent under more general sequences of tuning parameters, that is, under both conventional and alternative asymptotics. We refer to the latter standard error formulas as being more robust than the usual standard error formulas available in the literature. For instance, using these ideas, the case for new, more robust standard error formulas was made before for many instrument asymptotics in IV models ([@Hansen-Hausman-Newey_2008_JBES]) and small bandwidth asymptotics in kernel-based semiparametrics ([@Cattaneo-Crump-Jansson_2014a_ET]).
To illustrate these ideas, we show next that both many instrument asymptotics and small bandwidth asymptotics have the structure described above, and we also employ this approach to derive new results in the case of a series estimator of the partially linear model, which we refer to as many terms asymptotics. Example 1: Many Instrument Asymptotics {#example-1-many-instrument-asymptotics .unnumbered} -------------------------------------- The first example is concerned with the case of many instrument asymptotics. For simplicity we focus on the JIVE2 estimator of [@Angrist-Imbens-Krueger_1999_JAE], but the idea applies to other IV estimators such as the limited information maximum likelihood estimator. See [@Chao-Swanson-Hausman-Newey-Woutersen_2012_ET] for more details, including regularity conditions under which the following discussion can be made rigorous. Let $(y_{i},x_{i}^{\prime},z_{i}^{\prime})^{\prime}$, $i=1,\ldots,n$, be a random sample generated by the model$$y_{i}=x_{i}^{\prime}\beta_{0}+\varepsilon_{i},\text{\hspace{0.2in}}\mathbb{E}[\varepsilon_{i}|z_{i}]=0,\label{eq:IVmodel}$$ where $y_{i}$ is a scalar dependent variable, $x_{i}\in\mathbb{R}^{d}$ is a vector of endogenous variables, $\varepsilon_{i}$ is a disturbance, and $z_{i}\in\mathbb{R}^{K}$ is a vector of instrumental variables. To describe the JIVE2 estimator of $\beta_{0}$ in (\[eq:IVmodel\]), let $Q_{ij}$ denote the $(i,j)$-th element of $Q=Z(Z^{\prime}Z)^{-1}Z^{\prime}$, where $Z=[z_{1},\cdots,z_{n}]^{\prime}$. 
After centering and scaling, the JIVE2 estimator $\hat{\beta}$ satisfies$$\sqrt{n}(\hat{\beta}-\beta_{0})=(\frac{1}{n}\sum_{1\leq i,j\leq n,j\neq i}Q_{ij}x_{i}x_{j}^{\prime})^{-1}(\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n,j\neq i}Q_{ij}x_{i}\varepsilon_{j}).$$ Conditional on $Z,$ $\hat{\beta}$ has the structure in (\[eq:vstat\]) with $W_{i}=(x_{i}^{\prime},\varepsilon_{i})^{\prime}$ and$$\hat{\Gamma}_{n}=\frac{1}{n}\sum_{1\leq i,j\leq n,j\neq i}Q_{ij}x_{i}x_{j}^{\prime},\text{\qquad}u_{ij}^{n}(W_{i},W_{j})={{\rm 1\hspace*{-0.4ex}\rule{0.1ex}{1.52ex}\hspace*{0.2ex}}}(i\neq j)Q_{ij}x_{i}\varepsilon_{j}/\sqrt{n},$$ where ${{\rm 1\hspace*{-0.4ex}\rule{0.1ex}{1.52ex}\hspace*{0.2ex}}}(\cdot)$ is the indicator function. For $i\neq j$, $\mathbb{E}[u_{ij}^{n}(W_{i},W_{j})|Z]=0$ and$$\mathbb{E}[u_{ij}^{n}(W_{i},W_{j})|W_{i},Z]=Q_{ij}x_{i}\mathbb{E}[\varepsilon_{j}|Z]/\sqrt{n}=0,\text{\hspace{0.2in}}\mathbb{E}[u_{ji}^{n}(W_{j},W_{i})|W_{i},Z]=Q_{ij}\Upsilon_{j}\varepsilon_{i}/\sqrt{n},$$ where $\Upsilon_{i}=\mathbb{E}[x_{i}|z_{i}]$ can be interpreted as the reduced form for observation $i$. As a consequence, (\[eq:decomp\]) is satisfied with $B_{n}=0$,$$\psi_{i}^{n}(W_{i})=(\sum_{1\leq j\leq n,j\neq i}Q_{ij}\Upsilon_{j})\varepsilon_{i}/\sqrt{n}=\Upsilon_{i}(1-Q_{ii})\varepsilon_{i}/\sqrt{n}-(\Upsilon _{i}-\sum_{1\leq j\leq n}Q_{ij}\Upsilon_{j})\varepsilon_{i}/\sqrt{n},$$$$D_{i}^{n}(W_{i},...,W_{1})=\sum_{1\leq j\leq n,j<i}Q_{ij}\left( v_{i}\varepsilon_{j}+v_{j}\varepsilon_{i}\right) /\sqrt{n},\text{\hspace {0.2in}}v_{i}=x_{i}-\Upsilon_{i}.$$ Because $\Upsilon_{i}-\sum_{j=1}^{n}Q_{ij}\Upsilon_{j}$ is the $i$-th residual from regressing the reduced form observations on $Z$, by appropriate definition of the reduced form this can generally be assumed to vanish as the sample size grows.
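A minimal numerical sketch of the JIVE2 construction may help fix ideas. The Gaussian design and the reduced form $\Upsilon_{i}=z_{i1}$ below are illustrative assumptions (chosen so that $\Upsilon_{i}$ lies in the column space of $Z$, making the residual term above exactly zero):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, beta0 = 400, 30, 1.0
Z = rng.normal(size=(n, K))                 # instruments (toy design)
v = rng.normal(size=n)
x = Z[:, 0] + v                             # reduced form Upsilon_i = z_{i1}
eps = 0.5 * v + rng.normal(size=n)          # eps correlated with v: endogeneity
y = x * beta0 + eps

Q = Z @ np.linalg.solve(Z.T @ Z, Z.T)       # Q = Z (Z'Z)^{-1} Z'
Q0 = Q - np.diag(np.diag(Q))                # delete i = j terms (JIVE2)
beta_hat = (x @ Q0 @ y) / (x @ Q0 @ x)      # scalar-x version of the estimator
print(beta_hat)
```

Despite the endogeneity (OLS would be biased here), the jackknifed estimator recovers $\beta_{0}$ up to sampling noise.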
In that case,$$\Psi_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i\leq n}\Upsilon_{i}(1-Q_{ii})\varepsilon_{i}+o_{p}(1).$$ Furthermore, under standard asymptotics $Q_{ii}$ will go to zero, so the limiting variance of the leading term in $\Psi_{n}$ corresponds to the usual asymptotic variance for IV. The degenerate U-statistic term is$$U_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n,j<i}Q_{ij}\left( v_{i}\varepsilon_{j}+v_{j}\varepsilon_{i}\right) .$$ [@Chao-Swanson-Hausman-Newey-Woutersen_2012_ET] apply a martingale central limit theorem to show that this $U_{n}$ will be asymptotically normal when $K\rightarrow\infty$ and certain regularity conditions hold. The conditions of the martingale central limit theorem are verified by showing that certain linear combinations with coefficients depending on the elements of $Q$ go to zero as $K\rightarrow\infty$. In the proof, this makes individual terms asymptotically negligible, with a Lindeberg-Feller condition being satisfied. Alternative asymptotics occurs when $K$ grows as fast as $n$, resulting in $\mathbb{V}[\Psi_{n}]$ and $\mathbb{V}[U_{n}]$ having the same magnitude in the limit. Example 2: Small Bandwidth Asymptotics {#example-2-small-bandwidth-asymptotics .unnumbered} -------------------------------------- The second example shows that small bandwidth asymptotics for certain kernel-based semiparametric estimators also has the structure outlined above. To keep the exposition simple we focus on an estimator of the integrated squared density, but the structure of this estimator is shared by the density-weighted average derivative estimator of [@Powell-Stock-Stoker_1989_ECMA] treated in [@Cattaneo-Crump-Jansson_2014a_ET] and more generally by estimators of density-weighted averages and ratios thereof (see, e.g., [@Newey-Hsieh-Robins_2004_ECMA Section 2] and references therein). Suppose $x_{i}$, $i=1,\ldots,n$, are i.i.d. continuously distributed $p$-dimensional random vectors with smooth p.d.f. 
$f_{0}$ and consider estimation of the integrated squared density$$\beta_{0}=\int_{\mathbb{R}^{p}}f_{0}(x)^{2}\text{d}x=\mathbb{E}[f_{0}(x_{i})].$$ A leave-one-out kernel-based estimator is$$\hat{\beta}=\sum_{1\leq i,j\leq n,i\neq j}\mathcal{K}_{h}(x_{i}-x_{j})/n(n-1),$$ where $\mathcal{K}(u)$ is a symmetric kernel and $\mathcal{K}_{h}(u)=h^{-p}\mathcal{K}(u/h)$. This estimator has the V-statistic form of (\[eq:vstat\]) with $W_{i}=x_{i}$ and$$\hat{\Gamma}_{n}=1,\text{\hspace{0.2in}}u_{ij}^{n}(W_{i},W_{j})={{\rm 1\hspace*{-0.4ex}\rule{0.1ex}{1.52ex}\hspace*{0.2ex}}}(i\neq j)\{\mathcal{K}_{h}(x_{i}-x_{j})-\beta_{0}\}/\sqrt{n}(n-1).$$ Let $f_{h}(x)=\int_{\mathbb{R}^{p}}\mathcal{K}(u)f_{0}(x+hu)\text{d}u$ and $\beta_{h}=\int_{\mathbb{R}^{p}}f_{h}(x)f_{0}(x)$d$x$. By symmetry of $\mathcal{K}(u)$,$$\mathbb{E}[u_{ij}^{n}(W_{i},W_{j})|W_{i}]=\mathbb{E}[u_{ji}^{n}(W_{j},W_{i})|W_{i}]=\{f_{h}(x_{i})-\beta_{0}\}/\sqrt{n}(n-1),$$$$\mathbb{E}[u_{ij}^{n}(W_{i},W_{j})]=\{\beta_{h}-\beta_{0}\}/\sqrt{n}(n-1),$$ so the terms in the decomposition (\[eq:decomp\]) are of the form$$B_{n}=\sqrt{n}\{\beta_{h}-\beta_{0}\},\text{\qquad}\Psi_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i\leq n}2\{f_{h}(x_{i})-\beta_{h}\},$$$$U_{n}=\frac{2}{\sqrt{n}(n-1)}\sum_{1\leq i,j\leq n,j<i}\{\mathcal{K}_{h}(x_{i}-x_{j})-f_{h}(x_{i})-f_{h}(x_{j})+\beta_{h}\}.$$ Here, $2\{f_{h}(x_{i})-\beta_{h}\}$ is an approximation to the well known influence function $2\{f_{0}(x_{i})-\beta_{0}\}$ for estimators of the integrated squared density. Under regularity conditions, $f_{h}(x_{i})$ converges to $f_{0}(x_{i})$ in mean square as $h\rightarrow0$, so that$$\Psi_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i\leq n}2\{f_{0}(x_{i})-\beta _{0}\}+o_{p}(1).$$ A martingale central limit theorem can be applied as in [@Cattaneo-Crump-Jansson_2014a_ET] to show that the degenerate U-statistic term $U_{n}$ will be asymptotically normal as $h\rightarrow0$ and $n\rightarrow\infty$, provided that $n^{2}h^{p}\rightarrow\infty$. 
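A minimal sketch of the leave-one-out estimator may be useful; the standard normal data (so $p=1$ and $\beta_{0}=1/(2\sqrt{\pi})$), the Gaussian kernel, and the bandwidth value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 1000, 0.3
x = rng.normal(size=n)                      # p = 1, f_0 the standard normal density

# Leave-one-out V-statistic with Gaussian kernel K(u) = phi(u):
diffs = x[:, None] - x[None, :]
Kh = np.exp(-(diffs / h) ** 2 / 2) / (np.sqrt(2 * np.pi) * h)   # K_h(x_i - x_j)
np.fill_diagonal(Kh, 0.0)                   # drop i = j terms
beta_hat = Kh.sum() / (n * (n - 1))

beta0 = 1 / (2 * np.sqrt(np.pi))            # integral of phi(x)^2 over the real line
print(beta_hat, beta0)
```

With this design $\beta_{h}=\{2\pi(2+h^{2})\}^{-1/2}$, so the smoothing bias $\beta_{h}-\beta_{0}$ and the sampling noise are both small at these tuning values.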
It is easy to show that $n^{2}h^{p}\mathbb{V}[U_{n}]\rightarrow\Delta=\beta_{0}\int_{\mathbb{R}^{p}}\mathcal{K}(u)^{2}$d$u$ (under regularity conditions). Alternative asymptotics occurs when $h^{p}$ shrinks as fast as $1/n$, resulting in $\mathbb{V}[\Psi_{n}]$ and $\mathbb{V}[U_{n}]$ having the same magnitude in the limit.

Example 3: Many Terms Asymptotics {#example-3-many-terms-asymptotics .unnumbered}
---------------------------------

The previous two examples show how several estimators share the common structure outlined above. To illustrate how this structure can be applied to derive new results, the third example studies series estimation in the context of the partially linear model. The results will shed light on the asymptotic behavior of this estimator, and the associated inference procedures, when the number of terms is allowed to grow as fast as the sample size. Let $(y_{i},x_{i}^{\prime},z_{i}^{\prime})^{\prime}$, $i=1,\ldots,n$, be a random sample generated by the partially linear model$$y_{i}=x_{i}^{\prime}\beta_{0}+g(z_{i})+\varepsilon_{i},\qquad\mathbb{E}[\varepsilon_{i}|x_{i},z_{i}]=0,\label{eq:PLmodel}$$ where $y_{i}$ is a scalar dependent variable, $x_{i}\in\mathbb{R}^{d}$ and $z_{i}\in\mathbb{R}^{d_{z}}$ are explanatory variables, $\varepsilon_{i}$ is a disturbance, $g(\cdot)$ is an unknown function, and $\mathbb{E}[\mathbb{V}[x_{i}|z_{i}]]$ is of full rank. A series estimator of $\beta_{0}$ is obtained by regressing $y_{i}$ on $x_{i}$ and approximating functions of $z_{i}$. To describe the estimator, let $p^{1}(z),$ $p^{2}(z),$ $\ldots$ be approximating functions, such as polynomials or splines, and let $p_{K}(z)=(p^{1}(z),\ldots,p^{K}(z))^{\prime}$ be a $K$-dimensional vector of such functions.
Letting $M_{ij}$ denote the $(i,j)$-th element of $M=I_{n}-P_{K}(P_{K}^{\prime}P_{K})^{-1}P_{K}^{\prime},$ where $P_{K}=[p_{K}(z_{1}),\ldots,p_{K}(z_{n})]^{\prime}$, a series estimator of $\beta_{0}$ in (\[eq:PLmodel\]) is given by$$\hat{\beta}=(\sum_{1\leq i,j\leq n}M_{ij}x_{i}x_{j}^{\prime})^{-1}(\sum_{1\leq i,j\leq n}M_{ij}x_{i}y_{j}).$$ [@Donald-Newey_1994_JMA] gave conditions for asymptotic normality of this estimator using standard asymptotics. See also, for example, [@Linton_1995_ECMA] and references therein for related asymptotic results when using kernel estimators. Conditional on $Z=[z_{1},\ldots,z_{n}]^{\prime}$, $\hat{\beta}$ has the structure outlined earlier:$$\sqrt{n}(\hat{\beta}-\beta_{0})=\hat{\Gamma}_{n}^{-1}S_{n},\label{root-n expand}$$ with$$\hat{\Gamma}_{n}=\frac{1}{n}\sum_{1\leq i,j\leq n}M_{ij}x_{i}x_{j}^{\prime },\qquad S_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n}x_{i}M_{ij}(g_{j}+\varepsilon_{j}),$$ where $g_{i}=g(z_{i}).$ In other words, $\hat{\beta}$ has the V-statistic form of (\[eq:vstat\]) with $W_{i}=(x_{i}^{\prime},\varepsilon_{i})^{\prime}$ and $u_{ij}^{n}(W_{i},W_{j})=x_{i}M_{ij}(g_{j}+\varepsilon_{j})/\sqrt{n}$. By $\mathbb{E}[\varepsilon_{i}|x_{i},z_{i}]=0$ we have $\mathbb{E}[x_{i}\varepsilon_{i}|Z]=0$. Therefore, letting $u_{ij}^{n}=u_{ij}^{n}(W_{i},W_{j})$ as we have done previously, we have $$\mathbb{E}[u_{ij}^{n}|Z]=h_{i}M_{ij}g_{j}/\sqrt{n},\text{\qquad}u_{ij}^{n}-\mathbb{E}[u_{ij}^{n}|Z]=M_{ij}\left( v_{i}g_{j}+x_{i}\varepsilon _{j}\right) /\sqrt{n},$$$$\tilde{u}_{ij}^{n}=M_{ij}\left( v_{j}g_{i}+v_{i}g_{j}+x_{j}\varepsilon _{i}+x_{i}\varepsilon_{j}\right) /\sqrt{n},\qquad\mathbb{E}[\tilde{u}_{ij}^{n}|W_{i},Z]=M_{ij}\left( v_{i}g_{j}+h_{j}\varepsilon_{i}\right) /\sqrt{n},$$ for $i\neq j$, where $h_{i}=h(z_{i})=\mathbb{E}[x_{i}|z_{i}]$ and $v_{i}=x_{i}-h_{i}$.
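The series estimator $\hat{\beta}$ just described can be sketched numerically as follows. The data-generating process, with scalar $z_{i}$, the functions $g$ and $h$, and a power-series basis, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, beta0 = 500, 8, 1.0
z = rng.uniform(-1, 1, size=n)
g = np.exp(z)                                # unknown g(z) (illustrative choice)
h = np.sin(np.pi * z)                        # unknown h(z) = E[x | z]
v = rng.normal(size=n)
eps = rng.normal(size=n)
x = h + v
y = x * beta0 + g + eps

P = z[:, None] ** np.arange(K)               # power-series basis p_K(z), n x K
M = np.eye(n) - P @ np.linalg.solve(P.T @ P, P.T)   # annihilator of P_K
beta_hat = (x @ M @ y) / (x @ M @ x)         # scalar-x version of the estimator
print(beta_hat)
```

Projecting both $x$ and $y$ off the basis removes the nonparametric component, so $\hat{\beta}$ is close to $\beta_{0}$ up to sampling noise and the (tiny, for these analytic functions) approximation bias.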
In this case, the bias term in (\[eq:decomp\]) is$$B_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n}M_{ij}h_{i}g_{j},$$ which will be negligible under regularity conditions, as shown in the next section. Moreover,$$\Psi_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i\leq n}M_{ii}v_{i}\varepsilon _{i}+R_{n},\qquad R_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n}M_{ij}(v_{i}g_{j}+h_{i}\varepsilon_{j}),$$ where $R_{n}$ has mean zero and converges to zero in mean square as $K$ grows, as further discussed below. Under standard asymptotics $M_{ii}$ will go to one and hence the limiting variance of the leading term in $\Psi_{n}$ corresponds to the usual asymptotic variance. Finally, we find that the degenerate U-statistic term is$$U_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n,j<i}M_{ij}\left( v_{i}\varepsilon_{j}+v_{j}\varepsilon_{i}\right) =-\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n,j<i}Q_{ij}\left( v_{i}\varepsilon_{j}+v_{j}\varepsilon_{i}\right) ,$$ where now $Q_{ij}$ denotes the $(i,j)$-th element of the projection matrix $Q=P_{K}(P_{K}^{\prime}P_{K})^{-1}P_{K}^{\prime}=I_{n}-M$, so that $M_{ij}=-Q_{ij}$ for $i\neq j$. Remarkably, this term is essentially the same as the degenerate U-statistic term for JIVE2 that was discussed above. Consequently, the central limit theorem of [@Chao-Swanson-Hausman-Newey-Woutersen_2012_ET] is applicable to this problem. We will employ it to show that $U_{n}$ is asymptotically normal as $K\rightarrow\infty$, even when $K/n$ does not converge to zero. This example highlights a new approach to studying the asymptotic distribution of semi-linear regression under many terms asymptotics. The alternative asymptotic approximation is useful, for instance, when the number of covariates entering the nonparametric part is large relative to the sample size, as is often the case in empirical applications.

Many Terms Asymptotics\[section:ManyReg\]
=========================================

In this section we make precise the discussion given in Example 3, and also discuss consistent standard error estimation under homoskedasticity.
The estimator $\hat{\beta}$ described in Example 3 can be interpreted as a two-step semiparametric estimator with tuning parameter $K$, the first step involving series estimation of the unknown (regression) functions $g(z)$ and $h(z)$. [@Donald-Newey_1994_JMA] gave conditions for asymptotic normality of this estimator when $K/n\rightarrow0$. Here we generalize their findings by obtaining an asymptotic distributional result that is valid even when $K/n$ is bounded away from zero. The analysis proceeds under the following assumption. (**Partially Linear Model**)  \(a) $(y_{i},x_{i}^{\prime},z_{i}^{\prime})^{\prime}$, $i=1,\ldots,n $, is a random sample. (b) There is a $C<\infty$ such that $\mathbb{E}[\varepsilon_{i}^{4}|x_{i},z_{i}]\leq C$ and $\mathbb{E}[\Vert v_{i}\Vert^{4}|z_{i}]\leq C$. \(c) There is a $C>0$ such that $\mathbb{E}[\varepsilon_{i}^{2}|x_{i},z_{i}]\geq C$ and $\lambda_{\min}(\mathbb{E}[v_{i}v_{i}^{\prime}|z_{i}])\geq C$. \(d) $\operatorname*{rank}(P_{K})=K$ (a.s.) and there is a $C>0$ such that $M_{ii}\geq C$. \(e) For some $\alpha_{g},\alpha_{h}>0$, there is a $C<\infty$ such that$$\min_{\eta_{g}\in\mathbb{R}^{K}}\mathbb{E}[|g(z_{i})-\eta_{g}^{\prime}p_{K}(z_{i})|^{2}]\leq CK^{-2\alpha_{g}},\qquad\min_{\eta_{h}\in \mathbb{R}^{K\times d}}\mathbb{E}[\Vert h(z_{i})-\eta_{h}^{\prime}p_{K}(z_{i})\Vert^{2}]\leq CK^{-2\alpha_{h}}.$$ Because $\sum_{i=1}^{n}M_{ii}=n-K$, an implication of part (d) is that $K/n\leq1-C<1$, but crucially Assumption PLM does not imply that $K/n\rightarrow0$. Part (e) is implied by conventional assumptions from approximation theory. For instance, when the support of $z_{i}$ is compact, commonly used bases of approximation, such as polynomials or splines, will satisfy this assumption with $\alpha_{g}=s_{g}/d_{z}$ and $\alpha_{h}=s_{h}/d_{z}$, where $s_{g}$ and $s_{h}$ denote the number of continuous derivatives of $g(z)$ and $h(z)$, respectively.
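Part (e) can be illustrated numerically: for a smooth target function, the best $L_{2}$ approximation error of a power-series basis decays rapidly in $K$. The target $g(z)=\exp(z)$ and the dense grid standing in for the distribution of $z_{i}$ are illustrative assumptions:

```python
import numpy as np

z = np.linspace(-1, 1, 2001)                 # grid standing in for Uniform(-1, 1)
g = np.exp(z)                                # a smooth illustrative target
errs = {}
for K in (2, 4, 6, 8):
    P = z[:, None] ** np.arange(K)           # power-series basis p_K(z)
    coef, *_ = np.linalg.lstsq(P, g, rcond=None)   # L2 projection coefficients
    errs[K] = np.mean((g - P @ coef) ** 2)   # empirical squared L2 error
print(errs)
```

For an analytic function like this one the error falls super-polynomially in $K$, comfortably consistent with the rate bound $CK^{-2\alpha_{g}}$ in part (e).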
Further discussion and related references for several bases of approximation may be found in [@Newey_1997_JoE], [@Chen_2007_Handbook] and [@Belloni-Chernozhukov-Chetverikov-Kato_2015_JoE], among others.

Asymptotic Distribution
-----------------------

From equation (\[root-n expand\]), and the discussion in the previous section, we see that the asymptotic distribution of $\hat{\beta}$ will be determined by the behavior of $\hat{\Gamma}_{n}$ and $S_{n}$. The following lemma approximates $\hat{\Gamma}_{n}$ without requiring that $K/n\rightarrow0$. \[lemma:GammaHat\]If Assumption PLM is satisfied and if $K\rightarrow \infty$, then$$\hat{\Gamma}_{n}=\Gamma_{n}+o_{p}\left( 1\right) ,\text{\qquad}\Gamma _{n}=\frac{1}{n}\sum_{1\leq i\leq n}M_{ii}\mathbb{E}[v_{i}v_{i}^{\prime}|z_{i}].$$ Because $\sum_{i=1}^{n}M_{ii}=n-K$, it follows from this result that in the homoskedastic $v_{i}$ case (i.e., when $\mathbb{E}[v_{i}v_{i}^{\prime}|z_{i}]=\mathbb{E}[v_{i}v_{i}^{\prime}]$) $\hat{\Gamma}_{n}$ is close to$$\Gamma_{n}=(1-K/n)\Gamma,\qquad\Gamma=\mathbb{E}[v_{i}v_{i}^{\prime}],$$ in probability. More generally, with heteroskedasticity, $\hat{\Gamma}_{n}$ will be close to the weighted average $\Gamma_{n}$. Importantly, this result includes standard asymptotics as a special case: when $K/n\rightarrow0$, because $\sum_{i=1}^{n}(1-M_{ii})/n=K/n$, the law of large numbers and iterated expectations imply$$\begin{aligned} \Gamma_{n} & =\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[v_{i}v_{i}^{\prime}|z_{i}]-\frac{1}{n}\sum_{i=1}^{n}(1-M_{ii})\mathbb{E}[v_{i}v_{i}^{\prime }|z_{i}]\\ & =\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[v_{i}v_{i}^{\prime}|z_{i}]+o_{p}(1)=\Gamma+o_{p}(1).\end{aligned}$$ Next, we study $$S_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n}M_{ij}v_{i}\varepsilon_{j}+B_{n}+R_{n}.$$ The following lemma quantifies the magnitude of the bias term $B_{n}$ as well as the additional variability arising from the (remainder) term $R_{n}$.
\[lemma:Bias&Remainder\]If Assumption PLM is satisfied and if $K\rightarrow\infty,$ then $B_{n}=O_{p}(\sqrt{n}K^{-\alpha_{g}-\alpha_{h}})$ and $R_{n}=o_{p}(1)$. Like the previous lemma, this lemma does not require $K/n\rightarrow0$. Interestingly, the bias term $B_{n}$ involves approximation of both unknown functions $g(z)$ and $h(z)$, implying an implicit trade-off between smoothness conditions for $g(z)$ and $h(z)$. The implied bias condition $K^{2(\alpha _{g}+\alpha_{h})}/n\rightarrow\infty$ only requires that $\alpha_{g}+\alpha_{h}$ be large enough, but not necessarily that $\alpha_{g}$ and $\alpha_{h}$ separately be large. It follows that if this bias condition holds, then$$S_{n}=\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n}M_{ij}v_{i}\varepsilon_{j}+o_{p}(1),$$ as claimed in Example 3 above. Having dispensed with asymptotically negligible contributions to $S_{n}$, we turn to its leading term. This term is shown below to be asymptotically Gaussian with asymptotic variance given by$$\Sigma_{n}=\frac{1}{n}\mathbb{V}[\sum_{1\leq i,j\leq n}M_{ij}v_{i}\varepsilon_{j}|Z]=\frac{1}{n}\sum_{1\leq i\leq n}M_{ii}^{2}\mathbb{E}[v_{i}v_{i}^{\prime}\varepsilon_{i}^{2}|z_{i}]+\frac{1}{n}\sum_{1\leq i,j\leq n,j\neq i}M_{ij}^{2}\mathbb{E}[v_{i}v_{i}^{\prime}\varepsilon_{j}^{2}|z_{i},z_{j}].$$ Here, the first term following the second equality corresponds to the usual asymptotic approximation, while the second is an additional term that accounts for large $K$. Once again it is interesting to consider what happens in some special cases.
Under homoskedasticity of $\varepsilon_{i}$ (i.e., when $\mathbb{E}[\varepsilon_{i}^{2}|x_{i},z_{i}]=\mathbb{E}[\varepsilon_{i}^{2}]$),$$\Sigma_{n}=\frac{\sigma_{\varepsilon}^{2}}{n}\sum_{1\leq i,j\leq n}M_{ij}^{2}\mathbb{E}[v_{i}v_{i}^{\prime}|z_{i}]=\frac{\sigma_{\varepsilon}^{2}}{n}\sum_{1\leq i\leq n}M_{ii}\mathbb{E}[v_{i}v_{i}^{\prime}|z_{i}]=\sigma_{\varepsilon}^{2}\Gamma_{n},\qquad\sigma_{\varepsilon}^{2}=\mathbb{E}[\varepsilon_{i}^{2}],$$ because $\sum_{j=1}^{n}M_{ij}^{2}=M_{ii}$. If, in addition, $\mathbb{E}[v_{i}v_{i}^{\prime}|z_{i}]=\mathbb{E}[v_{i}v_{i}^{\prime}]$, then $\Sigma _{n}=\sigma_{\varepsilon}^{2}\left( 1-K/n\right) \Gamma$. Also, if $K/n\rightarrow0$, then by $\sum_{1\leq i,j\leq n,i\neq j}M_{ij}^{2}/n\leq K/n$ and the law of large numbers, we have$$\Sigma_{n}=\frac{1}{n}\sum_{1\leq i\leq n}M_{ii}^{2}\mathbb{E}[v_{i}v_{i}^{\prime}\varepsilon_{i}^{2}|z_{i}]+o_{p}\left( 1\right) =\mathbb{E}[v_{i}v_{i}^{\prime}\varepsilon_{i}^{2}]+o_{p}\left( 1\right) ,$$ which corresponds to the standard asymptotics limiting variance. The following theorem combines Lemmas \[lemma:GammaHat\] and \[lemma:Bias&Remainder\] with a central limit theorem for quadratic forms to show asymptotic normality of $\hat{\beta}$. \[thm:AsyNorm\]If Assumption PLM is satisfied and if $K^{2(\alpha _{g}+\alpha_{h})}/n\rightarrow\infty$, then$$\Omega_{n}^{-1/2}\sqrt{n}(\hat{\beta}-\beta_{0})\rightarrow_{d}\mathcal{N}(0,I_{d}),\text{\qquad}\Omega_{n}=\Gamma_{n}^{-1}\Sigma_{n}\Gamma_{n}^{-1}.$$ If, in addition, $\mathbb{E}[\varepsilon_{i}^{2}|x_{i},z_{i}]=\sigma _{\varepsilon}^{2}$, then $\Omega_{n}=\sigma_{\varepsilon}^{2}\Gamma_{n}^{-1}$. This theorem shows that $\hat{\beta}$ is asymptotically normal when $K/n$ need not converge to zero. 
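The variance inflation implied by Theorem \[thm:AsyNorm\] can be seen in a small Monte Carlo sketch. The pure-noise design with $h(z)=g(z)=0$, a generic Gaussian basis, and $K/n=1/2$ are illustrative assumptions; with both error terms homoskedastic and $\sigma_{\varepsilon}^{2}=\Gamma=1$, the variance of $\sqrt{n}(\hat{\beta}-\beta_{0})$ should be near $\sigma_{\varepsilon}^{2}\Gamma^{-1}(1-K/n)^{-1}=2$ rather than the conventional value $1$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, K, reps, beta0 = 100, 50, 2000, 1.0
Z = rng.normal(size=(n, K))                       # basis held fixed: condition on Z
M = np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)

stats = []
for _ in range(reps):
    v = rng.normal(size=n)                        # x_i = v_i, i.e. h(z) = 0
    eps = rng.normal(size=n)                      # g(z) = 0, homoskedastic errors
    x, y = v, v * beta0 + eps
    stats.append(np.sqrt(n) * ((x @ M @ y) / (x @ M @ x) - beta0))

print(np.var(stats))    # close to (1 - K/n)^{-1} = 2, not the conventional 1
```

The simulated variance is roughly double the conventional formula, matching the degrees-of-freedom correction $(1-K/n)^{-1}$ discussed next.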
An implication of this result is that inconsistent series-based nonparametric estimators of the unknown functions $g(z)$ and $h(z)$ may be employed when forming $\hat{\beta}$, that is, $K/n\nrightarrow0$ is allowed (increasing the variability of the nonparametric estimators), provided that $K\rightarrow\infty$ (to remove nonparametric smoothing bias). This asymptotic distributional result does not rely on asymptotic linearity, nor on the actual convergence of the matrices $\Gamma_{n}$ and $\Sigma_{n}$, and leads to a new (larger) asymptotic variance that captures terms that are assumed away by the classical result. The asymptotic distribution result of [@Donald-Newey_1994_JMA] is obtained as a special case where $K/n\rightarrow0 $. More generally, when $K/n$ does not converge to zero, the asymptotic variance will be larger than the usual formula because it accounts for the contribution of the remainder $U_{n}$ in equation (\[eq:decomp\]). For instance, when both $\varepsilon_{i}$ and $v_{i}$ are homoskedastic, the asymptotic variance is$$\Gamma_{n}^{-1}\Sigma_{n}\Gamma_{n}^{-1}=\sigma_{\varepsilon}^{2}\Gamma _{n}^{-1}=\sigma_{\varepsilon}^{2}\Gamma^{-1}(1-K/n)^{-1}\text{,}$$ which is larger than the usual asymptotic variance $\sigma_{\varepsilon}^{2}\Gamma^{-1}$ by the degrees of freedom correction $(1-K/n)^{-1}.$

Asymptotic Variance Estimation under Homoskedasticity
-----------------------------------------------------

Consistent asymptotic variance estimation is useful for large sample inference.
If the assumptions of Theorem \[thm:AsyNorm\] are satisfied and if $\hat{\Sigma}_{n}-\Sigma_{n}\rightarrow_{p}0$, then$$\hat{\Omega}_{n}^{-1/2}\sqrt{n}(\hat{\beta}-\beta_{0})\rightarrow _{d}\mathcal{N}(0,I_{d}),\text{\qquad}\hat{\Omega}_{n}=\hat{\Gamma}_{n}^{-1}\hat{\Sigma}_{n}\hat{\Gamma}_{n}^{-1},$$ implying that valid large-sample confidence intervals and hypothesis tests for linear and nonlinear transformations of the parameter vector $\beta$ can be based on $\hat{\Omega}_{n}$.[^6] Under (conditional) heteroskedasticity of unknown form, constructing a consistent estimator $\hat{\Sigma}_{n}$ turns out to be very challenging if $K/n\nrightarrow0$. Intuitively, the problem arises because the estimated residuals entering the construction of $\hat{\Sigma}_{n}$ are not consistent unless $K/n\rightarrow0$, implying that $\hat{\Sigma}_{n}-\Sigma _{n}\nrightarrow_{p}0$ in general. Solving this problem is beyond the scope of this paper. Under homoskedasticity of $\varepsilon_{i}$, however, the asymptotic variance $\Sigma_{n}$ simplifies and admits a correspondingly simple consistent estimator. To describe this result, note that if $\mathbb{E}[\varepsilon_{i}^{2}|x_{i},z_{i}]=\sigma_{\varepsilon}^{2}$ then $\Sigma_{n}=\sigma_{\varepsilon}^{2}\Gamma_{n}$, where $\hat{\Gamma}_{n}-\Gamma_{n}\rightarrow_{p}0$ by Lemma \[lemma:GammaHat\]. It therefore suffices to find a consistent estimator of $\sigma_{\varepsilon}^{2}$. Let$$s^{2}=\frac{1}{n-d-K}\sum_{1\leq i\leq n}\hat{\varepsilon}_{i}^{2},\qquad \hat{\varepsilon}_{i}=\sum_{1\leq j\leq n}M_{ij}(y_{j}-\hat{\beta}^{\prime }x_{j}),$$ denote the usual OLS estimator of $\sigma_{\varepsilon}^{2}$ incorporating a degrees of freedom correction. The following theorem shows that $s^{2}$ is a consistent estimator, even when the number of terms is large relative to the sample size. \[thm:HatAsyVarHOM\]Suppose the conditions of Theorem \[thm:AsyNorm\] are satisfied. 
If $\mathbb{E}[\varepsilon_{i}^{2}|x_{i},z_{i}]=\sigma _{\varepsilon}^{2}$, then $s^{2}\rightarrow_{p}\sigma_{\varepsilon}^{2}$ and $\hat{\Sigma}_{n}^{\mathtt{HOM}}-\Sigma_{n}\rightarrow_{p}0$, where $\hat{\Sigma}_{n}^{\mathtt{HOM}}=s^{2}\hat{\Gamma}_{n}$. This theorem provides a distribution-free, large sample justification for the degrees-of-freedom correction required for exact inference under homoskedastic Gaussian errors. Intuitively, accounting for the correct degrees of freedom is important whenever the number of terms in the semi-linear model is large relative to the sample size.

Small Simulation Study\[section:simuls\]
========================================

We conducted a Monte Carlo experiment to explore the extent to which the asymptotic theoretical results obtained in the previous section are present in small samples. Using the notation already introduced, we consider the following partially linear model:$$\begin{tabular} [c]{lllll}$y_{i}=x_{i}^{\prime}\beta+g(z_{i})+\varepsilon_{i},$ & & $\mathbb{E}[\varepsilon_{i}|x_{i},z_{i}]=0,$ & & $\mathbb{E}[\varepsilon_{i}^{2}|x_{i},z_{i}]=\sigma_{\varepsilon}^{2},$\\ $x_{i}=h(z_{i})+v_{i},$ & & $\mathbb{E}[v_{i}|z_{i}]=0,$ & & $\mathbb{E}[v_{i}^{2}|z_{i}]=\sigma_{v}^{2}(z_{i}),$\end{tabular}$$ where $d=1$, $\beta=1$, $d_{z}=5$, $z_{i}=(z_{1i},\cdots,z_{d_{z}i})^{\prime}$ with $z_{\ell i}\sim$ i.i.d. $\mathsf{Uniform}(-1,1) $, $\ell=1,\cdots,d_{z}$. The unknown regression functions are set to $g(z_{i})=h(z_{i})=\exp(\Vert z_{i}\Vert^{2})$, which are not additively separable in the covariates $z_{i}$. The simulation study is based on $S=5,000$ replications, each replication taking a random sample of size $n=500 $ with all random variables generated independently.
We consider $6$ data generating processes (DGPs) as follows:$$\begin{tabular} [c]{cccc}\multicolumn{4}{c}{Data Generating Process for Monte Carlo Experiment}\\\hline\hline & \multicolumn{3}{c}{$(\varepsilon_{i},v_{i})$ -- Distributions}\\\cline{2-4} & Gaussian & Asymmetric & Bimodal\\\hline \multicolumn{1}{l}{$\sigma_{v}^{2}(z_{i})=1$} & Model 1 & Model 3 & Model 5\\ \multicolumn{1}{l}{$\sigma_{v}^{2}(z_{i})=\varsigma(1+\Vert z_{i}\Vert ^{2})^{2}$} & Model 2 & Model 4 & Model 6\\\hline \end{tabular}$$ Specifically, Models 1, 3 and 5 correspond to homoskedastic (in $v_{i}$) DGPs, while Models 2, 4 and 6 correspond to heteroskedastic (in $v_{i}$) DGPs. For the latter models, the constant $\varsigma$ was chosen so that $\mathbb{E}[v_{i}^{2}]=1$. The three distributions considered for the unobserved error terms $\varepsilon_{i}$ and $v_{i}$ are: the standard Normal (labelled Gaussian) and two mixtures of Normals inducing either an asymmetric or a bimodal distribution; their Lebesgue densities are depicted in Figure \[figure:simuls\]. We explored other specifications for the regression functions, the form of heteroskedasticity, and distributional assumptions, but we do not report these additional results because they were qualitatively similar to those discussed here. The estimators considered in the Monte Carlo experiment are constructed using power series approximations. We do not impose additive separability on the basis, though we do restrict the interaction terms to not exceed degree 5.
To be specific, we consider the following polynomial basis expansion:$$\begin{tabular} [c]{ccc}\multicolumn{3}{c}{Polynomial Basis Expansion: $d_{z}=5$ and $n=500$}\\\hline\hline $K$ & $p_{K}(z_{i})$ & $K/n$\\\hline $6$ & $(1,z_{1i},z_{2i},z_{3i},z_{4i},z_{5i})^{\prime}$ & $0.012$\\ $11$ & $(p_{6}(z_{i})^{\prime},z_{1i}^{2},z_{2i}^{2},z_{3i}^{2},z_{4i}^{2},z_{5i}^{2})^{\prime}$ & $0.022 $\\ $21$ & $p_{11}(z_{i})$ $+$ first-order interactions & $0.042$\\ $26$ & $(p_{21}(z_{i})^{\prime},z_{1i}^{3},z_{2i}^{3},z_{3i}^{3},z_{4i}^{3},z_{5i}^{3})^{\prime}$ & $0.052 $\\ $56$ & $p_{26}(z_{i})$ $+$ second-order interactions & $0.112$\\ $61$ & $(p_{56}(z_{i})^{\prime},z_{1i}^{4},z_{2i}^{4},z_{3i}^{4},z_{4i}^{4},z_{5i}^{4})^{\prime}$ & $0.122 $\\ $126$ & $p_{61}(z_{i})$ $+$ third-order interactions & $0.252$\\ $131$ & $(p_{126}(z_{i})^{\prime},z_{1i}^{5},z_{2i}^{5},z_{3i}^{5},z_{4i}^{5},z_{5i}^{5})^{\prime}$ & $0.262 $\\ $252$ & $p_{131}(z_{i})$ $+$ fourth-order interactions & $0.504$\\ $257$ & $(p_{252}(z_{i})^{\prime},z_{1i}^{6},z_{2i}^{6},z_{3i}^{6},z_{4i}^{6},z_{5i}^{6})^{\prime}$ & $0.514 $\\ $262$ & $(p_{257}(z_{i})^{\prime},z_{1i}^{7},z_{2i}^{7},z_{3i}^{7},z_{4i}^{7},z_{5i}^{7})^{\prime}$ & $0.524 $\\ $267$ & $(p_{262}(z_{i})^{\prime},z_{1i}^{8},z_{2i}^{8},z_{3i}^{8},z_{4i}^{8},z_{5i}^{8})^{\prime}$ & $0.534 $\\ $272$ & $(p_{267}(z_{i})^{\prime},z_{1i}^{9},z_{2i}^{9},z_{3i}^{9},z_{4i}^{9},z_{5i}^{9})^{\prime}$ & $0.544 $\\ $277$ & $(p_{272}(z_{i})^{\prime},z_{1i}^{10},z_{2i}^{10},z_{3i}^{10},z_{4i}^{10},z_{5i}^{10})^{\prime}$ & $0.554$\\\hline \end{tabular}$$ Thus, our simulations explore the consequences of introducing many terms in the partially linear model by varying $K$ on the grid above from $K=6$ to $K=277$, which gives a range for $K/n$ of $\{0.012,\cdots,0.554\}$. For each point on the grid of $K/n$, we report average bias, average standard deviation, mean square error and average standardized bias of $\hat{\beta}$ across simulations.
We also consider the coverage error rates and interval length for two asymptotic $95\%$ confidence intervals:$$\text{CI}_{0}=\left[ \hat{\beta}-\Phi_{1-\alpha/2}^{-1}\frac{\hat{\sigma}\hat{\Gamma}_{n}^{-1/2}}{\sqrt{n}}\quad,\quad\hat{\beta}+\Phi_{1-\alpha /2}^{-1}\frac{\hat{\sigma}\hat{\Gamma}_{n}^{-1/2}}{\sqrt{n}}\right] ,$$$$\text{CI}_{1}=\left[ \hat{\beta}-\Phi_{1-\alpha/2}^{-1}\frac{s\hat{\Gamma }_{n}^{-1/2}}{\sqrt{n}}\quad,\quad\hat{\beta}+\Phi_{1-\alpha/2}^{-1}\frac{s\hat{\Gamma}_{n}^{-1/2}}{\sqrt{n}}\right] ,$$ where $\hat{\sigma}^{2}=(n-d-K)s^{2}/n$, and $\Phi_{u}^{-1}=\Phi^{-1}(u)$ denotes the inverse of the Gaussian distribution function. That is, CI$_{0}$ and CI$_{1}$ are formed employing the t-statistic constructed using the homoskedasticity-consistent variance estimators without and with degrees of freedom correction, respectively. The main findings from the Monte Carlo experiment are presented in Tables \[table:simuls1\]–\[table:simuls3\]. All results are consistent with the theoretical conclusions presented in the previous section. First, the results for standard Normal and non-Normal errors are qualitatively similar. This indicates that the Gaussian approximation obtained in Theorem \[thm:AsyNorm\] is a good approximation in finite samples, even when $K$ is a nontrivial fraction of the sample size. Second, as expected, a small choice of $K$ leads to important smoothing biases. This affects the finite sample properties of the point estimators as well as the distributional approximations obtained in this paper. In particular, it affects the empirical size of all the confidence intervals. Third, in all cases the results under homoskedasticity or heteroskedasticity in $v_{i}$ are qualitatively similar, showing that our theoretical results provide a good finite sample approximation in both cases, even when $K$ is a nontrivial fraction of the sample size. 
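In the scalar case the two intervals differ only in whether the residual variance is rescaled by $(n-d-K)/n$; a sketch with hypothetical input values:

```python
from math import sqrt
from statistics import NormalDist

def confidence_intervals(beta_hat, s2, Gamma_hat, n, d, K, alpha=0.05):
    # CI_0 uses hat{sigma}^2 = (n - d - K) s^2 / n (no df correction);
    # CI_1 uses the degrees-of-freedom corrected s^2 directly.
    q = NormalDist().inv_cdf(1 - alpha / 2)
    sigma2_hat = (n - d - K) * s2 / n
    half0 = q * sqrt(sigma2_hat / Gamma_hat) / sqrt(n)
    half1 = q * sqrt(s2 / Gamma_hat) / sqrt(n)
    return ((beta_hat - half0, beta_hat + half0),
            (beta_hat - half1, beta_hat + half1))

# Hypothetical values with K/n = 0.252: the CI_1 half-length exceeds that of
# CI_0 by the factor sqrt(n / (n - d - K))
ci0, ci1 = confidence_intervals(beta_hat=1.0, s2=1.2, Gamma_hat=0.8,
                                n=500, d=1, K=126)
```

When $K/n$ is negligible the two intervals nearly coincide; the gap between them grows with $K/n$, consistent with the behavior of CI$_{0}$ and CI$_{1}$ discussed below.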
Fourth, as suggested by Theorem \[thm:HatAsyVarHOM\], confidence intervals without degrees of freedom correction (CI$_{0}$) are under-sized, while the analogous confidence intervals with degrees of freedom correction (CI$_{1}$) have close-to-correct empirical size in all cases. This result shows that the degrees of freedom correction is crucial to achieve close-to-correct empirical size when $K/n$ is non-negligible. In conclusion, we found in our small-scale simulation study that our theoretical results for the partially linear model with possibly many terms provide a good approximation in samples of moderate size. In particular, under homoskedasticity of $\varepsilon_{i}$, we showed that confidence intervals constructed using $s^{2}$ exhibit good empirical coverage even when $K/n$ is large. We also confirmed that the Gaussian distributional approximation given in Theorem \[thm:AsyNorm\] represents well the finite sample distribution of $\hat{\beta}$ even when $K/n$ is large. Conclusion\[section:conclusion\] ================================ This paper showed that the many-instrument asymptotics and the small-bandwidth asymptotics share a common structure based on a V-statistic, with a remainder term that is asymptotically normal when the number of terms diverges to infinity or the bandwidth shrinks to zero. This feature is particularly useful for obtaining new results for other semiparametric estimators. In this paper we employ this common structure to derive a new alternative large-sample distributional approximation for a series estimator of the partially linear model, which implies a new (larger) asymptotic variance formula. Our results apply to a class of semiparametric estimators $\hat{\beta}$ satisfying$$\sqrt{n}(\hat{\beta}-\beta_{0})=\hat{\Gamma}_{n}^{-1}S_{n}+o_{p}(1),$$ where $\hat{\Gamma}_{n}$ and $S_{n}$ take a particular V-statistic form, as discussed in Section \[section:CommonStructure\]. 
This class of semiparametric estimators covers several interesting problems, but it is by no means exhaustive. For example, [@Cattaneo-Jansson_2015_wp-Kernels] show that a large class of (kernel-based) semiparametric estimators admit an expansion of the form$$\sqrt{n}(\hat{\beta}-\beta_{0})=\hat{\Gamma}_{n}^{-1}S_{n}-\mathcal{B}_{n}+o_{p}(1),$$ where the bias term $\mathcal{B}_{n}$ is quantitatively and conceptually distinct from the smoothing bias $B_{n}$ described in Section \[section:CommonStructure\] and, crucially, dominates the quadratic term $U_{n} $ arising from the V-statistic $S_{n}$; that is, $U_{n}=o_{p}(\mathcal{B}_{n})$ in that setting. Nevertheless, the structure we have considered in this paper is useful, providing new results for the partially linear model and a common structure for disparate literatures on many instruments and small bandwidths. Appendix: Proofs ================ All statements involving conditional expectations are understood to hold almost surely. Qualifiers such as a.s. will be omitted to conserve space. Throughout the appendix, $C$ will denote a generic constant that may take different values in each case. **Proof of Lemma \[lemma:GammaHat\]**. Let $X=[x_{1},\ldots ,x_{n}]^{\prime}$, $H=[h_{1},\ldots,h_{n}]^{\prime}$, and $V=[v_{1},\ldots,v_{n}]^{\prime}$. 
By Assumption PLM and the Markov inequality,$$\operatorname*{tr}(\frac{1}{n}H^{\prime}MH)=\min_{\eta_{h}\in\mathbb{R}^{K\times d}}\frac{1}{n}\sum_{1\leq i\leq n}\Vert h(z_{i})-\eta_{h}^{\prime }p_{K}(z_{i})\Vert^{2}=O_{p}(K^{-2\alpha_{h}})\rightarrow_{p}0.$$ Also, $V^{\prime}V/n=O_{p}(1)$ by Assumption PLM and the Markov inequality, so by the Cauchy-Schwarz inequality and $M$ idempotent, $\Vert H^{\prime }MV/n\Vert\leq\lbrack\operatorname*{tr}(H^{\prime}MH/n)\operatorname*{tr}(V^{\prime}V/n)]^{1/2}\rightarrow_{p}0.$ By the triangle inequality, we then have$$\hat{\Gamma}_{n}=\frac{1}{n}X^{\prime}MX=\frac{1}{n}(V+H)^{\prime}M(V+H)=\frac{1}{n}V^{\prime}MV+o_{p}(1).$$ Next, by Lemma A1 of [@Chao-Swanson-Hausman-Newey-Woutersen_2012_ET],$$\frac{1}{n}V^{\prime}MV=\frac{1}{n}\sum_{1\leq i\leq n}M_{ii}v_{i}v_{i}^{\prime}+\frac{1}{n}\sum_{1\leq i,j\leq n,j\neq i}M_{ij}v_{i}v_{j}^{\prime}=\frac{1}{n}\sum_{1\leq i\leq n}M_{ii}v_{i}v_{i}^{\prime}+o_{p}(1).$$ Finally, by the Markov inequality and using $\mathbb{E}[n^{-1}\sum_{1\leq i\leq n}M_{ii}v_{i}v_{i}^{\prime}|Z]=\Gamma_{n}$,$$\frac{1}{n}\sum_{1\leq i\leq n}M_{ii}v_{i}v_{i}^{\prime}-\Gamma_{n}\rightarrow_{p}0$$ because Assumption PLM implies that $v_{i}v_{i}^{\prime}$ and $v_{j}v_{j}^{\prime}$ are uncorrelated conditional on $Z$ and that $\mathbb{E}[M_{ii}^{2}\Vert v_{i}\Vert^{4}|Z]\leq C$.$\blacksquare$ **Proof of Lemma \[lemma:Bias&Remainder\]**. Let $G=[g_{1},\ldots ,g_{n}]^{\prime}$ and $\varepsilon=[\varepsilon_{1},\ldots,\varepsilon _{n}]^{\prime}$. By the Cauchy-Schwarz inequality, $M$ idempotent, Assumption PLM, and the Markov inequality, $$\Vert\frac{1}{n}G^{\prime}MH\Vert\leq\sqrt{\operatorname*{tr}(\frac{1}{n}G^{\prime}MG)}\sqrt{\operatorname*{tr}(\frac{1}{n}H^{\prime}MH)}=O_{p}(K^{-\alpha_{g}-\alpha_{h}}),$$ which gives $B_{n}=G^{\prime}MH/\sqrt{n}=O_{p}(\sqrt{n}K^{-\alpha_{g}-\alpha_{h}})$. 
Also, $R_{n}=(V^{\prime}MG+H^{\prime}M\varepsilon)/\sqrt{n}=O_{p}(K^{-\alpha_{g}}+K^{-\alpha_{h}})=o_{p}(1)$ because$$\mathbb{E}[\Vert\frac{1}{\sqrt{n}}V^{\prime}MG\Vert^{2}|Z]=\frac{1}{n}G^{\prime}M\mathbb{E}[VV^{\prime}|Z]MG\leq C\frac{1}{n}G^{\prime}MG=O_{p}(K^{-2\alpha_{g}})$$ and$$\mathbb{E}[\Vert\frac{1}{\sqrt{n}}H^{\prime}M\varepsilon\Vert^{2}|Z]=\operatorname*{tr}(\frac{1}{n}H^{\prime}M\mathbb{E}[\varepsilon\varepsilon^{\prime}|Z]MH)\leq C\operatorname*{tr}(\frac{1}{n}H^{\prime}MH)=O_{p}(K^{-2\alpha_{h}})$$ by Assumption PLM and the Markov inequality.$\blacksquare$ **Proof of Theorem \[thm:AsyNorm\]**. By Lemma A2 of [@Chao-Swanson-Hausman-Newey-Woutersen_2012_ET], $$\Sigma_{n}^{-1/2}\frac{1}{\sqrt{n}}\sum_{1\leq i,j\leq n}M_{ij}v_{i}\varepsilon_{j}\rightarrow_{d}\mathcal{N}(0,I_{d})$$ under Assumption PLM. Combining this result with Lemmas \[lemma:GammaHat\] and \[lemma:Bias&Remainder\], we obtain the results stated in the theorem.$\blacksquare$ **Proof of Theorem \[thm:HatAsyVarHOM\]**. Let $Y=[y_{1},\ldots,y_{n}]^{\prime}$ and $\hat{\varepsilon}=[\hat{\varepsilon}_{1},\ldots,\hat{\varepsilon}_{n}]^{\prime}=M(Y-X\hat{\beta})$. It follows similarly to the proof of Lemma \[lemma:GammaHat\] that$$\begin{aligned} \frac{1}{n}\varepsilon^{\prime}M\varepsilon & =\frac{1}{n}\sum_{1\leq i\leq n}M_{ii}\varepsilon_{i}^{2}+\frac{1}{n}\sum_{1\leq i,j\leq n,j\neq i}\varepsilon_{i}M_{ij}\varepsilon_{j}\\ & =\frac{1}{n}\sum_{1\leq i\leq n}M_{ii}\mathbb{E}[\varepsilon_{i}^{2}|z_{i}]+o_{p}\left( 1\right) =\frac{n-K}{n}\sigma_{\varepsilon}^{2}+o_{p}(1),\end{aligned}$$ so it suffices to show that $\hat{\varepsilon}^{\prime}\hat{\varepsilon}/n=\varepsilon^{\prime}M\varepsilon/n+o_{p}(1)$. 
Lemma \[lemma:GammaHat\] and $\hat{\beta}-\beta=o_{p}(1)$ imply $(\hat{\beta}-\beta)^{\prime}X^{\prime}MX(\hat{\beta}-\beta)/n=o_{p}\left( 1\right) $, which together with the Cauchy-Schwarz inequality and $\varepsilon^{\prime}M\varepsilon/n=O_{p}(1)$ gives$$\begin{aligned} \frac{1}{n}(Y-X\hat{\beta}-G)^{\prime}M(Y-X\hat{\beta}-G) & =\frac{1}{n}\varepsilon^{\prime}M\varepsilon+\frac{1}{n}(\hat{\beta}-\beta)^{\prime}X^{\prime}MX(\hat{\beta}-\beta)-\frac{1}{n}2\varepsilon^{\prime}MX(\hat{\beta}-\beta)\\ & =\frac{1}{n}\varepsilon^{\prime}M\varepsilon+o_{p}(1).\end{aligned}$$ Similarly, $G^{\prime}MG/n=o_{p}\left( 1\right) $ together with $(Y-X\hat{\beta}-G)^{\prime}M(Y-X\hat{\beta}-G)/n=O_{p}\left( 1\right) $ and the Cauchy-Schwarz inequality gives$$\frac{1}{n}\hat{\varepsilon}^{\prime}\hat{\varepsilon}=\frac{1}{n}(Y-X\hat{\beta})^{\prime}M(Y-X\hat{\beta})=\frac{1}{n}(Y-X\hat{\beta}-G)^{\prime}M(Y-X\hat{\beta}-G)+o_{p}(1).$$ The conclusion follows by the triangle inequality.$\blacksquare$ ![Lebesgue Densities of the Error Term Distributions.\[figure:simuls\]](Figure1.pdf) \ Notes: (i) columns $\mathsf{Bias}$, $\mathsf{SD}$, $\mathsf{RMSE}$ and $\frac{\mathsf{Bias}}{\mathsf{SD}}$ report, respectively, average bias, average standard deviation, root mean square error, and average standardized bias of the estimator $\hat{\beta}$ across simulations; (ii) columns CI$_0$ and CI$_1$ report empirical coverage for homoskedastic-consistent confidence intervals, respectively, without and with degrees of freedom correction; (iii) columns $\hat\sigma$ and $s$ report the average across simulations of the standard error estimators, respectively, without and with degrees of freedom correction. 
\ Notes: (i) columns $\mathsf{Bias}$, $\mathsf{SD}$, $\mathsf{RMSE}$ and $\frac{\mathsf{Bias}}{\mathsf{SD}}$ report, respectively, average bias, average standard deviation, root mean square error, and average standardized bias of the estimator $\hat{\beta}$ across simulations; (ii) columns CI$_0$ and CI$_1$ report empirical coverage for homoskedastic-consistent confidence intervals, respectively, without and with degrees of freedom correction; (iii) columns $\hat\sigma$ and $s$ report the average across simulations of the standard error estimators, respectively, without and with degrees of freedom correction. \ Notes: (i) columns $\mathsf{Bias}$, $\mathsf{SD}$, $\mathsf{RMSE}$ and $\frac{\mathsf{Bias}}{\mathsf{SD}}$ report, respectively, average bias, average standard deviation, root mean square error, and average standardized bias of the estimator $\hat{\beta}$ across simulations; (ii) columns CI$_0$ and CI$_1$ report empirical coverage for homoskedastic-consistent confidence intervals, respectively, without and with degrees of freedom correction; (iii) columns $\hat\sigma$ and $s$ report the average across simulations of the standard error estimators, respectively, without and with degrees of freedom correction. [^1]: Department of Economics, University of Michigan. [^2]: Department of Economics, UC Berkeley and *CREATES*. [^3]: Department of Economics, MIT. [^4]: The authors thank Alfonso Flores-Lagunes, Lutz Kilian, seminar participants at Bristol, Brown, Cambridge, Exeter, Indiana, LSE, Michigan, MSU, NYU, Princeton, Rutgers, Stanford, UCL, UCLA, UCSD, UC-Irvine, USC, Warwick and Yale, and conference participants at the 2010 Joint Statistical Meetings and the 2010 LACEA Impact Evaluation Network Conference for their comments. The first author gratefully acknowledges financial support from the National Science Foundation (SES 1122994). 
The second author gratefully acknowledges financial support from the National Science Foundation (SES 1124174) and the research support of CREATES (funded by the Danish National Research Foundation). The third author gratefully acknowledges financial support from the National Science Foundation (SES 1132399). [^5]: In time series contexts, the exact decomposition is less useful, but approximations thereof with properties similar to those we discuss herein can be developed. For an example and related references see [@Atchade-Cattaneo_2014_SPA]. [^6]: Another approach to inference would be via the bootstrap. For small bandwidth asymptotics, [@Cattaneo-Crump-Jansson_2014b_ET] showed that the standard nonparametric bootstrap does not provide a valid distributional approximation in general. We conjecture that the standard nonparametric bootstrap will also fail to provide valid inference for other alternative asymptotics frameworks.
--- abstract: 'Suzaku observed the region including HESS J1809$-$193, one of the TeV unidentified (unID) sources, and confirmed the existence of the extended hard X-ray emission previously reported by ASCA, as well as hard X-ray emission from the pulsar PSR J1809$-$1917 in the region. The one-dimensional profile of the diffuse emission is represented by a Gaussian model with a best-fit $\sigma$ of $7\pm1$ arcmin. The diffuse emission extends for at least 21 pc (at the 3$\sigma$ level, assuming a distance of 3.5 kpc), and has a hard spectrum with a photon index of $\Gamma \sim$1.7. The hard spectrum suggests a pulsar wind nebula (PWN) origin, an interpretation further strengthened by the hard X-ray emission from PSR J1809$-$1917 itself. Thanks to the low background of the Suzaku XIS, we were able to investigate the spatial variation of the energy spectrum, but no systematic spectral change in the extended emission was found. These results imply that the X-ray-emitting pulsar wind electrons can travel up to 21 pc from the pulsar without noticeable energy loss via synchrotron emission.' author: - 'Takayasu <span style="font-variant:small-caps;">Anada</span>, Aya [Bamba]{}[^1], Ken <span style="font-variant:small-caps;">Ebisawa</span>, and Tadayasu [Dotani]{}' title: 'X-Ray Studies of HESS J1809–193 with Suzaku' --- Introduction ============ The Galactic plane survey with the H.E.S.S. Cherenkov telescope system revealed dozens of new very-high-energy (VHE) $\gamma$-ray sources [@2005Sci...307.1938A; @2006ApJ...636..777A]. Many of them have no counterparts in other wave-lengths, and are thus called “unidentified (unID) TeV sources”. Today, about 40 such unidentified TeV sources are known on the Galactic plane [@2007arXiv0712.3352H]. Most of them are located within a height of $\pm$ 1 degree from the Galactic plane, and some are intrinsically extended. Despite a large number of intensive studies in the last several years, their origin remains unclear [@2007arXiv0712.3352H]. 
X-ray follow-up observations of the unID TeV sources are now on-going. Although supernova remnants (SNRs) or hypernova remnants were suggested to be major counterpart candidates of these TeV unID sources [@yamazaki2006; @ioka2009], only a few sources have actually been identified as SNRs. On the other hand, rather surprisingly, several unID TeV sources have been identified as pulsar wind nebulae (PWNe) [@2009PASJ...61S.183A; @2009PASJ...61S..189T for example]. They seem to be rather old, previously unknown PWNe, compared to the PWNe already identified and well studied in X-rays. The first HESS observations of the region around PSR J1809–1917 were made from May through June 2004 as part of the systematic survey of the inner Galaxy [@2005Sci...307.1938A; @2006ApJ...636..777A]. Because marginal VHE $\gamma$-ray signals were detected, HESS J1809–193 was observed again in 2004 and 2005, and significant $\gamma$-ray emission was confirmed. A recent study of this source with HESS was reported by @2008AIPC.1085..285R. Fitting the excess map with a 2-D symmetric Gaussian, the best-fit position and intrinsic source extension (in rms) were determined as (RA, Dec) = ($18^{\mathrm{h}}09^{\mathrm{m}}52^{\mathrm{s}}$, $-19^{\circ}23'42''$) and $0^\circ.25\pm0^\circ.02$, respectively. PSR J1809–1917 is a radio pulsar discovered by the Parkes Multibeam Pulsar Survey [@2002MNRAS.335..275M]. The pulsar is located at the position of (RA, Dec) = (, ) with a pulse period of $P=82.7$ ms and a period derivative of $\dot{P} = 2.55\times10^{-14}$ s s$^{-1}$. The distance to the source was estimated to be $d=3.5$ kpc from the pulsar’s dispersion measure using the NE2001 Galactic electron-density model [@2002astro.ph..7156C]. The characteristic age and the spin-down luminosity are $\tau_{\mathrm{c}}=51$ kyr and $\dot{E} = 1.8\times10^{36}$ ergs s$^{-1}$, respectively. 
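The quoted characteristic age and spin-down luminosity follow from $P$ and $\dot{P}$ via the standard pulsar relations $\tau_{\mathrm{c}}=P/(2\dot{P})$ and $\dot{E}=4\pi^{2}I\dot{P}/P^{3}$ with the canonical neutron-star moment of inertia $I=10^{45}$ g cm$^{2}$ (a consistency check, not taken from the paper):

```python
import math

P = 82.7e-3          # pulse period [s]
Pdot = 2.55e-14      # period derivative [s/s]
I = 1e45             # canonical neutron-star moment of inertia [g cm^2]
YEAR = 3.156e7       # seconds per year

tau_c_kyr = P / (2 * Pdot) / YEAR / 1e3     # characteristic age [kyr]
Edot = 4 * math.pi**2 * I * Pdot / P**3     # spin-down luminosity [erg/s]
# tau_c_kyr ~ 51 and Edot ~ 1.8e36, matching the values quoted above
```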
The $\gamma$-ray spectral analysis performed by @2007arXiv0709.2432K indicated that the spectral slope is different between the regions near the pulsar and far from the pulsar. This is the second case in which spatial variation of the spectral slope has been revealed in the VHE $\gamma$-ray emission. The first case is HESS J1825–137, which is largely extended in both the VHE $\gamma$-ray and X-ray bands [@2009PASJ...61S..189T]. An ASCA observation revealed diffuse, non-thermal emission in the vicinity of PSR J1809–1917 [@2003ApJ...589..253B]. @2007ApJ...670..655K detected a bright point X-ray source positionally consistent with the pulsar PSR J1809$-$1917, and resolved the surrounding compact PWN utilizing the very high angular resolution of Chandra. The PWN has a “head-tail” profile, consisting of the southern head, which is coincident with the pulsar, and the northern tail. @2007ApJ...670..655K claimed that this cometary morphology is attributed to a bow shock created by the pulsar moving supersonically in the southern direction. However, it is still unknown whether the faint diffuse emission discovered by ASCA is related to the PWN or not. In this paper, we present the first detailed analysis of this diffuse emission using Suzaku. Suzaku, characterized by its low detector background compared to Chandra and XMM-Newton, is very suitable for the analysis of faint, diffuse X-ray emission such as that of HESS J1809–193. Observations {#sec:observation} ============ (80mm,80mm) [fig/hessj1809\_3.eps]{} We observed HESS J1809–193 with Suzaku [@mitsuda2007] in April, 2008. Suzaku is equipped with two types of instruments: the X-ray Imaging Spectrometers (XIS: [@2007PASJ...59S..23K]) at the foci of four X-Ray Telescopes (XRT: [@2007PASJ...59S...9S]) and the Hard X-ray Detector (HXD: [@2007PASJ...59S..35T], [@2007PASJ...59S..53K]). 
The observation was carried out with two pointings at the north and south of the source region (figure \[fig:1809:hessj1809\]) in order to cover the pulsar and the extended VHE $\gamma$-ray emission along the direction of the elongated shape of the PWN [@2007ApJ...670..655K]. Three XISs (XIS 0, 1, 3) out of four were operated in the normal clocking mode with the Spaced-row Charge Injection (SCI) [@2008PASJ...60S...1N]. We analyzed the data processed by the version 2.2 pipeline. We are interested in spatial variations on a scale of arcminutes. Hence, we concentrated on the analysis of the XIS data in this paper, since the HXD does not have spatial resolution within its field of view of $\sim$30 arcmin (FWHM). We applied the standard screening criteria to the XIS data[^2] to obtain the cleaned event lists. After the data screening, the net exposures were 51.5 ks and 44.2 ks for the north and south pointings of the XIS, respectively. The Suzaku observation log and exposures are summarized in table \[tbl:1809:obslog\]. We used the HEADAS version 6.5 software package for the data analysis. [ccc]{} & North & South\ Sequence ID & 503078010 & 503079010\ Start time (UT)& 2008/03/31 14:06 & 2008/04/01 16:34\ End time (UT)& 2008/04/01 16:30 & 2008/04/02 14:47\ Aim point R.A. (J2000.0) & $18^{\mathrm{h}}09^{\mathrm{m}}37^{\mathrm{s}}.4$ & $18^{\mathrm{h}}09^{\mathrm{m}}21^{\mathrm{s}}.0$\ Aim point Decl. (J2000.0) & $-19^{\circ}21'24''$ & $-19^{\circ}32'02''$\ Net exposure (ks) & 51.5 & 44.2\ Results {#sec:results} ======= X-ray Image {#sec:image} ----------- (60mm,60mm) [fig/sum013\_0548-2740\_ns\_expcor\_hesscon.eps]{} (60mm,60mm) [fig/profile\_fit.eps]{} In order to make images of this field, we corrected the vignetting effect by dividing the image by the flat sky image simulated using [xissim]{} [@2007PASJ...59S.113I] after subtracting the non X-ray background [@2008PASJ...60S..11T]. 
In this simulation, we assumed the input energy spectrum to be that extracted from the red rectangular region (see §\[sec:1809:ana:spec\]). Hereafter, all images are vignetting corrected. Figure \[fig:1809:img reg-hess\] shows the Suzaku XIS 2–10 keV image of the HESS J1809–193 region. We confirmed the largely extended emission reported by ASCA [@2003ApJ...589..253B]. In addition, we found that the extended emission has several peaks. The pulsar is the brightest source in the FOV, and the western edge of the FOV is also significantly bright, which was already hinted at by ASCA [@2003ApJ...589..253B]. We determined the extension of the diffuse emission as follows: We created a 1-dimensional profile of the surface brightness from the rectangular region shown in figure \[fig:1809:img reg-hess\] (left), which runs from north to south. We selected the direction which enabled us to create the longest profile. We did not take a symmetric region around the pulsar, because the faint and diffuse emission is significantly asymmetric. Although there is a point source in the south of the pulsar at (272.42, $-$19.43), its flux in the integrated region is only 1 % of that of the central pulsar, and thus negligible. The 1-dimensional profile thus created is shown in figure \[fig:1809:img reg-hess\] (right). Note that the surface brightness is normalized to the peak brightness. We fitted the profile with a Gaussian function plus a constant to evaluate its extension. In the fitting, we ignored the brightest part around the pulsar with a width of $2'.9$ (corresponding to the point-spread function of the XRT). Consequently, we found that the diffuse emission extends up to $\sim20'$ away from the pulsar. The Gaussian center was found to be offset by $\sim\!3'$ from the pulsar and the rms width to be $\sigma=7'\pm1'$. Energy Spectra {#sec:1809:ana:spec} -------------- We studied the spatial variation of the X-ray energy spectra of the diffuse emission. 
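The fitting step above (a Gaussian plus a constant, with the central $2'.9$ core masked out) can be sketched as follows; the profile and the simple grid-search fitter are synthetic stand-ins for the actual data and fitting code, with the constant floor and a unit amplitude fixed for brevity.

```python
import math

# Synthetic profile: Gaussian (sigma = 7', center offset 3') on a constant floor
true_sigma, true_center, floor = 7.0, 3.0, 0.1
xs = [x * 0.5 for x in range(-40, 41)]   # position along the cut [arcmin]
profile = [floor + math.exp(-((x - true_center) ** 2) / (2 * true_sigma ** 2))
           for x in xs]

# Mask the brightest part around the pulsar (width 2.9' ~ the XRT PSF)
mask = [abs(x) > 2.9 / 2 for x in xs]

def sse(sigma, center):
    # Sum of squared residuals for a unit-amplitude Gaussian + known constant
    return sum((p - floor - math.exp(-((x - center) ** 2) / (2 * sigma ** 2))) ** 2
               for x, p, m in zip(xs, profile, mask) if m)

# Coarse grid search over (sigma, center) in steps of 0.1'
best = min(((sse(s / 10, c / 10), s / 10, c / 10)
            for s in range(40, 121) for c in range(-60, 61)),
           key=lambda t: t[0])
sigma_fit, center_fit = best[1], best[2]
```

On this noiseless toy profile the search recovers $\sigma\simeq7'$ and a center offset of $\simeq3'$, the values reported in the text.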
We generated the detector response file and the auxiliary response file using [xisrmfgen]{} and [xissimarfgen]{} [@2007PASJ...59S.113I], and performed model fitting using XSPEC version 12.4.0. First we examined the energy spectrum from an outskirt of the HESS source region (the red rectangular region in figure \[fig:1809:img reg-hess\]) in order to estimate the contribution of the Galactic Ridge X-ray Emission (GRXE) in this particular area. The bright source at the south-east corner (circled in red) was excluded in this analysis. We subtracted the non X-ray background (NXB) estimated using [xisnxbgen]{} (details of the method are described in [@2008PASJ...60S..11T]), and averaged the data of all the available XISs (XIS 0, 1, 3). Because the NXB becomes high above 7.2 keV (in particular for XIS 1, which has the back-illuminated chip), we used the data only below 7.2 keV. We fitted the spectra in 4.0–7.2 keV and 2.0–4.0 keV separately, following the reproduction procedure of the GRXE in @2008PASJ...60S.223E. The model adopted is a power-law plus three narrow Gaussians (intrinsic width fixed to zero) in 4.0–7.2 keV, or four narrow Gaussian lines in 2.0–4.0 keV. The spectra and the best-fit models are shown in figure \[fig:1809:bgd spec\]. The equivalent widths of the iron lines are summarized in table \[tbl:1809:eqwidth\]. These equivalent widths are comparable to those of the GRXE described in @2008PASJ...60S.223E. Thus we conclude that the X-ray emission in this region can be regarded as pure GRXE, with no significant contribution from the pulsar. 
(80mm,80mm) [fig/sum013\_bgd2\_nocal\_3line.pha.ps]{} (80mm,80mm) [fig/sum013\_bgd2\_nocal\_fitall.pha.ps]{} [lccc]{}\ Center energy (keV) & $6.44\pm0.07$ & $6.69\pm0.02$ & $6.98\pm0.08$\ Equivalent width (eV) & 70 (0–140) & 240 (160–550) & 50 (0–170)\ \ Center Energy (keV) & $6.41\pm0.02$ & $6.670\pm0.006$ & $7.00\pm0.03$\ Equivalent width (eV) $\quad$ & 80 (60–100) & 350 (310–390) & 70 (40–100)\ In the next step, we divided the sky region of the north pointing into a “check pattern” as shown in figure \[fig:1809:sum013 reg 0548-2740 expcor\] (left) in order to find out possible spatial variations of the spectral slope. Here we refer to these regions by the numbers indicated in figure \[fig:1809:sum013 reg 0548-2740 expcor\] (left) with the prefix “Grid” (Grid 1, Grid 2, ...). We used a two-component model for the fit: an absorbed power-law plus the GRXE. We used the model GRXE spectrum (as explained below) to subtract the diffuse background. The model GRXE spectrum was constructed as follows: The background spectrum was fitted in the 2.0–7.2 keV band with the model of an absorbed power-law plus 7 narrow Gaussians as explained above. The absorption column density was fixed to $1.0 \times 10^{22}$ cm$^{-2}$. The best-fit parameters, besides the iron line parameters in table \[tbl:1809:eqwidth\], are listed in table \[tbl:1809:bgd spec fit\]. The normalization for each extraction region was adjusted taking into account the differences in the vignetting effects, the size of the extraction region, and the exposure time between the north and south pointings. The correction factors for the vignetting effect (shown in figure \[fig:1809:sum013 reg 0548-2740 expcor\] right) were determined by simulation. We subtracted the non X-ray background (NXB) estimated by using [xisnxbgen]{}. Figure \[fig:1809:multi sum013 gridall min50 fit\] shows the XIS spectra (averaged for XIS 0, 1 and 3) and the best-fit models for all the 16 regions in the 2.0–10 keV band. 
Because Grid 1, 4, 13 were illuminated by the calibration sources, the data between 5.73–6.67 keV were removed from the fit. Fit results are summarized in table \[tbl:1809:spec fit para\]. The spatial distribution of the spectral indices is shown in figure \[fig:1809:phoindex grid\]. To improve the statistics, we also fitted the combined spectra of Grid 6 and 7 (for the region close to the emission peak) and Grid 11, 14, and 15 (for the region far from the emission peak). The results are included in Table \[tbl:1809:spec fit para\]. From these analyses, we concluded that there is no systematic trend of spatial variation in the spectral index in this field of view. [llll]{} Model component & Parameter && Value\ Continuum & & 1.0 (fixed)\ & & $1.40_{-0.06}^{+0.05}$\ Emission lines & Energy (keV) & S & $2.43 \pm 0.02$\ && S & $2.61 \pm 0.04$\ && Ar & $3.13 \pm 0.02$\ && Ar & $3.43_{-0.07}^{+0.13}$\ (80mm,80mm) [fig/sum013\_reg\_0548-2740\_expcor.img.eps]{} (80mm,80mm) [fig/vigcorfactor\_num.eps]{} [cccc]{} Grid id & $\Gamma$ & Flux$^*$ (10$^{-13}$ ergs cm$^{-2}$ s$^{-1}$) & $\chi^2$/d.o.f.\ 1 & $1.52 \pm 0.21$ & $8.7 \pm 0.9$ & 52.2/46\ 2 & $1.83 \pm 0.18$ & $7.8 \pm 0.6$ & 70.6/67\ 3 & $1.79 \pm 0.21$ & $5.9 \pm 0.6$ & 54.1/62\ 4 & $1.39 \pm 0.28$ & $6.1_{-0.8}^{+0.9}$ & 49.1/37\ 5 & $1.50 \pm 0.14$ & $10.6 \pm 0.7$ & 74.0/73\ 6$^\dagger$ & $2.07 \pm 0.14$ & $8.2 \pm 0.5$ & 105.0/102\ 7$^\dagger$ & $1.66 \pm 0.17$ & $6.7 \pm 0.5$ & 121.7/93\ 8 & $1.71_{-0.38}^{+0.40}$ & $3.0 \pm 0.5$ & 59.2/48\ 9 & $1.71 \pm 0.18$ & $7.1 \pm 0.6$ & 58.1/61\ 10 & $1.94 \pm 0.18$ & $5.9 \pm 0.5$ & 122.9/86\ 11 & $2.74_{-0.35}^{+0.39}$ & $2.4 \pm 0.4$ & 87.6/70\ 12 & $1.72_{-0.37}^{+0.38}$ & $2.9 \pm 0.5$ & 65.2/47\ 13 & $1.19 \pm 0.26$ & $8.3 \pm 1.0$ & 51.4/41\ 14 & $1.54 \pm 0.20$ & $7.2_{-0.6}^{+0.7}$ & 74.3/59\ 15 & $1.78 \pm 0.25$ & $4.4_{-0.5}^{+0.6}$ & 55.5/52\ 16 & $1.66 \pm 0.32$ & $4.9 \pm 0.7$ & 60.8/40\ 6$^\dagger$+7$^\dagger$ & $1.91 \pm 0.11$ & $14.7 \pm 0.9$ 
& 235.9/196\ 11+14+15 & $1.84\pm 0.14$ & $13.7_{-1.4}^{+1.5}$ & 243.2/183\ (80mm,80mm) [fig/phoindex.img.eps]{} (80mm,80mm) [fig/phoindex\_grid.ps]{} (40mm,40mm)[fig/sum013\_grid01\_nocal\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid02\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid03\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid04\_nocal\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid05\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid06\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid07\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid08\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid09\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid10\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid11\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid12\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid13\_nocal\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid14\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid15\_min50\_fit.pha.ps]{} (40mm,40mm)[fig/sum013\_grid16\_min50\_fit.pha.ps]{} Discussion {#sec:discussion} ========== Using Suzaku, we have clearly shown the presence of the large-scale diffuse emission around HESS J1809–193, which was already suggested by ASCA [@2003ApJ...589..253B]. The extension of the diffuse emission is at least $\sim\!21'$ ($3\sigma$ of the Gaussian approximation). The diffuse spectrum has a photon index of $\Gamma \sim$1.7, which is much harder than those of SNRs with synchrotron X-ray emitting shells ($\Gamma \sim$2–3), such as SN 1006, RX J1713–3946, and others [@bamba2008; @takahashi2008 for example]. Such a hard spectrum, on the other hand, reminds us of a PWN origin [@kargaltsev2008]. This hypothesis is, in fact, strengthened by the very existence of the pulsar PSR J1809$-$1917. 
The TeV emission may also have a PWN origin, because it positionally coincides with the region including the pulsar and the diffuse X-ray emission, although the center of the TeV emission is offset by $\sim\!6'$ (a projected distance of 6 pc at 3.5 kpc) from the pulsar. If both the TeV and X-ray emissions are from the same PWN, they originate from accelerated electrons via the inverse Compton (TeV) and synchrotron (X-ray) mechanisms. Typical energies of the responsible electrons are $\sim$20 TeV for the TeV emission and $\sim$80 TeV for the X-rays, respectively. The former has a much longer synchrotron lifetime than the latter [@2009ApJ...694...12M for example]. If we assume that the electrons which emit the TeV gamma-rays have the same age as the pulsar ($\tau_{\mathrm{c}} = 51$ kyr) and those for the X-rays are very fresh, the positional offset may be explained by the proper motion of the pulsar. To explain the offset of $\sim\!6'$, the transverse velocity of the pulsar needs to be $\sim$120 km s$^{-1}$, which is consistent with the average transverse velocity of radio pulsars [@1994Natur.369..127L $\sim 300$ km s$^{-1}$]. [**On the other hand, the tail seen in the [*Chandra*]{} image (if it is indeed a tail and not a jet) suggests that the pulsar is moving southward and its velocity vector is at the angle of $\sim$190–200 deg. East of North [@2007ApJ...670..655K]. Now if we assume that the pulsar was born at the center of the TeV source and moved to its current position, its velocity vector would have to be at the angle of 10–20 deg. West of North. Therefore, the angle between these two velocity vectors would be $\sim$140–160 deg., which makes this scenario unlikely.** ]{} Another possible scenario to produce such an offset is a collision between the reverse shock and the PWN [@2008ApJ...675..683P], as in the case of Vela X [@2001ApJ...563..806B; @2003ApJ...588..441G]. 
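The projected offset and the required transverse velocity quoted above follow from simple geometry (a consistency check of the stated numbers, assuming the pulsar travels the offset within its characteristic age):

```python
import math

PC_KM = 3.086e13     # km per parsec
YEAR = 3.156e7       # seconds per year

offset_arcmin = 6.0
distance_kpc = 3.5
age_kyr = 51.0       # characteristic age of PSR J1809-1917

# Projected offset: 6' at 3.5 kpc
offset_pc = distance_kpc * 1e3 * math.radians(offset_arcmin / 60.0)
# Transverse velocity needed to cover the offset within the pulsar age
v_transverse = offset_pc * PC_KM / (age_kyr * 1e3 * YEAR)   # [km/s]
# offset_pc ~ 6 pc and v_transverse ~ 120 km/s, as quoted in the text
```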
In young PWNe, it is suggested that the X-ray spectra become softer with the distance from the pulsars due to the synchrotron energy loss of the accelerated electrons [@mori2004 for example]. However, we could not find any hint of such softening (Figure \[fig:1809:phoindex grid\]). This means that the electrons do not lose significant energy via synchrotron emission. The X-ray size of HESS J1809–193 is $\sim\!21'$ ($=3\sigma$), or 21 pc at the assumed distance of 3.5 kpc. The synchrotron lifetime of accelerated electrons ($\tau_{\mathrm{syn}}$) is $\tau_{\mathrm{syn}} = 6.8~{\rm kyr}\,(B/3~\mu{\rm G})^{-3/2}(E_{\mathrm{syn}}/2~{\rm keV})^{-1/2}$, where $B$ and $E_{\mathrm{syn}}$ are the magnetic field and the mean energy of the synchrotron emission from the accelerated electrons, respectively. In order to explain the size of the X-ray diffuse emission, the transport velocity of the accelerated electrons should be higher than 21 pc/$\tau_{\mathrm{syn}} \sim 3.0\times 10^3 \;{\rm km \; s^{-1}} \; (B/3~\mu{\rm G})^{3/2}(E_{\mathrm{syn}}/2~{\rm keV})^{1/2}$, which seems very fast for an old PWN, and is even comparable to the forward shock velocities of young SNRs. It is known that the diffusion coefficient in young PWNe and SNRs is too small for electrons to diffuse out to such a large scale when the magnetic field is turbulent [@shibata2003; @2003ApJ...589..827B; @2005ApJ...621..793B for example]. Therefore, the current observation suggests that the turbulence of the magnetic fields in old PWN systems is smaller than that in young systems; if so, such fast diffusion may be explained, although no model for old PWNe is available yet. In supernova remnants, there is some indication that the turbulence becomes smaller as the SNR becomes older [@2005ApJ...621..793B]; a similar mechanism might work for PWNe. We have a similar case, HESS J1825–137, which is another unID TeV source with a possible PWN origin. 
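The lower bound on the transport velocity quoted above can be verified numerically from the synchrotron-lifetime formula for the fiducial values $B=3~\mu$G and $E_{\mathrm{syn}}=2$ keV:

```python
PC_KM = 3.086e13     # km per parsec
YEAR = 3.156e7       # seconds per year

def tau_syn_kyr(B_uG=3.0, E_keV=2.0):
    # Synchrotron lifetime: tau_syn = 6.8 kyr (B/3 uG)^(-3/2) (E_syn/2 keV)^(-1/2)
    return 6.8 * (B_uG / 3.0) ** -1.5 * (E_keV / 2.0) ** -0.5

size_pc = 21.0       # extent of the diffuse X-ray emission
tau = tau_syn_kyr()  # 6.8 kyr at the fiducial B and E_syn
v_transport = size_pc * PC_KM / (tau * 1e3 * YEAR)   # [km/s]
# v_transport ~ 3.0e3 km/s, comparable to young-SNR forward-shock velocities
```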
@2009PASJ...61S..189T found that the X-ray photon index is nearly constant over the emission region, which may suggest that HESS J1825–137 also has small magnetic-field turbulence. We therefore suppose that fast transport of accelerated electrons may be a rather common phenomenon in old PWNe. Further similar samples of unID TeV sources may confirm this hypothesis. Summary {#sec:1809:summary} ======= - We observed the TeV PWN candidate HESS J1809–193 with Suzaku, and confirmed an extended emission around the pulsar PSR J1809–1917. The size of the X-ray emission is at least $\sim 21'$, or 21 pc at 3.5 kpc. - The extended emission has a very hard nonthermal spectrum with a photon index of $\sim$1.7. No systematic spatial variation of the photon index is found, which implies that the accelerated electrons do not lose their energy as they travel from the pulsar to the edge of the emission region. We have to consider very fast diffusion in an old PWN to reproduce such phenomena. If the turbulence of the magnetic field in old PWN systems is smaller than that in young systems, such fast diffusion may be explained. Acknowledgements {#acknowledgements .unnumbered} ================ We would like to thank the anonymous referee for useful comments and suggestions. We acknowledge all the Suzaku team members for their generous support. The authors also thank K. Mori and R. Yamazaki for their fruitful comments. A. Bamba is supported by the JSPS Research Fellowship for Young Scientists (19-1804). Aharonian, F., et al. 2005a, Science, 307, 1938 Aharonian, F., et al. 2006a, ApJ, 636, 777 Aharonian, F., et al. 2006b, A&A, 460, 365 Aharonian, F., et al. 2007, A&A, 472, 489 Aharonian, F., et al. 2008, , 477, 353 Anada, T., Ebisawa, K., Dotani, T., & Bamba, A. 2009, , 61, 183 Bamba, A., Ueno, M., Koyama, K., & Yamauchi, S. 2003a, ApJ, 589, 253 Bamba, A., Yamazaki, R., Ueno, M., & Koyama, K. 2003b, , 589, 827 Bamba, A., Yamazaki, R., Yoshida, T., Terasawa, T., & Koyama, K. 
2005, , 621, 793 Bamba, A., et al. 2008, , 60, 153 Blondin, J. M., Chevalier, R. A., & Frierson, D. M. 2001, , 563, 806 Cordes, J. M., & Lazio, T. J. W. 2002, arXiv:astro-ph/0207156 Ebisawa, K., et al. 2008, PASJ, 60, 223 Gaensler, B. M., Schulz, N. S., Kaspi, V. M., Pivovaroff, M. J., & Becker, W. E. 2003, , 588, 441 Hinton, J. 2007, ArXiv e-prints, 712, arXiv:0712.3352 Ioka, K., & Meszaros, P. 2009, arXiv:0901.0744 Ishisaki, Y., et al. 2007, PASJ, 59, 113 Kargaltsev, O., & Pavlov, G. G. 2007, ApJ, 670, 655 Kargaltsev, O., & Pavlov, G. G. 2008, 40 Years of Pulsars: Millisecond Pulsars, Magnetars and More, 983, 171 (Astro-ph/0801.2602) Kokubun, M., et al. 2007, , 59, 53 Komin, N., Carrigan, S., Djannati-Ata[ï]{}, A., Gallant, Y. A., Kosack, K., Puehlhofer, G., Schwemmer, S., & for the H. E. S. S. Collaboration 2007, arXiv:0709.2432 Koyama, K., et al. 2007, , 59, 23 Lyne, A. G., & Lorimer, D. R. 1994, , 369, 127 Mattana, F., et al. 2009, , 694, 12 Mewe, R., Gronenschild, E. H. B. M., & van den Oord, G. H. J. 1985, A&AS, 62, 197 Mitsuda, K. et al. 2007, , 59, S1 Mori, K., Burrows, D. N., Hester, J. J., Pavlov, G. G., Shibata, S., & Tsunemi, H. 2004, , 609, 186 Morris, D. J., et al. 2002, MNRAS, 335, 275 Nakajima, H., et al. 2008, PASJ, 60, 1 Nakamura, R., Bamba, A., Ishida, M., Nakajima, H., Yamazaki, R., Terada, Y., P[ü]{}hlhofer, G., & Wagner, S. J. 2009, , 61, 197 Pavlov, G. G., Kargaltsev, O., & Brisken, W. F. 2008, , 675, 683 Renaud, M., Hoppe, S., Komin, N., Moulin, E., Marandon, V., & Clapson, A.-C. 2008, American Institute of Physics Conference Series, 1085, 285 Serlemitsos, P. J., et al. 2007, , 59, 9 Shibata, S., Tomatsuri, H., Shimanuki, M., Saito, K., & Mori, K. 2003, , 346, 841 Takahashi, T., et al. 2007, , 59, 35 Takahashi, T., et al. 2008, , 60, 131 Tawa, N., et al. 2008, PASJ, 60, 11 Uchiyama, H., Matsumoto, H., Tsuru, T. G., Koyama, K., & Bamba, A. 2009, PASJ, 61, S189 Yamazaki, R., Kohri, K., Bamba, A., Yoshida, T., Tsuribe, T., & Takahara, F. 
2006, , 371, 1975 [^1]: Corresponding author: abamba@cp.dias.ie [^2]: http://www.astro.isas.jaxa.jp/suzaku/process/v2changes/criteria xis.html
--- abstract: 'We consider a dilute two-component atomic fermion gas with unequal populations in a harmonic trap potential using the mean field theory and the local density approximation. We show that the system is phase separated into concentric shells, with the superfluid in the core surrounded by the normal fermion gas, both on the weak-coupling BCS side and near the Feshbach resonance. On the strong-coupling BEC side, the composite bosons and left-over fermions can be mixed. We calculate the cloud radii and compare axial density profiles systematically for the BCS, near-resonance and BEC regimes.' address: - '$^1$Department of Physics, National Chung Cheng University, Chiayi 621, Taiwan' - '$^2$Institute of Physics, Academia Sinica, Nankang, Taipei 115, Taiwan' author: - 'C.-H. Pao$^1$ and S.-K. Yip$^2$' title: Asymmetric Fermi superfluid in a harmonic trap --- \[intro\]Introduction: ====================== The original BCS state for superconductors considers pairing between two species of fermions with equal populations. For a long time, theorists have studied fermion systems with unequal populations of the two species, i.e. mismatched Fermi surfaces, and proposed that such a system may have a different ground state [@FFLO], in particular the so-called Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phase. Experimentally, however, such superfluid states remain unclear because of the difficulty in preparing magnetized superconductors. Experiments with ultra-cold atoms have opened a new era for studying fermion systems with unequal populations. Through the Feshbach resonance [@Feshbach], the effective interaction between atoms can be varied over a wide range, such that the ground state can be tuned from a weak-coupling BCS superfluid to a strong-coupling Bose-Einstein condensation (BEC) regime. 
In the homogeneous system, theoretical studies [@Carlson05; @pao06; @Son05] of unequal fermion species show that a phase transition must occur when the resonance is crossed, in contrast to the equal-population case, where a smooth crossover takes place [@EL; @SRE93]. A breached-pair phase [@LW] and phase-separated states [@Bedaque03; @Caldas04] have also been proposed for this system. Two recent experiments [@ketterle06; @hulet06] studied trapped $^6$Li atoms with imbalanced spin populations and obtained the density profiles for various population differences. Both groups found that the system contains a superfluid core surrounded by normal fermions and provided evidence for phase separation near the crossover. In this paper, we study this imbalanced fermion system within the mean field approximation and evaluate the density profiles for various coupling strengths, from the weak-coupling BCS superfluid to the strong-coupling BEC regime. In particular, we calculate the axial density profiles and the superfluid and minority cloud radii, and distinguish between the phase-separation and Bose-Fermi-mixture regimes. This paper is organized as follows: In Sec. II we briefly review the mean-field approximation for dilute fermion atoms with unequal populations. In Sec. III we present our results for various polarizations, from the weak-coupling BCS superfluid to the strong-coupling BEC side. We show that the axial density profiles are constant within the superfluid core and decrease beyond the phase boundary for phase separation, but are smoothly decreasing functions over the entire trap for the mixtures. Finally, we conclude with a brief summary in Sec. IV. While this work was in progress, several theoretical papers have also studied the same problem under similar approximations [@Yi; @deSilva; @Haque; @chevy06] or going beyond them [@kinnunen05; @pieri05]. 
Refs. [@Yi; @deSilva; @Haque; @chevy06; @kinnunen05] also conclude that, near the resonance, the system is phase separated into concentric shells with the superfluid at the center surrounded by the leftover fermions. The strong-coupling BEC limit has also been studied [@Yi; @deSilva; @pieri05]. In this case, the composite bosons and unpaired fermions can mix. As the population difference increases, the unpaired fermions can even penetrate into the superfluid core. Our paper provides a more systematic study of the entire BCS-BEC regime for all polarizations. \[form\]Formalism: ================== Restricting ourselves to a wide Feshbach resonance, the two-component fermion system can be described by an effective one-channel Hamiltonian $$H\ =\ \label{eqh} \sum_{{\bf k}, \sigma} \xi_\sigma ({\bf k}) c^\dagger_{{\bf k},\sigma} c_{{\bf k},\sigma}\ +\ g \sum_{\bf k,k^\prime, q} c^\dagger_{{\bf k+q},\uparrow} c^\dagger_{{\bf k^\prime -q},\downarrow} c_{{\bf k^\prime},\downarrow} c_{{\bf k},\uparrow}\ ,$$ where $\xi_{\sigma}({\bf k})\, =\, \hbar^2 k^2/2m - \mu_\sigma$ and the index $\sigma$ runs over the two spin components. Within the BCS mean field approximation at zero temperature, the excitation spectrum in a homogeneous system for each spin is (see e.g. [@WY03] for details) $$E_\sigma ({\bf k})\, = \, \frac{ \xi_{\sigma}({\bf k}) - \xi_{- \sigma}({\bf k})}{ 2}\ +\ \sqrt{ \left ( \frac{ \xi_{\sigma}({\bf k}) + \xi_{-\sigma} ({\bf k})} {2 } \right )^2\, + \, \Delta^2}\ , \label{eqdisp}$$ where the $\xi_{\sigma}({\bf k})$ defined above are the quasi-particle excitation energies of the normal fermions, and $-\uparrow \equiv \downarrow$. For an inhomogeneous system, $e.g.$ the system in a harmonic trap, finite-size effects should in principle be included [@bruun99]. However, the system can be treated as locally homogeneous if the number of particles is sufficiently large. 
A local density approximation, or Thomas-Fermi approximation (TFA), is applied and the chemical potential for spin $\sigma$ is replaced by $$\mu_\sigma (r) \ =\ \mu^0_\sigma\ - \frac{1}{2} m \omega^2 r^2\ ,$$ with $\omega$ the isotropic trap frequency and $r$ the distance from the trap center. We shall show results explicitly only for the isotropic trap. In the local density approximation, the densities $n_\sigma ({\bf r})$ depend only on the local chemical potentials $\mu_\sigma({\bf r})$. Hence the density profile of an anisotropic trap can be related to an isotropic one by rescaling the spatial coordinates appropriately. We then introduce the average chemical potential $$\mu (r)\ \equiv\ \frac{1}{2} [\mu_\uparrow (r) + \mu_\downarrow (r)]\ =\ \mu_0\ -\ \frac{1}{2} m \omega^2 r^2\ ,$$ and the difference $h\ \equiv \ [\mu_\uparrow (r) - \mu_\downarrow (r) ]/2\, =\, (\mu^0_\uparrow - \mu^0_\downarrow)/2 $. The dispersion relation in Eq. (\[eqdisp\]) becomes $$E_{\uparrow,\downarrow} ({\bf k,r})\, = \, \sqrt{ \xi({\bf k,r})^2 + \Delta^2({\bf r})} \mp h \label{Ek}$$ where $\xi({\bf k,r}) \equiv \hbar^2 k^2/2m - \mu (r)$. We take spin up to be the majority species, so that $h$ is positive and $E_\downarrow$ is always positive. Then the density profiles in a harmonic trap are $$\begin{aligned} n_s(r)& = & n_\uparrow(r) + n_\downarrow (r)\ =\ \int { d^3 k \over (2 \pi)^3} \left [ 1 - {2 \xi({\bf k, r}) \over E_\uparrow + E_\downarrow} f(-E_\uparrow)\right ]\ , \label{eqns}\\ n_d (r) & = & n_\uparrow(r) - n_\downarrow (r)\ =\ \int { d^3 k \over (2 \pi)^3} f(E_\uparrow)\ , \label{eqnd}\end{aligned}$$ and the total number of particles is $N\, = \, \int d^3 r\, n_s(r) $. Here $f$ is the Fermi function. The polarization of the system is defined as $$P \ \equiv\ {N_\uparrow - N_\downarrow \over N}\ =\ {1 \over N}\int d^3 r\, n_d(r)\ .$$ Now the pairing field $\Delta$ depends on position also. 
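At zero temperature the Fermi functions reduce to step functions, and the momentum integrals in Eqs. (\[eqns\])-(\[eqnd\]) can be evaluated by direct quadrature. The following sketch (with $\hbar = m = 1$ and illustrative local values of $\mu$, $h$ and $\Delta$, not a self-consistent solution of the gap equation) shows the expected behaviour: $n_d$ vanishes for $h < \Delta$ and becomes positive once $h$ exceeds the gap.

```python
import math

# T = 0 evaluation of the density integrals n_s, n_d (Eqs. (eqns)-(eqnd))
# at a single point in the trap, i.e. for fixed local mu, h and Delta.
# Units hbar = m = 1; the parameter values below are illustrative only,
# not a self-consistent solution of the gap equation.

def densities(mu, h, delta, kmax=20.0, n=100_000):
    """Midpoint quadrature of the T = 0 BCS momentum integrals."""
    ns = nd = 0.0
    dk = kmax / n
    for i in range(n):
        k = (i + 0.5) * dk
        xi = 0.5 * k * k - mu                 # xi(k) = k^2/2 - mu
        ek = math.hypot(xi, delta)            # sqrt(xi^2 + Delta^2)
        f_up = 1.0 if ek < h else 0.0         # f(E_up) at T = 0
        # n_s integrand: 1 - (2 xi / (E_up + E_dn)) f(-E_up), with
        # E_up + E_dn = 2 sqrt(xi^2 + Delta^2) and f(-E_up) = 1 - f(E_up)
        ns += (1.0 - (xi / ek) * (1.0 - f_up)) * k * k
        nd += f_up * k * k                    # n_d integrand: f(E_up)
    pref = dk / (2.0 * math.pi ** 2)          # from d^3k / (2 pi)^3
    return pref * ns, pref * nd

for h in (0.0, 1.5):
    ns, nd = densities(mu=1.0, h=h, delta=1.0)
    print(f"h = {h}: n_s = {ns:.4f}, n_d = {nd:.4f}")
```

For $h < \Delta$ the two spin populations coincide ($n_d = 0$); for $h > \Delta$ a breached region around $\xi = 0$ opens up and $n_d > 0$. In the trap, these integrals are simply re-evaluated at every radius with the local $\mu(r)$, $\Delta(r)$.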
In the local density approximation, the pairing field $\Delta(r)$ obeys an equation similar to the homogeneous case [@pao06; @WY03]: $$- \frac{ m }{ 4 \pi \hbar^2 a} \Delta (r)\, =\, \Delta(r) \int \frac{ d^3 k}{ (2\pi)^3} \left [ \frac{ 1 - f(E_\uparrow) - f(E_\downarrow) }{ E_\uparrow + E_\downarrow }\, -\, \frac{ m }{ \hbar^2 k^2} \right ]\ . \label{eqgap}$$ For a given scattering length $a$, we solve equations (\[eqns\]), (\[eqnd\]) and (\[eqgap\]) self-consistently for a fixed total number of particles $N$ and polarization $P$. The solutions to the “gap equation” (\[eqgap\]) may not be unique. The physical solution is determined by the condition of minimum free energy among the multiple solutions. We describe the detailed procedure in \[freeenergy\]. \[result\]Results and Discussions ================================= In this section, we investigate the density profiles for various polarizations and coupling strengths, from the positive-detuning BCS side to the negative-detuning BEC side. With the aid of the density profiles, we evaluate the radii of the superfluid phase boundaries for various cases and compare them to current experimental results. We close this section with a discussion of axial density profiles; phase separation versus Bose-Fermi mixture can be distinguished through the axial density profiles of the population difference. In Fig. \[fig1\], we plot the radial density profiles for three different coupling strengths $1 / k_F a\, =\, -0.61, \, 0.03$, and $2.44$ (different columns) with polarizations $P\, =\, 0.2\, , 0.5$, and $0.9$ (different rows). The total number of particles is fixed to $2 \times 10^{5}$. Here $k_F$ is the Fermi wavevector at the trap center for an ideal symmetric Fermi gas with the same total number of particles. In all these plots, the system shows a superfluid cloud surrounded by a normal Fermi gas, except in Fig. \[fig1\](c), where the system is completely normal at polarization $P = 0.9$. 
It is consistent with the experimental observation [@ketterle06] that the superfluid is destroyed by a sufficiently large population difference. We remark, however, that the critical population difference $P_c$ for the destruction of superfluidity obtained here is much larger than that found in the experiment [@ketterle06]. For $1/k_F a = -0.61$ here, $P_c > 0.6$ theoretically, whereas an extrapolation of the data of [@ketterle06] gives $P_c < 0.3$ (see further discussions below). For a less polarized system on the BCS side \[Fig. \[fig1\](a) and (b)\], there is a clear phase separation between the superfluid and a normal Fermi gas. Note that within the superfluid cloud the population difference is zero, and the system is just like a typical unpolarized superfluid. Outside the superfluid cloud, both components of the fermions exist in the normal state, which indicates that a fraction of the fermions are not paired up even at zero temperature. The density profile of the population difference $n_d(r)$ peaks at the superfluid phase boundary and decreases gradually toward the edge of the trap. Its value is equal to the density profile of the majority when $r \ge r_\downarrow$, the radius of the minority cloud. At the superfluid phase boundary, both the majority and minority density profiles exhibit discontinuities, which has also been observed by others [@Yi; @deSilva; @Haque]. Near resonance \[Fig. \[fig1\](d)-(f)\], similar phase-separated states are observed. However, most of the minority atoms are paired up in this regime, so that the density profile of the population difference contains mainly the excess fermions outside the superfluid cloud at all $P$'s. This is consistent with the fact that the homogeneous normal phase boundary extends to large population differences near resonance [@pao06]. Near resonance, the superfluid core survives at $P=0.9$ in our calculations, whereas experimentally [@ketterle06] it vanishes already at $P \approx 0.7$. On the BEC side \[Fig. 
\[fig1\](g)-(i)\], all of the minority atoms are paired up and the excess fermions can penetrate into the superfluid core. The system contains a superfluid core for any $P$ (if sufficiently deep in the BEC regime; see Fig. \[fig4\] below). The system then contains three different phases: the pure superfluid, the mixture phase, and the normal fermions, going from the trap center to the edge of the trap. The mixture phase extends toward the trap center as the polarization increases. In Fig. \[fig1\](i) ($P = 0.9$), the system is highly polarized and the excess fermions extend deeply into the trap center. In \[BEClimit\], we give an analytic discussion of the density profiles in the BEC limit. The radius $r_s$ of the superfluid core is one of the most interesting quantities in current studies of the imbalanced fermion system [@hulet06; @deSilva; @Haque; @chevy06; @kinnunen05]. From Fig. \[fig1\], the density profiles of the population difference $n_d(r)$ have maxima at the phase boundaries between the superfluid and the normal fermions. $r_s$ is thus also the peak position of $n_d(r)$. In Fig. \[fig2\], we plot $r_s$ as a function of $P$ for three different coupling strengths. $r_s$ behaves quite differently above and below the resonance. On the BCS side, the superfluid is eliminated when the polarization reaches around 0.65 for $1 /(k_F a) = -0.61$, and the system becomes completely normal with mismatched Fermi surfaces beyond this critical polarization. For large coupling strengths, $r_s$ is finite except when $P$ is exactly $1$, since the superfluid is stable for any finite ($\ne 1$) polarization in the homogeneous case [@pao06]. Except for small ($P \le 0.1$) or large ($P \ge 0.9$) polarizations, the sizes of the superfluid clouds have maxima near the Feshbach resonance at fixed polarization. We also plot the radii ($r_\downarrow$) of the minority (spin-down fermion) cloud in Fig. \[fig3\]. 
Below the Feshbach resonance, this radius is the same as the radius of the superfluid core, because all of the minority fermions are paired up. However, this is no longer true above the Feshbach resonance, where part of the minority is not paired up. Unlike $r_s$, $r_\downarrow$ decreases monotonically as the coupling strength increases. These two radii become identical once the minority atoms are paired up completely, i.e. when the system reaches the BEC regime. Due to experimental constraints, one cannot measure the radial density profiles directly. Instead, the axial density profiles are reported in [@hulet06]. In Fig. \[fig4\], we plot the normalized axial density profiles $n_a(z)$ $[ \equiv \int dx dy\, n_d (\vec r) ]$ of the population difference for different coupling strengths at three fixed polarizations $P$. For the cases with phase separation \[Figs. \[fig1\](a), (b), and (d)-(f)\], the corresponding $n_a(z)$ are constant for $z \le r_s$ and have a kink at the phase boundary (at $z = r_s$). This feature results from the population difference $n_d(r)$ being zero inside the superfluid cloud (see \[adensity\] for details), so that $n_a (z)$ remains at the same value as at the phase boundary $z= r_s$. For a system with a mixed-phase region \[Figs. \[fig1\](g)-(i)\], $n_a (z)$ increases smoothly toward the trap center even for $z \le r_s$. It reaches a constant value within the region containing superfluid only \[$e.g.$, the cases with solid lines in Figs. \[fig4\](a) and (b)\]. At large polarization \[solid line in Fig. \[fig4\](c)\], the excess fermions mix with the superfluid entirely for $1 / k_F a\, =\, 2.44$, so that $n_a (z)$ increases monotonically toward the trap center. These completely different features of the axial density profiles of the population difference inside the superfluid cloud can help clarify whether the system is phase separated or mixed. 
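The kink diagnostic can be illustrated with a toy profile: take a hypothetical phase-separated shell with $n_d = 1$ for $r_s < r < R$ and $n_d = 0$ inside the core (the radii below are arbitrary illustrative values). Column-integrating along the line of sight gives an axial profile that is exactly constant for $|z| < r_s$, with a kink at $z = r_s$.

```python
import math

# Toy model of the axial-profile kink: a spherical phase-separated shell
# with n_d = 1 for r_s < r < R and n_d = 0 in the superfluid core.
# The radii are arbitrary illustrative values.
r_s, R = 1.0, 2.0

def n_axial(z):
    """Column integral of n_d over x, y at height z (closed form)."""
    if abs(z) >= R:
        return 0.0
    outer = math.pi * (R * R - z * z)              # full sphere column
    inner = math.pi * max(r_s * r_s - z * z, 0.0)  # subtract the core
    return outer - inner

for z in (0.0, 0.5, 1.0, 1.5):
    print(f"n_a({z}) = {n_axial(z):.4f}")
```

The profile is flat out to $z = r_s$ and then falls off, which is precisely the signature used above to identify phase separation in Fig. \[fig4\]; a mixed phase, by contrast, has $n_d > 0$ inside the core and produces a smoothly rising profile.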
\[conclusion\]Conclusion ======================== We have investigated the radial density profiles of a two-component fermion system with unequal spin populations across the Feshbach resonance. The system shows a superfluid cloud in the trap center surrounded by normal fermions. On the weak-coupling BCS side, the superfluid is destroyed completely at polarization $P \, \gtrsim\, 0.65$ for $1 /(k_F a) = -0.61$. Near the Feshbach resonance, almost all the minority atoms are paired up and the system is phase separated into the superfluid and the normal fermions. On the strong-coupling BEC side, the excess fermions can mix with the superfluid, and the system contains three different phases: the pure superfluid cloud, the mixture phase, and the normal fermions, going from the trap center to the edge of the trap. In particular, we emphasize the difference in the axial density difference profiles between the phase-separation and Bose-Fermi-mixture regimes. The former is constant for $z < r_s$ and has a kink at the phase boundary, while the latter increases smoothly toward the trap center. However, Ref. [@hulet06] reported positive slopes of the axial density difference profiles inside the superfluid cloud, which would indicate that $n_d (\vec r)$ needs to be negative somewhere within the superfluid cloud. We do not obtain this phenomenon within the current local density approach. This research was supported by the National Science Council of Taiwan under grant numbers NSC94-2112-M-194-001 (CHP) and NSC94-2112-M-001-002 (SKY), with additional support from the National Center for Theoretical Sciences, Hsinchu, Taiwan. \[freeenergy\]Free energy ========================= To determine the minimum free energy state when there are multiple solutions, we need an expression for the free energy. We obtain this as follows. First consider a system with fixed volume $V$ and particle numbers $N_{\sigma}$. With the Hamiltonian in Eq. 
(\[eqh\]), it is straightforward to show [@AGD] that the energy $E( N_{\sigma}, g) $ of this system obeys $$\frac{\partial E}{\partial g} = \frac{V}{g^2} |\Delta|^2 \label{Eg} \ .$$ We need to eliminate $g$ in favor of the scattering length $a$, the physical parameter of the system. These two variables are related by the expression $$\frac{m}{4 \pi \hbar^2 a}\ =\ \frac{1}{g} + \frac{1}{V} \sum_{\vec k} \frac{1}{2 \epsilon_k}\ . \label{ag}$$ We thus get $$\frac{m}{4 \pi \hbar^2} d \left( \frac{1}{a} \right) = d \left( \frac{1}{g} \right) \ , \label{dadg}$$ and therefore $$\frac{\partial E}{\partial (1/a)} \vert_{N_{\sigma}} = - V \frac{m}{4 \pi \hbar^2} |\Delta(a) |^2 \ . \label{Ea}$$ Denoting this derivative by ${\cal E}'$, we then get the thermodynamic relation $$dE = \sum_{\sigma} \mu_{\sigma} d N_{\sigma} + {\cal E}' d (\frac{1}{a}) \ . \label{dE}$$ The free energy $\Omega(\mu_{\sigma}, a) \equiv E - \mu_{\uparrow} N_{\uparrow} - \mu_{\downarrow} N_{\downarrow} $ then obeys $$d \Omega = - \sum_{\sigma} N_{\sigma} d \mu_{\sigma} + {\cal E}' d (\frac{1}{a}) \ . \label{dOmega}$$ We thus conclude that, for fixed volume $V$ and chemical potentials $\mu_{\sigma}$, $$\frac{\partial \Omega}{\partial (1/a)} \vert_{\mu_{\sigma}} = - V \frac{m}{4 \pi \hbar^2} |\Delta|^2 \ . \label{dOda}$$ Though this expression is already sufficient to determine and thus compare the free energies of the different solutions for a given scattering length $a$, we can convert it to an even more convenient form. To do this, let us write $x \equiv |\Delta|^2$ and $y \equiv 1/a$. We get, up to an overall multiplicative factor, $$\frac{\partial \Omega}{\partial y} = - x \ .\label{dOdx}$$ Thus, when the solutions are plotted in the form of Fig. 
\[fig5\], the free energy $\Omega$ of a state can be related to the free energy $\Omega_0$ of another state along the same curve by, again up to an overall multiplicative constant, $$\Omega = \Omega_0 - \int x dy \ .\label{Oxy}$$ Since $\int x dy$ is the area between the curve and the $y$-axis, the state corresponding to the minimum free energy at a given scattering length $a$ (hence $y$) can be found via the same procedure as the usual Maxwell construction. \[BEClimit\]BEC limit ===================== Here we try to understand the behaviour of the density profile in the BEC ($ 1/k_F a \gg 1$) limit, for the special case of a small number of excess fermions \[$e.g.$ Fig. \[fig1\](g)\]. For a bulk system, it is straightforward to perform a low density expansion of the mean-field equations and obtain an expansion of the chemical potentials $\mu_f \equiv \mu_{\uparrow}$ and $\mu_b \equiv \mu_{\uparrow} + \mu_{\downarrow}$ as a series in the densities $n_f = n_d$ and $n_b \equiv n_{\downarrow}$ (see [@Yip02] for the details of this calculation). In the BEC limit, $n_d$ can be interpreted as the density of the excess unpaired fermions and $n_b$ as the density of the bosons which represent the bound fermion pairs; $\mu_f$ and $\mu_b$ can be interpreted as the corresponding chemical potentials. Explicitly, we have an expansion of the form $$\begin{aligned} \mu_f &=& A n_d^{2/3} + g_{bf} n_b \ , \label{muf} \\ \mu_b &=& - \epsilon_b + g_{bb} n_b + g_{bf} n_f \ , \label{mub}\end{aligned}$$ where we have dropped the terms of higher order in the densities. These terms are smaller in the limit $n_f a^3 \ll 1$ and $n_b a^3 \ll 1$. We obtain [@Yip02] $A = \hbar^2 (6 \pi^2)^{2/3} / 2 m$, $\epsilon_b = \hbar^2 / m a^2$, $g_{bb} = 4 \pi \hbar^2 a / m$, $g_{bf} = 8 \pi \hbar^2 a / m$. The value of $A$ is as expected for a free Fermi gas and $\epsilon_b$ is the binding energy of the fermion pair. 
The values of $g_{bb}$ and $g_{bf}$ obtained here from the mean-field equations differ from the known exact values. An exact three-body calculation [@Skorniakov] gives $g_{bf}^{\rm exact} = 3.6 \pi \hbar^2 a / m$, while a more recent four-body calculation [@Petrov04] gives $g_{bb}^{\rm exact} = 1.2 \pi \hbar^2 a / m$. Alternatively, defining the scattering lengths in the usual manner, our low density expansion yields the effective values $a_{bf} = 8 a /3$ and $a_{bb} = 2 a$, whereas the exact values are $a_{bf}^{\rm exact} = 1.2 a$ and $a_{bb}^{\rm exact} = 0.6a $. The difference between the mean-field and exact results is of course due to the approximate nature of the mean-field theory. We note, however, that (a) $a_{bf}$ and $a_{bb}$ are both positive and of order $a$; hence in the BEC limit, where necessarily $n_d a^3 \ll 1$, the fermions and the bosons can mix [@viverit00]; and (b) $g_{bf}/ g_{bb} > 1/2$ for both the mean-field and the exact values. We shall explain the importance of this second relation below. In the trap, Eqs. (\[muf\]) and (\[mub\]) become $$\begin{aligned} \mu_f &=& A n_d (\vec r) ^{2/3} + g_{bf} n_b (\vec r) + V( \vec r) \ , \label{muft} \\ \mu_b &=& - \epsilon_b + g_{bb} n_b (\vec r) + g_{bf} n_f (\vec r)+ 2 V (\vec r) \ . \label{mubt}\end{aligned}$$ Here $V(\vec r)$ is the trap potential. We shall only consider the case where the potential increases from the center of the trap. To appreciate the implications of relation (b), consider the limit of a small number of fermions. The density profile of the bosons can then be determined by first ignoring the fermion term in Eq. (\[mubt\]), and we obtain the boson density profile $$n_b (\vec r) = (\mu_b + \epsilon_b - 2 V (\vec r))/g_{bb} \ ,\label{nb0}$$ if the R.H.S. is larger than zero, and $n_b = 0$ otherwise. For ease of reference, we shall call these two regions “inside” and “outside” below. Eq. 
(\[muft\]) can be rewritten in the form $$\mu_f = A n_d (\vec r)^{2/3} + V_{\rm eff} (\vec r) \ ,\label{eff}$$ where $V_{\rm eff} (\vec r) = V (\vec r) + g_{bf} n_b (\vec r)$ is the effective potential for the fermions. We obtain, for the inside region, $V_{\rm eff} (\vec r) = V (\vec r) + (g_{bf} /g_{bb}) (\mu_b + \epsilon_b - 2 V (\vec r))$, whereas for the outside region $V_{\rm eff} (\vec r) = V (\vec r)$. In the inside region, the position dependence is therefore $ - (2 g_{bf}/g_{bb} -1 ) V(\vec r)$. From this it is clear that if $g_{bf} > g_{bb} /2$, then the effective potential is actually larger near $\vec r = 0$ than near the edge of the boson cloud: it decreases from the center of the trap until it reaches the edge of the boson cloud, and then increases again due to the spatial dependence of $V (\vec r)$. Therefore, we conclude that for ${g_{bf}}/{g_{bb}} > 1/2$ the excess fermions lie near the edge of the boson cloud, at least for a small number of fermions [@Carr04; @pieri05], as seen in Fig. \[fig1\](g). Thus a peak in $n_d(\vec r)$ does [*not*]{} indicate phase separation. \[adensity\]Axial Density ========================= We here discuss some useful expressions governing the axial density within the local density approximation. Our results here are, to some extent, a further development of those obtained in [@deSilva]. In the local density approximation, any quantity, say the density difference $n_d (\vec r)$, is a function entirely of the local chemical potential $\mu (\vec r)$ (note that $h$ is a constant). Hence, we can write $$n_d (\vec r) = g (\mu(\vec r)) \equiv g (\mu(\vec r), h) \label{g}$$ for some function $g$. We note that any density must be identically zero for a sufficiently large and negative chemical potential, and thus $g(\zeta)$ vanishes exactly for a sufficiently large and negative argument $\zeta$. 
It will be convenient to define another function $G(\zeta)$ via $$G(\zeta) \equiv \int_{-\infty}^\zeta d\zeta' g(\zeta') \ .\label{G}$$ Consider now, for example, the radially integrated density difference (referred to afterwards as the axial density): $$n_{a} (z) \equiv \int dx dy n_d (\vec r) \ .\label{Na}$$ For a trap that is cylindrically symmetric with respect to $z$, the integral can be written as $\pi \int_0^{\infty} d \rho^2 g (\mu_0 - \frac{1}{2} \alpha_z z^2 - \frac{1}{2} \alpha_{\parallel} \rho^2 ) $, where $\rho^2 \equiv x^2 + y^2$. This integral can be expressed in terms of $G$, and thus $$n_{a}(z) = \frac{2 \pi}{\alpha_{\parallel}} G (\mu_0 - \frac{1}{2} \alpha_z z^2) \ .\label{NaG}$$ This relation can be used to obtain the axial density directly in terms of the function $g$, without first calculating the density profile in the trap and then performing the integration. Moreover, it can also be used to deduce the (unintegrated) density profile once the axial density is given, for example from experiment. To see this, we differentiate Eq. (\[NaG\]) with respect to $z$, and notice that $G'(\zeta) = g(\zeta)$ evaluated at $\zeta = \mu_0 - \frac{1}{2} \alpha_z z^2 $ is directly related to the corresponding density at the coordinate $(0,0,z)$. Thus we have $$n_d(0,0,z) = - \frac{\alpha_{\parallel}}{2 \pi \alpha_z} \frac{1}{z} \frac{\partial n_{a}(z)}{\partial z}\ . \label{nz}$$ Therefore the actual density at a point on the $z$ axis can be obtained from the axial density. Under the local density approximation, the density at any given point in the trap is a function of the local chemical potential only. Hence we can obtain the actual density at any given point in space provided the axial density is given. The relation Eq. (\[nz\]) also shows that, since $n_d$ is non-negative, the axial density profile is decreasing (increasing) with $z$ for $z\, >\, (<)\, 0$, a result pointed out in reference [@deSilva]. 
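Relation (\[nz\]) can be verified numerically with any smooth $g$. The sketch below uses a hypothetical Thomas-Fermi-like $g(\zeta) = \max(\zeta, 0)^{3/2}$ and arbitrary trap parameters, and recovers the on-axis density from a finite-difference derivative of the axial profile.

```python
import math

# Numeric check of Eq. (nz): recovering the on-axis density from the
# axial density. A sketch using a hypothetical Thomas-Fermi-like
# profile g(zeta) = max(zeta, 0)^(3/2); all parameters illustrative.
mu0, a_z, a_par = 1.0, 0.7, 1.3   # mu_0, alpha_z, alpha_parallel

def g(zeta):                       # local density vs. chemical potential
    return max(zeta, 0.0) ** 1.5

def G(zeta):                       # G(zeta) = integral of g up to zeta
    return 0.4 * max(zeta, 0.0) ** 2.5

def n_axial(z):                    # Eq. (NaG)
    return (2.0 * math.pi / a_par) * G(mu0 - 0.5 * a_z * z * z)

def n_from_axial(z, dz=1e-5):      # Eq. (nz), via a central difference
    slope = (n_axial(z + dz) - n_axial(z - dz)) / (2.0 * dz)
    return -(a_par / (2.0 * math.pi * a_z)) * slope / z

z = 0.8
direct = g(mu0 - 0.5 * a_z * z * z)      # n_d at the point (0, 0, z)
recovered = n_from_axial(z)
print(f"direct: {direct:.6f}  recovered: {recovered:.6f}")
```

The two values agree to finite-difference accuracy, illustrating how an experimentally measured axial profile can be inverted for the on-axis density.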
In a region where the density vanishes, the axial density should be a constant, a result recognized in references [@deSilva] and [@Haque]. References {#references .unnumbered} ========== [10]{} Fulde P and Ferrell R A 1964 A550 Larkin A I and Ovchinnikov Y N 1964 [*Zh. Éksp. Teor. Fiz.*]{} [**47**]{} 1136 \[1965 [*Sov. Phys.–JETP*]{} [**20**]{} 762\] Tiesinga E, Verhaar B J and Stoof H T C 1993 A [**47**]{} 4114 Inouye S, Andrews M R, Stenger J, Miesner H-J, Stamper-Kurn D M and Ketterle W 1998 [*Nature*]{} [**392**]{} 151 Courteille P, Freeland R S, Heinzen D J, van Abeelen F A and Verhaar B J 1998 69 Roberts J L, Claussen N R, Burke J P Jr., Greene C H, Cornell E A and Wieman C E 1998 5109 Carlson J and Reddy S 2005 060401 Pao C-H, Wu S-T and Yip S-K 2006 B (to be published) Sheehy D E and Radzihovsky L 2006 060401 Son D T and Stephanov M A 2005 [*Preprint*]{} cond-mat/0507586 Eagles D M 1969 456 Leggett A L 1980 [*Modern Trends in the theory of condensed matter*]{} ed Pekalski A and Przystawa J (Springer-Verlag: Berlin) Sá de Melo C A R, Randeria M, and Engelbrecht J R 1993 3202 Forbes M M, Gubankova E, Liu W V and Wilczek F 2005 017001 and references therein Bedaque P F, Caldas H and Rupak G 2003 247002 Caldas H 2004 A[**69**]{}, 063602 Zwierlein M W, Schirotzek A, Schunck C H and Ketterle W 2006 [*Science*]{} [**311**]{} 492 Partridge G B, Li W, Kamar R I, Liao Y and Hulet R G 2006 [*Science*]{} [**311**]{} 503 Yi W and Duan L-M 2006 [*Preprint*]{} cond-mat/0601006 De Silva T N and Mueller E J 2006 [*Preprint*]{} cond-mat/0601314 Haque M and Stoof H T C 2006 [*Preprint*]{} cond-mat/0601321 Chevy F 2006 [*Preprint*]{} cond-mat/0601122 Kinnunen J, Jensen L M and Törmä P 2005 [*Preprint*]{} cond-mat/0512556 Pieri P and Strinati G C 2005 [*Preprint*]{} cond-mat/0512354 Bruun G, Castin Y, Dum R and Burnett K 1999 D [**7**]{} 433 Wu S-T and Yip S-K 2003 A [**67**]{} 053603 Abrikosov A A, Gorkov L P and Dzyaloshinskii I E, 1965 [*Quantum Field Theoretical 
Methods in Statistical Physics*]{} (Pergamon: Oxford) Lifshitz E M and Pitaevskii L P 1980 [*Statistical Physics*]{} Part 2 (Pergamon: Oxford) Chapter 5, Sec. 40. Note however that our sign convention for $g$ is opposite to these references. Yip S K 2002 [*Preprint*]{} cond-mat/0203582 Skorniakov G V and Ter-Martirosian K A 1956 [*Zh. Éksp. Teor. Fiz.*]{} [**31**]{} 775 \[[*Sov. Phys.–JETP*]{} [**4**]{} 648\] Petrov D, Salomon C and Shlyapnikov G V 2004 090404 Viverit L, Pethick C J and Smith H 2000 A [**41**]{} 053605 Carr L D, Chiaramonte R and Holland M J 2004 A[**70**]{} 043609
--- author: - | [**Artem Dudko**]{}\ Stony Brook University, Stony Brook, NY, USA\ artem.dudko@stonybrook.edu\ [**Rostislav Grigorchuk** ]{}\ Texas A&M University, College Station, TX, USA\ grigorch@math.tamu.edu title: 'On spectra of Koopman, groupoid and quasi-regular representations.' --- Introduction. ============= The study of spectra of operators of unitary group representations has a long history, remarkable achievements and numerous applications. For instance, the famous Kadison–Kaplansky Conjecture, which was proven for the case of amenable groups by Higson and Kasparov in [@HigsKasp97], asserts that for a torsion free group $G$ and an element $m\in\mathbb C[G]$ of the group algebra of $G$ the spectrum of $\lambda_G(m)$ is connected, where $\lambda_G$ is the left regular representation of $G$. Kesten’s remarkable amenability criterion and Kazhdan’s fundamental property $(T)$ can also be formulated in terms of spectral properties of operators of the form $\lambda_G(m)$. The topic under discussion is related to the spectral theory of graphs and networks, random walks, the theory of operator algebras, discrete potential theory, and abstract harmonic analysis. There are three important types of unitary representations associated to a measure class preserving action of a countable group $G$ on a probability space $(X,\mu)$: quasi-regular, Koopman and groupoid representations. The goal of this article is to show that there is a close relation between spectral properties of these three types of representations. For a subgroup $H<G$ the quasi-regular representation $\rho_{G/H}$ acting on $l^2(G/H)$ is a natural generalization of the regular representation $\lambda_G$. In the case of a group action $(G,X,\mu)$ such representations appear as permutational representations $\rho_x$ in $l^2(Gx)$ for the action of $G$ on orbits $Gx$, $x\in X$. Spectra of quasi-regular representations play an important role in random walks on groups and Schreier graphs (see [@Kes59]).
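To make the last point concrete (a standard observation, not specific to this paper): if $m\in\mathbb C[G]$ is a symmetric probability distribution supported on a generating set, then $$(\rho_{G/H}(m)f)(gH)=\sum\limits_{s\in G}m(s)f(s^{-1}gH),$$ i.e. $\rho_{G/H}(m)$ is the Markov (averaging) operator of the corresponding random walk on the Schreier graph of $G/H$, so the norm and spectrum of $\rho_{G/H}(m)$ control the return probabilities of the walk.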
Quasi-regular representations naturally give rise to Hecke algebras and their representations. The Koopman representation (which we denote by $\kappa$) acts in $L^2(X,\mu)$. Some important properties of the dynamical system $(G,X,\mu)$, such as ergodicity and weak mixing, can be reformulated in terms of spectral properties of $\kappa$ (see [@BeHaVa]). If, in addition, $G$ is countable then the groupoid representation $\pi$ is defined in $L^2(\mathcal R,\nu)$, where $\mathcal R\subset X\times X$ is the orbit equivalence relation and $\nu$ is a measure on $\mathcal R$ which is the product of $\mu$ and the counting measure on leaves. Groupoid representations play an important role in operator algebras (see [@Tak3]) and in the theory of factor representations and character theory (see [@VK] and [@DM13; @AF]). Given a unitary representation $U$ of a group $G$ and an element $m\in\mathbb C[G]$ (or, more generally, $m\in l^1(G)$) define the Hecke type operator $$\label{EqHecke}U(m)=\sum\limits_{s\in G}m(s)U(s).$$ For an operator $A$ denote by $\sigma(A)$ its spectrum.
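As a toy illustration of the operator (\[EqHecke\]) (not taken from the paper; the group and element are chosen purely for checkability), take $G=\mathbb Z/n\mathbb Z$, the regular representation $\lambda_G$ on $l^2(G)$, and $m=\delta_1+\delta_{-1}$. Then $\lambda_G(m)$ is the adjacency operator of the cycle Cayley graph, whose spectrum $\{2\cos(2\pi k/n):0\leqslant k<n\}$ can be verified numerically:

```python
import numpy as np

# Toy Hecke type operator: regular representation of G = Z/nZ on l^2(G).
# lambda(1) is the cyclic shift; for m = delta_1 + delta_{-1} the operator
# lambda(m) = shift + shift^T is the adjacency matrix of the cycle graph C_n.
n = 8
shift = np.roll(np.eye(n), 1, axis=0)   # permutation matrix of the generator 1
op = shift + shift.T                    # lambda_G(delta_1 + delta_{-1})

spectrum = np.sort(np.linalg.eigvalsh(op))
expected = np.sort(2 * np.cos(2 * np.pi * np.arange(n) / n))
assert np.allclose(spectrum, expected)  # spectrum = {2 cos(2 pi k / n)}
```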
The main result of the present paper is the following: \[ThMain\] $1)$ For an ergodic measure class preserving action of a countable group $G$ on a standard probability space $(X,\mu)$ and any $m\in\mathbb C[G]$ one has $$\label{EqMainCont}\sigma(\kappa(m))\supset\sigma(\rho_x(m))=\sigma(\pi(m))$$ for $\mu$-almost all $x\in X$, where $\kappa$ is the Koopman representation, $\pi$ is the groupoid representation associated to the action of $G$ on $X$, and $\rho_x$ is the quasi-regular representation associated with the orbit $Gx$.\ $2)$ If, moreover, $\mu$ is $G$-invariant and non-atomic, then $\sigma(\kappa_0(m))\supset\sigma(\pi(m))$, where $\kappa_0$ is the restriction of $\kappa$ onto the orthogonal complement of constant functions in $L^2(X,\mu)$.\ $3)$ If, in addition to the conditions of $1)$, the action $(G,X,\mu)$ is hyperfinite, then $$\label{EqMainSim}\sigma(\kappa(m))=\sigma(\pi(m)).$$ This result has an interpretation in terms of weak containment of representations. Given a unitary representation $\rho$ let $C_\rho$ denote the $C^*$-algebra generated by the operators $\rho(g),g\in G$. Recall that a unitary representation $\rho$ of a group $G$ is weakly contained in a unitary representation $\eta$ (denoted by $\rho\prec\eta$) if there exists a surjective homomorphism $\phi:C_\eta\to C_\rho$ of $C^*$-algebras such that $\phi(\eta(g))=\rho(g)$ for all $g\in G$. We write $\rho\sim\eta$ if $\rho$ is weakly equivalent to $\eta$ ($\rho\prec\eta$ and $\eta\prec\rho$). An action $(G,X,\mu)$ of a countable group $G$ is called hyperfinite if the orbit equivalence relation associated to this action is hyperfinite with respect to $\mu$ (see [@FM1]). Theorem \[ThMain\] can be formulated in terms of weak containment. Namely, (\[EqMainCont\]) means that $$\kappa\succ\rho_x\sim\pi$$ for $\mu$-almost all $x\in X$, and (\[EqMainSim\]) means that $\kappa\sim\pi$.
As an application of the relations between spectra of representations we describe spectra associated to the torsion group $\mathcal{G}=<a,b,c,d>$ of intermediate growth constructed by the second author in [@Gr80] and studied in [@Gr84] and other papers. Recall that $\mathcal{G}$ acts naturally on the boundary $\partial T$ of a binary rooted tree $T$ (see [@Grig11]). We prove the following: \[ThGammaSpec\] The spectrum of the Cayley graph of $\mathcal{G}$ $($the spectrum of $\lambda_\mathcal{G}(a+b+c+d))$ is $[-2,0]\cup [2,4]$ and coincides with the spectrum of the Schreier graph $\Gamma_x$ of the action of $\mathcal G$ on $\partial T$ for any $x\in\partial T$ (with the spectrum of $\rho_x(a+b+c+d)$). In fact, our proof shows that the spectra of $\lambda_\mathcal{G}(t a+b+c+d)$ and $\rho_x(t a+b+c+d)$ coincide for any $t\in\mathbb R$ and almost every $x\in\partial T$ and are equal to a union of two intervals, an interval, or two points. In [@GLN] the authors studied the operator $\rho_x(ta+ub+vc+wd)$ for parameters $t,u,v,w\in\mathbb R$ such that $t\neq 0$, $u\neq -v$, $u\neq -w$, $v\neq -w$ and at least two of $v,w,t$ are distinct. They showed that the spectrum of $\rho_x(ta+ub+vc+wd)$ is a Cantor set of Lebesgue measure zero by reduction to results known for random Schrödinger operators and substitutional dynamical systems. The corresponding substitution $$\tau:a\to aca,b\to d,c\to b, d\to c$$ appears in the presentation $$\mathcal G=<a,b,c,d|a^2,b^2,c^2,d^2,bcd,\tau^i((ad)^4),\tau^i((adacac)^4)>$$ found by Lysenok in [@Lys]. An interesting question is whether the spectra of $\lambda_\mathcal{G}(ta+ub+vc+wd)$ and $\rho_x(ta+ub+vc+wd)$ coincide for arbitrary parameters $t,u,v,w\in\mathbb R$. Notice that there are not many examples of groups for which the spectrum of the Cayley graph has been calculated. Theorem \[ThGammaSpec\] gives the first case in which the spectrum of the Cayley graph is computed for a group of intermediate growth.
The coincidence of the spectra of the Schreier graphs $\Gamma_x$ of the action of $\mathcal G$ on $\partial T$ and of the Cayley graph of $\mathcal G$ is very surprising, since the $\Gamma_x$ are of linear growth and are very different from the Cayley graph of $\mathcal G$. Observe that $\mathcal G$ has an abelian extension $\tilde{\mathcal G}$ which is a torsion free group of intermediate growth generated by four elements $\tilde a,\tilde b,\tilde c,\tilde d$ (see [@Gr84]). From the amenability of $\tilde{\mathcal G}$ and Proposition 3.7 from [@BG] (based on a result of Higson and Kasparov) it follows that the spectrum of the Cayley graph of $\tilde{\mathcal G}$ is $[-4,4]$. Theorem \[ThMain\] has many applications. Among them let us indicate an application to the theory of representations of branch and weakly branch groups. Branch groups play an important role in many investigations in and around group theory (see [@Gr00], [@BGS03] and [@Grig11]). The class of branch groups contains groups of intermediate growth, amenable but not elementary amenable groups, and groups with finite commutator width. Weakly branch groups are a natural generalization of the class of branch groups, playing an important role in holomorphic dynamics (see [@Nekr]) and in the theory of fractals (see [@GNS15]). In Section \[SubsecAp\], using Theorem \[ThMain\] and results of [@DuGr15] and [@BG], we show (Corollary \[CoWeak\]) that any subexponentially bounded weakly branch group $G$ admits uncountably many pairwise disjoint (not unitarily equivalent) but pairwise weakly equivalent irreducible representations. We finish the paper by presenting two examples of computations of spectra of Hecke type operators associated to the action of $\mathcal{G}$ on the boundary of a binary rooted tree. These examples illustrate the method of operator recursions used in [@BG] and other places. Preliminaries. ============== In this section we give the necessary preliminaries.
We deal with actions of countable groups on a standard probability space. A probability space is standard if it is isomorphic modulo sets of measure zero to an interval with Lebesgue measure, a finite or countable set of atoms, or a combination (disjoint union) of both. We refer the reader to [@Rokh] or [@Glas03] for details. Koopman and quasi-regular representations. {#SubsecKoopBranch} ------------------------------------------ A natural type of representation that one can associate to a measure class preserving action of a group $G$ on a measure space $(X,\mu)$, where $\mu$ is a quasi-invariant probability measure, is the Koopman representation $\kappa$ of $G$ in $L^2(X,\mu)$ acting by $$(\kappa(g)f)(x)=\sqrt{\frac{{\mathrm{d}}\mu(g^{-1}(x))}{{\mathrm{d}}\mu(x)}}f(g^{-1}x),$$ where the expression under the radical is the Radon-Nikodym derivative. This representation is important due to the fact that the spectral properties of $\kappa$ reflect dynamical properties of the action such as ergodicity and weak mixing. One of the most natural questions concerning Koopman representations is whether $\kappa$ is irreducible. There are several examples of group actions with quasi-invariant measures known for which $\kappa$ is irreducible (see [@BM11], [@BC02], [@CS91], [@FTP83], [@FTS94] and [@KuSt]), but typically this representation (or its “brother” $\kappa_0$) is not irreducible. In [@DuGr15] we constructed a new class of examples of irreducible Koopman representations arising from subexponential actions of weakly branch groups on boundaries of rooted trees. Recall that for $H<G$ the quasi-regular representation $\rho_{G/H}$ is the permutational representation of $G$ in $l^2(G/H)$ given by the natural action of $G$ on the set of left cosets $gH$.
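Returning to the Koopman representation for a moment: the Radon-Nikodym factor in its definition is exactly what makes each $\kappa(g)$ unitary, since by the change of variables $y=g^{-1}x$ $$\|\kappa(g)f\|^2=\int_X \frac{{\mathrm{d}}\mu(g^{-1}x)}{{\mathrm{d}}\mu(x)}\,|f(g^{-1}x)|^2\,{\mathrm{d}}\mu(x)=\int_X |f(y)|^2\,{\mathrm{d}}\mu(y)=\|f\|^2.$$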
Given a countable group acting on a set $X$ and a point $x\in X$ one can define the quasi-regular representation $\rho_x$ in $l^2(Gx)$, where $Gx$ is the orbit of $x$, by: $$(\rho_x(g)f)(y)=f(g^{-1}y).$$ It is clear that $\rho_x$ is unitarily equivalent to $\rho_{G/{\mathrm{St}}_G(x)}$, where ${\mathrm{St}}_G(x)$ is the stabilizer of $x$ in $G$. Notice that the isomorphism class of $\rho_x$ depends only on the stabilizer ${\mathrm{St}}_G(x)$ of $x$. The question of irreducibility and disjointness of quasi-regular representations was studied by Mackey in [@Mack]. Using his criterion, Bartholdi and the second author proved in [@BG] that for a weakly branch group $G$ and any $x\in\partial T$ the quasi-regular representation $\rho_x$ is irreducible. In addition, in [@DuGr15] the authors of the present paper showed that the representations $\rho_x$ associated to an action of a weakly branch group on the boundary of a rooted tree are pairwise disjoint (not unitarily equivalent) for $x\in\partial T$ from different orbits. In Section \[SubsecAp\] we use Theorem \[ThMain\] to strengthen this result for subexponentially bounded groups (Corollary \[CoWeak\]). Groupoid representations and Hecke type operators. {#SubsecReps} -------------------------------------------------- Here we briefly recall the construction of the groupoid representation (see [@FM2] and [@Tak3] for details). As before, let $(X,\mu)$ be a standard probability space with a measure class preserving action of a countable group $G$ on it. Denote by $\mathcal{R}$ the orbit equivalence relation on $X$. For $A\subset X^2$ and $x\in X$ set $A_x=A \cap (X\times\{x\})$ and $A^x=A\cap (\{x\}\times X)$. Introduce measures $\nu_l,\nu_r$ on $\mathcal{R}\subset X^2$ by $$\nu_l(A)=\int\limits_X |A^x|d\mu(x),\;\;\nu_r(A)=\int\limits_X |A_x|d\mu(x).$$ Notice that if $\mu$ is invariant with respect to $G$ then $\nu_l=\nu_r$.
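To see the last claim, note that $\mathcal R$ decomposes into countably many graphs of partial transformations (a sketch along the lines of [@FM1]): for a piece of the form $A=\{(gx,x):x\in B\}$ one has $$\nu_r(A)=\mu(B),\qquad \nu_l(A)=\mu(gB),$$ and $G$-invariance of $\mu$ gives $\mu(gB)=\mu(B)$, hence $\nu_l=\nu_r$.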
If $\mu$ is only quasi-invariant with respect to $G$ then the Radon-Nikodym derivative $D(x,y)=\tfrac{{\mathrm{d}}\nu_l}{{\mathrm{d}}\nu_r}(x,y)$ is defined and the relation $$\tfrac{{\mathrm{d}}\mu(gx)}{{\mathrm{d}}\mu(x)}=D(gx,x)\;\;\text{for each}\;\;g\in G\;\;\text{and $\mu$-almost all}\;\;x\in X$$ holds (see [@FM1]). The (left) groupoid representation of $G$ is the unitary representation $\pi$ in $L^2(\mathcal{R},\nu_r)$ defined by $$(\pi(g)f)((x,y))=f(g^{-1}x,y).$$ The next statement is of folklore type and is mentioned, for example, in [@Tak3], §2. \[Prop-grupp-equiv-int\] The groupoid representation $\pi$ is unitarily equivalent to the representation $\int_X \rho_x d\mu(x)$. Similarly to the representation $\pi$ of $G$ in $L^2(\mathcal R,\nu_r)$ one can introduce a representation $\tilde\pi$ of $G$ in $L^2(\mathcal R,\nu_l)$ by $$\label{EqTilPi}(\tilde\pi(g)f)(x,y)= \sqrt{\tfrac{{\mathrm{d}}\mu(g^{-1}x)}{{\mathrm{d}}\mu(x)}}f(g^{-1}x,y).$$ It is straightforward to verify that the representation $\tilde\pi$ is unitarily equivalent to $\pi$ via the intertwining isometry $\mathcal I:L^2(\mathcal R,\nu_r)\to L^2(\mathcal R,\nu_l)$ given by: $$(\mathcal I f)(x,y)=\tfrac{1}{\sqrt{D(x,y)}} f(x,y).$$ The latter is well-defined since $D(x,y)\neq 0$ for $\nu_r$-almost all $(x,y)\in\mathcal R$. Let $U$ be a unitary representation of a countable group $G$ and $m\in\mathbb C[G]$, that is, $m:G\to \mathbb C$ is a function of finite support. One can associate to $m$ the Hecke type operator (\[EqHecke\]). Additionally, given $\nu\in l^1(G)$ one can associate to it an operator $$U(\nu)=\sum\limits_{s\in G}\nu(s)U(s).$$ An interesting particular case is when $\nu$ is a measure on $G$ ($\nu\in l^1(G)$ with $\nu(s)\geqslant 0$ for all $s\in G$). For quasi-regular representations, spectral properties of such operators are related to properties of random walks on graphs. Weak containment and spectrum of operators.
{#SubsecWeak} ------------------------------------------- Let $\rho$ and $\eta$ be two unitary representations of a group $G$ acting on Hilbert spaces $\mathcal H_\rho$ and $\mathcal H_\eta$ respectively. Then $\rho$ is weakly contained in $\eta$ (denoted by $\rho\prec\eta$) if for any $\epsilon>0$, any finite subset $S\subset G$ and any vector $v\in \mathcal H_\rho$ there exists a finite collection of vectors $w_1,\ldots,w_n\in\mathcal H_\eta$ such that $$|(\rho(g)v,v)-\sum\limits_{i=1}^n(\eta(g)w_i,w_i)|<\epsilon$$ for all $g\in S$ (see [@BeHaVa] for details). In [@Dix69] Dixmier showed that for two unitary representations $\rho,\eta$ of a countable group $G$ one has $\rho\prec\eta$ if and only if $\|\rho(\nu)\|\leqslant \|\eta(\nu)\|$ for every $\nu\in l^1(G)$. His result implies the following well known fact: \[CoWeakCond\] Let $\rho,\eta$ be two unitary representations of a discrete group $G$. Then the following conditions are equivalent: - $\rho\prec\eta$; - $\sigma(\rho(\nu))\subset\sigma(\eta(\nu))$ for all $\nu\in l^1(G)$; - $\|\rho(m)\|\leqslant \|\eta(m)\|$ for every positive $m\in \mathbb C[G]$; - there exists a surjective homomorphism $\phi:C_\eta\to C_\rho$ such that $\phi(\eta(g))=\rho(g)$ for all $g\in G$. Here positivity of an element $m$ of a $C^{*}$-algebra $A$ means that it can be represented as $m=x^{*}x$ with $x\in A$. Equivalently, $m$ is positive if it is self-adjoint ($m=m^{*}$) and $\sigma(m)\subset [0,\infty)$. For an action of a countable group $G$ on a measure space $(X,\mu)$ denote by $\mathcal R=\mathcal R_{G,X}$ the equivalence relation generated by $G$ on $X$. In $1977$ Zimmer introduced a notion of amenability of an ergodic action of $G$ on a measure space $(X,\mu)$ with a quasi-invariant probability measure $\mu$.
Later Adams, Elliott and Giordano [@AEG94] showed that Zimmer’s amenability is equivalent to the following two conditions: $1)$ $\mathcal R_{G,X}$ is $\mu$-hyperfinite (on a set of full measure it is equal to an increasing union of finite measurable equivalence relations); $2)$ for $\mu$-almost all $x\in X$ the stabilizer ${\mathrm{St}}_G(x)$ is amenable. Observe that condition $1)$ is equivalent to the following (see [@FM1], Proposition 4.1): $1')$ on a set of full measure $\mathcal R_{G,X}$ coincides with $\mathcal R_{\mathbb Z,X}$ for some action of the group of integers $\mathbb Z$ on $(X,\mu)$ by measure class preserving transformations. \[ThKuhn\] For an ergodic Zimmer amenable measure class preserving action of $G$ on a probability measure space $(X,\mu)$ one has $$\kappa\prec \lambda_G,$$ where $\kappa$ is the Koopman representation associated to the action of $G$ and $\lambda_G$ is the regular representation. At the end of Section \[SubsecKoopGroup\] we will derive Kuhn’s Theorem from part $2)$ of Theorem \[ThMain\]. We refer the reader to [@AnDel03] for a generalization of Kuhn’s Theorem to locally compact groups $G$. Another result related to Theorem \[ThMain\] is the following (see [@Pichot], Theorem 30): \[ThPichot\] A measure class preserving action of a countable group $G$ on a standard probability space $(X,\mu)$ is hyperfinite if and only if for every $m\in l^1(G)$ with $\|m\|_1=1$ one has $\|\pi(m)\|=1$, where $\pi$ is the corresponding groupoid representation. Observe that the original result of Pichot concerns arbitrary discrete measured equivalence relations on $(X,\mu)$. However, all such equivalence relations are generated by group actions (see [@FM1]). Theorem \[ThPichot\] is a reformulation of Pichot’s result in terms of group actions. Theorem \[ThMain\] implies the “only if” direction of Theorem \[ThPichot\].
The following result was the starting point of our investigation: \[ThBG\] Let $G$ be a finitely generated group acting on a regular rooted tree $T$ and $m\in\mathbb C[G]$. Then $\sigma(\rho_x(m))\subset \sigma(\kappa(m))$ for all $x\in\partial T$. If moreover the Schreier graph of the action of $G$ on the orbit $Gx$ of $x\in X$ is amenable, then $\sigma(\rho_x(m))=\sigma(\kappa(m))$. For the proof of Theorem \[ThBG\] we refer the reader to [@Grig11], Proposition 10.4 (see also [@BG], Theorem 3.6). Proof of Theorem \[ThMain\]. ============================ We will split the proof of Theorem \[ThMain\] into three parts: Propositions \[PropMainHecke2\], \[Prop-spec-reg-Koop\] and \[PropKoopGroup\]. Equivalence of quasi-regular and groupoid representations. {#SubsecQRGr} ---------------------------------------------------------- \[PropMainHecke2\] For an ergodic measure class preserving action of a countable group $G$ on a standard probability space $(X,\mu)$ one has $\rho_x\sim\pi$ for $\mu-$almost all $x\in X$. The proof is based on a few technical statements. We will formulate these statements, derive Proposition \[PropMainHecke2\] from them, and then give the proofs of the statements. For an action of a group $G$ on a space $X$, a subset $S=\{g_1,g_2,\ldots,g_n\}\subset G$, $n\in\mathbb N$, and a point $x\in X$ introduce an *orbital graph* $\Gamma_x=\Gamma_{x,g_1,\ldots,g_n}$ as a marked rooted graph whose vertex set is the set of points of the orbit $Gx$ ($x$ is the root) and such that $y,z\in Gx$ are connected by a directed edge marked by $g_i$ if and only if $z=g_iy$. Notice that here we do not assume that the group $G$ is generated by $S$, so the graphs $\Gamma_x$ are not necessarily connected. If $S$ generates $G$, the orbital graph $\Gamma_x$ coincides with the marked Schreier graph defined by the triple $(G,{\mathrm{St}}_G(x),S)$.
Fix an enumeration of all elements of $G$: $$\label{EqGNum} G=\{s_1,s_2,s_3,\ldots\}\;\;\text{with}\;\;s_1=e,\;\;\text{the unit element of}\;\;G.$$ For $k\in \mathbb N$, $x\in X$ and $y\in Gx$ denote by $B_k(y)$ the subgraph of $\Gamma_x$ consisting of the vertices $\{z\in Gx:z=s_iy\;\;\text{for some}\;\;i\leqslant k\}$ and all edges connecting them. We denote by $y$ the root of $B_k(y)$. Observe that $B_k(y)$ may be disconnected and that $$\Gamma_x=\bigcup\limits_{k\in\mathbb N}B_k(x).$$ We will say that two orbital graphs $\Gamma_x$ and $\Gamma_y$ are locally isomorphic if for any $k$ there exist a vertex $u$ of $\Gamma_x$ and a vertex $v$ of $\Gamma_y$ such that $B_k(u)$ is isomorphic (as a marked rooted graph) to $B_k(y)$ and $B_k(v)$ is isomorphic to $B_k(x)$. The following statement is a straightforward modification of Proposition 8.11 from [@Grig11]. \[PropLocIsom\] Let $G$ act ergodically by measure class preserving transformations on a standard probability space $(X,\mu)$. Then there exists a subset $A\subset X$ of full measure such that for any $g_1,\ldots,g_n\in G$, $n\in \mathbb N$, and any $x,y\in A$ the marked orbital graphs $\Gamma_{x,g_1,\ldots,g_n}$ and $\Gamma_{y,g_1,\ldots,g_n}$ are locally isomorphic. For $m\in\mathbb C[G]$ denote the support of $m$ by $${\mathrm{supp}}(m)=\{g\in G:m(g)\neq 0\}.$$ \[Prop-equiv-M\] Let $G$ act on a space $X$, $x,y\in X$ and $m\in\mathbb C[G]$. Let ${\mathrm{supp}}(m)=\{g_1,g_2,\ldots,g_n\}$. If the orbital graphs $\Gamma_{x,g_1,\ldots,g_n}$ and $\Gamma_{y,g_1,\ldots,g_n}$ are locally isomorphic then $\sigma(\rho_x(m))=\sigma(\rho_y(m))$. The next proposition is a standard statement about direct integrals of Hilbert spaces. It can be derived from Lemma 2 of [@Chow]. It is straightforward to see that all conditions of Lemma 2 of [@Chow] are satisfied in our case.
\[Prop-int-M\] Let $(X,\mu)$ be a standard probability space, $\mathcal H=\int\limits_{X}\mathcal H_xd\mu(x)$ be a direct integral of separable Hilbert spaces, $M_x$ be an integrable family of operators and $M=\int M_xd\mu(x)$. If the spectra $\sigma(M_x)$ of almost all operators $M_x$ coincide and are equal to $\Sigma$ then $\sigma(M)=\Sigma$. Now, let us derive Proposition \[PropMainHecke2\] from the above statements. #### Proof of Proposition \[PropMainHecke2\]. {#proof-of-proposition-propmainhecke2. .unnumbered} Let $m\in \mathbb C[G]$ and ${\mathrm{supp}}(m)=\{g_1,\ldots,g_n\}$. By Proposition \[PropLocIsom\], for almost all $x$ the orbital graphs $\Gamma_{x,g_1,\ldots,g_n}$ are pairwise locally isomorphic. Proposition \[Prop-equiv-M\] implies that the spectra $\sigma(\rho_x(m))$ coincide for almost all $x$. Denote this common spectrum by $\Sigma$. From Proposition \[Prop-int-M\] we get that the spectrum of $$\int\limits_X \rho_x(m)\mathrm{d}\mu(x)$$ is equal to $\Sigma$. From Proposition \[Prop-grupp-equiv-int\] we get that $\sigma(\pi(m))=\Sigma$. Finally, Corollary \[CoWeakCond\] implies that $\pi\sim\rho_x$ for almost all $x\in X$. #### Proof of Proposition \[PropLocIsom\]. {#proof-of-proposition-proplocisom. .unnumbered} Fix $n$ and $S=\{g_1,g_2,\ldots,g_n\}\subset G$. Let us call a finite rooted directed graph with edges marked by elements of $S$ $r$-admissible if it is isomorphic to $B_r(x)$ (as a marked rooted graph) for some point $x\in X$. For an $r$-admissible graph $\Delta$ denote by $X_\Delta(r)$ the set of points $y\in X$ such that $B_r(y)$ is isomorphic to $\Delta$. For any fixed $r$ the sets $X_\Delta(r)$, where $\Delta$ is $r$-admissible, cover $X$; therefore, there exists a $\Delta$ for which $X_\Delta(r)$ is of positive measure. We will call such $\Delta$ positively $r$-admissible. Let $P_r$ be the set of positively $r$-admissible graphs and $Z_r$ the set of $r$-admissible but not positively $r$-admissible graphs.
For an $r$-admissible graph $\Delta$ set $$\tilde X_\Delta(r)=\bigcup\limits_{g\in G}g(X_\Delta(r)).$$ Clearly, for $\Delta\in P_r$ the set $\tilde X_\Delta(r)$ is an invariant set of positive measure. Since the action is ergodic, $\mu(\tilde X_\Delta(r))=1$. For $\Delta\in Z_r$ one has $\mu( X_\Delta(r))=0$. Denote $$X^S_*=\bigcap\limits_{r\geqslant 1}\bigcap\limits_{\Delta\in P_r}\tilde X_\Delta(r)\setminus \Big(\bigcup\limits_{r\geqslant 1}\bigcup\limits_{\Delta\in Z_r}X_\Delta(r)\Big).$$ Then $\mu(X_*^S)=1$. Further, let $x,y\in X^S_*$, $r\in\mathbb N$ and $\Delta=B_r(x)$. The definition of $X^S_*$ implies that $\Delta\in P_r$. Therefore, $y\in \tilde X_\Delta(r)$. Thus, $y\in g(X_\Delta(r))$ for some $g\in G$. This means that the marked rooted graphs $B_r(g^{-1}y),\Delta$ and $B_r(x)$ are pairwise isomorphic. We obtain that for any $x,y\in X^S_*$ the orbital graphs $\Gamma_{x,g_1,\ldots,g_n}$ and $\Gamma_{y,g_1,\ldots,g_n}$ are locally isomorphic. Finally, denoting by $A$ the intersection of all sets of the form $X^S_*$, where $S$ is a finite subset of $G$, we obtain the desired statement. #### Proof of Proposition \[Prop-equiv-M\]. {#proof-of-proposition-prop-equiv-m. .unnumbered} Let $G,X,x,y,m$ be as in the formulation of Proposition \[Prop-equiv-M\] and let the orbital graphs $\Gamma_x=\Gamma_{x,g_1,\ldots,g_n}$ and $\Gamma_y=\Gamma_{y,g_1,\ldots,g_n}$ be locally isomorphic. Set $$R=2\sum\limits_{g\in{\mathrm{supp}}(m)} |m(g)|.$$ Fix a point $\alpha$ in $\sigma(\rho_x(m))$ and let us show that $\alpha\in \sigma(\rho_y(m))$. Clearly, $|\alpha|\leqslant\tfrac{1}{2}R$. The proof of the following lemma is straightforward and we omit it here. \[LmEquivNorm\] Let $A$ be any bounded nonzero linear operator on a Hilbert space and $R\geqslant 2\|A\|$. Then $$\alpha\in \sigma(A)\;\;\Leftrightarrow\;\;1\in \sigma(\mathrm{I}- \tfrac{1}{R^2}(A-\alpha\mathrm{I})(A-\alpha\mathrm{I})^{*}),$$ where $\mathrm{I}$ is the identity operator.
Using Lemma \[LmEquivNorm\] we obtain that for any unitary representation $\omega$ of $G$ one has: $$\alpha\in \sigma(\omega(m))\;\;\Leftrightarrow\;\;1\in \sigma(\mathrm{I}- \tfrac{1}{R^2}(\omega(m)-\alpha\mathrm{I})(\omega(m)-\alpha\mathrm{I})^{*}).$$ The operator $\mathrm{I}- \tfrac{1}{R^2}(\omega(m)-\alpha\mathrm{I})(\omega(m)-\alpha\mathrm{I})^{*}$ is of the form $\omega(s)$ for some $s\in \mathbb C[G]$, positive and of norm $\leqslant 1$. It follows that without loss of generality we may assume that $\alpha=1$ and the operators $\rho_x(m)$ and $\rho_y(m)$ are positive of norm $\leqslant 1$ (in fact, $\|\rho_x(m)\|=1$, since we assume that $\alpha=1\in\sigma(\rho_x(m))$). Further, consider the orbital graphs $\Gamma_x$ and $\Gamma_y$. Let $\epsilon>0$. Since $$\sup\limits_{\xi:\|\xi\|=1} (\rho_x(m)\xi,\xi)=1$$ we can find $l\in\mathbb N$ and a vector $\eta\in l^2(Gx)$ supported on $B_l(x)$ such that $(\rho_x(m)\eta,\eta)>1-\epsilon$. Let $v\in\Gamma_y$ be such that $B_l(v)\subset \Gamma_y$ is isomorphic (as a rooted labeled graph) to $B_l(x)$. Let $\eta'\in l^2(B_l(v))\subset l^2(\Gamma_y)$ be a copy of $\eta$ via this isomorphism. Then one has: $$(\rho_y(m)\eta',\eta')=(\rho_x(m)\eta,\eta)>1-\epsilon.$$ It follows that $\|\rho_y(m)\|=1$ and $1\in\sigma(\rho_y(m))$. This finishes the proof of Proposition \[Prop-equiv-M\]. Weak containment of quasi-regular representations in the Koopman representation. -------------------------------------------------------------------------------- \[Prop-spec-reg-Koop\] $1)$ Let a countable group $G$ act on a standard probability space $(X,\mu)$, where $\mu$ is a quasi-invariant measure. Let $\kappa$ be the corresponding Koopman representation in $L^2(X,\mu)$ and let $\rho_x$ denote the quasi-regular representation of $G$ in $l^2(Gx)$, $x\in X$.
Then for almost all $x\in X$ one has $\rho_x\prec \kappa.$ $2)$ If moreover $\mu$ is $G$-invariant and non-atomic then for almost all $x\in X$ one has $\rho_x\prec\kappa_0$, where $\kappa_0$ is the restriction of $\kappa$ onto the orthogonal complement of constant functions. One of the ingredients of the proof is the following statement: \[Lm-A-refining\] Let $T$ be a measure class preserving transformation of a standard probability space $(X,\mu)$ such that $Tx\neq x$ for almost all $x\in A$, where $\mu(A)>0$. Then there exists $B\subset A$, $\mu(B)>0$, such that $\mu(B\cap TB)=0$. For the case of a measure preserving automorphism Lemma \[Lm-A-refining\] follows from the proposition of $\S 1$ of [@Rokh49]. In fact, the same proof works in the case of a measure class preserving transformation. For the reader’s convenience we provide here the arguments taken from [@Rokh49], $\S 1$. Let us show first that there exists a measurable subset $C\subset A$ such that $\mu(T(C)\Delta C)\neq 0$. Fix a basis $\{A_i\}$ in $A$. Set $$B_i=\big((A\setminus A_i)\cap T(A_i)\big)\cup \big(A_i\cap (A\setminus T(A_i))\big).$$ By the definition of a basis, for almost all $x,y\in A$ such that $x\neq y$ there exists $A_i$ such that either $x\in A_i,y\in A\setminus A_i$ or $y\in A_i,x\in A\setminus A_i$. It follows that for almost all $x\in A$ there exists $i$ such that $x\in B_i$. Therefore, $\mu(\cup B_i)=\mu(A)>0$ and $\mu(B_i)>0$ for some $i$. Set $C=B_i$. Now, if $\mu(C\setminus TC)\neq 0$ we set $B=C\setminus TC$. If $\mu(TC\setminus C)\neq 0$ we set $B=T^{-1}(TC\setminus C)$. \[LmAkx\] Let $G$ act on $(X,\mu)$, where $\mu$ is a quasi-invariant probability measure. Let $g_1,g_2,\ldots,g_n\in G$. For $x\in X$ and $k\in\mathbb N$ set $$A_{k,x}:=\{y\in X:B_{k}(y)\;\;\text{is isomorphic to}\;\;B_{k}(x)\}.$$ Then for almost all $x\in X$ one has $$\mu(A_{k,x})>0 \;\;\text{for all}\;\;k\in\mathbb N.$$ For every $k$ there are only finitely many distinct marked graphs appearing in the set $\{B_{2(k+1)}(x):x\in X\}$.
Let $\mathcal B_k$ be the set of marked graphs $B$ such that $$\mu(\{x\in X:B_k(x)=B\})>0.$$ Consider $$M_k=\{x\in X:B_k(x)\in \mathcal B_k\}.$$ By construction, $\mu(M_k)=1$ and for every $x\in M_k$ one has: $\mu(A_{k,x})>0$. Let $$M=\bigcap\limits_{k\in\mathbb N}M_k.$$ Then $\mu(M)=1$ and for every $x\in M$ and every $k\in \mathbb N$ one has: $\mu(A_{k,x})>0$, which finishes the proof. By Corollary \[CoWeakCond\] it is sufficient to show that for all positive $m\in\mathbb C[G]$ and almost all $x\in X$ one has $\|\rho_x(m)\|\leqslant \|\kappa(m)\|$. Let ${\mathrm{supp}}(m)=\{g_1,\ldots,g_n\}$ and let $A_{k,x}$ be the sets defined in Lemma \[LmAkx\]. Until the end of the proof of this proposition, fix $x$ such that $$\mu(A_{k,x})>0\;\;\text{for all}\;\;k\in\mathbb N.$$ Let $m\in\mathbb C[G]$ be a positive element. Without loss of generality we can assume that $\|\rho_x(m)\|=1$. Let $\epsilon>0$. Since $$\sup\{(\rho_x(m)\xi,\xi):\xi\in l^2(\Gamma_x),\|\xi\|=1\}=1$$ we can find a unit vector $\eta\in l^2(\Gamma_x)$ of finite support such that $(\rho_x(m)\eta,\eta)>1-\epsilon$. Further, fix $k$ such that ${\mathrm{supp}}(\eta)\subset B_k(x)$. Choose $K\in\mathbb N$ such that $gB_k(x)\subset B_K(x)$ for all $g\in{\mathrm{supp}}(m)$. Observe that for every $y\in A_{K,x}$, any $i,j\leqslant k$ and any $g,h\in{\mathrm{supp}}(m)$ one has: $$gs_iy=hs_jy\;\;\Leftrightarrow\;\; gs_ix=hs_jx.$$ Using Lemma \[Lm-A-refining\] successively for all elements of the form $s_j^{-1}h^{-1}gs_i,$ where $i,j\leqslant k$ and $g,h\in{\mathrm{supp}}(m)$ are such that $s_j^{-1}h^{-1}gs_ix\neq x$, we can find $B\subset A_{K,x}$ such that $\mu(B)>0$ and $$\mu(gs_iB\cap hs_jB)=0\;\;\text{for all such}\;\;g,h,s_i,s_j.$$ Further, divide the set of positive numbers $\mathbb R_+$ into subintervals $$I_s=[(1+\epsilon)^s,(1+\epsilon)^{s+1}),s\in\mathbb Z$$ so that for every $s\in\mathbb Z$ one has $ab^{-1}\in (1-\epsilon,1+\epsilon)$ for $a,b\in I_s$.
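Indeed, for $a,b\in I_s=[(1+\epsilon)^s,(1+\epsilon)^{s+1})$ one has $$(1+\epsilon)^{-1}<\frac{a}{b}<1+\epsilon,\qquad\text{and}\qquad (1+\epsilon)^{-1}=1-\frac{\epsilon}{1+\epsilon}>1-\epsilon,$$ so that $ab^{-1}\in(1-\epsilon,1+\epsilon)$.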
For every function $f:\{1,\ldots,k\}\times{\mathrm{supp}}(m)\to\mathbb Z$ introduce the set $$B_f=\{t\in B:\sqrt{\tfrac{{\mathrm{d}}\mu(gs_it)}{{\mathrm{d}}\mu(s_it)}}\in I_{f(i,g)}\;\;\text{for every}\;\;g\in {\mathrm{supp}}(m),1\leqslant i\leqslant k\}.$$ Since the union of the sets $B_f$ over all functions $f$ is the set $B$, which has positive measure, one has $\mu(B_f)>0$ for some $f$. Fix such an $f$. Then for every $g\in {\mathrm{supp}}(m),1\leqslant i\leqslant k$ we have: $$\sqrt{\tfrac{\mu(gs_iB_f)}{\mu(s_iB_f)}}\in I_{f(i,g)}\;\;\text{and}\;\;\Big|1-\sqrt{\tfrac{{\mathrm{d}}\mu(t)}{{\mathrm{d}}\mu(gt)}}\sqrt{\tfrac{\mu(gs_iB_f)}{\mu(s_iB_f)}}\Big|<\epsilon\;\;\text{for all}\;\;t\in s_iB_f.$$ Finally, for $1\leqslant i\leqslant k,g\in{\mathrm{supp}}(m)$ and $y=gs_ix$ consider the function $e_y=\frac{1}{\sqrt{\mu(gs_iB_f)}}\mathbbm{1}_{gs_iB_f}$. Observe that $$\label{Eqey}\|e_y-\kappa(s_i)e_x\|^2=\int\limits_{s_iB_f}\Big(\tfrac{1}{\sqrt{\mu(s_iB_f)}}- \tfrac{1}{\sqrt{\mu(B_f)}}\sqrt{\tfrac{{\mathrm{d}}\mu(s_i^{-1}t)}{{\mathrm{d}}\mu(t)}}\Big)^2{\mathrm{d}}\mu(t)<\epsilon.$$ Consider the spaces $$\begin{aligned} \mathcal H_x={\mathrm{Span}}\{\delta_y:y=gs_ix,i\leqslant k,g\in{\mathrm{supp}}(m) \}\subset l^2(Gx),\\ \mathcal L_x={\mathrm{Span}}\{e_y:y=gs_ix,i\leqslant k,g\in{\mathrm{supp}}(m)\}\subset L^2(X,\mu).\end{aligned}$$ The map $\phi:\delta_y\to e_y$ induces an isometry between these spaces.
Moreover, by inequality \[Eqey\] and its analogue for $\|e_{hs_ix}-\kappa(hs_i)e_x\|$, for every $h\in {\mathrm{supp}}(m)$ and every $y=s_ix\in B_k(x)$, where $i\leqslant k$, we have $$\begin{aligned} \|\phi(\rho_x(h)\delta_y)-\kappa(h)e_y\|^2=\|e_{hy}-\kappa(h)e_y\|^2\\ \leqslant (\|e_{hs_ix}-\kappa(hs_i)e_x\|+\|\kappa(h)(e_{s_ix}-\kappa(s_i)e_x)\|)^2\leqslant 4\epsilon.\end{aligned}$$ This implies that $$\|\phi(\rho_x(h)\eta)-\kappa(h)\phi(\eta)\|\leqslant 2\sqrt\epsilon$$ for every $h\in{\mathrm{supp}}(m)$ and thus $$\label{EqKaRhoIneq}\|\phi(\rho_x(m)\eta)-\kappa(m)\phi(\eta)\|\leqslant 2\sqrt\epsilon\|m\|_1,$$ where $\|m\|_1=\sum\limits_{h\in{\mathrm{supp}}(m)}|m(h)|$. Since $\|\rho_x(m)\eta\|\geqslant(\rho_x(m)\eta,\eta)> 1-\epsilon$ and $\epsilon>0$ is arbitrary, we obtain that $\|\kappa(m)\|=1$ and $1\in \sigma(\kappa(m))$. This finishes the proof of part $1)$ of Proposition \[Prop-spec-reg-Koop\]. Now let $\mu$ be $G$-invariant and non-atomic. From ergodicity it follows that the orbit $Gy$ is infinite for almost all $y\in X$. Without loss of generality we can assume that $Gx$ is infinite. Fix $\epsilon>0$ and a unit vector $\eta\in l^2(\Gamma_x)$ of finite support such that $(\rho_x(m)\eta,\eta)>1-\epsilon$. Assume that $$\alpha=\sum_{y\in{\mathrm{supp}}(\eta)}\eta(y)\neq 0.$$ Then choose arbitrarily a sequence of distinct elements $y_i$ from $Gx\setminus {\mathrm{supp}}(\eta)$ and for $n\in\mathbb N$ introduce $$\eta_n=\eta-\tfrac{\alpha}{n}\sum\limits_{i=1}^n\delta_{y_i}\in l^2(\Gamma_x), \;\;m_n=m+\tfrac{1}{n^2}\sum\limits_{i=1}^n\delta_{h_i}\in \mathbb C[G],$$ where $h_i\in G$ are such that $h_ix=y_i$. Clearly, $$\sum_{y\in{\mathrm{supp}}(\eta_n)}\eta_n(y)=0,\;\;\text{and}\;\;\lim\limits_{n\to\infty}\eta_n=\eta\;\;\text{in}\;\;l^2\text{-norm}.$$ Moreover, $\lim\limits_{n\to\infty}\rho_x(m_n)=\rho_x(m)\;\;\text{and}\;\;\lim\limits_{n\to\infty}\kappa(m_n)=\kappa(m)$ where the limits are in the strong operator topology. 
Therefore, without loss of generality we may assume that $$\sum_{y\in{\mathrm{supp}}(\eta)}\eta(y)=0.$$ Then by construction $\phi(\eta)\in L^2(X,\mu)$ is orthogonal to constant functions, and thus the representation $\kappa$ in inequality \[EqKaRhoIneq\] can be replaced by $\kappa_0$. When $\epsilon\to 0$ we obtain that $\|\kappa_0(m)\|=1$. This finishes the proof of part $2)$ of Proposition \[Prop-spec-reg-Koop\] and hence the proof of parts $1)$ and $2)$ of Theorem \[ThMain\]. The condition of non-atomicity of the measure $\mu$ in the second part of Proposition \[Prop-spec-reg-Koop\] is necessary, as the following example shows. Consider $G=\mathbb Z_2=\{0,1\}$. Equip $X=\mathbb Z_2$ with the uniform probability measure $\mu$. Let $G$ act on $(X,\mu)$ by shifts. Then for any $m=\alpha\delta_0+\beta\delta_1\in\mathbb C[G]$ one has: $$\sigma(\rho_0(m))=\sigma(\rho_1(m))=\{\alpha+\beta,\alpha-\beta\},\;\;\sigma(\kappa_0(m))=\{\alpha-\beta\}$$ and thus the two spectra do not coincide when $\beta\neq 0$. Equivalence of Koopman and groupoid representations for a hyperfinite action. {#SubsecKoopGroup} ----------------------------------------------------------------------------- Part $3)$ of Theorem \[ThMain\] follows from the following: \[PropKoopGroup\] For a hyperfinite measure class preserving action of a countable group $G$ on a standard probability space $(X,\mu)$ one has $\kappa\sim\pi.$ In the proof we will use the following result (see [@ChacFried:65], Lemma 4): \[ThRokh\] Let $U$ be an aperiodic measure class preserving transformation of a Lebesgue space $(X,\mu)$. Then for any $N$ and any $\epsilon>0$ there exists a measurable set $A$ such that the sets $A,UA,\ldots,U^{N-1}A$ are pairwise disjoint and $\mu(A\cup UA\cup\ldots\cup U^{N-1}A)>1-\epsilon$. Lemma \[ThRokh\] is a generalization of the famous Rokhlin Lemma from [@Rokh49] to the case of quasi-invariant measures. 
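For intuition, here is a toy instance of the tower in Lemma \[ThRokh\], assuming the dyadic odometer (binary addition with carry) as the transformation: its first $k$ digits behave as a counter mod $2^k$, so the cylinder $A=[0^k]$ yields a tower of height $N=2^k$ which even covers the whole space. A minimal sketch (the choice $k=10$ is arbitrary):

```python
# Rokhlin tower for the dyadic odometer x -> x+1 (binary addition with
# carry) on {0,1}^N. The first k digits evolve as a counter mod 2^k, so
# we model the tower on residues mod 2^k; the cylinder A = [0^k] has
# measure 2^(-k) and its images A, UA, ..., U^(N-1)A tile everything.
k = 10
N = 2 ** k             # tower height

def U(x):
    return (x + 1) % N  # odometer restricted to the first k digits

floors = []
x = 0                   # A corresponds to the residue 0
for _ in range(N):
    floors.append(x)
    x = U(x)

assert len(set(floors)) == N          # floors pairwise disjoint
assert set(floors) == set(range(N))   # tower covers the whole space
```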
First notice that in the case of a finite $X$ the groupoid representation $\pi$ is unitarily equivalent to a direct sum of finitely many copies of the Koopman representation $\kappa$. Therefore, without loss of generality we can assume that for all $x\in X$ the orbit $Gx$ is infinite. Since the definition of the Koopman representation involves a Radon-Nikodym derivative, it will be convenient to replace $\pi$ by the unitarily equivalent representation $\tilde \pi$ introduced earlier. By Corollary \[CoWeakCond\] and part $1)$ of Theorem \[ThMain\] it is sufficient to show that $\|\kappa(m)\|\leqslant\|\tilde\pi(m)\|$ for every positive element $m\in\mathbb C[G]$. Without loss of generality we will assume that $\|\kappa(m)\|=1$. Since the action of $G$ on $X$ is hyperfinite there exists a measure class preserving automorphism $U$ generating the equivalence relation $\mathcal R$ generated by $G$ on $X$ (see [@FM1], Proposition 4.1). Clearly, $U$ is aperiodic. For $g\in G$ and $x\in X$ denote by $n_g(x)$ the integer such that $gx=U^{n_g(x)}x$. Observe that for every $g\in G$ the function $n_g$ is measurable. Fix $\delta>0$. Let $\eta\in L^2(X,\mu)$ be a unit vector such that $(\kappa(m)\eta,\eta)>1-\delta$. Without loss of generality we may assume that the set of values of $\eta$ is finite. Let $K=\max\{\|\eta\|_\infty,1\}$. Further, find a number $L$ such that the set $$\Omega=\{x:|n_{g^{\pm 1}}(x)|\leqslant L\;\text{for all}\;g\in{\mathrm{supp}}(m)\}$$ satisfies $\mu(\Omega)\geqslant 1-\tfrac{\delta}{2K^2}$. Choose $N$ such that $\tfrac{L}{N}\leqslant\tfrac{\delta}{8K^2}$. Using Lemma \[ThRokh\] one can construct a set $C$ such that the sets $C,UC,\ldots, U^{N-1}C$ are pairwise disjoint, and $$\mu(C\cup UC\cup\ldots\cup U^{N-1}C)\geqslant 1-\tfrac{\delta}{4K^2}.$$ Set $C_j=U^j(C)$ for $j=0,1,\ldots,N-1$. Let $\Sigma=(C_L\cup C_{L+1}\cup\ldots\cup C_{N-L-1})\cap \Omega$. 
Then $\mu(\Sigma)\geqslant 1-\tfrac{\delta}{K^2}$. Consider the functions $$\tilde\eta(x,y)=\eta(x)\mathbbm{1}_C(y)\sum\limits_{j=0}^{N-1}\delta_{x,U^j(y)},$$ where $\delta_{x,y}$ is the Kronecker delta, $$\tilde\eta_0(x,y)=\mathbbm{1}_\Sigma(x)\tilde\eta(x,y) \;\;\text{and}\;\;\eta_0(x)=\mathbbm{1}_\Sigma(x)\eta(x),$$ where $\mathbbm{1}_A$ stands for the characteristic function of a set $A$. Observe that for every $x\in X$ there exists at most one $y$ such that $\tilde\eta(x,y)\neq 0$. By definition of $\nu_l$ one has: $$\|\tilde\eta\|^2=\int\limits_X\sum\limits_{y\sim x}|\tilde\eta(x,y)|^2{\mathrm{d}}\mu(x)=\sum\limits_{j=0}^{N-1}\int\limits_{C_j}|\eta(x)|^2{\mathrm{d}}\mu(x)\leqslant \|\eta\|^2.$$ Using similar computations one can show that $\|\tilde\eta_0\|=\|\eta_0\|$. Since $\mu(X\setminus\Sigma)\leqslant\tfrac{\delta}{K^2}$ and $\|\eta\|_\infty\leqslant K$ we obtain that $$\|\eta\|^2-\delta\leqslant \|\tilde\eta_0\|^2\leqslant \|\tilde\eta\|^2\leqslant \|\eta\|^2.$$ Let $g\in{\mathrm{supp}}(m)$. Assume that $x\in C_j\cap \Omega$, where $L\leqslant j\leqslant N-L-1$. Let $y=U^{-j}(x)$. One has $-L\leqslant n_{g^{-1}}(x)\leqslant L$ and $g^{-1}x\in C_{j+n_{g^{-1}}(x)}$. It follows that $$\tilde\eta_0(x,y)=\eta_0(x),\;\;\tilde\eta(g^{-1}x,y)=\eta(g^{-1}x).$$ If $x\notin \Sigma$ then $\eta_0(x)=0$ and $\tilde\eta_0(x,y)=0$ for all $y\in Gx$. 
We obtain: $$\begin{aligned} (\tilde\pi(g)\tilde\eta,\tilde\eta_0)=\int\limits_X\sum\limits_{y\sim x}\sqrt{\tfrac{{\mathrm{d}}\mu(g^{-1}(x))}{{\mathrm{d}}\mu(x)}}\tilde\eta(g^{-1}x,y)\overline{\tilde\eta_0(x,y)}{\mathrm{d}}\mu(x)=\\ \int\limits_X\sqrt{\tfrac{{\mathrm{d}}\mu(g^{-1}(x))}{{\mathrm{d}}\mu(x)}}\eta(g^{-1}x)\eta_0(x){\mathrm{d}}\mu(x)=(\kappa(g)\eta,\eta_0).\end{aligned}$$ Since $\|\tilde\eta-\tilde\eta_0\|\leqslant\|\eta-\eta_0\|\leqslant \delta^{\frac{1}{2}}$ the latter implies that $$|(\tilde\pi(g)\tilde\eta,\tilde\eta)-(\kappa(g)\eta,\eta)|\leqslant 2\delta^{\frac{1}{2}}.$$ Finally, we get: $$|(\tilde\pi(m)\tilde\eta,\tilde\eta)-(\kappa(m)\eta,\eta)|\leqslant 2\delta^{\frac{1}{2}}\|m\|_1,\;\;\text{where}\;\;\|m\|_1=\sum\limits_{g\in{\mathrm{supp}}(m)} |m(g)|.$$ Since $\delta>0$ is arbitrary, the inequality $\|\tilde\pi(m)\|\geqslant\|\kappa(m)\|$ follows. This finishes the proof of Proposition \[PropKoopGroup\] and hence of part $3)$ of Theorem \[ThMain\]; the proof of Theorem \[ThMain\] is now complete. We finish this section by deriving Kuhn’s Theorem \[ThKuhn\] from Theorem \[ThMain\]. Let $G$ act ergodically by measure class preserving automorphisms on a probability measure space $(X,\mu)$. Assume that this action is Zimmer amenable. Then by Theorem \[ThMain\] the corresponding Koopman representation is weakly equivalent to the quasi-regular representation $\rho_x$ of $G$ for almost every $x$. By a result of Adams, Elliott and Giordano [@AEG94], Zimmer amenability implies that ${\mathrm{St}}_G(x)$ is amenable for almost every $x$. Let $x$ be such that $\rho_x\sim\kappa$ and ${\mathrm{St}}_G(x)$ is amenable. Using the well-known fact that for a subgroup $H<G$ $$\rho_{G/H}\prec\lambda_G\;\;\text{if and only if}\;\;H\;\;\text{is amenable}$$ (see [@BG], Proposition 3.5) we obtain that $\rho_x\prec\lambda_G$. This shows that $\kappa\prec \lambda_G$. Applications to weakly branch groups. 
{#SubsecAp} ===================================== We recall some notions related to group actions on rooted trees. We refer the reader to [@Grig11] and [@GNS00] for detailed definitions and properties of these actions. A $d$-regular rooted tree is a tree $T$, with vertex set divided into levels $V_n$, $n\in\mathbb Z_+$, such that $V_0$ consists of one vertex $v_0$ (called the root of $T$), the edges are only between consecutive levels, and each vertex from $V_n$, $n\geqslant 0$ (we consider infinite trees), is connected by an edge to exactly $d$ vertices from $V_{n+1}$ (and one vertex from $V_{n-1}$ for $n\geqslant 1$). An automorphism of a rooted tree $T$ is any automorphism of the graph $T$ preserving the root. Denote by ${\mathrm{Aut}}(T)$ the group of automorphisms of $T$. Let $T$ be a $d$-regular rooted tree, $d\geqslant 2$, and $G<{\mathrm{Aut}}(T)$. The rigid stabilizer of a vertex $v$ is the subgroup ${\mathrm{rist}}_v(G)=\{g\in G:{\mathrm{supp}}(g)\subset T_v\}$, where $T_v$ denotes the subtree of $T$ emerging from $v$. The rigid stabilizer of level $n$ is $${\mathrm{rist}}_n(G)=\prod\limits_{v\in V_n}{\mathrm{rist}}_v(G).$$ $G$ is called *branch* if it acts transitively on each level and ${\mathrm{rist}}_n(G)$ is a subgroup of finite index in $G$ for all $n$. $G$ is called *weakly branch* if it acts transitively on each level $V_n$ of $T$ and ${\mathrm{rist}}_v(G)$ is nontrivial for each $v$. For each level $V_n$ of a $d$-regular rooted tree, an automorphism $g$ of $T$ can be presented in the form $$\label{EqRest}g=\sigma\cdot(g_1,\ldots,g_{d^n}),$$ where $\sigma\in{\mathrm{Sym}}(V_n)$ is a permutation of the vertices from $V_n$ and $g_i$ are the restrictions of $g$ to the subtrees emerging from the vertices of $V_n$. For an element $g\in{\mathrm{Aut}}(T)$ denote by $k_n(g)$ the number of restrictions $g_i$ to the vertices of level $n$ such that $g_i$ is not equal to the identity automorphism. 
We call $g$ subexponentially bounded if for every $0<\gamma<1$ one has $$\lim\limits_{n\to\infty}k_n(g)\gamma^n=0.$$ A group $G<{\mathrm{Aut}}(T)$ is subexponentially bounded if each $g\in G$ is subexponentially bounded. Many important examples of branch and weakly branch groups (the group $\mathcal G$ of intermediate growth constructed by the second author, Gupta-Sidki $p$-groups and the Basilica group) are subexponentially bounded groups. For a $d$-regular rooted tree $T$ its boundary $\partial T$ is the set of infinite paths starting from $v_0$. Observe that $\partial T$ can be identified with the space of sequences $\{x_j\}_{j\in\mathbb N}$ where $x_j\in\{1,\ldots,d\}$. For a vertex $v$ of $T$ we denote by $\partial T_v\subset\partial T$ the set of paths passing through $v$. Equip $\partial T$ with the topology generated by the sets $\partial T_v$. Automorphisms of $T$ act naturally on $\partial T$ by homeomorphisms. Notice that $\partial T$ admits a unique ${\mathrm{Aut}}(T)$-invariant probability measure $\mu$. This measure is uniform in the sense that $$\mu(\partial T_v)=\tfrac{1}{d^n}\;\;\text{for any}\;\;n\;\;\text{and any}\;\;v\in V_n.$$ In [@GNS00] it is shown that this measure is ergodic with respect to a group $G<{\mathrm{Aut}}(T)$ if and only if the action of $G$ is transitive on each level $V_n$ of $T$. Moreover, in this case the action is uniquely ergodic. Further, let $G$ be a weakly branch group acting on a $d$-regular rooted tree $T$. Recall that for $x,y\in \partial T$ from the same $G$-orbit the corresponding quasi-regular representations are unitarily equivalent. Denote by $\mathcal O$ the set of orbits of $G$ on $\partial T$. For $\omega\in\mathcal O$ denote by $\rho_\omega$ the corresponding quasi-regular representation of $G$. 
Let $$\label{EqMP}\mathcal P=\{p=(p_1,p_2,\ldots,p_d):p_i> 0\;\;\text{for}\;\;i=1,2,\ldots,d\;\;\text{and}\;\;\sum\limits_{i=1}^dp_i=1\}$$ be the set of all probability distributions on the alphabet $\{1,2,\ldots,d\}$ assigning positive probability to every letter and $$\label{EqMP*}\mathcal P^{*}=\{p\in\mathcal P:p_i\neq p_j\;\;\text{for all}\;\;1\leqslant i< j\leqslant d\}.$$ For $p\in\mathcal P^{*}$ denote by $\mu_p=\prod\limits_{\mathbb N}p$ the corresponding Bernoulli measure on $\partial T$. It is shown in [@DuGr15], Proposition 2, that subexponentially bounded automorphisms preserve the measure class of $\mu_p$. Assuming that $G$ is a subexponentially bounded group we denote by $\kappa_p$ the Koopman representation associated to the action of $G$ on $(\partial T,\mu_p)$. Using Mackey’s criterion of irreducibility of quasi-regular representations Bartholdi and Grigorchuk in [@BG] showed that quasi-regular representations $\rho_\omega$ corresponding to the action of a weakly branch group $G$ on the boundary of a rooted tree are irreducible for all $\omega\in \mathcal O$. Moreover, in [@DuGr15] the authors proved the following. Let $G$ be a subexponentially bounded weakly branch group acting on the boundary of a $d$-regular rooted tree $T$. For every $p\in\mathcal P^{*}$ the representation $\kappa_p$ of $G$ is irreducible. Moreover, the representations of $G$ from $\{\kappa_p:p\in\mathcal P^{*}\}\cup \{\rho_\omega:\omega\in\mathcal O\}$ are pairwise disjoint. Using Theorem \[ThMain\] we will strengthen this result (Corollary \[CoWeak\]). For $g\in {\mathrm{Aut}}(T)$ a point $x=x_1x_2x_3\ldots\in\partial T$ is called $g$-rigid if there exist $n\in\mathbb N$ and $v=x_1x_2\ldots x_n\in V_n$ such that the restriction $g|_{\partial T_v}$ is trivial. For $G<{\mathrm{Aut}}(T)$ denote by $R(G)$ the set of points $x\in \partial T$ such that $x$ is $g$-rigid for all $g\in G$. Such points are called rigid. 
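To illustrate the notion of rigidity, consider the automorphisms $a,b,c,d$ of the binary rooted tree given by the wreath recursions $a=\sigma({\mathrm{I}},{\mathrm{I}})$, $b=(a,c)$, $c=(a,d)$, $d=({\mathrm{I}},b)$; these generate the group $\mathcal G$ and are recalled in the next section. Interpreting the restriction $g|_{\partial T_v}$ as the section of $g$ at $v$, rigidity of a point under a single generator can be decided by a finite automaton. The following is a hedged sketch (function names are ours, and only single generators, not arbitrary elements of $\mathcal G$, are handled):

```python
# Sections of the generators a, b, c, d along a vertex of the binary
# tree, following the recursions a = sigma(e,e), b = (a,c), c = (a,d),
# d = (e,b). A point is g-rigid iff some prefix of it drives the
# section of g to the identity e.
SECTION = {
    'a': {0: 'e', 1: 'e'},
    'b': {0: 'a', 1: 'c'},
    'c': {0: 'a', 1: 'd'},
    'd': {0: 'e', 1: 'b'},
    'e': {0: 'e', 1: 'e'},
}

def is_rigid(g, prefix):
    """True if some section of g along the given prefix is trivial."""
    state = g
    for letter in prefix:
        state = SECTION[state][letter]
        if state == 'e':
            return True
    return False

assert is_rigid('b', (1, 1, 0))      # b -> c -> d -> e
assert not is_rigid('b', (1,) * 12)  # sections cycle through b, c, d
```

For each generator the section becomes trivial within two steps after the first letter $0$, so every point other than $111\ldots$ is rigid for $a$, $b$, $c$ and $d$; this is consistent with rigid points having full measure.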
Let $\mathcal O_{R(G)}$ be the set of $G$-orbits of points from $R(G)$. From the proof of Proposition 2 of [@DuGr15] we obtain: \[LmSubexpRigid\] Let $T$ be a $d$-regular rooted tree and $G<{\mathrm{Aut}}(T)$ be subexponentially bounded. Then for any $p\in\mathcal P$ from \[EqMP\] one has $\mu_p(R(G))=1$. Recall that for an action of a group $G$ by homeomorphisms on a topological space $X$ a point $x$ is called *typical* if for every $g\in G$ either $gx\neq x$ or $g$ acts trivially on some neighborhood of $x$. Clearly, the set of all typical points is open and $G$-invariant. Observe that for $G<{\mathrm{Aut}}(T)$ every rigid point $x\in\partial T$ is typical. The next proposition is a topological version of Proposition \[PropLocIsom\] and is a generalization of Proposition 6.21 from [@GNS00] (see also [@Grig11], Proposition 8.8). \[PropRegIsom\] Let $G$ be a countable group acting minimally on a topological space $X$. Let $n\in\mathbb N$ and $g_1,\ldots,g_n\in G$. Then for any typical points $x,y\in X$ the orbital graphs $\Gamma_{x,g_1,\ldots,g_n}$ and $\Gamma_{y,g_1,\ldots,g_n}$ are locally isomorphic. Let $A$ be the set of all typical points. Fix $r\in\mathbb N$. Denote by $D_r$ the set of finite rooted marked graphs $\Delta$ such that there exists $x\in A$ for which $B_r(x)=\Delta$. For any $\Delta\in D_r$ denote by $X_\Delta(r)$ the set of $x\in A$ such that $B_r(x)=\Delta$. Given a point $x\in A$, $1\leqslant l,m\leqslant r$ and $1\leqslant i\leqslant n$, by the definition of a typical point there exists a neighborhood $U(x)$ such that either $a)$ for all $y\in U(x)$ one has $s_m^{-1}g_is_ly=y$ (the vertices $s_ly$ and $s_my$ are connected by an edge marked by $g_i$ in $\Gamma_y$) or $b)$ for all $y\in U(x)$ one has $s_m^{-1}g_is_ly\neq y$ (the vertices $s_ly$ and $s_my$ are not connected by an edge marked by $g_i$ in $\Gamma_y$). It follows that the sets $X_\Delta(r)$ are open. Further, let $x,y\in A$ and $\Delta=B_r(x)$. 
By minimality of the action of $G$ on $X$ there exists $g\in G$ such that $gy\in X_\Delta(r)$. Thus, $B_r(x)=\Delta=B_r(gy)$, which finishes the proof. Combining these results with Theorem \[ThMain\] and taking into account that any subexponentially bounded weakly branch group generates a hyperfinite equivalence relation on $\partial T$ (see [@GrigNekr:Amen], Theorem of Section 3) we obtain: \[CoWeak\] For any subexponentially bounded weakly branch group $G$ acting on a $d$-regular rooted tree ($d\geqslant 2$) the representations from $\{\kappa_p:p\in\mathcal P^{*}\}\cup \{\rho_\omega:\omega\in\mathcal O_{R(G)}\}$ are irreducible, pairwise disjoint (not unitarily equivalent), and pairwise weakly equivalent. Observe that $\mathcal P^{*}$ and $R(G)$ have the cardinality of the continuum. Results similar to Corollary \[CoWeak\] are known for free groups $F_n$, $n\geqslant 2$. Namely, for every $n\geqslant 2$ there exists a continuum of irreducible pairwise disjoint and pairwise weakly equivalent Koopman type representations of $F_n$ (see [@KuSt]). However, the proof of weak equivalence of these representations uses the fact that the reduced $C^{*}$-algebra of $F_n$ is simple (see [@Pow75]). The class of weakly branch groups contains many amenable groups. Recall that groups of intermediate growth are amenable but not elementary amenable. For any nontrivial amenable group the reduced $C^{*}$-algebra is not simple (see [@harpe07]). To the authors’ best knowledge Corollary \[CoWeak\] gives the first example of amenable groups admitting a continuum of pairwise disjoint weakly equivalent irreducible representations. Notice that all $p$-groups of intermediate growth constructed in [@Gr84] and [@Gr85] (for each prime $p$ there are $2^{\aleph_0}$ such groups) as well as Gupta-Sidki $p$-groups are bounded (and therefore subexponentially bounded) amenable branch groups and hence satisfy the conditions of Corollary \[CoWeak\]. Examples and proof of Theorem \[ThGammaSpec\]. 
============================================== One of the basic examples of branch groups is the group $\mathcal{G}$ mentioned in the introduction. This group acts on the boundary of the binary rooted tree (which we will denote by $T$) and is generated by elements $a,b,c,d$ satisfying the following recursions: $$\label{EqGrigRec}a=\sigma\cdot({\mathrm{I}},{\mathrm{I}}),\;b=(a,c),\;c=(a,d),\;d=({\mathrm{I}},b),$$ where ${\mathrm{I}}$ is the identity action (see [@Gr00], [@Grig11] or [@GNS00]). Set $$\label{EqDel}\Delta=\tfrac{1}{4}(a+b+c+d)\in \mathbb C[\mathcal{G}].$$ Let $\kappa$ be the Koopman representation corresponding to the action of $\mathcal{G}$ on $(\partial T,\mu)$, where $\mu$ is the unique $\mathcal G$-invariant probability measure on $\partial T$ (the uniform $(\tfrac{1}{2},\tfrac{1}{2})$ Bernoulli measure on $\partial T=\{0,1\}^\mathbb N$). In [@BG] Bartholdi and the second author developed a method for calculating spectra of Hecke type operators associated with self-similar groups and showed the following: \[ThBGDelta\] For all $x\in\partial T$ one has $$\sigma(\kappa(\Delta))=\sigma(\rho_x(\Delta))=[-\tfrac{1}{2},0]\cup[\tfrac{1}{2},1]\subset\sigma(\lambda_\mathcal{G}(\Delta)).$$ Also in [@BG] spectra of Hecke type operators for other groups are calculated. The main tools the authors used were operator recursions based on relations of type \[EqGrigRec\], the Schur complement, and the reduction of the spectral problem to the problem of finding a suitable invariant set for the associated rational map $\mathbb R^n\to\mathbb R^n$ for some $n$. Notice that the spectrum of $\rho_x(\Delta)$ coincides with the spectrum of the Schreier graph $\Gamma_x$ for every $x\in\partial T$. An important characteristic of the Schreier graph $\Gamma_x$, $x\in\partial T$, is the spectral measure of $\rho_x(\Delta)$ associated to the unit vector $\delta_x\in l^2(\mathcal{G}x)$. For the action of $\mathcal G$ on $\partial T$ these measures were computed in [@GrigKryl]. 
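The recursions \[EqGrigRec\] can be probed numerically: on level $n$ the generators act by $2^n\times 2^n$ permutation matrices, and one can verify the relations used below (the generators are involutions, $bc=d$ and $(b+c+d-e)^2=4e$) as well as the containment of the finite-level spectrum of $\Delta$ in $[-\tfrac{1}{2},0]\cup[\tfrac{1}{2},1]$, in accordance with Theorem \[ThBGDelta\]. A sketch with our own naming (the level $n=6$ is an arbitrary choice; finite levels only approximate $\sigma(\kappa(\Delta))$):

```python
from itertools import product

import numpy as np

# Action of the generators of G on level n of the binary tree, via the
# recursions a = sigma(e,e), b = (a,c), c = (a,d), d = (e,b).
def act(g, w):
    if not w or g == 'e':
        return w
    x, rest = w[0], w[1:]
    if g == 'a':
        return (1 - x,) + rest
    child = {'b': ('a', 'c'), 'c': ('a', 'd'), 'd': ('e', 'b')}[g][x]
    return (x,) + act(child, rest)

n = 6
words = list(product((0, 1), repeat=n))
index = {w: i for i, w in enumerate(words)}

def perm_matrix(g):
    M = np.zeros((len(words), len(words)))
    for w in words:
        M[index[act(g, w)], index[w]] = 1.0
    return M

A, B, C, D = (perm_matrix(g) for g in 'abcd')
I = np.eye(len(words))

# Involutions, bc = d, and (b+c+d-e)^2 = 4e.
for M in (A, B, C, D):
    assert np.allclose(M @ M, I)
assert np.allclose(B @ C, D)
U = B + C + D - I
assert np.allclose(U @ U, 4 * I)

# Finite-level spectrum of Delta = (a+b+c+d)/4 lies in [-1/2,0] u [1/2,1].
eig = np.linalg.eigvalsh((A + B + C + D) / 4)
assert all(-0.5 - 1e-8 <= t <= 1e-8 or 0.5 - 1e-8 <= t <= 1 + 1e-8
           for t in eig)
```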
In this section we compute the spectrum of the Cayley graph of $\mathcal G$ thus proving Theorem \[ThGammaSpec\]. Also, using operator recursions similar to those from [@BG] we prove results analogous to Theorem \[ThBGDelta\] for the Koopman representations $\kappa_{(q,1-q)}$ of $\mathcal{G}$, $0<q<1$, and for the groupoid representation of $\mathcal{G}$ corresponding to the invariant Bernoulli measure $\mu$ on $\partial T$. Surprisingly, the spectrum does not depend on the parameter $q$ defining the measure. Spectrum of the Cayley graph of $\mathcal{G}$. ---------------------------------------------- Here we prove Theorem \[ThGammaSpec\] which is equivalent to: $$\sigma(\lambda_\mathcal{G}(\Delta))=[-\tfrac{1}{2},0]\cup[\tfrac{1}{2},1],$$ where $\lambda_\mathcal{G}$ is the regular representation of $\mathcal G$. Introduce a two-parameter family of elements $Q(\alpha,\beta)=4\Delta-(\alpha+1)a-(\beta+1)e=-\alpha a+b+c+d-(\beta+1)e\in\mathbb C[\mathcal{G}]$, where $e$ is the identity element of $\mathcal{G}$. For a unitary representation $\rho$ of $\mathcal{G}$ let $\Sigma_\rho$ be the set of pairs $(\alpha,\beta)\in\mathbb R^2$ such that $\rho(Q(\alpha,\beta))$ is not invertible. ![The set $\Omega$.[]{data-label="FigSigma"}](Sigma){width="40.00000%"} Let $\Omega=\{(\alpha,\beta):||\alpha|-|\beta||\leqslant 2,\;||\alpha|+|\beta||\geqslant 2\}$ (see Figure \[FigSigma\]). \[LmSigmaAny\] For any unitary representation $\rho$ one has: $\Sigma_\rho\subset\Omega$. Using the basic relations in $\mathcal G$ it is straightforward to verify that $(b+c+d-e)^2=4e$ in $\mathbb C[\mathcal{G}]$. Let $A=\rho(a)$, $U=\rho(\tfrac{1}{2}(b+c+d-e))$. Then $A$ and $U$ are unitary operators such that $A^2=U^2={\mathrm{I}}$. 
For any $\alpha,\beta\in\mathbb R$ one has: $$\rho(Q(\alpha,\beta))=-\alpha A+2U-\beta{\mathrm{I}}.$$ If $|\alpha|+|\beta|<2$ then $$-\alpha A+2U-\beta{\mathrm{I}}=U(2{\mathrm{I}}-\alpha U A-\beta U)$$ is invertible since $\|\alpha U A+\beta U\|\leqslant |\alpha|+|\beta|<2$. If $|\alpha|>|\beta|+2$ then $$-\alpha A+2U-\beta{\mathrm{I}}=A(-\alpha{\mathrm{I}}+2AU-\beta A)$$ is invertible since $\|2AU-\beta A\|\leqslant 2+|\beta|<|\alpha|$. Finally, if $|\beta|>|\alpha|+2$ then $-\alpha A+2U-\beta{\mathrm{I}}$ is invertible since $\|-\alpha A+2U\|\leqslant |\alpha|+2<|\beta|$. By construction, the spectrum of $\lambda_\mathcal{G}(4\Delta-e)$ coincides with the intersection of $\Sigma_{\lambda_\mathcal{G}}$ and the line $\alpha=-1$. By Lemma \[LmSigmaAny\] we obtain $\sigma(\lambda_\mathcal{G}(4\Delta-e))\subset [-3,-1]\cup [1,3]$. It follows that $\sigma(\lambda_\mathcal{G}(\Delta))\subset [-\tfrac{1}{2},0]\cup [\tfrac{1}{2},1]$. The opposite inclusion follows from Theorem \[ThBGDelta\]. In fact, calculations in [@BG] show that for any $t\in\mathbb R$ one has $$\sigma(\rho_x(-t a+b+c+d))=\Lambda_t:=(\{\alpha=t\}\cap \Omega)+1$$ (for instance, $\Lambda_t=[t-1,-t-1]\cup[t+3,-t+3]$ if $-2<t<0$) which is a union of two intervals, an interval, or two points (if $t=0$). Arguments similar to the proof of Theorem \[ThGammaSpec\] show that $\sigma(\lambda_\mathcal{G}(-t a+b+c+d))=\Lambda_t$ for any $t\in\mathbb R$. Spectra of Koopman representations of $\mathcal{G}$. ---------------------------------------------------- The boundary $\partial T$ of a binary rooted tree is homeomorphic to a space of infinite sequences $\{0,1\}^\mathbb{N}$ and hence is homeomorphic to a Cantor set. For any $q\in (0,1)$ define a measure $\nu_q$ on $\{0,1\}$ by $$\nu_q(\{0\})=q,\nu_q(\{1\})=1-q.$$ Let $\mu_q=\nu_q^\mathbb N$ be the corresponding Bernoulli measure on $\partial T$. 
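Since the generator $a$ changes only the first letter of a sequence, the quasi-invariance of $\mu_q$ under $a$ is explicit on cylinders: the measure of a cylinder $[x_1\ldots x_n]$ transforms by the factor $\nu_q(1-x_1)/\nu_q(x_1)$. A small numerical sketch (the value $q=0.3$ is a sample and the function names are ours):

```python
from itertools import product

# mu_q = Bernoulli(q, 1-q) on {0,1}^N. The generator a flips the first
# letter, so on a cylinder [x_1...x_n] the measure changes by the
# factor nu_q(1-x_1)/nu_q(x_1); in particular mu_q is quasi-invariant,
# and invariant only for q = 1/2.
q = 0.3
nu = {0: q, 1: 1 - q}

def mu(cylinder):
    prob = 1.0
    for x in cylinder:
        prob *= nu[x]
    return prob

def a(cylinder):  # flip the first letter
    return (1 - cylinder[0],) + cylinder[1:]

for w in product((0, 1), repeat=5):
    ratio = mu(a(w)) / mu(w)
    expected = nu[1 - w[0]] / nu[w[0]]
    assert abs(ratio - expected) < 1e-12
```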
For $q\in(0,1)$ let $\kappa_q$ be the Koopman representation associated to the action of $\mathcal{G}$ on $(\partial T,\mu_q)$ (this is the representation $\kappa_p$ with $p=(q,1-q)$ in the notations of Section \[SubsecAp\]). We prove that the spectrum of $\kappa_q(\Delta)$ (see \[EqDel\]) does not depend on the parameter $q$ and thus coincides with the spectrum given by Theorem \[ThBGDelta\]. Observe that the representations $\kappa_q$ for $q\neq\tfrac{1}{2}$ are irreducible (see [@DuGr15]), but $\kappa_{\frac{1}{2}}$ is a direct sum of countably many finite-dimensional irreducible representations (see [@BG]). \[PropKoopGrig\] For every $q\in (0,1)$ one has $\sigma(\kappa_q(\Delta))=[-\tfrac{1}{2},0]\cup[\tfrac{1}{2},1]$. Fix $q\in (0,1),q\neq\tfrac{1}{2}$. Set $$A=\kappa_q(a),B=\kappa_q(b),C=\kappa_q(c),D=\kappa_q(d).$$ Consider the operators $$\label{EqQab}Q(\alpha,\beta)=\kappa_q(4\Delta-(\alpha+1)a-(\beta+1)e)=-\alpha A+B+C+D-(\beta+1){\mathrm{I}}$$ on $L^2(\partial T,\mu_q)$. Denote by $\Sigma$ the set of pairs $(\alpha,\beta)\in\mathbb R^2$ such that $Q(\alpha,\beta)$ is not invertible. Proposition \[PropKoopGrig\] is a consequence of the following: \[PropSigma\] $\Sigma=\Omega$. In [@BG] the authors proved Proposition \[PropSigma\] in the case $q=\tfrac{1}{2}$. For the proof they considered restrictions $Q_n(\alpha,\beta)$ of $Q(\alpha,\beta)$ to $\mathcal{G}$-invariant finite-dimensional subspaces of $L^2(\partial T,\mu)$ and constructed operator recursions for $Q_n(\alpha,\beta)$ to describe the spectra of $Q_n(\alpha,\beta)$ and $Q(\alpha,\beta)$. In the case $q\neq\frac{1}{2}$ the representation $\kappa_q$ is irreducible and so does not have proper closed invariant subspaces. We need to modify the arguments from [@BG] and use operator recursions for infinite-dimensional Hilbert spaces. Recall that $V_n$ is the set of vertices of level $n$ in $T$. 
For every $n$ encode vertices of $V_n$ by $\{0,1\}^n$ so that for every vertex $v=x_1x_2\ldots x_n\in V_n$ one has: $$\mu_q(\partial T_v)=q^{n-\sum x_i}(1-q)^{\sum x_i}.$$ Let $v_0$ and $v_1$ be the vertices of the first level of $T$. Denote $$\mathcal H=L^2(\partial T,\mu_q),\;\;\mathcal H_j=\{f\in\mathcal H:{\mathrm{supp}}(f)\subset \partial T_{v_j}\},\;\;\text{where}\;\;j=0,1.$$ Observe that $\mathcal H_0$ and $\mathcal H_1$ are isomorphic to $\mathcal H$ via the isometries $I_j:\mathcal H_j\to \mathcal H,j=0,1,$ given by: $$(I_0f)(x)=\sqrt qf(0x),\;\;(I_1f)(x)=\sqrt{1-q}f(1x),$$ where $x\in\partial T$ is encoded by sequences from $\{0,1\}^{\mathbb N}$. Using the decomposition $\mathcal H=\mathcal H_0 \oplus \mathcal H_1$ and identifying the spaces $\mathcal H_i$ with $\mathcal H$ via the isometries $I_i$, $i=0,1$, we can write every operator on $\mathcal H$ in a $2\times 2$ block matrix form whose entries are also operators on $\mathcal H$. The operators of the Koopman representation $\kappa_q$ corresponding to the generators of $\mathcal{G}$ can be written as follows: $$\begin{gathered} \label{EqRestr}\begin{split}A=\begin{bmatrix}0&{\mathrm{I}}\\ {\mathrm{I}}&0 \end{bmatrix},\;\; B=\begin{bmatrix}A&0\\0&C \end{bmatrix},\\ C=\begin{bmatrix}A&0\\0&D \end{bmatrix},\;\; D=\begin{bmatrix}{\mathrm{I}}&0\\0&B \end{bmatrix}.\end{split}\end{gathered}$$ In particular, the recursions do not depend on the parameter $q$. It follows that the operator $Q(\alpha,\beta)$ can be written as follows: $$Q(\alpha,\beta)=\begin{bmatrix}2A-\beta{\mathrm{I}}&-\alpha {\mathrm{I}}\\-\alpha {\mathrm{I}}&B+C+D-(\beta+1){\mathrm{I}}\end{bmatrix}.$$ Notice that $(2A-\beta{\mathrm{I}})(2A+\beta{\mathrm{I}})=(4-\beta^2){\mathrm{I}}$. Assume that $\beta\neq \pm 2$. 
Straightforward calculations show that $$\label{EqRec}Q(\alpha,\beta)\begin{bmatrix}{\mathrm{I}}&\frac{\alpha(2A+\beta{\mathrm{I}})}{4-\beta^2}\\0&{\mathrm{I}}\end{bmatrix}= \begin{bmatrix}2A-\beta{\mathrm{I}}&0\\-\alpha{\mathrm{I}}&Q(\frac{2\alpha^2}{4-\beta^2},\beta+\frac{\alpha^2\beta}{4-\beta^2}) \end{bmatrix}.$$ Following [@BG] introduce a map on $\mathbb R^2\setminus(\mathbb R\times\{\pm 2\})$ by $$F(\alpha,\beta)=(\frac{2\alpha^2}{4-\beta^2},\beta+\frac{\alpha^2\beta}{4-\beta^2}).$$ Also, for $n\in\mathbb N$ set $(\alpha_n,\beta_n)=F^n(\alpha,\beta)$. Since $\sigma(A)=\{-1,1\}$, so that $2A-\beta{\mathrm{I}}$ is invertible for $\beta\neq\pm 2$, from \[EqRec\] we obtain \[PropInv\] If $\beta\neq\pm 2$ then $(\alpha,\beta)\in\Sigma$ if and only if $F(\alpha,\beta)\in\Sigma$. Next, let us prove another auxiliary statement. \[LmQ\] $\Sigma\supset \{(\alpha,\beta):|\alpha-2|=|\beta|\}$. Since $(B+C+D-{\mathrm{I}})^2=4{\mathrm{I}}$ and clearly $B+C+D-{\mathrm{I}}$ is not a scalar operator we obtain that $\sigma(B+C+D-{\mathrm{I}})=\{-2,2\}$. Thus, $-1$ and $3$ are eigenvalues of the operator $B+C+D$. Let $\eta\in\mathcal H$, $\eta\neq 0$, be such that $(B+C+D+{\mathrm{I}})\eta=0$. Let $\alpha=\beta+2$. Then $\alpha_1=\beta_1+2$ and $\beta_1=\frac{4\beta}{2-\beta}$. The map $$h(z)=\frac{4z}{2-z}$$ has two fixed points on the Riemann sphere: a repelling fixed point $0$ and an attracting fixed point $-2$, and for every $z\neq 0$ $h^n(z)\to -2$ exponentially fast. Thus, if $\beta\neq 0$, then $$\beta_n\to -2,\;\;\alpha_n\to 0$$ exponentially fast. It follows that $Q(\alpha_n,\beta_n)\eta=-\alpha_n A\eta-(\beta_n+2)\eta\to 0$ exponentially fast. Further, applying identity \[EqRec\] to the block vector $\begin{bmatrix}0\\ \xi\end{bmatrix}$, where $\xi\in\mathcal H$, we obtain: $$Q(\alpha,\beta)\begin{bmatrix}\frac{\alpha(2A+\beta{\mathrm{I}})}{4-\beta^2}\xi\\ \xi\end{bmatrix}= \begin{bmatrix} 0\\Q(\alpha_1,\beta_1)\xi \end{bmatrix}.$$ Thus, there exists $\xi_1$ such that $\|\xi_1\|\geqslant\|\xi\|$ and $\|Q(\alpha_1,\beta_1)\xi\|=\|Q(\alpha,\beta)\xi_1\|$. 
By induction, we get that $\|Q(\alpha,\beta)\eta_n\|=\|Q(\alpha_n,\beta_n)\eta\|$ for some $\eta_n$ with $\|\eta_n\|\geqslant\|\eta\|$. Since $\|Q(\alpha_n,\beta_n)\eta\|$ converges to $0$, we obtain that $Q(\alpha,\beta)$ is not invertible. The case $\alpha=2-\beta$ can be treated similarly. Following [@BG] consider the curves $$\gamma_{n,j}=\{(\alpha,\beta):4-\beta^2+\alpha^2-4\alpha\cos(\tfrac{2\pi j}{2^n})=0\}.$$ Observe that $\gamma_{0,j}=\{(\alpha,\beta):|\alpha-2|=|\beta|\}$. Straightforward computations show that $F(\gamma_{n,j})\subset\gamma_{n-1,j}$ for all $n,j\in\mathbb N$. From Lemmas \[PropInv\] and \[LmQ\], taking into account that $\Sigma$ is closed, we obtain that $\Sigma\supset \gamma_{n,j}$ for all $n,j$. Notice that the curve $\gamma_{n,j}$ can be written as: $$\beta=\pm \sqrt{\alpha^2-4\alpha\cos(\tfrac{2\pi j}{2^n})+4}.$$ Since the union of the curves $\gamma_{n,j}$ is dense in the region $\Omega=\{(\alpha,\beta):||\alpha|-|\beta||\leqslant 2,\;||\alpha|+|\beta||\geqslant 2\}$ we obtain that $\Sigma\supset \Omega$. From Lemma \[LmSigmaAny\] we deduce that $\Sigma=\Omega$, which finishes the proof. Spectra of groupoid representations of $\mathcal{G}$. ----------------------------------------------------- Let $\pi$ be the groupoid representation of $\mathcal{G}$ corresponding to the action of $\mathcal{G}$ on $(\partial T,\mu)$, where $\mu=\{\tfrac{1}{2},\tfrac{1}{2}\}^{\otimes\mathbb N}$ is the invariant Bernoulli measure on $\partial T$. The following proposition follows from Theorems \[ThBGDelta\] and \[ThMain\]. \[PropGroupGrig\] $\sigma(\pi(\Delta))=[-\tfrac{1}{2},0]\cup[\tfrac{1}{2},1]$. To give another illustration of the method of operator recursions we provide a sketch of a direct proof of Proposition \[PropGroupGrig\]. Let $v_0$ and $v_1$ be the vertices of the first level of $T$. 
For $i,j\in\{0,1\}$ introduce the subspaces $$\mathcal H_{i,j}=\{\eta\in L^2(\mathcal R,\nu):{\mathrm{supp}}(\eta)\subset \partial T_{v_i}\times\partial T_{v_j}\}.$$ One has: $$L^2(\mathcal R,\nu)=\mathcal H_{0,0}\oplus \mathcal H_{1,0}\oplus \mathcal H_{0,1}\oplus \mathcal H_{1,1}.$$ Recall that $x,y\in\partial T$ belong to the same $\mathcal{G}$-orbit if and only if $x_i=y_i$ for all large enough $i$ (see [@Grig11], Theorem 7.3). This implies that the subspaces $\mathcal H_{i,j}$ are canonically isomorphic to $L^2(\mathcal R,\nu)$. Thus, every operator acting on $L^2(\mathcal R,\nu)$ can be written in a $4\times 4$ block matrix form whose entries are operators on $L^2(\mathcal R,\nu)$. Set $$A=\pi(a),B=\pi(b),C=\pi(c),D=\pi(d).$$ It is straightforward to check that every operator $X$ from the latter list can be written as $$X=\begin{bmatrix}Y&0_2\\0_2&Y \end{bmatrix},$$ where $0_2$ is the $2\times 2$ zero matrix and $Y$ is the $2\times 2$ block matrix representation for the corresponding operator from \[EqRestr\]. Similarly to \[EqQab\], introduce the operator $$\label{EqQab1}Q(\alpha,\beta)=\pi(4\Delta-(\alpha+1)a-(\beta+1)e)=-\alpha A+B+C+D-(\beta+1){\mathrm{I}}$$ on $L^2(\mathcal R,\nu)$ and denote by $\Sigma$ the set of pairs $(\alpha,\beta)\in\mathbb R^2$ such that $Q(\alpha,\beta)$ is not invertible. One has: $$Q(\alpha,\beta)=\begin{bmatrix}Y&0_2\\0_2&Y \end{bmatrix},\;\;\text{where}\;\;Y=\begin{bmatrix}2A-\beta{\mathrm{I}}&-\alpha {\mathrm{I}}\\-\alpha {\mathrm{I}}&B+C+D-(\beta+1){\mathrm{I}}\end{bmatrix}.$$ As in the proof of Proposition \[PropSigma\] one can show that $\Sigma=\Omega$, from which the statement of Proposition \[PropGroupGrig\] follows easily. Acknowledgement {#acknowledgement .unnumbered} --------------- The authors are grateful to Maria Gabriella Kuhn for useful discussions. [99]{} S. Adams, G. Elliott, T. Giordano, *Amenable actions of groups*, Trans. Amer. Math. Soc., [**344**]{} (1994), 803-822. C. 
Anantharaman-Delaroche, *On spectral characterization of amenability*, Israel Journal of Mathematics, [**137**]{} (2003), pp. 1-33. U. Bader and R. Muchnik, *Boundary unitary representations - irreducibility and rigidity*, Journal of Modern Dynamics, 5 (2011), no. 1, pp. 49-69. L. Bartholdi and R. I. Grigorchuk, *On the Spectrum of Hecke Type Operators Related to Some Fractal Groups,* Tr. Mat. Inst. im. V.A. Steklova, Ross. Akad. Nauk [**231**]{} (2000), 5-45 \[Proc. Steklov Inst. Math. [**231**]{} (2000), 1-41\]. L. Bartholdi, R. I. Grigorchuk, and Z. Šunić, *Branch Groups*, Handbook of Algebra (North-Holland, Amsterdam, 2003), Vol. 3, pp. 989-1112. M. E. B. Bekka and M. Cowling, *Some irreducible unitary representations of $G(K)$ for a simple algebraic group $G$ over an algebraic number field $K$*, Math. Z. [**241**]{} (2002), no. 4, 731-741. B. Bekka, P. de la Harpe and A. Valette, *Kazhdan's Property (T),* Cambridge University Press, 2008. O. Bratteli and D. W. Robinson, [*Operator algebras and quantum statistical mechanics. $C^*$- and $W^*$-algebras. Symmetry Groups. Decomposition of States.*]{} Texts and Monographs in Physics. Berlin-Heidelberg-New York, Springer-Verlag, 1987. R. V. Chacon, N. A. Friedman, *Approximation and invariant measures,* Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, [**3**]{} (1965), 286-295. T. Chow, *A spectral theory for direct integrals of operators*, Math. Ann. [**188**]{} (1970), no. 4, pp. 285-303. M. Cowling and T. Steger, *The irreducibility of restrictions of unitary representations to lattices*, J. Reine Angew. Math. [**420**]{} (1991), 85-98. J. Dixmier, *Les C$^*$-algèbres et leurs représentations*, Gauthier-Villars, 1969. A. Dudko, R. Grigorchuk, *On diagonal actions of branch groups and the corresponding characters*, arXiv:math.RT/1412.5476. A. Dudko, R.
Grigorchuk, *On irreducibility and disjointness of Koopman and quasi-regular representations of weakly branch groups*, Modern Theory of Dynamical Systems: A Tribute to Dmitry Victorovich Anosov, AMS Cont. Math. Ser. (2016), to appear. A. Dudko and K. Medynets, *On Characters of Inductive Limits of Symmetric Groups*, Journal of Functional Analysis, [**264**]{} (2013), no.7, 1565-1598. J. Feldman, C. Moore, *Ergodic equivalence relations, cohomology, and von Neumann algebras. I,* Trans. Amer. Math. Soc., [**234**]{} (1977), no. 2, 289-324. J. Feldman, C. Moore, *Ergodic equivalence relations, cohomology, and von Neumann algebras. II,* Trans. Amer. Math. Soc., [**234**]{} (1977), no. 2, 325-359. A. Figá-Talamanca, M. A. Picardello, *Harmonic analysis on free groups*, Lecture Notes in Pure and Applied Mathematics, vol. 87, Marcel Dekker Inc., New York, 1983. A. Figá-Talamanca, T. Steger, *Harmonic analysis for anisotropic random walks on homogeneous trees,* Mem. Amer. Math. Soc. [**110**]{} (1994), no. 531, xii+68. E. Glasner, *Ergodic theory via joinings*, Math. Surv. and Mon., [**101**]{}, Amer. Math. Soc. (2003). R. I. Grigorchuk, *Burnside’s Problem on Periodic Groups,* Funkts. Anal. Prilozh. [**14**]{} (1), 53-54 (1980) \[Funct. Anal. Appl. [**14**]{}, 41-43 (1980)\]. R. I. Grigorchuk, *Degrees of growth of finitely generated groups and the theory of invariant means*, (Russian) Izv. Akad. Nauk SSSR Ser. Mat. [**48**]{} (1984), no. 5, 939-985. R. I. Grigorchuk, *Degrees of growth of p-groups and torsion-free groups* (Russian) Mat. Sb. (N.S.) [**126**]{} (168) (1985), no. 2, 194-214. R. I. Grigorchuk, *Just Infinite Branch Groups,* New Horizons in Pro-p Groups (Birkhauser, Boston, MA, 2000), Prog. Math. [**184**]{}, pp. 121-179. R. I. Grigorchuk, *Some topics in the dynamics of group actions on rooted trees*, Proceedings of the Steklov Institute of Mathematics, 273, Issue 1 (2011), pp. 64-175. R. I. Grigorchuk, D. Lenz and T. 
Smirnova-Nagnibeda, *Spectra of Schreier graphs of Grigorchuk’s group and Schroedinger operators with aperiodic order*, arXiv:1412.6822. R. I. Grigorchuk and V. Nekrashevich, *Amenable actions of non-amenable groups*, J. of Math. Sci., [**140**]{} (2007), no. 3, pp. 391-397. R. Grigorchuk, V. Nekrashevich, and Z. Sunic, *From self-similar groups to self-similar sets and spectra*, Fractal Geometry and Stochastics V, [**70**]{}, series Progress in Probability, pp. 175-207. R. I. Grigorchuk, V. V. Nekrashevich, and V. I. Sushchanskii, *Automata, Dynamical Systems, and Groups,* Tr. Mat. Inst. im. V.A. Steklova, Ross. Akad. Nauk [**231**]{}, 134-214 (2000) \[Proc. Steklov Inst. Math. [**231**]{}, 128-203 (2000)\]. R. I. Grigorchuk and Ya. Krylyuk, *The spectral measure of the Markov operator related to 3-generated 2-group of intermediate growth and its Jacobi parameters*, Algebra Discrete Math., [**13**]{} (2012), no 2, pp. 237-272. P. de la Harpe, *On simplicity of reduced $C^*$–algebras of groups*, Bull. London Math. Soc., [**39**]{} (2007), pp. 1-26. N. Higson and G. Kasparov, *Operator K-theory for groups which act properly and isometrically on Hilbert space*, Electron. Res. Announc. Amer. Math. Soc. [**3**]{} (1997), 131-142 (electronic). R. Kadison and J. Ringrose, *Fundamentals of the theory of operator algebras. Vol. I. Elementary theory.* Pure and Applied Mathematics, 100. Academic Press, Inc. \[Harcourt Brace Jovanovich, Publishers\], New York, 1983. R. Kadison and J. Ringrose, *Fundamentals of the theory of operator algebras. Vol. II. Advanced theory.* Pure and Applied Mathematics, 100. Academic Press, Inc., Orlando, FL, 1986. I. Lysenok, *A set of defining relations for the Grigorchuk group*, Mat. Zametki, [**38**]{} (1985), no. 4, pp. 503–516. S. Kerov and A. Vershik, *Characters and factor representations of the infinite symmetric group*, Soviet Math. Dokl., [**23** ]{}(1981) No.2, 389-392. H. Kesten, *Symmetric random walks on groups*, Trans. Amer. Math. 
Soc., [**92**]{} (1959), 336-354. G. Kuhn, *Amenable actions and weak containment of certain representations of discrete groups*, Proc. Amer. Math. Soc., [**122**]{} (1994), 751-757. G. Kuhn and T. Steger, *More irreducible boundary representations of free groups*, Duke Math. J., [**82**]{}, no. 2 (1996), 381-435. G. W. Mackey, *The Theory of Unitary Group Representations*, Univ. Chicago Press, Chicago, 1976, Chicago Lect. Math. M. Naimark, *Normed rings*, Noordhoff, Groningen, 1959, xvi+560pp. V. Nekrashevych, *Self-similar Groups,* Am. Math. Soc., Providence, RI, 2005, Math. Surv. Monogr. [**117**]{}. M. Pichot, *Sur la théorie spectrale des relations d’équivalence mesurées*, Journal of the Inst. of Math. Jussieu (2007), [**6**]{} (3), 453-500. R. T. Powers, *Simplicity of the $C^{\ast}$-algebra associated with the free group on two generators*, Duke Math. J., [**42**]{} (1975), pp. 151-156. V. A. Rokhlin, *On the basic ideas of measure theory*, Mat. Sb. [**25**]{} (1) (1949), 107-150. Engl. transl.: *On the fundamental ideas of measure theory*, (Am. Math. Soc., Providence, RI, 1952), AMS Transl., No. 71, 55pp. V. A. Rokhlin, *Selected topics from the metric theory of dynamical systems*, Uspekhi Mat. Nauk, [**4**]{}:2(30) (1949), pp. 57-128 (Russian). H.-L. Skudlarek, [*Die unzerlegbaren Charaktere einiger diskreter Gruppen*]{}, Math. Ann., [**233**]{} (1976), 213–231. M. Takesaki, *Theory of operator algebras I*, Encyclopedia of Mathematical Sciences, vol. 124, 2002. M. Takesaki, *Theory of operator algebras III*, Encyclopedia of Mathematical Sciences, vol. 127, 2003. R. J. Zimmer, *Hyperfinite factors and amenable ergodic actions*, Inv. Math., [**41**]{} (1977), 23-31.
--- abstract: 'Metric-based meta-learning techniques have successfully been applied to few-shot classification problems. In this paper, we propose to leverage cross-modal information to enhance metric-based few-shot learning methods. Visual and semantic feature spaces have different structures by definition. For certain concepts, visual features might be richer and more discriminative than text ones, while for others the inverse might be true. Moreover, when the support from visual information is limited in image classification, semantic representations (learned from unsupervised text corpora) can provide strong prior knowledge and context to help learning. Based on these two intuitions, we propose a mechanism that can adaptively combine information from both modalities according to new image categories to be learned. Through a series of experiments, we show that by this adaptive combination of the two modalities, our model outperforms current uni-modality few-shot learning methods and modality-alignment methods by a large margin on all benchmarks and few-shot scenarios tested. Experiments also show that our model can effectively adjust its focus on the two modalities. The improvement in performance is particularly large when the number of shots is very small.' author: - | Chen Xing[^1]\ College of Computer Science,\ Nankai University, Tianjin, China\ Element AI, Montreal, Canada\ Negar Rostamzadeh\ Element AI, Montreal, Canada\ Boris N. Oreshkin\ Element AI, Montreal, Canada\ Pedro O. Pinheiro\ Element AI, Montreal, Canada bibliography: - 'bibliography.bib' title: 'Adaptive Cross-Modal Few-shot Learning' --- Introduction {#sec:intro} ============ Deep learning methods have achieved major advances in areas such as speech, language and vision [@lecun2015nature]. These systems, however, usually require a large amount of labeled data, which can be impractical or expensive to acquire.
Limited labeled data lead to overfitting and generalization issues in classical deep learning approaches. On the other hand, existing evidence suggests that the human visual system is capable of operating effectively in the small-data regime: humans can learn new concepts from very few samples, by leveraging prior knowledge and context [@landau1988importance; @markman1991categorization; @smith2017developmental]. The problem of learning new concepts with a small number of labeled data points is usually referred to as *few-shot learning* [@bart2005fsl; @fink2005fsl; @li2006one; @lake2011one] (FSL). Most approaches addressing few-shot learning are based on the *meta-learning* paradigm [@schmidhuber1987srl; @bengio1992oban; @thrun1998lifelong; @hochreiter2001learning], a class of algorithms and models focusing on learning how to (quickly) learn new concepts. Meta-learning approaches work by learning a parameterized function that embeds a variety of learning tasks and can generalize to new ones. Recent progress in few-shot image classification has primarily been made in the context of unimodal learning. In contrast to this, employing data from another modality can help when the data in the original modality is limited. For example, strong evidence supports the hypothesis that language helps toddlers recognize new visual objects [@jackendoff1987beyond; @smith2005development]. This suggests that semantic features from text can be a powerful source of information in the context of few-shot image classification. Exploiting an auxiliary modality (*e.g.*, attributes, unlabeled text corpora) to help image classification when data from the visual modality is limited has mostly been driven by *zero-shot learning* [@larochelle2008zsl; @palatucci2009zsl] (ZSL). ZSL aims at recognizing categories whose instances have not been seen during training. In contrast to few-shot learning, no small set of labeled samples from the original modality is available to help recognize new categories.
Therefore, most approaches consist of aligning the two modalities during training. Through this *modality-alignment*, the modalities are mapped together and forced to have the same semantic structure. This way, knowledge from the auxiliary modality is transferred to the visual side for new categories at test time [@frome2013devise]. However, visual and semantic feature spaces have heterogeneous structures by definition. For certain concepts, visual features might be richer and more discriminative than text ones, while for others the inverse might be true. Figure \[fig:visual-semantic\] illustrates this remark. Moreover, when the number of support images from the visual side is very small, the information provided by this modality tends to be noisy and local. On the contrary, semantic representations (learned from large unsupervised text corpora) can act as more general prior knowledge and context to help learning. Therefore, instead of aligning the two modalities (to transfer knowledge to the visual modality), for few-shot learning, in which information is provided by both modalities at test time, it is better to treat them as two independent knowledge sources and adaptively exploit both modalities according to different scenarios. Towards this end, we propose the *Adaptive Modality Mixture Mechanism* (AM3), an approach that adaptively and selectively combines information from two modalities, visual and semantic, for few-shot learning. ![Concepts occupy different visual and semantic feature spaces. *(Left)* Some categories may have similar visual features and dissimilar semantic features. (*Right*) Others can possess the same semantic label but very distinct visual features. Our method adaptively exploits both modalities to improve classification performance in the low-shot regime.[]{data-label="fig:visual-semantic"}](figures/visual-semantic.pdf){width="1\linewidth"} AM3 is built on top of metric-based meta-learning approaches.
These approaches perform classification by comparing distances in a learned metric space (from visual data). On top of that, our method also leverages text information to improve classification accuracy. AM3 performs classification in an adaptive convex combination of the two distinctive representation spaces with respect to image categories. With this mechanism, AM3 can leverage the benefits of both spaces and adjust its focus accordingly. For cases like Figure \[fig:visual-semantic\](Left), AM3 focuses more on the semantic modality to obtain general context information, while for cases like Figure \[fig:visual-semantic\](Right), it focuses more on the visual modality to capture the rich local visual details needed to learn new concepts. Our main contributions can be summarized as follows: (i) we propose the adaptive modality mixture mechanism (AM3) for cross-modal few-shot classification. AM3 adapts to few-shot learning better than modality-alignment methods by adaptively mixing the semantic structures of the two modalities. (ii) We show that our method achieves a considerable boost in performance over different metric-based meta-learning approaches. (iii) AM3 outperforms by a considerable margin the current (single-modality and cross-modality) state of the art in few-shot classification on different datasets and different numbers of shots. (iv) We perform quantitative investigations to verify that our model can effectively adjust its focus on the two modalities according to different scenarios. Related Work {#sec:related_work} ============ #### Few-shot learning. Meta-learning has a prominent history in machine learning [@schmidhuber1987srl; @bengio1992oban; @thrun1998lifelong]. Due to advances in representation learning methods [@goodfellow2016deep] and the creation of new few-shot learning datasets [@lake2011one; @vinyals2016matching], many deep meta-learning approaches have been applied to address the few-shot learning problem.
These methods can be roughly divided into two main types: metric-based and gradient-based approaches. Metric-based approaches aim at learning representations that minimize intra-class distances while maximizing the distance between different classes. These approaches rely on an episodic training framework: the model is trained with sub-tasks (episodes) in which there are only a few training samples for each category. For example, matching networks [@vinyals2016matching] follow a simple nearest-neighbour framework. In each episode, they use an attention mechanism (over the encoded support) as a similarity measure for one-shot classification. In prototypical networks [@snell2017prototypical], a metric space is learned where embeddings of queries of one category are close to the centroid (or prototype) of supports of the same category, and far away from centroids of other classes in the episode. Due to the simplicity and good performance of this approach, many methods extended this work. For instance, Ren *et al.* [@ren2018meta] propose a semi-supervised few-shot learning approach and show that leveraging unlabeled samples outperforms purely supervised prototypical networks. Wang *et al.* [@wang2018lowshot] propose to augment the support set by generating hallucinated examples. Task-dependent adaptive metric (TADAM) [@oreshkin2018tadam] relies on conditional batch normalization [@dumoulin2018feature] to provide task adaptation (based on task representations encoded by visual features) to learn a task-dependent metric space. Gradient-based meta-learning methods aim at training models that can generalize well to new tasks with only a few fine-tuning updates. Most of these methods are built on top of the model-agnostic meta-learning (MAML) framework [@finn2017model]. Given the universality of MAML, many follow-up works were recently proposed to improve its performance on few-shot learning [@nichol2018reptile; @lacoste2017deep].
Kim *et al.* [@bmaml2018] and Finn *et al.* [@finnXL18] propose probabilistic extensions to MAML trained with variational approximation. Conditional class-aware meta-learning (CAML) [@jiang2018learning] conditionally transforms embeddings based on a metric space that is trained with prototypical networks to capture inter-class dependencies. Latent embedding optimization (LEO) [@rusu2018meta] aims to tackle MAML’s problem of only using a few updates in a low-data regime to train models in a high-dimensional parameter space. The model employs a low-dimensional latent embedding space for the updates and then decodes the actual model parameters from the low-dimensional latent representations. This simple yet powerful approach achieves state-of-the-art results on different few-shot classification benchmarks. Other meta-learning approaches for few-shot learning include using a memory architecture either to store exemplar training samples [@santoro16mem] or to directly encode a fast adaptation algorithm [@ravi2016optimization]. Mishra *et al.* [@mishra2018simple] use temporal convolution to achieve the same goal. The approaches mentioned above rely solely on visual features for few-shot classification. Our contribution is orthogonal to current metric-based approaches and can be integrated into them to boost performance in few-shot image classification. #### Zero-shot learning. Current ZSL methods rely mostly on visual-auxiliary modality alignment [@frome2013devise; @xian2017zero]. In these methods, samples of the same class from the two modalities are mapped together so that the two modalities obtain the same semantic structure. There are three main families of modality-alignment methods: representation space alignment, representation distribution alignment and data-synthesis alignment.
Representation space alignment methods either map the visual representation space to the semantic representation space [@norouzi2013zero; @socher2013zsl; @frome2013devise], or map the semantic space to the visual space [@zhang2017learning]. Distribution alignment methods focus on making the alignment of the two modalities more robust and balanced on unseen data [@schonfeld2018generalized]. ReViSE [@hubert2017learning] minimizes the maximum mean discrepancy (MMD) between the distributions of the two representation spaces to align them. CADA-VAE [@schonfeld2018generalized] uses two VAEs [@kingma2013auto] to embed information for both modalities and aligns the distributions of the two latent spaces. Data-synthesis methods rely on generative models to generate images or image features as data augmentation [@zhu2018generative; @xian2018feature; @mishra2018generative; @wang2018lowshot] for unseen data, to train the mapping function for more robust alignment. ZSL does not have access to any visual information when learning new concepts. Therefore, ZSL models have no choice but to align the two modalities. This way, at test time the image query can be directly compared to auxiliary information for classification [@zhang2017learning]. Few-shot learning, on the other hand, has access to a small number of support images in the original modality at test time. This makes alignment methods from ZSL seem unnecessary and too rigid for FSL. For few-shot learning, it would be better if we could preserve the distinct structures of both modalities and adaptively combine them for classification according to different scenarios. In Section \[sec:experiments\] we show that by doing so, AM3 outperforms directly applying modality-alignment methods to few-shot learning by a large margin. Method {#sec:method} ====== In this section, we explain how AM3 adaptively leverages text data to improve few-shot image classification.
We start with a brief explanation of episodic training for few-shot learning and a summary of prototypical networks, followed by the description of the proposed adaptive modality mixture mechanism. Preliminaries ------------- ### Episodic Training Few-shot learning models are trained on a labeled dataset $\mathcal{D}_{\text{train}}$ and tested on $\mathcal{D}_{\text{test}}$. The class sets are disjoint between $\mathcal{D}_{\text{train}}$ and $\mathcal{D}_{\text{test}}$. The test set has only a few labeled samples per category. Most successful approaches rely on an *episodic* training paradigm: the few-shot regime faced at test time is simulated by sampling small subsets from the large labeled set $\mathcal{D}_{\text{train}}$ during training. In general, models are trained on $K$-shot, $N$-way episodes. Each episode $e$ is created by first sampling $N$ categories from the training set and then sampling two sets of images from these categories: (i) the *support* set $\mathcal{S}_e=\{(s_i,y_i)\}_{i=1}^{N\times K}$ containing $K$ examples for each of the $N$ categories and (ii) the *query* set $\mathcal{Q}_e=\{(q_j,y_j)\}_{j=1}^{Q}$ containing different examples from the same $N$ categories. The episodic training for few-shot classification is achieved by minimizing, for each episode, the loss of the prediction on samples in the query set, given the support set. The model is a parameterized function and the loss is the negative log-likelihood of the true class of each query sample: $$\mathcal{L}(\theta) = \underset{(\mathcal{S}_e,\mathcal{Q}_e)}{\mathbb{E}}\; - \sum_{t=1}^{Q}\; \log p_{\theta}(y_t |q_t, \mathcal{S}_e)\;, \label{eq:loglikelihood}$$ where $(q_t,y_t)\in\mathcal{Q}_e$ and $\mathcal{S}_e$ are, respectively, the sampled query and support set at episode $e$, and $\theta$ are the parameters of the model. ### Prototypical Networks We build our model on top of metric-based meta-learning methods.
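The episode construction described above can be sketched as follows. This is a minimal, hedged illustration in plain Python, not the authors' code: the function name `sample_episode` and the representation of the dataset as a list of `(sample, label)` pairs are our own assumptions.

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way, k_shot, n_query):
    """Sample one K-shot, N-way episode: a support set with k_shot
    examples per class and a disjoint query set with n_query per class."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    # First sample N categories, then disjoint support/query examples per category.
    classes = random.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        xs = random.sample(by_class[c], k_shot + n_query)
        support += [(x, c) for x in xs[:k_shot]]
        query += [(x, c) for x in xs[k_shot:]]
    return support, query
```

During training, the expectation in Equation \[eq:loglikelihood\] would then be estimated by averaging the query-set loss over many episodes sampled this way.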
We choose prototypical networks [@snell2017prototypical] to explain our model due to their simplicity. We note, however, that the proposed method can potentially be applied to any metric-based approach. Prototypical networks use the support set to compute a centroid (prototype) for each category (in the sampled episode) and query samples are classified based on the distance to each prototype. The model is a convolutional neural network [@lecun98cnn] $f:\mathbb{R}^{n_v}\to\mathbb{R}^{n_p}$, parameterized by $\theta_f$, that learns an $n_p$-dimensional space where samples of the same category are close and those of different categories are far apart. For every episode $e$, each embedding prototype $\mathbf{p}_c$ (of category $c$) is computed by averaging the embeddings of all support samples of class $c$: $$\mathbf{p}_{c}=\frac{1}{|\mathcal{S}_e^c|}\sum_{(s_i,y_i)\in\mathcal{S}_e^c} f(s_{i})\;,$$ where $\mathcal{S}_e^c\subset\mathcal{S}_e$ is the subset of the support set belonging to class $c$. The model produces a distribution over the $N$ categories of the episode based on a softmax [@bridle1990softmax] over the (negative) distances $d$ of the embedding of the query $q_t$ (from category $c$) to the embedded prototypes: $$\label{eq:metric} p(y = c | q_t, S_e, \theta) = \frac{\text{exp}(-d(f(q_{t}), \mathbf{p}_{c} ))}{\sum_k \text{exp}(-d(f(q_{t}), \mathbf{p}_{k} ))} \;.$$ We consider $d$ to be the Euclidean distance. The model is trained by minimizing Equation \[eq:loglikelihood\] and the parameters are updated with stochastic gradient descent. Adaptive Modality Mixture Mechanism ----------------------------------- The information contained in semantic concepts can significantly differ from visual contents. For instance, ‘Siberian husky’ and ‘wolf’, or ‘komondor’ and ‘mop’, might be difficult to discriminate with visual features, but might be easier to discriminate with language semantic features.
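Before describing the mixture mechanism, the prototype computation and the softmax classifier of Equation \[eq:metric\] above can be sketched as follows. This is a minimal numpy illustration under our own naming, not the authors' released code; we use the squared Euclidean distance as $d$, a common choice for prototypical networks.

```python
import numpy as np

def prototypes(support_embs, labels):
    """p_c: mean embedding of the support samples of each class."""
    classes = sorted(set(labels))
    protos = np.stack([
        np.mean([e for e, y in zip(support_embs, labels) if y == c], axis=0)
        for c in classes])
    return classes, protos

def class_probabilities(query_emb, protos):
    """Softmax over negative squared Euclidean distances to the prototypes."""
    neg_d = -np.sum((protos - query_emb) ** 2, axis=1)
    exp = np.exp(neg_d - neg_d.max())  # subtract max for numerical stability
    return exp / exp.sum()
```

Here `support_embs` stands in for the embeddings $f(s_i)$ produced by the convolutional network; the query is classified by picking the class with the highest probability.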
In zero-shot learning, where no visual information is given at test time (that is, the support set is empty), algorithms need to rely solely on an auxiliary (*e.g.*, text) modality. At the other extreme, when the number of labeled image samples is large, neural network models tend to ignore the auxiliary modality, as they are able to generalize well with a large number of samples [@krizhevsky2012imagenet]. The few-shot learning scenario fits between these two extremes. Thus, we hypothesize that both visual and semantic information can be useful for few-shot learning. Moreover, given that visual and semantic spaces have different structures, it is desirable that the proposed model exploits both modalities adaptively, given different scenarios. For example, when it encounters objects like ‘ping-pong balls’, which have many visually similar counterparts, or when the number of shots is very small on the visual side, it relies more on the text modality to distinguish them. In AM3, we augment metric-based FSL methods to incorporate language structure learned by a word-embedding model $\mathcal{W}$ (pre-trained on large unsupervised text corpora), containing label embeddings of all categories in $\mathcal{D}_{\text{train}} \cup\mathcal{D}_{\text{test}}$. In our model, we modify the prototype representation of each category by taking into account its label embedding. More specifically, we model the new prototype representation as a convex combination of the two modalities. That is, for each category $c$, the new prototype is computed as: $$\label{eq:mix} \mathbf{p}^{\prime}_{c} = \lambda_{c}\cdot\mathbf{p}_{c}+(1-\lambda_{c})\cdot\mathbf{w}_{c}\;,$$ where $\lambda_c$ is the *adaptive mixture coefficient* (conditioned on the category) and $\mathbf{w}_c=g(\mathbf{e}_c)$ is a transformed version of the label embedding for class $c$. The representation $\mathbf{e}_c$ is the pre-trained word embedding of label $c$ from $\mathcal{W}$.
This transformation $g:\mathbb{R}^{n_{w}}\to\mathbb{R}^{n_{p}}$, parameterized by $\theta_g$, is important to guarantee that both modalities lie in the same space $\mathbb{R}^{n_{p}}$ and can be combined. The coefficient $\lambda_c$ is conditioned on the category and calculated as follows: $$\lambda_{c} = \frac{1}{1 + \text{exp}(-h(\mathbf{w}_{c}))}\;,$$ where $h$ is the adaptive mixing network, with parameters $\theta_h$. Figure \[fig:model\_embeds\](left) illustrates the proposed model. The mixing coefficient $\lambda_c$ can be conditioned on different variables. In Appendix \[app:mix\] we show how performance changes when the mixing coefficient is conditioned on different variables. The training procedure is similar to that of the original prototypical networks. However, the distances $d$ (used to calculate the distribution over classes for every image query) are between the query and the cross-modal prototype $\mathbf{p}_c^{\prime}$: $$p_{\theta}(y = c | q_t, S_e, \mathcal{W}) = \frac{\text{exp}(-d(f(q_{t}), \mathbf{p}^{\prime}_{c} ))}{\sum_k \text{exp}(-d(f(q_{t}), \mathbf{p}^{\prime}_{k} ))} \;,$$ where $\theta=\{\theta_f,\theta_g,\theta_h\}$ is the set of parameters. Once again, the model is trained by minimizing Equation \[eq:loglikelihood\]. Note that in this case the probability is also conditioned on the word embeddings $\mathcal{W}$. Figure \[fig:model\_embeds\](right) illustrates an example of how the proposed method works. Algorithm \[alg:modal\_mixture\], in the supplementary material, shows the pseudocode for calculating the episode loss. We chose prototypical networks [@snell2017prototypical] to explain our model due to their simplicity. We note, however, that AM3 can potentially be applied to any metric-based approach that calculates prototypical embeddings $\mathbf{p}_{c}$ for categories. As shown in the next section, we apply AM3 to both ProtoNets and TADAM [@oreshkin2018tadam].
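The cross-modal prototype of Equation \[eq:mix\] can be sketched as follows. In this hedged numpy illustration, the transformation $g$ and the adaptive mixing network $h$ are collapsed into single linear maps `W_g` and `W_h`; in the paper they are small trainable networks, so this is a simplification of ours, not the actual architecture.

```python
import numpy as np

def am3_prototype(p_c, e_c, W_g, W_h):
    """Return (p'_c, lambda_c) with p'_c = lam * p_c + (1 - lam) * w_c,
    where w_c = g(e_c) and lam = sigmoid(h(w_c))."""
    w_c = W_g @ e_c  # g: map the label embedding into R^{n_p}
    lam = 1.0 / (1.0 + np.exp(-float(W_h @ w_c)))  # adaptive mixture coefficient
    return lam * p_c + (1.0 - lam) * w_c, lam
```

Classification then proceeds exactly as in prototypical networks, with $\mathbf{p}^{\prime}_c$ in place of $\mathbf{p}_c$; since the sigmoid keeps $\lambda_c$ strictly between 0 and 1, the result is always a convex combination of the two modalities.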
TADAM is a task-dependent metric-based few-shot learning method, which currently performs best among all metric-based FSL methods. Experiments {#sec:experiments} =========== In this section we compare our model, AM3, with three different types of baselines: uni-modality few-shot learning methods, modality-alignment methods and metric-based extensions of modality-alignment methods. We show that AM3 outperforms the state of the art of each family of baselines. We also verify the adaptiveness of AM3 through quantitative analysis. Experimental Setup ------------------ #### Datasets. We conduct our main experiments with two widely used few-shot learning datasets: *mini*ImageNet [@vinyals2016matching] and *tiered*ImageNet [@ren2018meta]. We also experiment on CUB-200 [@WelinderEtal2010], a widely used zero-shot learning dataset. We evaluate on this dataset to provide a more direct comparison with modality-alignment methods, since most modality-alignment methods have no published results on few-shot datasets. We use GloVe [@pennington2014glove] to extract the word embeddings for the category labels of the two image few-shot learning datasets. The embeddings are trained on large unsupervised text corpora. More details about the three datasets can be found in Appendix \[app:data\]. #### Baselines. We compare AM3 with three families of methods. The first is uni-modality few-shot learning methods such as MAML [@finn2017model], LEO [@rusu2018meta], Prototypical Nets [@snell2017prototypical] and TADAM [@oreshkin2018tadam]. LEO achieves the current state of the art among uni-modality methods. The second family is modality-alignment methods. Among them, CADA-VAE [@schonfeld2018generalized] has the best published results on both zero-shot and few-shot learning.
To better extend modality-alignment methods to the few-shot setting, we also apply the metric-based loss and the episodic training of ProtoNets on their visual side, to build a visual representation space that better fits the few-shot scenario. This leads to the third family of baselines: modality-alignment methods extended to metric-based FSL. Details of baseline implementations can be found in Appendix \[app:baslines\]. #### AM3 Implementation. We test AM3 with two backbone metric-based few-shot learning methods: ProtoNets and TADAM. In our experiments, we use the stronger ProtoNets implementation of [@oreshkin2018tadam], which we call ProtoNets++. Prior to AM3, TADAM achieved the current state of the art among all metric-based few-shot learning methods. For details on network architectures, training and evaluation procedures, see Appendix \[app:implementation\]. Source code is released at *https://github.com/ElementAI/am3*. Results {#sec:results} ------- Table \[tab:mini-imagenet\] and Table \[tab:tiered-image\] show classification accuracy on *mini*ImageNet and on *tiered*ImageNet, respectively. We draw multiple conclusions from these experiments. First, AM3 outperforms its backbone methods by a large margin in all cases tested. This indicates that, when properly employed, the text modality can be used very effectively to boost performance in the metric-based few-shot learning framework. Second, AM3 (with the TADAM backbone) achieves results superior to the current state of the art (both single-modality FSL and modality-alignment methods). The margin in performance is particularly remarkable in the 1-shot scenario. The margin of AM3 w.r.t. uni-modality methods is larger for smaller numbers of shots. This indicates that the less visual content is available, the more important semantic information becomes for classification. Moreover, the margin of AM3 w.r.t. modality-alignment methods is also larger for smaller numbers of shots.
This indicates that the adaptiveness of AM3 is more effective when the visual modality provides less information. A more detailed analysis of the adaptiveness of AM3 is provided in Section \[sec:adaptive\]. Finally, it is also worth noting that all modality-alignment baselines improve significantly when extended to the metric-based, episodic few-shot learning framework. However, most modality-alignment methods (original and extended) perform worse than the current state-of-the-art uni-modality few-shot learning method. This indicates that although modality-alignment methods are effective for cross-modality transfer in ZSL, they do not fit the few-shot scenario well. One possible reason is that when aligning the two modalities, some information on both sides may be lost because two distinct structures are forced to align. We also conducted few-shot learning experiments on CUB-200, a popular ZSL dataset, to better compare with published results of modality-alignment methods. All the conclusions discussed above hold on CUB-200. Moreover, we also conduct ZSL and generalized FSL experiments to verify the importance of the proposed adaptive mechanism. Results on this dataset are shown in Appendix \[app:cub\]. Adaptiveness Analysis {#sec:adaptive} --------------------- We argue that the adaptive mechanism is the main reason for the performance boosts observed in the previous section. We design an experiment to quantitatively verify that the adaptive mechanism of AM3 adjusts its focus on the two modalities reasonably and effectively. Figure \[fig:ablation\](a) shows the accuracy of our model compared to the two backbones tested (ProtoNets++ and TADAM) on *mini*ImageNet for 1-10 shot scenarios. It is clear from the plots that the gap between AM3 and the corresponding backbone shrinks as the number of shots increases. 
Figure \[fig:ablation\](b) shows the mean and standard deviation (over the whole validation set) of the mixing coefficient $\lambda$ for different shots and backbones. First, we observe that the mean of $\lambda$ correlates with the number of shots. This means that AM3 weights the text modality more (and the visual one less) as the number of shots (hence, the number of visual data points) decreases. This trend suggests that AM3 automatically shifts its focus toward the text modality when information from the visual side is scarce. Second, we can also observe that the variance of $\lambda$ (shown in Figure \[fig:ablation\](b)) correlates with the performance gap between AM3 and its backbone methods (shown in Figure \[fig:ablation\](a)). As the variance of $\lambda$ decreases with an increasing number of shots, the performance gap also shrinks. This indicates that the adaptiveness of AM3 at the category level plays a very important role in the performance boost. Conclusion {#sec:conclusion} ========== In this paper, we propose a method that can adaptively and effectively leverage cross-modal information for few-shot classification. The proposed method, AM3, boosts the performance of metric-based approaches by a large margin on different datasets and settings. Moreover, by leveraging unsupervised textual data, AM3 outperforms the state of the art on few-shot classification by a large margin. The textual semantic features are particularly helpful in the very low (visual) data regime (*e.g.* one-shot). We also conduct quantitative experiments to show that AM3 reasonably and effectively adjusts its focus between the two modalities. Algorithm for Episode Loss {#app:algorithm} ========================== **Input**: Training set $\mathcal{D}_{\text{train}} = \{(\mathbf{x}_i,y_i)\}_i,\; y_i\in\{1,...,M\}$. $\mathcal{D}^c_{\text{train}} = \{(\mathbf{x}_i, y_i)\in\mathcal{D}_{\text{train}}\; |\; y_i = c \}$. **Output:** Episodic loss $\mathcal{L}(\theta)$ for sampled episode $e$. 
$C\leftarrow RandomSample(\{1,...,M\}, N)$ $\mathcal{S}_e^c\leftarrow RandomSample(\mathcal{D}_{\text{train}}^c, K)$ $\mathcal{Q}_e^c\leftarrow RandomSample(\mathcal{D}_{\text{train}}^c \setminus \mathcal{S}_e^c, K_Q)$ $\mathbf{p}_{c}\leftarrow\frac{1}{|\mathcal{S}_e^c|}\sum_{(s_i,y_i)\in\mathcal{S}_e^c} f(s_{i})$ $\mathbf{e}_{c}\leftarrow LookUp(c, \mathcal{W})$ $\mathbf{w}_{c}\leftarrow g(\mathbf{e}_c)$ $\lambda_{c} \leftarrow \frac{1}{1 + \text{exp}(-h(\mathbf{w}_{c}))}$ $\mathbf{p}^{\prime}_{c} \leftarrow \lambda_{c}\cdot\mathbf{p}_{c}+(1-\lambda_{c})\cdot\mathbf{w}_c$ $\mathcal{L}(\theta)\leftarrow0$ $\mathcal{L}(\theta)\leftarrow\mathcal{L}(\theta)+\frac{1}{N\cdot K}[ d(f(q_{t}), \mathbf{p}^{\prime}_{c} ) \;+\; \text{log}{\sum_k \text{exp}(-d(f(q_{t}), \mathbf{p}^{\prime}_{k} ))}] $ Descriptions of data sets {#app:data} ========================== #### *mini*ImageNet. This dataset is a subset of the ImageNet ILSVRC12 dataset [@russakovsky2015imagenet]. It contains 100 randomly sampled categories, each with 600 images of size $84 \times 84$. For fair comparison with other methods, we use the same split proposed by Ravi et al. [@ravi2016optimization], which contains 64 categories for training, 16 for validation and 20 for test. #### *tiered*ImageNet. This dataset is a larger subset of ImageNet than *mini*ImageNet. It contains 34 high-level category nodes (779,165 images in total) that are split into 20 for training, 6 for validation and 8 for test. This leads to 351 actual categories for training, 97 for validation and 160 for test. There are more than 1,000 images for each class. The train/val/test split is done according to their higher-level label hierarchy. According to Ren et al. [@ren2018meta], splitting near the root of the ImageNet hierarchy results in a more realistic (and challenging) scenario with training and test categories that are less similar. #### CUB-200. 
Caltech-UCSD-Birds 200-2011 (CUB-200) [@WelinderEtal2010] is a fine-grained, medium-scale dataset with respect to both the number of images and the number of classes, *i.e.* 11,788 images from 200 different types of birds annotated with 312 attributes [@xian2017zero]. We chose the split proposed by Xian *et al.* [@xian2017zero]. We used the 312-dimensional hand-crafted attributes as the semantic modality for fair comparison with other published modality-alignment methods. #### Word embeddings. We use GloVe [@pennington2014glove] to extract the semantic embeddings for the category labels. GloVe is an unsupervised approach based on word-word co-occurrence statistics from large text corpora. We use the Common Crawl version trained on 840B tokens. The embeddings are of dimension 300. When a category has multiple (synonym) annotations, we consider the first one. If the first one is not present in GloVe’s vocabulary we use the second. If there is no annotation in GloVe’s vocabulary for a category (4 cases in *tiered*ImageNet), we randomly sample each dimension of the embedding from a uniform distribution over the range (-1, 1). If an annotation contains more than one word, the embedding is generated by averaging the word embeddings. We also experimented with fastText embeddings [@joulin2016fasttext] and observed similar performance. Baselines {#app:baslines} ========= For modality-alignment baselines, we follow the few-shot experimental setting of CADA-VAE [@schonfeld2018generalized]. During training, we randomly sample $N$-shot images for the test classes, and add them to the training data to train the alignment model. During test, we compare the image query and the class embedding candidates in the aligned space to make decisions, as in ZSL and GZSL. 
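The test-time decision rule just described — comparing an image query with the class-embedding candidates in the aligned space — can be sketched as follows (a minimal illustration of ours; the function and variable names are hypothetical, not from the papers' code):

```python
import numpy as np

def align_classify(query_feat, class_embs):
    """Nearest-class-embedding decision in an aligned space (a sketch of the
    ZSL/GZSL-style test step described above; names are illustrative).

    query_feat: (D,) image feature already mapped into the aligned space.
    class_embs: (C, D) candidate class embeddings in the same space.
    Returns the index of the most similar class under cosine similarity.
    """
    q = query_feat / np.linalg.norm(query_feat)
    e = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    return int(np.argmax(e @ q))

# toy check: the query points almost along class 1's embedding
embs = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
assert align_classify(np.array([0.1, 0.9]), embs) == 1
```

In the few-shot setting the class embeddings are simply replaced by the prototype representations discussed below.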
For the meta-learning extensions of modality-alignment methods, instead of including the $N$-shot images in the training data, we follow the standard episode training (explained in Section \[sec:method\]) of the metric-based meta-learning approach and train models only with samples from training classes. Moreover, during training, we add the additional loss given in Equations \[eq:loglikelihood\] and \[eq:metric\], to ensure that the metric space learned on the visual side matches the few-shot test scenario. At test time, we employ the standard few-shot testing approach (described in Appendix \[app:implementation\]) and calculate the prototype representations of test classes as follows: $$\label{eq:regularizer} \mathbf{p}_c = \frac{\Sigma_i \mathbf{r}_i^c+\mathbf{w}_c}{N+1},$$ where $\mathbf{r}_i^c$ is the representation of the $i$-th support image of class $c$. For both training and test, we need a visual representation space in which to calculate prototype representations. For DeViSE, they are calculated in its visual space before the transformer [@frome2013devise]. For both ReViSE and CADA-VAE, prototype representations are calculated in the latent space. For f-CLSWGAN, they are calculated in the discriminator’s input space. Implementation Details of AM3 Experiments {#app:implementation} ========================================= We model the visual feature extractor $f$ with a ResNet-12 [@he2016resnet], which has been shown to be very effective for few-shot classification [@oreshkin2018tadam]. This network produces embeddings of dimension 512. We use this backbone in all the modality-alignment baselines mentioned above and in the AM3 implementations (with both backbones). We call *ProtoNets++* the prototypical network [@snell2017prototypical] implementation with this more powerful backbone. The semantic transformation $g$ is a neural network with one hidden layer of 300 units which also outputs a 512-dimensional representation. 
The transformation $h$ of the mixture mechanism also contains one hidden layer with 300 units and outputs a single scalar for $\lambda_c$. On both the $g$ and $h$ networks, we use the ReLU non-linearity [@glorot2011deep] and dropout [@srivastava14dropout] (we set the dropout coefficient to 0.7 on *mini*ImageNet and 0.9 on *tiered*ImageNet). The model is trained with stochastic gradient descent with momentum [@sutskever2013importance]. We use an initial learning rate of 0.1 and a fixed momentum coefficient of 0.9. On *mini*ImageNet, we train every model for 30,000 iterations and anneal the learning rate by a factor of ten at iterations 15,000, 17,500 and 19,000. On *tiered*ImageNet, models are trained for 80,000 iterations and the learning rate is reduced by a factor of ten at iterations 40,000, 50,000 and 60,000. The training procedure composes a few-shot training batch from several tasks, where a task is a fixed selection of 5 classes. We found empirically that the best number of tasks per batch is 5, 2 and 1 for 1-shot, 5-shot and 10-shot, respectively. The number of queries per batch is 24 for 1-shot, 32 for 5-shot and 64 for 10-shot. All our experiments are evaluated following the standard approach of few-shot classification: we randomly sample 1,000 tasks from the test set, each having 100 random query samples, and average the performance of the model over them. All hyperparameters were chosen based on accuracy on the validation set. All our results are reported as an average over five independent runs (with a fixed architecture and different random seeds) and with $95\%$ confidence intervals. Results on CUB-200 {#app:cub} ================== We also conduct experiments on CUB-200 to better compare with modality-alignment baselines from ZSL. Table \[tab:cub\] shows the results. For the $0$-shot scenario, AM3 degrades to the simplest modality-alignment method, which maps the text semantic space to the visual space. 
Therefore, without the adaptive mechanism, AM3 performs roughly the same as DeViSE, which indicates that the adaptive mechanism plays the main role in the performance boost we observed in FSL. The results for the other few-shot cases on CUB-200 are consistent with those on the other two few-shot learning datasets. We also conduct generalized few-shot learning experiments, as reported for CADA-VAE in [@schonfeld2018generalized], to compare AM3 with the published FSL results for CADA-VAE. We consider as a metric the harmonic mean (H-acc) between the accuracy on seen and unseen classes, as defined in [@xian2018zero; @schonfeld2018generalized]. Figure \[fig:gzsl\] shows that AM3-ProtoNets outperforms CADA-VAE in every case tested. ![H-acc of generalized few-shot learning on CUB-200.[]{data-label="fig:gzsl"}](figures/gzsl2.pdf){width=".5\textwidth"} Ablation study on the input of the adaptive mechanism {#app:mix} ===================================================== We also perform an ablation study to see how the adaptive mechanism performs with respect to different input features. Table \[tab:ablation\] shows results, on both datasets, of our method with three different inputs for the adaptive mixing network $h$: (i) the raw GloVe embedding ($h(\mathbf{e})$), (ii) the visual representation ($h(\mathbf{p})$) and (iii) a concatenation of both the query and the semantic embedding ($h(\mathbf{q},\mathbf{w})$). We observe that conditioning on the transformed GloVe features performs better than on the raw features. Also, conditioning on semantic features performs better than conditioning on visual ones, suggesting that the former space has a structure more appropriate for the adaptive mechanism than the latter. Finally, we note that conditioning on the query and semantic embeddings helps with the ProtoNets++ backbone but not with TADAM. 
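The adaptive mixing studied in this ablation can be sketched as follows (a minimal NumPy sketch of ours; the randomly initialized toy network and the dimensions are illustrative only, not the trained models):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer ReLU network (the form of h described above)."""
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def adaptive_prototype(p_c, w_c, cond, params):
    """AM3-style convex combination:
    lambda_c = sigmoid(h(cond));  p'_c = lambda_c * p_c + (1 - lambda_c) * w_c.
    `cond` is the conditioning input: w_c (the default), the raw embedding e_c,
    the visual prototype p_c, or a concatenation with the query (the ablations)."""
    lam = 1.0 / (1.0 + np.exp(-mlp(cond, *params)[0]))  # scalar in (0, 1)
    return lam * p_c + (1.0 - lam) * w_c, lam

# illustrative sizes: 512-d representations, 300-unit hidden layer
d, hidden = 512, 300
params = (rng.normal(0.0, 0.05, (d, hidden)), np.zeros(hidden),
          rng.normal(0.0, 0.05, (hidden, 1)), np.zeros(1))
p_c = rng.normal(size=d)   # visual prototype (mean of support features)
w_c = rng.normal(size=d)   # transformed semantic embedding g(e_c)
p_mix, lam = adaptive_prototype(p_c, w_c, w_c, params)   # the h(w) variant
```

Swapping `cond` for the raw embedding, the visual prototype, or a query concatenation (with a correspondingly shaped first layer) reproduces the three ablation variants of Table \[tab:ablation\].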
---------------------------- -------- -------- -- -------- -------- Method 1-shot 5-shot 1-shot 5-shot $h(\mathbf{e})$ 61.23 74.77 57.47 72.27 $h(\mathbf{p})$ 64.48 74.80 64.93 77.60 $h(\mathbf{w},\mathbf{q})$ 66.12 75.83 53.23 56.70 $h(\mathbf{w})$ (AM3) 65.21 75.20 65.30 78.10 ---------------------------- -------- -------- -- -------- -------- : Performance of our method when the adaptive mixing network is conditioned on different features. Last row is the original model.[]{data-label="tab:ablation"} [^1]: Work done when interning at Element AI. Contact through: xingchen1113@gmail.com
--- author: - | Geoffrey Compère$^{\flat}$, Sophie de Buyl$^{\flat}$, Stéphane Detournay$^{\natural}$, and Kentaroh Yoshida$^{\natural,\dagger}$  \ [$^{\flat}$ *Department of Physics, University of California, Santa Barbara, Santa Barbara, CA 93106, USA* ]{}  \ [$^{\natural}$ *Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106, USA* ]{}  \ [$^{\dagger}$ *Department of Physics, Kyoto University, Kyoto 606-8502, Japan* ]{}  \  \ [E-mail: [gcompere@physics.ucsb.edu, sdebuyl@physics.ucsb.edu, detourn@kitp.ucsb.edu, kyoshida@gauge.scphys.kyoto-u.ac.jp]{}]{} title: Asymptotic symmetries of Schrödinger spacetimes --- Introduction ============ The holographic description [@Maldacena:1997re; @Gubser:1998bc; @Witten:1998qj] of condensed matter systems such as superconductors [@Gubser:2008px; @Hartnoll:2008vx] and materials undergoing the quantum Hall effect [@KeskiVakkuri:2008eb; @Davis:2008nv; @Fujita:2009kw] has recently attracted a lot of interest. These systems are strongly coupled at their critical points and hence the holographic description may give a new analytical method to investigate some aspects of the critical behavior in terms of classical gravity. Some condensed matter systems realized in laboratories are described at their critical points by non-relativistic conformal field theories (NRCFTs). Non-relativistic conformal symmetry contains the scaling invariance $$t \to \lambda^z t\,, \qquad x^i \to \lambda x^i\,,$$ where $z$ is a dynamical exponent. When $z=2$, the symmetry is enhanced to the Schrödinger symmetry [@Hagen:1972pd; @Niederer:1972zz] containing in addition the special conformal transformations. The NRCFTs based on the [Schrödinger ]{}symmetry are studied e.g. in [@Henkel:1993sg; @Mehen:1999nd; @Son:2005rv; @Nishida:2007pj; @Bobev:2009mw; @Donos:2009xc]. 
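For orientation, it may help to recall the global form of the $z=2$ Schrödinger group action on $(t,x^i)$ (standard background material, not specific to this paper; conventions vary in the literature):

```latex
% For z = 2, time translations H, dilatations D and special conformal
% transformations C combine into an SL(2,R) acting projectively on time,
$$t \;\to\; \frac{a\,t+b}{c\,t+d}\,, \qquad
  x^i \;\to\; \frac{x^i}{c\,t+d}\,, \qquad ad-bc=1\,,$$
% supplemented by the Galilei transformations (rotations, boosts, translations)
$$t \;\to\; t\,, \qquad x^i \;\to\; R^i_{\ j}\,x^j + v^i\,t + a^i\,.$$
% A Schroedinger field of mass m transforms with the familiar phase
% (up to conventions)  \psi \to e^{\,i m ( \vec v\cdot\vec x + \frac{1}{2}v^2 t)}\psi,
% the mass m being the eigenvalue of the central generator N below.
```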
Recently, gravity duals for these NRCFTs have been proposed [@Son:2008ye; @Balasubramanian:2008dm] (see [@Duval:1990hj] for earlier work on the geometric realization of the Schrödinger symmetry and [@Duval:2008jg] for the relationship with [@Son:2008ye; @Balasubramanian:2008dm]). The background at zero temperature consists in a light-like deformation of the anti-de Sitter metric – for other gravity solutions and their string theory embedding, see [@Goldberger:2008vg; @Barbon:2008bg; @Herzog:2008wg; @Maldacena:2008wh; @Adams:2008wt; @Kovtun:2008qy; @Hartnoll:2008rs; @Schvellinger:2008bf; @Mazzucato:2008tr; @Rangamani:2008gi; @Adams:2008zk; @Donos:2009en; @Colgain:2009wm; @Ooguri:2009cv; @Donos:2009xc; @Donos:2009zf]. The background metric is given by $$\label{Metric} ds^2 = L^2 \left( \frac{dx^id x^i + 2 dx^+ dx^-}{r^2} + \frac{dr^2}{r^2} \mp \frac{(dx^-)^2}{r^{2z}} \right)\,,$$ where the $x^+$ direction is compactified as $x^+ \sim x^+ + 2\pi x^+_0$ for some $x_0^+$. Both the deformation term and the compactification break the relativistic conformal symmetry to the [Schrödinger ]{}symmetry $\mathfrak{sch}_z(d)$ where $d$ is the number of space dimensions of the NRCFT. The isometries of this metric are identified with the time translations $H$, dilations $D$, mass/particle number $N$, spatial translations $P_i$, Galilean boosts $K_i$, and spatial rotations $M_{ij}$, with $i,j=1,...,d$. For $z=2$, an additional generator is present, corresponding to special conformal transformations $C$, which together with $H$ and $D$ form an ${{\frak{sl}(2,\mathbb{R})}{}}$ subalgebra. Note that in the NRCFT context the minus sign should be taken in [(\[Metric\])]{} so that the causality properties of a non-relativistic system will be recovered close to the boundary, see e.g. [@Brecher:2002bw; @Hubeny:2002zr] and [@Hartnoll:2009sz]. The plus sign turns out to be relevant to describe black holes in three dimensions (i.e. $d=0$ above) [@Anninos:2008fx]. 
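As a quick consistency check (ours, not part of the paper's argument), the dilatation isometry of [(\[Metric\])]{} acts as the anisotropic scaling

```latex
$$r \to \lambda\, r\,, \qquad x^i \to \lambda\, x^i\,, \qquad
  x^- \to \lambda^{z}\, x^-\,, \qquad x^+ \to \lambda^{2-z}\, x^+\,,$$
% under which each term of (Metric) is separately invariant:
%   dx^i dx^i / r^2   and   dr^2 / r^2        scale as  \lambda^2/\lambda^2 = 1,
%   2\, dx^+ dx^- / r^2                       scales as \lambda^{2-z}\lambda^{z}/\lambda^2 = 1,
%   (dx^-)^2 / r^{2z}                         scales as \lambda^{2z}/\lambda^{2z} = 1.
% For z = 2 the x^+ coordinate is inert, so the scaling is compatible with
% the compactification x^+ \sim x^+ + 2\pi x_0^+.
```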
The infinite-dimensional extension of the ${{\frak{sl}(2,\mathbb{R})}{}}\oplus {{\frak{sl}(2,\mathbb{R})}{}}$ symmetry algebra to two copies of the Virasoro algebras around $AdS_3$ in Einstein gravity [@Brown:1986nw] leads to severe constraints on the quantum theories dual to asymptotically $AdS_3$ spacetimes [@Strominger:1997eq; @Maldacena:1998bw]. Now, it has been known for a while that an infinite-dimensional extension is possible for the [Schrödinger ]{}algebra with $z=2$ [*in any dimension*]{} (note this is also true for the isometry algebra $so(2,d-1)$ of $AdS_d$, given by the affine extension $\widehat{so(2,d-1)}$; the latter is however not realized as asymptotic symmetry algebra of $AdS_d$ [@Henneaux:1985tv]). It is called the Schrödinger-Virasoro algebra [@Henkel:1993sg]. Moreover, such an algebra can be easily generalized to extend the $\mathfrak{sch}_z(d)$ symmetry for any $z$. The ${{\frak{sl}(2,\mathbb{R})}{}}$ part of the [Schrödinger ]{}algebra has the familiar Virasoro extension while the other generators get enhanced to current algebras. As we will show below and as was shown independently in [@Alishahiha:2009nm], one can represent the entire Schrödinger-Virasoro algebra as generators acting as diffeomorphisms on the solution [(\[Metric\])]{}. All these generators are thus *candidates* to be asymptotic symmetries of a yet-to-be-defined phase space. One would however like to go beyond a kinematical analysis and learn if part of these symmetries can be dynamically realized around the gravity backgrounds by attempting to construct a phase space accommodating these asymptotic symmetries. This is the main aim of the paper. More precisely, we will address the following issues - Could the charges associated with the Schrödinger-Virasoro algebra be defined? A first observation that can be made from the outset, on purely algebraic grounds, is the following. 
The author of [@Unterberger:2007hd] classified all possible central extensions of the Schrödinger-Virasoro algebra, showing that only the Virasoro part can be centrally extended. This means in particular that the current algebra with generators $N_m$ whose zero mode is the number operator (see [(\[exact1\])]{}) has level $k=0$. But it is known that the only unitary representation of such an algebra is the trivial one, for which $N_m=0, \, \forall m$. Hence, the corresponding gravity dual would only be able to describe field theories with zero number operator, which would be of little interest![^1] However, non-unitary representations are not excluded on general grounds, since little is known about the nature of the field theory dual. On the other hand, infinite-dimensional extensions are possible even if the current algebra is not realized. - Could the charges represent the Schrödinger-Virasoro algebra? Since the Schrödinger spacetimes are not asymptotically AdS, the holographic renormalization techniques based on Fefferman-Graham expansions used extensively in the AdS/CFT correspondence to compute the charges [@Balasubramanian:1999re; @deHaro:2000xn] are not directly applicable but can be used as a guideline for extrapolating the charges, see e.g. the discussion in Appendix C of [@Martelli:2009uc]. While conserved charges for black holes can be defined using the regulated on-shell action for a phase space with fixed temperature and chemical potential [@Herzog:2008wg], a Hamiltonian [@Regge:1974zd; @Brown:1986ed] or Lagrangian [@Barnich:2001jy; @Barnich:2003xg; @Barnich:2007bf] definition of conserved charges is necessary in order to obtain a representation of the Schrödinger symmetries via a Dirac bracket. The conserved charges obtained via holographic and Hamiltonian/Lagrangian methods are identical up to background shifts, at least for AdS spacetimes, see e.g. [@Hollands:2005wt; @Papadimitriou:2005ii]. 
Note also that a holographic stress-tensor for Schrödinger space-times has recently been defined [@Ross:2009ar]. The Lagrangian methods [@Barnich:2001jy; @Barnich:2003xg; @Barnich:2007bf] can be used straightforwardly, see Appendix \[method\] for a short summary. Note however that in general the asymptotic charges of [@Barnich:2001jy; @Barnich:2003xg; @Barnich:2007bf] could be corrected due to counterterms in the regulated action [@Compere:2008us], see [@Azeyanagi:2009wf; @Amsel:2009pu] for related discussions. Counterterms can be obtained for a phase space of fluctuations around a fixed Schrödinger black brane [@Herzog:2008wg] and since they do not contain derivatives of the fields, they do not contribute to the charges. The computation of charges of [@Barnich:2001jy; @Barnich:2003xg; @Barnich:2007bf] however requires a phase space containing also the zero temperature background, and counterterms for such general variations are unfortunately very difficult to obtain due to non-linearities at infinity (see however [@Ross:2009ar] for progress on this issue). We will assume in what follows that the supplementary counterterms, if any, to those of [@Herzog:2008wg] do not contribute to the definition of covariant phase space charges. One difficulty that one faces right away is that the charges should be defined as an integral over $x^+$ and the infinitely extended spatial directions $x^i$. A regulator along the spatial directions is therefore needed in order to define finite charges. The approach taken in this paper is to consider that the $x^i$ have a finite extent, i.e. that the NRCFT is defined in a “box”. We will see that the introduction of this regulator is sufficient to be able to define the charges associated with the Schrödinger-Virasoro group. 
It has been proven some time ago that once canonical charges associated with the asymptotic symmetries are defined and once a phase space preserved by the symmetries is shown to exist, the charges represent the algebra of asymptotic symmetries through a Dirac bracket [@Brown:1986ed], see also [@Barnich:2007bf] for the analogous theorem in the Lagrangian formalism. Here, however, one cannot use these theorems blindly since the regulator may also be transformed under the asymptotic symmetries. Only a subset of the proposed Schrödinger-Virasoro generators will preserve the regulator and will thus be represented under the usual bracket. We will propose a modified bracket which includes a change of regulator in order to treat the other candidate symmetries. The status of these transformations will be commented on in the conclusions. Spacetimes admitting the Lifshitz symmetry have also been considered recently in the holographic context, see [@Kachru:2008yh]. We will also discuss briefly in Appendix \[Lifsec\] the extension of our analysis to those spacetimes. The organization of this paper is as follows. The asymptotic analysis is presented in section \[dbigger0\]. The method used is illustrated in some detail. We will mainly discuss the dynamical exponent $z = 2$ in all dimensions and extend the analysis to an example with dynamical exponent bigger than 2, namely $z = 3$ (for $d=2$). We finally conclude and interpret our results in section \[conclusion\]. A short review of the method to compute (asymptotic) charges in the covariant formalism is given in Appendix \[method\], while asymptotic symmetries of Lifshitz spacetimes are discussed in Appendix \[Lifsec\]. 
Asymptotic analysis of gravity duals to NRCFTs \[dbigger0\] =========================================================== This section is devoted to a detailed analysis of the asymptotic symmetries of the Schrödinger metrics for $d>0$ and their realization via an algebra of asymptotic charges on some class of metrics that asymptotes to it. The backgrounds [(\[Metric\])]{} possess $d$ space directions and are part of the ten or eleven dimensional metrics conjectured to be dual to $d$ dimensional non-relativistic systems. The successive steps to define an asymptotic algebra of charges that we will implement in the sequel are the following: 1. We will start by defining a class $\mathcal C$ of candidate asymptotic Killing vectors of the metrics [(\[Metric\])]{} by solving the Killing equations up to a well chosen order in the $r$ expansion led by intuition. Solving the Killing equations to all orders in $r$ would result in finding only the exact Killing vectors, while solving only at the leading order would lead to a very large set of candidate asymptotic symmetries[^2]. 2. We will then construct a phase space ${\cal F}$ together with a class of asymptotic symmetries $\mathcal A \subset \mathcal C$ satisfying the following conditions : 1. The metrics in ${\cal F}$ should approach [(\[Metric\])]{} in the limit $r \rightarrow 0$. 2. The phase space ${\cal F}$ must contain solutions of interest such as black hole solutions. 3. ${\cal F}$ must be invariant under the action of the finite diffeomorphisms associated with asymptotic Killing symmetries belonging to $\mathcal A$. 4. The asymptotic charges of the metrics in ${\cal F}$ associated with elements of $\mathcal A$ must be finite, conserved, and integrable. The asymptotic charges are computed by using the methods of [@Barnich:2001jy; @Barnich:2003xg; @Barnich:2007bf] which are briefly reviewed in Appendix \[method\]. The phase space is usually specified by a set of boundary conditions. 
Here instead, we will give an explicit construction of ${\cal F}$ and $\mathcal A$ starting from the candidate asymptotic Killing vectors. Only a subset of the candidate asymptotic Killing vectors $\mathcal C$ will be promoted to asymptotic symmetries $\mathcal A$ of ${\cal F}$. 3. In the phase space ${\cal F}$, we will then study the algebra of asymptotic charges which should be isomorphic to the algebra of asymptotic symmetries up to possible central extensions. We start by deriving candidate asymptotic Killing vectors by solving the asymptotic Killing equations in section \[candasym\]. We then turn to the realization of the candidate asymptotic symmetries on a phase space. Black hole solutions that asymptote the metrics [(\[Metric\])]{} are not known for general dimensions and critical exponents $z$. Therefore, we will first focus in section \[knsection\] on the exponent $z=2$ for which $d$-dimensional black hole solutions [@Kovtun:2008qy], generalizing the ones of [@Herzog:2008wg; @Adams:2008wt; @Maldacena:2008wh], are known. In section \[MBH\], we treat the exponent $z=3$ in five dimensions $D=5$ ($d=2$) using solutions obtained by acting with the Null Melvin Twist on non-extremal D3-brane solutions [@Hubeny:2005qu] as an example of critical exponent greater than 2. Candidate asymptotic Killing vectors \[candasym\] ------------------------------------------------- For $z > 1$, the term $\frac{1}{r^{2z}}(dx^-)^2$ is the leading divergent term close to the boundary. This asymptotic behavior differs from asymptotically flat or anti-de Sitter spacetimes. 
When solving the Killing equations $$\cL_{\xi_{as}} g_{\mu \nu} \rightarrow 0 \hspace{1cm} \mbox{for} \, r \rightarrow 0 ,$$ up to certain well chosen orders (depending on each $\mu \nu$ component), we obtain the following vector fields, $$\begin{aligned} \xi_{as} &=& \frac{r}{z} L'(x^-) \p_r + L(x^-) \p_{-} \nonumber \\ && + \left( N(x^-)-\frac{z-2}{z}\,x^+\, L'(x^-) - \vec{x} \cdot \vec{X}'(x^-) -\frac{\vec{x}^2 + r^2}{2z} L''(x^-)\right) \p_+ \nonumber \\ && + \left(X_i(x^-) +\frac{x_i}{z} L'(x^-) + M_{ij} x_j \right) {\partial}_i\, ,\label{candidakv}\end{aligned}$$ where $M_{ij}$ is antisymmetric. The exact Killing vectors are recovered when $L''(x^-) = 0$, $N'(x^-) = 0$ and $X_i''(x^-) = 0$. A detailed analysis implies that the rotations cannot be extended to $x^-$-dependent functions. Defining the generators $$\begin{aligned} \hat L_n &=& \xi(L(x^-)= -2^{-n/2}(x^-)^{n+1}) \, \, \, \, \, \text{for }n \in \mathbb Z \nonumber ,\\ \hat N_n &=& \xi(N(x^-) = 2^{-n/2}(x^-)^n) \qquad \, \, \, \, \, \, \, \text{for }n \in \mathbb Z \label{modesLn},\\ \hat X^i_{n} &=& \xi(X^i(x^-) = -2^{-n/2} (x^-)^{n+\frac{1}{2}}) \qquad \text{for }n \in \mathbb Z + \frac{1}{2},\nonumber\end{aligned}$$ one gets the algebra $$\begin{aligned} \mbox{} [ \hat L_m,\hat L_n ] &=& (m-n)\hat L_{m+n},\nonumber\\ \mbox{} [ \hat{L}_m,\hat{N}_n ] &=& ( -\frac{z-2}{z}(m+1) -n )\hat{N}_{m+n} \nonumber ,\\ \mbox{} [\hat L_m,\hat X^i_n] &=& (\frac{m}{z} -n + \frac{2-z}{2z}) \hat X^i_{m+n}, \label{Alg3}\\ \mbox{} [\hat X^i_m,\hat X^j_n] &=& (m-n) \hat N_{m+n}\delta^{ij}, \nonumber\\ \mbox{} [M_{ij},\hat X^k_n] &=& -\delta^{ik} \hat X^j_n +\delta^{jk} \hat X^i_n , \nonumber\\ \mbox{} [ \hat N_m ,\hat N_n ] &= & 0 ,\qquad [\hat N_m,\hat X^i_n] = 0,\nonumber\end{aligned}$$ which generalizes to arbitrary $z$ the Schrödinger-Virasoro algebra studied in [@Henkel:1993sg; @Unterberger:2007hd] for $z=2$ and the one proposed in [@Alishahiha:2009nm]. 
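As a mechanical sanity check (ours, not part of the paper's derivation), the commutation relations [(\[Alg3\])]{} can be verified by computing Lie brackets of the vector fields [(\[candidakv\])]{} with SymPy, here for $d=1$ and $z=2$:

```python
import sympy as sp

# coordinates for d = 1: r, x^-, x^+, x^1 (written r, xm, xp, x1)
r, xm, xp, x1 = sp.symbols('r x_m x_p x_1', positive=True)
coords = [r, xm, xp, x1]
z = 2  # dynamical exponent

def xi(L=0, N=0, X=0):
    """Candidate asymptotic Killing vector (candidakv) for d = 1, M_ij = 0."""
    L, N, X = map(sp.sympify, (L, N, X))
    Lp, Lpp, Xp = L.diff(xm), L.diff(xm, 2), X.diff(xm)
    return [r*Lp/z,
            L,
            N - sp.Rational(z - 2, z)*xp*Lp - x1*Xp - (x1**2 + r**2)*Lpp/(2*z),
            X + x1*Lp/z]

def bracket(u, v):
    """Lie bracket of two vector fields, in components."""
    return [sp.expand(sum(u[a]*sp.diff(v[i], coords[a])
                          - v[a]*sp.diff(u[i], coords[a]) for a in range(4)))
            for i in range(4)]

# mode vectors (modesLn); n is an integer for L, N and a half-integer for X
def Lhat(n): return xi(L=-2**sp.Rational(-n, 2)*xm**(n + 1))
def Nhat(n): return xi(N=2**sp.Rational(-n, 2)*xm**n)
def Xhat(n): return xi(X=-2**(-n/2)*xm**(n + sp.Rational(1, 2)))

half = sp.Rational(1, 2)

# [L_m, L_n] = (m - n) L_{m+n}, e.g. [L_1, L_{-1}] = 2 L_0
assert all(sp.simplify(a - b) == 0 for a, b in
           zip(bracket(Lhat(1), Lhat(-1)), [2*c for c in Lhat(0)]))
# [X_m, X_n] = (m - n) N_{m+n}, e.g. [X_{1/2}, X_{-1/2}] = N_0
assert all(sp.simplify(a - b) == 0 for a, b in
           zip(bracket(Xhat(half), Xhat(-half)), Nhat(0)))
```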
When $z \neq 2$, the exact Killing vectors are given by $M_{ij}$, $$\begin{aligned} \hat L_0 = (-{r \over z}, -x^-,{z-2 \over z} x^+,-x_1 /z,...,-x_d/z)& & \hspace{1cm} \mbox{dilatation} ,\no \\ \hat L_{-1} = (0,-\sqrt{2},0,0,...,0) & & \hspace{1cm} x^- \, \mbox{ translation} \label{exact1},\no \\ \hat N_0 = (0,0,1,0,...,0) & & \hspace{1cm} x^+ \, \mbox{translation}, \\ \hat X_{1/2}^i = (0,0,2^{-1/4}x^i ,0,..,-2^{-1/4} x^-,..,0) & & \hspace{1cm} \mbox{boost}, \no\\ \hat X_{-1/2}^i = (0,0,0,0,.. , -2^{1/4},..,0) & & \hspace{1cm} x^i \, \mbox{translation}. \no\end{aligned}$$ The Killing vector $\hat L_{-1}$ will be interpreted as the Hamiltonian and $\hat N_0$ as the particle number. For $z=2$, special conformal transformations are part of the symmetries. The corresponding generator $\hat L_1$ is given by $$\begin{aligned} \hat L_{+1}= (- 2^{-1/2} \, x^- \,r, - 2^{-1/2}(x^-)^2,{\overrightarrow{x}^2+r^2 \over 2} 2^{-1/2},-2^{-1/2} x^1 x^-,...,-2^{-1/2} x^d x^-) \no \\ \mbox{special conformal transformation} .\label{exact2}\end{aligned}$$ In that case, $\hat L_{-1}, \,\hat L_0, \, \hat L_1 , \, \hat X^i_{1/2}, \, \hat X^i_{-1/2}$ and $\hat N_0$ form the finite-dimensional algebra denoted $\mathfrak{sch}_2(d)$, while the full set of modes [(\[modesLn\])]{} forms an infinite-dimensional algebra which is a natural generalization of the [Schrödinger ]{}algebra. However, the appearance of this algebra in the asymptotic Killing equations does not imply that it is actually realized, i.e. associated with finite, conserved, integrable and well represented charges in a phase space containing interesting solutions. We now turn our attention to this issue. 
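One can likewise verify mechanically (our check) that $\hat L_{+1}$ of [(\[exact2\])]{} is an exact Killing vector of the $z=2$ metric [(\[Metric\])]{} (minus sign, $L=1$, $d=1$):

```python
import sympy as sp

r, xm, xp, x1 = sp.symbols('r x_m x_p x_1', positive=True)
coords = [r, xm, xp, x1]

# z = 2 Schroedinger metric (minus sign, L = 1, d = 1); index order (r, x^-, x^+, x^1):
# ds^2 = dr^2/r^2 - (dx^-)^2/r^4 + 2 dx^+ dx^-/r^2 + (dx^1)^2/r^2
g = sp.Matrix([[1/r**2, 0,       0,      0],
               [0,      -1/r**4, 1/r**2, 0],
               [0,      1/r**2,  0,      0],
               [0,      0,       0,      1/r**2]])

def lie_g(xi):
    """(Lie_xi g)_{mn} = xi^a d_a g_{mn} + g_{an} d_m xi^a + g_{ma} d_n xi^a."""
    return sp.Matrix(4, 4, lambda m, n: sp.simplify(
        sum(xi[a]*sp.diff(g[m, n], coords[a])
            + g[a, n]*sp.diff(xi[a], coords[m])
            + g[m, a]*sp.diff(xi[a], coords[n]) for a in range(4))))

s = 2**sp.Rational(-1, 2)
# special conformal generator (exact2) for d = 1
L_plus_1 = [-s*xm*r, -s*xm**2, s*(x1**2 + r**2)/2, -s*xm*x1]
assert lie_g(L_plus_1) == sp.zeros(4, 4)   # exact Killing vector
```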
Realization of the asymptotic symmetries on a phase space for $z=2$ \[knsection\] --------------------------------------------------------------------------------- This section is devoted to realizing the asymptotic symmetry algebra on a phase space for $z=2$ and $d>0$ containing solutions of physical interest and such that the charges are finite, integrable, asymptotically conserved and well represented via a Dirac bracket. In section \[KNprephasespace\], we start the construction of this phase space by considering a two-parameter family of black brane solutions and checking whether or not the candidate asymptotic Killing vectors [(\[candidakv\])]{} for $z=2$ are associated with finite, integrable, conserved and well represented charges. Next, we will turn in section \[z2fullphasespace\] to the construction of a restricted phase space by acting with finite diffeomorphisms associated with the asymptotic Killing vectors [(\[candidakv\])]{} (that fulfill the above conditions on the pre-phase space) on the black branes in order to obtain a phase space that is invariant under the asymptotic symmetry algebra. ### Black branes for the critical exponent $z=2$\[KNprephasespace\] Building on earlier work of [@Herzog:2008wg; @Adams:2008wt; @Maldacena:2008wh], the authors of [@Kovtun:2008qy] constructed for any dimension a class of black hole solutions which asymptotes to [(\[Metric\])]{} for $z=2$: $$\begin{aligned} ds^2 &=& r^2 h^{-\frac{2}{d+2}} \left( -\beta^2 r^2 f \, {dx^{-}}^2 + (1{+}f)\, dx^{+}dx^{-} + \frac{1-f}{4\beta^2 r^2}\, {dx^{+}}^2 \right) \nonumber \\ && +\, h^{\frac{2}{d+2}} \left( r^2 \, dx^i dx^i + \frac{dr^2}{r^2 f} \right),    \label{KNblackholes}\\ A &=& \frac{\beta r^2}{h}\left( f\, dx^{-} - \frac{1-f}{4\beta^2 r^2}\, dx^{+} \right), \label{AKN}\\ \phi &=& -\frac{1}{2}\,\ln h, \label{phiKN}\end{aligned}$$ where $ h(r) = 1+{\beta^2r_0^{d+2}}/{r^{d}}$ and $ f(r) = 1-{r_0^{d+2}}/{r^{d+2}}$, $\beta$ is an arbitrary parameter, and the horizon is located at $r=r_0$. 
The metric [(\[KNblackholes\])]{} and matter fields [(\[AKN\])]{} and [(\[phiKN\])]{} are a solution of the following Einstein gravity action coupled to a dilaton and a massive vector field, S &=& d\^[d+3]{}x , \[actionKN\] where $G_{d+3}$ is the $(d{+}3)$-dimensional Newton’s constant, the scalar potential is given by $ V(\phi) = (\Lambda{+}\Lambda')e^{a\phi} + (\Lambda{-}\Lambda')e^{b\phi}$, and the coefficients are $$\Lambda=-\frac{1}{2}(d{+}1)(d{+}2) \,,\quad \Lambda'=\frac{1}{2}(d{+}2)(d{+}3) \,,\quad m^2=2(d{+}2) \,,\quad a=(d{+}2)b=2\frac{d{+}2}{d{+}1} \,.$$ In order to be able to interpret the solution as a gravity dual to a finite temperature non-relativistic system, we are required to identify the $x^+$ coordinate as $x^+ \sim x^+ + 2\pi x^+_0$. The particle number $\mathcal N$ associated with ${\partial}_+$ then has a discrete spectrum. Using the methods described in Appendix \[method\], the charge $(D-2)$-forms associated with exact symmetries (evaluated at constant $x^-$ and at any finite $r$) are found to be integrable in the phase space[^3] parameterized by $\beta$ and $r_0$. We denote the set of fields given in [(\[KNblackholes\])]{}-[(\[phiKN\])]{} by $\Phi(\beta, r_0)$, $$\Phi(\beta, r_0) := \{ g_{\mu\nu}(\beta,r_0) ,\, A_{\mu}(\beta,r_0),\, \phi(\beta,r_0)\} .$$ Setting the charges of the background $\bar \Phi = \Phi(0,+\infty)$ to zero by convention, the final expressions for the conserved exact charges are given by $$\begin{aligned} \mathcal N &\equiv& \mathcal Q_{-{\partial}_+} = \frac{D-1}{16 \pi G_{d+3}} \frac{\beta^2 L^{D-2}}{r_0^{D-1}}(2 \pi x_0^+) \text{Vol}_d, \label{valueN}\\ \mathcal H &\equiv& \mathcal Q_{{\partial}_-} = \frac{D-3}{32 \pi G_{d+3}} \frac{L^{D-2}}{r_0^{D-1}}(2 \pi x_0^+) \text{Vol}_d, \\ \mathcal P_i &\equiv& \mathcal Q_{{\partial}_i}=0, \quad \mathcal M_{ij} = 0, \label{otherch}\end{aligned}$$ where $\text{Vol}_d = \int d^d x$ is the transverse volume and $D = d+3$. 
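As a small consistency check on the coefficients quoted above, the sketch below verifies in sympy that at $\phi = 0$ the potential reduces to $2\Lambda = -(D{-}1)(D{-}2)$ with $D = d{+}3$, i.e. to the cosmological constant of a unit-radius $AdS_{d+3}$, and that $m^2 = 2(d{+}2)$ coincides with the value $m^2 = z(z{+}d)$ usually quoted for the massive vector supporting a $z=2$ Schrödinger metric (the latter identity is taken from the literature, not from this section).

```python
import sympy as sp

d = sp.symbols('d', positive=True)

# Coefficients quoted in the text
Lam  = -sp.Rational(1, 2)*(d + 1)*(d + 2)
Lamp =  sp.Rational(1, 2)*(d + 2)*(d + 3)
b = 2/(d + 1)
a = (d + 2)*b
m2 = 2*(d + 2)

phi = sp.symbols('phi')
V = (Lam + Lamp)*sp.exp(a*phi) + (Lam - Lamp)*sp.exp(b*phi)

# At phi = 0 the potential reduces to 2*Lambda = -(D-1)(D-2) with D = d+3,
# the cosmological constant term of a unit-radius AdS_{d+3}
assert sp.simplify(V.subs(phi, 0) - 2*Lam) == 0
assert sp.simplify(2*Lam + (d + 1)*(d + 2)) == 0

# The vector mass matches m^2 = z(z+d) at z = 2 (assumed literature value)
z = 2
assert sp.simplify(m2 - z*(z + d)) == 0
```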
The Hamiltonian $\cH$ and particle number $\cN$ are finite provided we consider a finite volume $\text{Vol}_d$: we introduce a ‘box’ in the $x^i$-space to regulate the charges. These expressions [(\[valueN\])]{}-[(\[otherch\])]{} have been obtained using a Mathematica code[^4] implementing the formulae for the charges in Appendix \[method\] for $d=1,2,3$. The expression for general $d$ has been guessed by matching the expressions in lower dimensions; given the simplicity of the final expression, the result is expected to be valid for any $d$. The Hamiltonian is identical to the one of the anti-de Sitter black brane, as expected from the Null Melvin Twist procedure [@Maldacena:2008wh; @Adams:2008wt; @Herzog:2008wg]. If one plans to construct a phase space containing the black brane solutions, a necessary (but not sufficient) condition that any asymptotic symmetry $\xi_{as}$ of that phase space should obey is that the charge $(D{-}2)$-form $\delta \mathcal Q_{\xi_{as}} = \int k_{\xi_{as}}[\delta_{\beta,r_0} \Phi(\beta,r_0) ; \Phi(\beta,r_0)]$ evaluated on $\Phi(\beta,r_0)$ for small perturbations of $r_0$ and $\beta$ is finite, integrable and conserved. For a general candidate asymptotic Killing vector, we can show that the charge is indeed finite (if we introduce a box) and integrable. Computing the charge $\mathcal Q_{\xi_{as}}[\Phi(\beta,r_0) ; \bar \Phi] = \int_{\bar \Phi}^{\Phi(\beta,r_0)} \delta \mathcal Q_{\xi_{as}}$ of the solution $\Phi(\beta,r_0)$ with respect to the background $\bar \Phi$, we get the result $$\begin{aligned} \mathcal Q_{\xi_{as}}[\Phi(\beta,r_0) ; \bar \Phi] = L(x^-) \mathcal H - N(x^-) \mathcal N + \frac{ \mathcal N }{\text{Vol}_d}\int d^d x (\vec{x} . \vec{X}'(x^-) + \frac{1}{4} \vec{x}^2 L''(x^-)) \,. 
\label{chargesr0beta} \end{aligned}$$ In particular, we get $$\begin{aligned} \mathcal D &=& \mathcal Q_{2 \hat L_{0}} = -2 x^- \mathcal H , \\ \mathcal C &=& \mathcal Q_{-\sqrt{2} \hat L_{1}} = (x^-)^2 \mathcal H + \frac{\mathcal N}{\text{Vol}_d} \int d^d x \frac{1}{2} \vec{x}^2 , \\ \mathcal K_i &=& \mathcal Q_{-2^{1/4} \hat X^i_{1/2}} = \frac{\mathcal N}{\text{Vol}_d} \int d^d x x^i .\end{aligned}$$ The dilatations and special conformal transformations are explicitly $x^-$-dependent; their charges are therefore not manifestly conserved in time. We will however come back to the issue of conservation once the Dirac bracket of charges has been introduced. Let us now study whether the above charges represent the algebra [(\[Alg3\])]{} (with $z=2$). We should keep in mind that, to have finite charges, we need to introduce a regulator, i.e. a finite box of integration $\int d^d x$. An important point is that the box is *not* invariant under all the candidate asymptotic Killing vectors [(\[candidakv\])]{} with a non-vanishing spatial component $\xi^i$, $i=1 \dots d$. Since the domain of integration is part of the data determining how to compute the charges, this could mean that the candidate asymptotic Killing vectors modifying the location of the box should be removed from the asymptotic algebra. However, the regulator looks more like a technical obstacle than a physical limitation. Let us imagine that one could find a gravity dual to a NRCFT with fields (including the Hamiltonian density and particle number density) falling off at spatial infinity $x^i \rightarrow \pm \infty$ instead of remaining constant. The system would be finitely extended and the charges associated with asymptotic symmetries would then be well defined. 
All these asymptotic symmetries would be interpreted in the dual picture as global symmetries of the boundary theory which are not preserved by particular solutions of that theory but which map a solution to another one with a transformed Hamiltonian and particle number density. Note also that we do not expect the algebra to be centrally extended. As shown in [@Unterberger:2007hd], a central extension could only appear in the commutation relation of the Virasoro generators. Now, the central extension in three-dimensional AdS spacetime for example [@Brown:1986nw] is possible because the Virasoro modes are expanded in exponentials depending on an angular coordinate. The central term is given by the integral of some function of the Virasoro modes in this angular coordinate, which leads to a Kronecker delta $\delta_{m+n,0}$ originating from the orthogonality relations of the exponentials. In our case, since the modes $\hat L_m$ are polynomials in $x^-$, a variable that we do not integrate over, it is impossible to obtain a central term of the required form proportional to a Kronecker delta $\delta_{m+n,0}$. Therefore, the central term has to vanish. In order to define the action of symmetries on the other generators, including the ones which change the shape of the box of integration, we define the following Dirac bracket $$\{ \mathcal Q^{\text{box}}_{\xi_1}[\Phi; \bar\Phi] , \mathcal Q^{\text{box}}_{\xi_2}[\Phi; \bar\Phi] \} := \delta^{\Phi}_{\xi_2} \mathcal Q^{\text{box}}_{\xi_1}[\Phi; \bar\Phi]+\delta^{\text{box}}_{\xi_2} \mathcal Q^{\text{box}}_{\xi_1}[\Phi; \bar\Phi] , \label{modifiedPB}$$ where the first term is the usual Dirac bracket involving the variation of the fields, while the second term $$\delta^{\text{box}}_{\xi_2} \mathcal Q^{\text{box}}_{\xi_1}[\Phi; \bar\Phi] := \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}\left( \mathcal Q^{\text{box}(x-\epsilon \xi_2)}_{\xi_1}[\Phi; \bar\Phi]-\mathcal Q^{\text{box}(x)}_{\xi_1}[\Phi; \bar\Phi] \right) \label{deltabox}$$ accounts for the variation of the regulator. Here, we consider the box as some mapping of $S^1 \times S^d$ (with coordinates $y^+,y^i$) to the manifold parameterized by some functions $x^\mu(y^+,y^i)$. 
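The role of the angular integration in producing a Kronecker delta, and its absence for polynomial modes, can be illustrated with an elementary sympy computation (a toy illustration of the argument above, not a computation of gravitational charges):

```python
import sympy as sp

theta, x = sp.symbols('theta x', real=True)

# On a circle, products of Fourier modes integrate to a Kronecker delta:
# this is the pairing that produces the delta_{m+n,0} in a central term.
def circle_pairing(m, n):
    return sp.integrate(sp.exp(sp.I*(m + n)*theta), (theta, 0, 2*sp.pi))

assert circle_pairing(2, -2) == 2*sp.pi   # m + n = 0 survives
assert circle_pairing(2, -1) == 0         # m + n != 0 drops out
assert circle_pairing(3, 1) == 0

# For polynomial modes L_m ~ x^{m+1} there is no integration over x and no
# orthogonality: the product is again a polynomial, never a delta_{m+n,0}.
prod = sp.expand((-x**(2 + 1)) * (-x**(-2 + 1)))
assert prod == x**2   # a polynomial in x, not concentrated on m + n = 0
```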
Using these definitions, one gets the expected results $$\begin{aligned} \{ X^i_m,X^j_n \} &=& (m-n)\, N_{m+n}\, \delta^{ij} ,\\ \{ N_m,L_n \}&=& m\, N_{m+n} , \label{aas}\\ \{ X^i_m ,L_n \} &=& \left(m-\frac{n}{2}\right) X^i_{m+n} \label{aas2},\end{aligned}$$ while the individual contributions in [(\[modifiedPB\])]{} are not anti-symmetric under the exchange of $\xi_1$ and $\xi_2$ and thus do not make any sense by themselves. However, we also get the unexpected expressions $$\begin{aligned} \{ L_m , L_n \} &=& (m-n)\, L_{m+n} - \int d^d x \, L_m L_n''' ,\\ \{ L_m,N_n \} &=& 0,\label{anomaly}\\ \{ L_m,X^i_n \} &=& X^i_{m+n} \label{anomaly2} ,\end{aligned}$$ which show that the Dirac bracket as defined in [(\[modifiedPB\])]{} does not make sense in general since it is not anti-symmetric. However, for the charges associated with the exact Killing vectors, it is easy to check that this Dirac bracket is well defined and reproduces the algebra of exact symmetry generators. Note that we have to take into account the effect coming from the variation of the box for the exact charges to be correctly represented. The Dirac bracket could be “anti-symmetrized” by definition but it would not help since the average between e.g. the correct right-hand side in [(\[aas2\])]{} and the incorrect right-hand side of [(\[anomaly2\])]{} would not be isomorphic to the algebra of generators. Using the definition of the modified Dirac bracket, let us now notice that even though $\mathcal D$ and $\mathcal C$ are time dependent, their total time derivatives $$\begin{aligned} \frac{D}{D x^-}\mathcal D = \frac{{\partial}}{{\partial}x^-}\mathcal D + \{ \mathcal D,\mathcal H \} = -2\mathcal H+2\mathcal H = 0,\\ \frac{D}{D x^-}\mathcal C = \frac{{\partial}}{{\partial}x^-}\mathcal C + \{ \mathcal C,\mathcal H \} = 2 x^- \mathcal H + \mathcal D = 0\end{aligned}$$ vanish, as they should. 
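The same pattern of conservation for explicitly time-dependent charges can be illustrated on a toy classical realization of $\mathcal H$, $\mathcal D$ and $\mathcal C$ on a one-particle phase space (an illustration only: the actual charges above are functionals of the metric, and sign conventions differ):

```python
import sympy as sp

t, x, p = sp.symbols('t x p', real=True)

def pb(F, G):
    """Poisson bracket on the one-particle phase space (x, p)."""
    return sp.diff(F, x)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, x)

# Free-particle analogues of the Hamiltonian, dilatation and special
# conformal charges; D and C depend explicitly on the time t.
H = p**2/2
D = 2*t*H - x*p
C = t**2*H - t*x*p + x**2/2

def total_dt(F):
    """Total time derivative: explicit t-dependence plus bracket with H."""
    return sp.simplify(sp.diff(F, t) + pb(F, H))

assert total_dt(H) == 0
assert total_dt(D) == 0   # explicit 2H cancels against {D, H}
assert total_dt(C) == 0   # explicit D cancels against {C, H}
```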
Expanding in modes as in , we can also check that all Schrödinger charges $\mathcal L_n$, $n=-1,0,1$, $\mathcal N_0$, $\mathcal X^i_n$, $n=\pm\frac 1 2$ associated with asymptotic vectors with non-zero $L_n$, $N_n$ and $X_n^i$ respectively are totally conserved, $$\begin{aligned} \frac{D}{D x^-}\mathcal L_n = 0 ,\qquad \frac{D}{D x^-}\mathcal N_n = 0 ,\qquad \frac{D}{D x^-}\mathcal X^i_n = 0.\end{aligned}$$ This conservation property is familiar from the $AdS_3$ example in Einstein gravity [@Brown:1986nw], where even though the Virasoro charges depend explicitly on time, they are totally conserved because the symplectic flux at the boundary is zero. However, contrary to the $AdS_3$ example, the total derivative of the generators of the infinite-dimensional extension is not defined because the Dirac bracket is not defined. At this point, we can summarize the discussion as follows: only the exact symmetries of the background, the Schrödinger algebra $\mathfrak{sch}_2(d)$, are associated with well-defined charges on our pre-phase space, provided that we introduce a regulator. ### \[z2fullphasespace\]Restricted phase space for $z=2$ The set of candidate asymptotic symmetries has been reduced to the set of exact Killing vectors. Let us now act with finite diffeomorphisms of parameter $p$ associated with any Schrödinger generator on the black brane solutions, and check that finiteness, conservation and integrability hold for these new solutions $\Phi[\beta,r_0,p] \equiv (g[\beta,r_0,p],A[\beta,r_0,p],\phi[\beta,r_0,p])$ as well. We first focus on diffeomorphisms associated with the candidate asymptotic Killing vector [(\[candidakv\])]{} with $L(x^-)=0$; we will specialize to the modes corresponding to exact Killing vectors only afterwards. The vector field $$\begin{aligned} \xi_{as}(\vec{\mathfrak{X}}(x^-),\mathfrak{N}(x^-)) = (\mathfrak{N}(x^-) - \vec{x} . 
\vec{\mathfrak{X}}^\prime(x^-))\partial_+ + \mathfrak{X}^i(x^-){\partial}_i \label{diffeoXN}\end{aligned}$$ generates the following active finite diffeomorphism of parameter $p$, $$\begin{aligned} x^i &\rightarrow & x^i + p \mathfrak{X}^i(x^-),\qquad r \rightarrow r,\qquad x^- \rightarrow x^- ,\no\\ x^+ &\rightarrow & x^+ +p(\mathfrak{N}(x^-) - \vec{x} . \vec{\mathfrak{X}}^\prime(x^-))-\frac{p^2}{2} \vec{\mathfrak{X}}(x^-) . \vec{\mathfrak{X}'}(x^-).\label{diff0}\end{aligned}$$ The integrability conditions $$\begin{aligned} I \equiv \int_S \delta^{(2)}_{r_0,\beta,p} k_{ \xi_{as}(L(x^-),N(x^-))}[\delta^{(1)}_{r_0,\beta,p} \Phi(r_0,\beta,p) ; \Phi(r_0,\beta,p)] - ((1) \leftrightarrow (2) ) = 0\end{aligned}$$ should hold for all asymptotic symmetries $\xi_{as}(L(x^-),N(x^-))$ (see eq. ) of interest. We get that $$\begin{aligned} I = \frac{1}{\text{Vol}_d} \int d^dx \left( \mathfrak{N}'(x^-) - (\vec{x} + p \vec{\mathfrak{X}}(x^-)). \vec{\mathfrak{X}}''(x^-) \right)\no \\ \times \left( \delta^{(1)}(\frac{\mathcal N}{2\pi x_0^+}) \delta^{(2)}p - [(1) \leftrightarrow (2)] \right) L(x^-), \label{nonint1}\end{aligned}$$ where $\mathcal N$ is the particle number depending on $\beta$ and $r_0$ given in [(\[valueN\])]{}. For the modes corresponding to exact symmetries, ${\mathfrak N}=1$ ($x^+$ translation), $\mathfrak{X}^i=-2^{-1/4} x^-$ (boosts) and $\mathfrak{X}^i=-2^{1/4}$ ($x^i$ translations), the integrability condition $I=0$ is fulfilled. Let us also compute the integrability condition for the diffeomorphisms associated with a non-zero $L(x^-)$. We focus on a particular mode of $L(x^-)$: $\hat L_n(x^-)= -2^{-n/2}(x^-)^{n+1}$ and will specialize to the exact modes only afterwards. 
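Before specializing, one can check that [(\[diff0\])]{} is indeed the flow of the vector field [(\[diffeoXN\])]{}: differentiating the finite transformation with respect to $p$ reproduces the components of $\xi_{as}$ evaluated along the flow. A sympy sketch for $d=1$, with the hypothetical function names `N` and `X` standing for $\mathfrak N$ and $\mathfrak X^1$:

```python
import sympy as sp

p, xm, xp0, x10 = sp.symbols('p xminus xplus0 x10', real=True)
N = sp.Function('N')    # stands for \mathfrak{N}(x^-)
X = sp.Function('X')    # stands for \mathfrak{X}^1(x^-)

# Finite transformation (diff0), restricted to one transverse direction;
# x^- and r are inert under this flow.
x1_p = x10 + p*X(xm)
xp_p = xp0 + p*(N(xm) - x10*sp.diff(X(xm), xm)) \
           - p**2/2*X(xm)*sp.diff(X(xm), xm)

# Flow equations dx/dp = xi(x(p)) for xi = (N - x^1 X') d_+ + X d_1,
# with the right-hand side evaluated along the flow:
assert sp.simplify(sp.diff(x1_p, p) - X(xm)) == 0
assert sp.simplify(sp.diff(xp_p, p)
                   - (N(xm) - x1_p*sp.diff(X(xm), xm))) == 0
```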
The finite diffeomorphisms have the form $$\begin{aligned} x^- \rightarrow x^-(1-p n (x^-)^n)^{-1/n}, \qquad x^i &\rightarrow & x^i(1-p n (x^-)^n)^{-(n+1)/(2n)},\no\\ r \rightarrow r(1-p n (x^-)^n)^{-(n+1)/(2n)},\qquad x^+ &\rightarrow & x^+ -\frac{n+1}{4}\frac{r^2+\vec{x}^2}{x^-} \left((1-p n (x^-)^n)^{-1}-1\right),\nonumber \label{diff1}\end{aligned}$$ and we obtain $$\begin{aligned} I = \frac{1}{\text{Vol}_d} \int d^dx \left( n(n^2-1) \frac{\vec{x}^2}{4} (x^-)^{n-2} (1-p n (x^-)^n)^{-\frac{1+7n}{2n}} \right) \no\\ \times \left( \delta^{(1)}(\frac{\mathcal N}{2\pi x_0^+}) \delta^{(2)}p - ((1) \leftrightarrow (2)) \right) L(x^-). \label{nonint2}\end{aligned}$$ This expression vanishes for $n=-1,0,1$. This fact is consistent with the expectation that all the exact symmetries of the background will belong to the asymptotic symmetry algebra. Remark that if we were able to define a modified Dirac bracket that represents correctly all the charges associated with $\xi_{as}$, we would conclude from expressions [(\[nonint1\])]{}-[(\[nonint2\])]{} that we have to fix the number of particles $\mathcal N = \text{constant}$ in order to get integrable charges. On the phase space constructed by acting with Schrödinger diffeomorphisms only, the conserved charges (which are complicated non-linear functions of the metric) are finite (if we introduce a ‘box’), conserved and well represented. The asymptotic symmetry algebra can be summarized as follows: - strictly speaking, the asymptotic algebra is empty since our charges are either zero or infinite; the phase space is therefore also empty; - if we introduce a box, the infinite charges are regulated. We need to restrict the asymptotic symmetry algebra to the exact symmetry algebra in order for the charges to be well represented. 
If we require the box to be invariant under the asymptotic symmetry algebra, we get as asymptotic algebra only the Hamiltonian $\hat L_{-1}$ and the particle number $\hat N_0$ (supplemented by the rotations $M_{ij}$ if we choose the box to be a sphere centered at the origin of the $x$-space); - if we allow the box to be acted upon by other generators, the asymptotic symmetries consist of all exact generators $\mathfrak{sch}_2(d)=\{\hat L_{-1},\hat L_0,\hat L_1, $ $ \hat N_0, \hat X_{-1/2}, \hat X_{1/2}, M_{ij} \}$. The phase space constructed in the previous sections is extremely limited since it contains no bulk excitations. It would be interesting to define boundary conditions including at the same time bulk excitations and Schrödinger asymptotic symmetries. However, given the non-linearities in the asymptotic region, such an analysis would be rather tedious; see however [@Ross:2009ar]. Realization of the asymptotic symmetries on a phase space for $z>2$ : an example \[MBH\] ---------------------------------------------------------------------------------------- The generic family of black brane solutions with $z\neq 2$ in any dimension is not known. We will therefore analyze the case $z>2$ by considering a particular case: $z=3$ in $D=5$. As for the $z=2$ case, we will first compute the charges for a family of black holes depending on two parameters and verify their conservation, finiteness (up to a regulator), integrability and representation through a Dirac bracket. Since the computations are analogous to the ones of the $z=2$ case, the asymptotic algebra will *a priori* not contain any infinite-dimensional extensions of the exact symmetry group of the background. Next we turn to the construction of the entire phase space by acting with finite diffeomorphisms associated with the asymptotic Killing vectors. ### Black holes with $z=3$ in $d=2$ \[solM\] For $z\neq 2$, we do not generically know black hole solutions that asymptote to [(\[Metric\])]{}. 
Nicely though, for the particular case of $z=3$ in $d=2$, the following black hole metric[^5] ds\^2&=& [1 r\^2 f(r)]{} dr\^2 - (dx\^-)\^2 ([f(r)r\^6]{}-[r\^2 r\_+\^4 4 \^2]{} ) + dx\^- dx\^+ ([1+f(r) r\^2]{})+ r\^2 r\_+\^4 \^2 (d x\^+)\^2\ &&+ [ dx\_1\^2 + dx\_2\^2 r\^2]{} , \[bhM\] where $f(r)= 1 -{r_+^4 r^4} $, does asymptote to [(\[Metric\])]{}. It is a solution of the action S &=& d\^[5]{}x ( e\^[-2 ]{} (R - 2 - H\_ H\^) - [1 12]{} F\_F\^)\ && + [2 \^2]{}B F , \[actionM\] where $F = dC $ and $H = dB$. The metric [(\[bhM\])]{} is supported by the following cosmological constant and matter fields B &=& ([1+f(r) r\^4]{} dx\^- + 2 r\_+\^4 \^2 dx\^+) dx\_1 , \[BM\]\ C&=& -2e\^[-]{} [f(r)r\^4]{} dx\^- dx\_2 , \[CM\]\ &=& -10+ , e\^= [ 1 ]{} \[phiM\] .In the coordinates chosen, even though the metric asymptotically approaches the one of [(\[Metric\])]{}, the dilaton and the field $C$ have different values at infinity for different values of $r_+^4 \beta^2$. The fields are therefore not, strictly speaking, asymptotic to the zero-temperature solution. This will make the analysis of asymptotic charges quite subtle. In order to describe a non-relativistic system with a discrete spectrum for the particle number, we should identify the $x^+$ coordinate as $x^+ \sim x^+ + 2\pi x^+_0$. Hence, any transformation which does not depend periodically on $x^+$ is excluded. In particular, the dilatation and all Virasoro generators which are part of the candidate asymptotic Killing vectors cannot be part of the asymptotic symmetries, except the Hamiltonian for which $L'(x^-) = 0$ (in contrast to the $z=2$ case). It turns out that the charge $(D{-}2)$-form associated with the generator ${\partial}_-$ is not integrable in the phase space parameterized by $\beta$ and $r_+$. Therefore, the Hamiltonian cannot be associated with ${\partial}_-$ following standard prescriptions. 
One way to define the Hamiltonian consists in multiplying the generator ${\partial}_-$ by an “integrating factor” $f(r_+,\beta)$ chosen such that the resulting charge is integrable; see [@Barnich:2007bf]. One finds that $f(r_+,\beta)$ has to have the form $$f(r_+,\beta) = r_+^2\, \tilde f(r_+^4 + \beta^2 r_+^8) .$$ A natural choice for $\tilde f$ is to require that the integrating factor goes to 1 when $r_+$ goes to zero. The resulting unique factor is given by $$f(r_+,\beta) = \frac{1}{\sqrt{1+\beta^2 r_+^4}},$$ which is in fact the same expression as the dilaton, which is non-trivial at infinity. The Hamiltonian is then defined as $$\mathcal H \equiv \mathcal Q_{ f(r_+,\beta)\, {\partial}_-}.\label{presc}$$ We will see that this is the correct prescription to obtain an isomorphism between the algebra of asymptotic symmetries and the Dirac bracket algebra[^6]. Setting the charges of the background to zero by convention, the final expressions for the charges associated with the vectors [(\[candidakv\])]{} are given by f(r\_+,)\^[-1]{} \_[ f(r\_+,) \_-]{}&=& [r\_+\^4(1 + r\_+\^4 \^2)\^[3/2]{} 16 G]{}Ê (2 x\^+\_0) \_d ,\ \_[-\_+]{} &= & (2 x\^+\_0) \_d ,\ \_[ X\^i\_n]{}&= &[\_d]{} d\^dx x\^i X\_n\^[i]{}(x\^-) ,\ \_[N\_n]{}&= & - N\_n(x\^-) ,where prime denotes derivative with respect to $x^-$. Note that the charges associated with the translations, $\mathcal X^i_{-1/2}$, and the angular momentum are zero. Using the definition of the Dirac bracket [(\[modifiedPB\])]{}, one obtains $\{ \mathcal H,\mathcal N_n \} = 0$, for all $n \in \mathbb Z$ and $\{ \mathcal H , \mathcal X_n^i\}= 0$, for all $n \in \mathbb Z+\frac{1}{2}$. We see that the isomorphism with the symmetry algebra holds only for the expected generators $\hat N_0$, $\hat X^i_{-1/2}$ and $\hat X^i_{1/2}$, while the representation of the infinite-dimensional generalizations of these generators breaks down, exactly as in the $z=2$ case. One can check that the remaining Dirac brackets have the expected commutation rules. 
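The mechanism of the integrating factor can be made explicit on a toy model in the $(r_+,\beta)$ parameter space, built here (as an assumption, for illustration) from a Hamiltonian proportional to $r_+^4(1+\beta^2 r_+^4)^{3/2}$ and the factor $\mu = (1+\beta^2 r_+^4)^{-1/2}$: dividing the exact differential $d\mathcal H$ by $\mu$ produces a non-closed one-form, and multiplying back by $\mu$ restores integrability.

```python
import sympy as sp

rp, beta = sp.symbols('r_plus beta', positive=True)

# Toy integrable "Hamiltonian" and candidate integrating factor
H = rp**4*(1 + beta**2*rp**4)**sp.Rational(3, 2)
mu = 1/sp.sqrt(1 + beta**2*rp**4)

# One-form A dr_+ + B dbeta modelling a non-integrable charge variation
A = sp.diff(H, rp)/mu
B = sp.diff(H, beta)/mu

# Non-integrability: the one-form is not closed ...
assert sp.simplify(sp.diff(A, beta) - sp.diff(B, rp)) != 0
# ... but after multiplication by the integrating factor it is exact:
assert sp.simplify(sp.diff(mu*A, beta) - sp.diff(mu*B, rp)) == 0
```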
### Restricted phase space for $z=3$, $d=2$ We could act with the finite diffeomorphisms associated with the candidate asymptotic symmetries on the black holes [(\[bhM\])]{} to construct a restricted phase space. According to the analysis done in the previous section, the candidate asymptotic symmetries are reduced to the Galilean algebra and the particle number, $\xi_{cand} = \{ \hat H,\hat N, \hat X^i_{-1/2},\hat X^i_{1/2},\hat M_{12} \}$. It is then straightforward to check that the family obtained by acting with the finite diffeomorphisms associated with these vectors on the black brane solutions is a good phase space: it is invariant under the Galilean algebra together with the particle number, and all charges on the family are finite (up to the regulator), integrable, totally conserved and well represented via the generalized Dirac bracket. Conclusion and discussion \[conclusion\] ======================================== We have studied the representation of asymptotic charges in asymptotically Schrödinger spacetimes. While there exists a consistent infinite-dimensional algebra which extends the Schrödinger algebra in any dimension and for any dynamical exponent $z$, the charges associated with these generators have been shown not to obey a regular Dirac bracket algebra in the sense of Brown-Henneaux. Our derivation proceeded by providing a Lagrangian method to derive the conserved charges of black branes in Schrödinger spacetimes. Since these branes are infinitely extended, they require a cutoff in each spatial direction. The regularized charges then depend on this spatial cutoff, which is not invariant under the whole Schrödinger algebra. A Dirac bracket between two charges including the variation of the cutoff was defined and was shown to represent the asymptotic Schrödinger algebra of symmetries. 
Moreover, the Schrödinger asymptotic charges were shown to be conserved in the sense that the total derivative of the charges, including both the explicit time dependence and the commutator with the Hamiltonian, was shown to be zero. However, none of the proposed generators in the infinite extension of this algebra appeared to have well-defined Dirac brackets on the restricted phase space of black branes, i.e. on the finite-temperature solutions. We thereby concluded that the infinite-dimensional extension is not part of any asymptotic symmetry algebra of a phase space containing these black branes. We can thus argue that non-relativistic systems having a gravity dual will contain fields forming representations of the Schrödinger group, and not of the Schrödinger-Virasoro group. Let us now discuss some extensions and directions for future developments. Our asymptotic analysis is identical if one considers the global coordinates for the Schrödinger metric obtained in [@Blau:2009gd], since that metric only differs from the one we studied by terms becoming subleading at the boundary. Also, since the charges are regulated using a box in all dual spatial directions, one could instead consider an infinitesimal box or, equivalently, charge densities, and the same conclusions would apply. An interesting possibility comes from the Schrödinger spacetime with a spherical spatial boundary described in [@Yamada:2008if]. One could expect that the finite-area spatial boundary would give finite charges without the regulator which introduced all the problems in the representation of the charges[^7]. An infinite-dimensional extension in these backgrounds is therefore not ruled out. The boundary theory would however have to be defined on a sphere, which is rather unusual from the condensed matter perspective. 
The canonical charges associated with the generators of time-translation, translation in the compact null direction and spatial translations were obtained straightforwardly for $z=2$. For $z=3$, however, a subtle manipulation of the conserved charge was necessary in order to define an integrable charge which is still associated with the canonical time and which still represents the algebra of asymptotic symmetries. General results on the equivalence of the Hamiltonian and Lagrangian formalisms, and on the uniqueness of the charges, show that identical results would be obtained in the Hamiltonian framework if one also uses the prescription we introduced for the integration in phase space. We also comment in Appendix \[Lifsec\] on the relationship between our results and another class of gravitational backgrounds relevant to the non-relativistic AdS/CFT correspondence, namely the Lifshitz spacetimes [@Kachru:2008yh]. We show that a candidate infinite-dimensional extension of the Lifshitz symmetry can be defined. However, a regulator and a modified Dirac bracket would again have to be introduced. This can be argued to lead to the same problems as the ones encountered in the Schrödinger case. Another approach to look at gravitational backgrounds dual to NRCFTs with Schrödinger invariance relies on the observation that, in non-relativistic systems, the number or mass operator usually appears as a central element between the translations and boosts instead of being a generator on its own[^8]. Since central extensions cannot appear in the bracket of exact symmetries, one idea would be to look at spacetimes which do not admit the full Schrödinger algebra as an exact symmetry group, but instead realize it as their asymptotic symmetry group. One natural question is to ask whether the Lifshitz spacetimes can realize such a scenario, since they admit translations but not boosts as exact symmetries. 
However, the exact statement is that the central elements can appear only in the Dirac bracket between two asymptotic symmetries [@Brown:1986ed]. This is easily seen by using the anti-symmetry of the central charge, $$\begin{aligned} \cK_{P_i,K_j} = \int_{S^\infty} k_{P_i}[\mathcal L_{K_j}\bar\Phi ; \bar\Phi] = - \int_{S^\infty} k_{K_j}[\mathcal L_{P_i}\bar\Phi ; \bar\Phi] \end{aligned}$$ between the translations and boosts, where $\bar \Phi$ are the fields of the background including the metric. Since the Lifshitz spacetime is translation-invariant, it is not a suitable setting to realize that idea. The only way the number operator could appear as a central element would be to consider a gravity background where both translations and Galilean boosts would be realized as asymptotic isometries. In place of Schrödinger algebras, NRCFTs can be based on Galilean conformal algebras [@Henkel:1997zz; @Lukierski:2005xy]. The proposal of gauge/gravity correspondences based on Galilean conformal algebras has been developed so far using the Newton-Cartan formalism (see e.g. [@Duval:2009vt] and references therein). It has been argued recently in [@Bagchi:2009my] that infinite-dimensional extensions of the asymptotic symmetry group could occur in that context as well. Unfortunately, our charge analysis does not extend straightforwardly to this case since the lack of a regular metric in the bulk would prevent one from using covariant phase space methods to define the conserved charges of the theory, and thus to confirm or refute the proposal. In this paper we focused on spacetimes of dimension strictly greater than $3$, which are conjectured to be dual to field theories living in a positive number of spatial dimensions. However, from the classical asymptotic analysis of AdS spaces of [@Brown:1986nw; @Henneaux:1985ey; @Henneaux:1985tv], it is expected that the three-dimensional background will exhibit specific features with respect to its higher-dimensional counterparts. 
One can show that this is indeed the case: as for $AdS_3$, the asymptotic symmetry algebra becomes infinite-dimensional with completely well-defined charges satisfying a Virasoro algebra. Those results will be presented elsewhere. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Allan Adams, Nicolay Bobev, Gaston Giribet, Sean Hartnoll, Mokhtar Hassaine, Marc Henneaux, Petr Hořava, Veronika Hubeny, Josh Lapan, Alex Maloney, Don Marolf, Charles Melby-Thompson, Michael Mulligan, Mukund Rangamani, Sakura Schafer-Nameki, Philippe Spindel, Andy Strominger and Jérémie Unterberger for fruitful discussions. The work of GC is supported in part by the US National Science Foundation under Grant No. PHY05-55669, and by funds from the University of California. The work of SdB and SD is funded by the European Commission through the grants PIOF-GA-2008-220338 and PIOF-GA-2008-219950 (Home institution: Université Libre de Bruxelles, Service de Physique Théorique et Mathématique, Campus de la Plaine, B-1050 Bruxelles, Belgium). The work of KY was supported in part by the National Science Foundation under Grant No. PHY05-51164 and by the Grant-in-Aid for the Global COE Program “The Next Generation of Physics, Spun from Universality and Emergence” from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. Method to compute conserved charges \[method\] ============================================== In this appendix, we briefly review the formalism of [@Barnich:2001jy; @Barnich:2003xg; @Barnich:2007bf] to compute conserved or asymptotically conserved charges. We will present the method for gravity in $D$ dimensions coupled to one $p$-form and then provide the relevant definitions for a more general Lagrangian including multiple $p$-forms, scalar fields as well as $U(1)$ and gravitational Chern-Simons terms. 
General definitions illustrated on an example {#method1} --------------------------------------------- Let us explain how conserved charges are defined using an example: the Einstein–$p$-form system in $D$ dimensions with the following action, $$I = \frac{1}{16 \pi G} \int \, d^Dx \,\left[ \sqrt{-g}\left( R - \frac{1}{2} \star \mathbf F \wedge \mathbf F\right) \right],\label{action}$$ where $\mathbf F = d \mathbf A$. The gauge parameters of the theory $(\xi, \mathbf\Lambda)$, where $\xi$ generates infinitesimal diffeomorphisms and ${\mathbf\Lambda}$ is the parameter of $U(1)$ gauge transformations, are endowed with the Lie algebra structure $$[(\xi, \mathbf\Lambda),(\xi^\prime, \mathbf\Lambda ^\prime)]_{G} = ([\xi,\xi^\prime],[\mathbf\Lambda, \mathbf\Lambda ^\prime]),\label {eq:Lie}$$ where $[\xi,\xi^\prime]$ is the Lie bracket and $[\mathbf\Lambda, \mathbf\Lambda ^\prime]\, \equiv \cL_\xi \mathbf \Lambda ^\prime - \cL_{\xi^\prime} \mathbf\Lambda $. For compactness, we will denote the fields as $\phi \equiv (g_{\mu\nu}, \mathbf A)$ and the gauge parameters as $f = (\xi^\mu, \mathbf\Lambda)$. For a given field $\phi$, the gauge parameters $f$ satisfying $$\cL_\xi g_{\mu\nu} \approx 0, \qquad \cL_\xi \mathbf A + d \mathbf\Lambda \approx 0, \label{eq:red}$$ where $\approx$ denotes on-shell equality, will be called the exact symmetry parameters of the field configuration $\phi$. Parameters $(\xi, \mathbf\Lambda) \approx 0$ are called trivial symmetry parameters. The set of gauge parameters which satisfy the equations [(\[eq:red\])]{} in an asymptotic region, i.e. such that in some large radius $r$ limit the equations are satisfied at leading order, and which form a Lie algebra, will be called ‘candidate asymptotic symmetries’. The concept of (truly) asymptotic symmetries is defined as the subset of those which are associated with finite, conserved and integrable charges; see the next definitions. 
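The Lie algebra structure [(\[eq:Lie\])]{} can be checked explicitly in a minimal setting: for $p=1$ the gauge parameter $\mathbf\Lambda$ is a scalar, and in one dimension the bracket reads $([\xi,\xi'],\, \xi(\Lambda') - \xi'(\Lambda))$. The sympy sketch below verifies the Jacobi identity for this semi-direct structure (a toy check under these simplifying assumptions, not part of the formalism of the references):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f1, f2, f3, l1, l2, l3 = [sp.Function(n)(x) for n in
                          ('f1', 'f2', 'f3', 'l1', 'l2', 'l3')]

def bracket(a, b):
    """Bracket on pairs (xi, Lambda): xi = f(x) d/dx, Lambda a scalar."""
    (f, lam), (g, mu) = a, b
    lie = f*sp.diff(g, x) - g*sp.diff(f, x)        # [xi, xi']
    gauge = f*sp.diff(mu, x) - g*sp.diff(lam, x)   # L_xi Lambda' - L_xi' Lambda
    return (lie, gauge)

def jacobiator(a, b, c):
    """Cyclic sum [[a,[b,c]] + [b,[c,a]] + [c,[a,b]], componentwise."""
    terms = [bracket(a, bracket(b, c)), bracket(b, bracket(c, a)),
             bracket(c, bracket(a, b))]
    return tuple(sp.expand(sum(t[i] for t in terms)) for i in (0, 1))

# Jacobi identity holds in both the diffeomorphism and the gauge slot
assert jacobiator((f1, l1), (f2, l2), (f3, l3)) == (0, 0)
```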
There exists a canonical algorithm to construct a spacetime $(D{-}2)$-form $$\begin{aligned} \mathbf k_{f} [\delta \phi ; \phi ], \label{oneform}\end{aligned}$$ which is also a one-form in field space (because the expression is linear in $\delta \phi$ and its derivatives) such that the following properties hold: - The conserved quantity associated with any exact symmetry parameter $f$ that provides the difference of charge between the solution $\phi$ and the solution $\phi + \delta \phi$, where $\delta \phi$ obeys the linearized equations of motion, is given by $$\mathcal Q_{f} := \oint_S \mathbf k_{f}[\delta \phi ; \phi] \label{infcharge}$$ and depends only on the homology class of the $(D{-}2)$-surface $S$. As a consequence, the conserved charge is finite and time-independent. One can further show that the conserved charge is unique, i.e. there is a one-to-one correspondence between couples formed by a symmetry parameter and a surface of given homology class, and conserved charges [@Barnich:1994db]. - The quantity associated with a candidate asymptotic symmetry parameter $f$ that provides the difference of charge between the solution $\phi$ and the solution $\phi + \delta \phi$, where $\delta \phi$ obeys the linearized equations of motion, is given by $$\mathcal Q_{f} := \lim_{r \rightarrow \infty} \oint_{S^r} \mathbf k_{f}[\delta \phi ; \phi] . \label{infchargeasympt}$$ This quantity can be infinite and/or not conserved depending on the choice of boundary conditions obeyed by $\phi$ and $\delta \phi$. Given a definition of phase space, one has to discard any candidate asymptotic symmetry which violates the conditions of finiteness and conservation of the charges. - The form [(\[oneform\])]{} is constructed out of the equations of motion and therefore does not depend on boundary terms that may be added to the Lagrangian. Moreover, the form is a linear functional of the equations of motion, and so, of the Lagrangian. One can therefore construct this form by summing up the individual contributions from the different pieces of the Lagrangian. 
Additional properties of the charge form are discussed in [@Barnich:2004uw; @Barnich:2006av]. In the case of the Lagrangian , one gets $$\begin{aligned} \mathbf k_{ \xi,\mathbf\Lambda} [\delta \phi ; \phi ] &=& \mathbf k^{g}_{ \xi}[\delta g;g] + \mathbf k^{\mathbf A}_{ \xi,\mathbf \Lambda}[\delta \phi ; \phi ] , \label{k_tot}\end{aligned}$$ where the gravitational contribution to the charge form is given by [@Abbott:1981ff; @Barnich:2001jy] $$\begin{aligned} \mathbf k^{g}_{ \xi}[\delta g;g] &=& -\delta \mathbf Q^g_{ \xi} -i_{\xi}\mathbf \Theta^g[\delta g] -\mathbf E^g_\cL[\cL_\xi g, \delta g],\label{grav_contrib}\end{aligned}$$ where $$\begin{aligned} \mathbf Q^g_{\xi}&=& \star \Big( \frac{1}{2}(D_\mu\xi_\nu-D_\nu\xi_\mu) dx^\mu \wedge dx^\nu \Big),\label{Komar_term} \\ \mathbf \Theta^g[\delta g]&=&\star \Big( (D^\sigma \delta g_{\mu\sigma}- g^{\alpha\beta} D_\mu \delta g_{\alpha\beta})\,dx^\mu\Big),\\ \mathbf E^g_\cL[\delta_2 g, \delta_1 g] &=& \star \Big( \frac{1}{2}\delta_1 g_{\mu\alpha} g^{\alpha\beta }\delta_2 g_{\beta\nu} dx^\mu \wedge dx^\nu \Big).\end{aligned}$$ The term [(\[Komar\_term\])]{} is the Komar $D-2$ form; the supplementary term $\mathbf E^g_\cL$ with respect to the Iyer-Wald form [@Iyer:1994ys] vanishes for Killing vectors but may be relevant for asymptotic symmetries. In , we define $\delta$ as an operator acting on the fields $\phi$ but not on $\xi$. 
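As a concrete check of these expressions, the following sympy sketch evaluates the $\theta\phi$-component of $\mathbf k^g_{\partial_t}[\delta g; g]$ for the Schwarzschild solution, with $\delta g$ generated by varying the mass parameter $M$; the $\mathbf E^g_\cL$-term drops out since $\partial_t$ is an exact Killing vector. The units ($G=1$), the overall $1/16\pi$ normalization inherited from the action, and the orientation conventions for the Hodge star are choices made here for the sketch, not taken from the paper. The surface integral comes out to $\delta M$ at any radius, illustrating that the charge depends only on the homology class of $S$.

```python
import sympy as sp

t, r, th, ph, M, dM = sp.symbols('t r theta phi M deltaM', positive=True)
coords = [t, r, th, ph]
f = 1 - 2*M/r

# Schwarzschild metric and its linearized variation under M -> M + deltaM
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()
h = sp.diag(*[sp.diff(g[i, i], M)*dM for i in range(4)])

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(ginv[a, d]*(sp.diff(g[d, b], coords[c]) + sp.diff(g[d, c], coords[b])
             - sp.diff(g[b, c], coords[d]))/2 for d in range(4))
         for c in range(4)] for b in range(4)] for a in range(4)]

def cov_d(h, lam, mu, nu):
    """D_lam h_{mu nu} for a (0,2) tensor."""
    return (sp.diff(h[mu, nu], coords[lam])
            - sum(Gam[a][lam][mu]*h[a, nu] + Gam[a][lam][nu]*h[mu, a]
                  for a in range(4)))

xi_low = [g[mu, 0] for mu in range(4)]  # xi = d/dt with the index lowered

# Komar-type term: for this diagonal metric, star(dr ^ dt) = r^2 sin(th) dth ^ dph,
# so only the (r,t) component of D_[mu] xi_[nu] feeds the theta-phi component of Q.
D_r_xi_t = sp.diff(xi_low[0], r) - sum(Gam[a][1][0]*xi_low[a] for a in range(4))
D_t_xi_r = sp.diff(xi_low[1], t) - sum(Gam[a][0][1]*xi_low[a] for a in range(4))
Q_thph = sp.simplify((D_r_xi_t - D_t_xi_r) * r**2 * sp.sin(th))

# Theta-term: only V_r enters, via i_xi star(V_mu dx^mu) -> -f r^2 sin(th) V_r dth dph
trace_h = sum(ginv[a, b]*h[a, b] for a in range(4) for b in range(4))
V_r = sum(ginv[s, l]*cov_d(h, l, 1, s) for s in range(4) for l in range(4)) \
      - sp.diff(trace_h, r)
ixTheta_thph = sp.simplify(-f * r**2 * sp.sin(th) * V_r)

# E-term vanishes (xi is an exact Killing vector). Assemble k and integrate over S:
k_thph = sp.simplify(-sp.diff(Q_thph, M)*dM - ixTheta_thph)
charge = sp.integrate(k_thph, (th, 0, sp.pi), (ph, 0, 2*sp.pi)) / (16*sp.pi)
print(sp.simplify(charge))  # -> deltaM, independently of the radius r of S
```

With these conventions the Komar piece and the $i_\xi\mathbf\Theta^g$ piece each contribute $\delta M/2$ to the surface integral, and the $r$-dependence cancels exactly between them.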
The $p$-form contribution to the charge form is given by [@Compere:2007vx] $$\mathbf k^{\mathbf A}_{ \xi,\mathbf\Lambda}[\delta \phi ; \phi ]=- \delta \mathbf Q^{\mathbf A}_{\xi,\mathbf\Lambda} + i_\xi \mathbf \Theta^{\mathbf A}-\mathbf E^{\mathbf A}_\cL[\cL_\xi \mathbf A+d \mathbf\Lambda,\delta \mathbf A] \label{Bcharge}$$ with $$\begin{aligned} & &\mathbf Q^{\mathbf A}_{\xi,\mathbf\Lambda} = (i_\xi \mathbf A + \mathbf\Lambda) \wedge \star \mathbf F \label{QA} ,\qquad \mathbf \Theta^{\mathbf A} = \delta \mathbf A \wedge \star \mathbf F,\label{ThetaA}\\ & &\mathbf E^{\mathbf A}_\cL[\delta_2 \mathbf A,\delta_1 \mathbf A] = \star \big( \frac{1}{2}\frac{1}{(p-1)!}\delta_1 \mathbf A_{\mu\alpha_1\cdots \alpha_{p-1}} \delta_2 \mathbf A_{\nu}^{\;\,\,\alpha_1\cdots \alpha_{p-1}} dx^\mu\wedge dx^\nu \big).\end{aligned}$$ The set of fields $\phi$, $\delta \phi$ and gauge parameters $(\xi,\mathbf\Lambda)$ satisfying the conditions $$\begin{aligned} \oint_{S} \delta_1 \mathbf k_{f}[\delta_2 \phi;\phi] - (1 \leftrightarrow 2)= 0, \label{cond_int} \\ \oint_{S} \mathbf E_\cL[\delta_1 \phi, \delta_2 \phi] - (1 \leftrightarrow 2) = 0,\label{cond_alg}\end{aligned}$$ defines a space of fields and parameters which we denote as the integrable space $\cI$. In this space, we define the charge difference between the reference field $\bar \phi$ and the field $\phi$ associated with $f= (\xi,\mathbf\Lambda)$ as $$\cQ_{(\xi,\mathbf\Lambda)}[\phi,\bar \phi] = \oint_{S} \int_ \gamma \mathbf k_{(\xi,\mathbf\Lambda)}[\delta \phi;\phi] + \cN_{(\xi,\mathbf \Lambda)}[\bar \phi],$$ where $\gamma$ is a path in field space contained in $\cI$ and $\cN_ {(\xi,\mathbf\Lambda)}[\bar \phi]$ is an arbitrary normalization constant. The condition  ensures that the charge is independent of smooth deformations of the path $\gamma$. 
The condition [(\[cond\_alg\])]{} is a technical assumption needed for the representation theorem, see below. A candidate asymptotic symmetry $f[\phi]$ will be called an asymptotic symmetry of a given phase space at $\phi$ if the conserved charges associated to $f[\phi]$ around $\phi$ are all finite, conserved and integrable. Let us denote as $\cA$ the largest algebra of asymptotic symmetries $f[\phi] = (\xi[g,\mathbf A],\mathbf \Lambda[g,\mathbf A])$ such that for each field $\phi$ in the phase space the set of parameters $f[\phi]$ forms a closed Lie algebra under the bracket defined in and such that all these algebras are isomorphic. Using the conditions -, one can then show that for any solutions $\bar \phi$ and $\phi$ in the integrable space, and for any $(\xi,\mathbf\Lambda)$, $(\xi^\prime, \mathbf\Lambda^ \prime)$ in $\cA$, the Dirac bracket defined by $$\left\{ \cQ_{(\xi,\mathbf\Lambda)}[\phi,\bar \phi], \cQ_{(\xi^\prime,\mathbf\Lambda^\prime)}[\phi,\bar \phi] \right\} \equiv \oint_{S^\infty} \mathbf k_{(\xi, \mathbf\Lambda)}[(\cL_{\xi^\prime} g_{\mu\nu},\cL_{\xi^\prime} \mathbf A + d \mathbf\Lambda^\prime);\phi] \label{poissonbracket}$$ can be written as $$\left\{ \cQ_{(\xi,\mathbf\Lambda)}[\phi,\bar \phi], \cQ_{(\xi^\prime,\mathbf\Lambda^\prime)}[\phi,\bar \phi] \right\} = \cQ_{[(\xi,\mathbf\Lambda),(\xi^\prime,\mathbf\Lambda^\prime)]_{G}} [\phi,\bar \phi] - \cN_{[(\xi,\mathbf\Lambda),(\xi^\prime,\mathbf\Lambda^\prime)]_{G}} [\bar \phi] + \cK_{(\xi,\mathbf\Lambda),(\xi^\prime,\mathbf\Lambda^ \prime)}[\bar \phi],\label{formula}$$ where $$\begin{aligned} \cK_{(\xi,\mathbf\Lambda),(\xi^\prime,\mathbf\Lambda^\prime)}[\bar \phi] = \oint_{S^\infty} \mathbf k_{(\xi, \mathbf\Lambda)}[(\cL_{\xi^\prime} \bar g_{\mu\nu},\cL_{\xi^\prime} \bar{\mathbf A} + d \mathbf\Lambda^\prime);\bar \phi] \label{eq:cc}\end{aligned}$$ is a central extension which is considered trivial if it can be reabsorbed in the normalization of the charges $\cN_{[(\xi, \mathbf\Lambda),(\xi^\prime,\mathbf\Lambda^\prime)]_{G}}[\bar
\phi]$. Charge form for a more general Lagrangian ----------------------------------------- For a general action with $r$ scalar fields $\overrightarrow{\chi} = \{ \chi_1, \ldots, \chi_r \}$ and any number of $p$-form fields, $$I = \frac{1}{16 \pi G} \int \, \left( R \, \star {\oneone} - \frac{1}{2} \star d \overrightarrow{\chi} \wedge d \overrightarrow {\chi} - \frac{1}{2} \sum_a e^{-\overrightarrow{\alpha_a}. \overrightarrow \chi } \star \mathbf F^a \wedge \mathbf F^a \right) ,\label{gaction}$$ the charge form is given in terms of the building blocks defined in section \[method1\] as $$\begin{aligned} \mathbf k_{\xi,\mathbf\Lambda^{a}} [\delta\phi; \phi] &=& \mathbf k^{g}_{\xi}[\delta g;g] + \sum_a e^{-\overrightarrow{\alpha_a}. \overrightarrow \chi}\, \mathbf k^{\mathbf A^a}_{\xi, \mathbf\Lambda^{a}}[\delta\phi; \phi] + \sum_i \mathbf k^{\chi^i}_{\xi} [\delta\phi;\phi] \nonumber \\ && +\, \sum_{a} \mathbf k^{\mathbf A_a \, \rm suppl}_{\xi, \mathbf\Lambda_a}[\delta\phi;\phi] , \label{k_totg}\end{aligned}$$ where $$\begin{aligned} \mathbf k^{\chi^i}_{\xi}[\delta\phi;\phi] &=& i_\xi\big( \star\, d \chi^i \, \delta\chi^i \big) \qquad \forall\, i ,\\ \mathbf k^{\mathbf A_a \,\rm suppl}_{\xi, \mathbf\Lambda_a}[\delta\phi;\phi] &=& (\overrightarrow{\alpha_a} . \delta\overrightarrow{\chi})\, e^{- \overrightarrow{\alpha_a} . \overrightarrow\chi }\, {\cal Q}^{\mathbf A_a}_{\xi, \mathbf\Lambda_a} .\end{aligned}$$ The last contribution can be understood from the fact that the charge form of the $p$-forms will have an expression similar to with a Komar term ${\cal Q}^{\mathbf A^a}_{\xi , \mathbf\Lambda^a}$ including the factor $e^{-\overrightarrow{\alpha_a}. \overrightarrow \chi }$. In section \[solM\], we compute charges for a solution of a five dimensional theory with a Chern-Simons term [(\[actionM\])]{} of the following form $$I_{CS} = \int \mathbf B \wedge d \mathbf C.$$ One can compute the corresponding contribution to the charge form and one gets $$\mathbf k^{CS}[\delta\phi;\phi] = \delta \mathbf B \wedge i_\xi \mathbf C - i_\xi \mathbf B \wedge \delta \mathbf C .$$ Note also that in the string frame $I = \frac{1}{16 \pi G} \int \, \left( e^{- 2 \chi} R \, \star {\oneone} - {1 \over 2} \star d \chi \wedge d \chi + ... \right) $, the gravitational contribution to the charge form is modified to $$\mathbf k^{g\; \rm string\; frame}_{\xi}[\delta\phi;\phi] = e^{- 2 \chi}\, \mathbf k^{g}_{\xi}[\delta\phi;\phi] - \delta\big(e^{- 2 \chi}\big)\, \mathbf Q^g_{\xi} + (\ldots) ,$$ where the last terms are proportional to at least one derivative of the dilaton and thus vanish if the dilaton is constant. 
They play no role for the solutions of interest in this paper. Candidate asymptotic symmetries for Lifshitz spacetimes {#Lifsec} ======================================================= Gravity duals to non-relativistic systems governed by Lifshitz symmetry have also been considered [@Kachru:2008yh]. The zero-temperature background $$ds^2 = \frac{dr^2}{r^2} - r^{-2z}\, dt^2 + r^{-2}\, dx^i dx^i \qquad (i = 1,\ldots, d)$$ can be described formally as the Kaluza-Klein reduction along the null direction $x^+$ of the background . The kinematic analysis of these spacetimes will therefore be very similar to the one performed in the main text. However, since the theory describing Lifshitz spacetimes is different from the one describing Schrödinger spacetimes, the analyses of conserved charges will differ. The lack of a Null Melvin Twist procedure will also prevent one from using correspondences with AdS to derive the conserved charges. Since the only black hole solutions known so far are numerical [@Danielsson:2009gi; @Bertoldi:2009vn; @Bertoldi:2009dt], we will not attempt to construct an analytical phase space in this paper and we will limit our discussion to kinematical aspects of the asymptotic symmetries. For simplicity, let us focus on the $d=2$ case. We solved the asymptotic Killing equations up to certain convenient orders and obtained the following candidate asymptotic Killing vectors, $$\begin{aligned} \xi_{asym} &=& \frac{r}{z}\, L^\prime(t)\, \partial_r + L(t)\, \partial_t + \Big(X^1(t) + x^2 M + \frac{x^1}{z}\, L^\prime(t)\Big) \partial_{x^1} \nonumber\\ && +\, \Big(X^2(t) - x^1 M + \frac{x^2}{z}\, L^\prime(t)\Big) \partial_{x^2} .\end{aligned}$$ The exact symmetries are recovered when $L''(t)=0$, $X^{1\, \prime}(t) =0$ and $X^{2\, \prime}(t)=0$. The Hamiltonian corresponds to $L(t) = 1$, the dilations to $L(t) = 2 t$, the $x^1$-translations to $X^1(t) =1$, the $x^2$-translations to $X^2(t)=1$ and the rotations to $M$. 
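As a quick consistency check, one can verify symbolically that these vectors satisfy the asymptotic Killing equations for the $d=2$ Lifshitz metric. The sympy sketch below (conventions and normalizations are choices made here, not the paper's code) shows that for arbitrary $L(t)$, $X^i(t)$ and constant $M$ the components $(\cL_\xi g)_{rr}$, $(\cL_\xi g)_{tt}$ and $(\cL_\xi g)_{x^ix^j}$ vanish identically, while the residual components are proportional to $L''(t)$ and $X^{i\,\prime}(t)$, vanishing precisely for the exact symmetries quoted above.

```python
import sympy as sp

r, t, x1, x2 = sp.symbols('r t x1 x2', positive=True)
z, M = sp.symbols('z M', positive=True)
L, X1, X2 = (sp.Function(s)(t) for s in ('L', 'X1', 'X2'))

coords = [r, t, x1, x2]
# d=2 Lifshitz metric: ds^2 = dr^2/r^2 - r^(-2z) dt^2 + r^(-2)(dx1^2 + dx2^2)
g = sp.diag(1/r**2, -r**(-2*z), 1/r**2, 1/r**2)

Lp = sp.diff(L, t)
# Candidate asymptotic Killing vector xi_asym, component by component:
xi = [r/z*Lp, L, X1 + x2*M + x1/z*Lp, X2 - x1*M + x2/z*Lp]

def lie_g(mu, nu):
    """(Lie_xi g)_{mu nu} = xi^a d_a g_{mu nu} + g_{a nu} d_mu xi^a + g_{mu a} d_nu xi^a."""
    expr = sum(xi[a]*sp.diff(g[mu, nu], coords[a]) for a in range(4))
    expr += sum(g[a, nu]*sp.diff(xi[a], coords[mu]) for a in range(4))
    expr += sum(g[mu, a]*sp.diff(xi[a], coords[nu]) for a in range(4))
    return sp.simplify(expr)

# These components vanish identically, for any L(t), X^i(t) and constant M:
for mu, nu in [(0, 0), (1, 1), (2, 2), (3, 3), (2, 3)]:
    assert lie_g(mu, nu) == 0

# The residual components are pure L'' and X^i' terms:
print(lie_g(0, 1))  # = L''(t)/(z*r): vanishes when L'' = 0
print(lie_g(1, 2))  # = (X1'(t) + x1*L''(t)/z)/r^2: vanishes for exact symmetries
```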
Defining the generators $$\begin{aligned} L_n &=& \xi_{asym}( L(t)= -2^{-n/2}t^{n+1}) \, \, \, \, \text{for } n \in \mathbb Z, \\ X^i_n &=& \xi_{asym}(X^i(t)= -2^{-n/2}t^{n+1/2})\, \, \, \, \, \, \text{for } n \in \mathbb Z + \frac{1}{2},\end{aligned}$$ we obtain the infinite dimensional algebra $$\begin{aligned} i[ L_m , L_n] &=& (m-n)\, L_{m+n} ,\nonumber\\ i[L_m, X^i_n] &=& \Big(\frac{m}{z} - n + \frac{2-z}{2z} \Big) X^i_{m+n} , \label{Lialg}\\ {}[X^i_m, X^j_n] &=& 0 .\nonumber\end{aligned}$$ The asymptotic symmetry algebra of Lifshitz spaces is a truncation of the Schrödinger algebra $\mathfrak{sch}_z(d)$ . The generalization of these candidate asymptotic symmetries to any dimension is straightforward. It is amusing to observe that, for $d=1$ and $z=2$, [(\[Lialg\])]{} is precisely the symmetry algebra of the Burgers equation driven by an external force relevant in turbulence theory (see e.g. [@Ivashkevich:1996et]). On the other hand, the two-dimensional metric ($d=0$) is also a solution of Einstein-Maxwell theory with negative cosmological constant, like $AdS_2$. It might therefore be interesting to see whether the corresponding asymptotic algebras admit central extensions, in the spirit of [@Brown:1986nw; @Hartman:2008dq]. The realization of these symmetries on a phase space will lead to the same kind of difficulties we encountered for the Schrödinger case. Indeed, in order to compute the charges, we will have to integrate over the $x^i$-plane and therefore we will obtain infinite results. The infinite charges could then be regulated by introducing a ‘box’. The Dirac bracket will have to be modified in order to accommodate the action of generators on the regulator. We therefore expect that the infinite dimensional algebra will not be realized if we follow the same strategy as the one presented in this paper for the Schrödinger case. [10]{} J. M. Maldacena, [*[The large N limit of superconformal field theories and supergravity]{}*]{}, [*Adv. Theor. Math. 
Phys.*]{} [**2**]{} (1998) 231–252 \[[[hep-th/9711200]{}](http://arXiv.org/abs/hep-th/9711200)\]. S. S. Gubser, I. R. Klebanov and A. M. Polyakov, [*[Gauge theory correlators from non-critical string theory]{}*]{}, [*Phys. Lett.*]{} [**B428**]{} (1998) 105–114 \[[[hep-th/9802109]{}](http://arXiv.org/abs/hep-th/9802109)\]. E. Witten, [*[Anti-de Sitter space and holography]{}*]{}, [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 253–291 \[[[hep-th/9802150]{}](http://arXiv.org/abs/hep-th/9802150)\]. S. S. Gubser, [*[Breaking an Abelian gauge symmetry near a black hole horizon]{}*]{}, [*Phys. Rev.*]{} [**D78**]{} (2008) 065034 \[[[0801.2977]{}](http://arXiv.org/abs/0801.2977)\]. S. A. Hartnoll, C. P. Herzog and G. T. Horowitz, [*[Building a Holographic Superconductor]{}*]{}, [*Phys. Rev. Lett.*]{} [**101**]{} (2008) 031601 \[[[0803.3295]{}](http://arXiv.org/abs/0803.3295)\]. E. Keski-Vakkuri and P. Kraus, [*[Quantum Hall Effect in AdS/CFT]{}*]{}, [ *JHEP*]{} [**09**]{} (2008) 130 \[[[ 0805.4643]{}](http://arXiv.org/abs/0805.4643)\]. J. L. Davis, P. Kraus and A. Shah, [*[Gravity Dual of a Quantum Hall Plateau Transition]{}*]{}, [*JHEP*]{} [**11**]{} (2008) 020 \[[[0809.1876]{}](http://arXiv.org/abs/0809.1876)\]. M. Fujita, W. Li, S. Ryu and T. Takayanagi, [*[Fractional Quantum Hall Effect via Holography: Chern- Simons, Edge States, and Hierarchy]{}*]{}, [[0901.0924]{}](http://arXiv.org/abs/0901.0924). C. R. Hagen, [*[Scale and conformal transformations in galilean-covariant field theory]{}*]{}, [*Phys. Rev.*]{} [**D5**]{} (1972) 377–388. U. Niederer, [*[The maximal kinematical invariance group of the free Schrodinger equation]{}*]{}, [*Helv. Phys. Acta*]{} [**45**]{} (1972) 802–810. M. Henkel, [*[Schrodinger invariance in strongly anisotropic critical systems]{}*]{}, [*J. Stat. Phys.*]{} [**75**]{} (1994) 1023–1061 \[[[hep-th/9310081]{}](http://arXiv.org/abs/hep-th/9310081)\]. T. Mehen, I. W. Stewart and M. B. 
Wise, [*[Conformal invariance for non-relativistic field theory]{}*]{}, [*Phys. Lett.*]{} [**B474**]{} (2000) 145–152 \[[[hep-th/9910025]{}](http://arXiv.org/abs/hep-th/9910025)\]. D. T. Son and M. Wingate, [*[General coordinate invariance and conformal invariance in nonrelativistic physics: Unitary Fermi gas]{}*]{}, [*Annals Phys.*]{} [**321**]{} (2006) 197–224 \[[[cond-mat/0509786]{}](http://arXiv.org/abs/cond-mat/0509786)\]. Y. Nishida and D. T. Son, [*[Nonrelativistic conformal field theories]{}*]{}, [*Phys. Rev.*]{} [**D76**]{} (2007) 086004 \[[[0706.3746]{}](http://arXiv.org/abs/0706.3746)\]. N. Bobev, A. Kundu and K. Pilch, [*[Supersymmetric IIB Solutions with Schrödinger Symmetry]{}*]{}, [*JHEP*]{} [**07**]{} (2009) 107 \[[[0905.0673]{}](http://arXiv.org/abs/0905.0673)\]. A. Donos and J. P. Gauntlett, [*[Solutions of type IIB and D=11 supergravity with Schrodinger(z) symmetry]{}*]{}, [*JHEP*]{} [**07**]{} (2009) 042 \[[[0905.1098]{}](http://arXiv.org/abs/0905.1098)\]. D. T. Son, [*[Toward an AdS/cold atoms correspondence: a geometric realization of the Schroedinger symmetry]{}*]{}, [*Phys. Rev.*]{} [**D78**]{} (2008) 046003 \[[[0804.3972]{}](http://arXiv.org/abs/0804.3972)\]. K. Balasubramanian and J. McGreevy, [*[Gravity duals for non-relativistic CFTs]{}*]{}, [*Phys. Rev. Lett.*]{} [**101**]{} (2008) 061601 \[[[0804.4053]{}](http://arXiv.org/abs/0804.4053)\]. C. Duval, G. W. Gibbons and P. Horvathy, [*[Celestial Mechanics, Conformal Structures, and Gravitational Waves]{}*]{}, [*Phys. Rev.*]{} [**D43**]{} (1991) 3907–3922 \[[[ hep-th/0512188]{}](http://arXiv.org/abs/hep-th/0512188)\]. C. Duval, M. Hassaine and P. A. Horvathy, [*[The geometry of Schrödinger symmetry in gravity background/non-relativistic CFT]{}*]{}, [*Annals Phys.*]{} [**324**]{} (2009) 1158–1167 \[[[ 0809.3128]{}](http://arXiv.org/abs/0809.3128)\]. W. D. 
Goldberger, [*[AdS/CFT duality for non-relativistic field theory]{}*]{}, [*JHEP*]{} [**03**]{} (2009) 069 \[[[ 0806.2867]{}](http://arXiv.org/abs/0806.2867)\]. J. L. F. Barbon and C. A. Fuertes, [*[On the spectrum of nonrelativistic AdS/CFT]{}*]{}, [*JHEP*]{} [**09**]{} (2008) 030 \[[[0806.3244]{}](http://arXiv.org/abs/0806.3244)\]. C. P. Herzog, M. Rangamani and S. F. Ross, [*[Heating up Galilean holography]{}*]{}, [*JHEP*]{} [**11**]{} (2008) 080 \[[[0807.1099]{}](http://arXiv.org/abs/0807.1099)\]. J. Maldacena, D. Martelli and Y. Tachikawa, [*[Comments on string theory backgrounds with non- relativistic conformal symmetry]{}*]{}, [*JHEP*]{} [**10**]{} (2008) 072 \[[[0807.1100]{}](http://arXiv.org/abs/0807.1100)\]. A. Adams, K. Balasubramanian and J. McGreevy, [*[Hot Spacetimes for Cold Atoms]{}*]{}, [*JHEP*]{} [**11**]{} (2008) 059 \[[[0807.1111]{}](http://arXiv.org/abs/0807.1111)\]. P. Kovtun and D. Nickel, [*[Black holes and non-relativistic quantum systems]{}*]{}, [*Phys. Rev. Lett.*]{} [**102**]{} (2009) 011602 \[[[0809.2020]{}](http://arXiv.org/abs/0809.2020)\]. S. A. Hartnoll and K. Yoshida, [*[Families of IIB duals for nonrelativistic CFTs]{}*]{}, [*JHEP*]{} [**12**]{} (2008) 071 \[[[0810.0298]{}](http://arXiv.org/abs/0810.0298)\]. M. Schvellinger, [*[Kerr-AdS black holes and non-relativistic conformal QM theories in diverse dimensions]{}*]{}, [*JHEP*]{} [**12**]{} (2008) 004 \[[[0810.3011]{}](http://arXiv.org/abs/0810.3011)\]. L. Mazzucato, Y. Oz and S. Theisen, [*[Non-relativistic Branes]{}*]{}, [[0810.3673]{}](http://arXiv.org/abs/0810.3673). M. Rangamani, S. F. Ross, D. T. Son and E. G. Thompson, [*[Conformal non-relativistic hydrodynamics from gravity]{}*]{}, [*JHEP*]{} [**01**]{} (2009) 075 \[[[0811.2049]{}](http://arXiv.org/abs/0811.2049)\]. A. Adams, A. Maloney, A. Sinha and S. E. Vazquez, [*[1/N Effects in Non-Relativistic Gauge-Gravity Duality]{}*]{}, [[0812.0166]{}](http://arXiv.org/abs/0812.0166). A. Donos and J. P. 
Gauntlett, [*[Supersymmetric solutions for non-relativistic holography]{}*]{}, [*JHEP*]{} [**03**]{} (2009) 138 \[[[0901.0818]{}](http://arXiv.org/abs/0901.0818)\]. E. O. Colgain and H. Yavartanoo, [*[NR $CFT_3$ duals in M-theory]{}*]{}, [[0904.0588]{}](http://arXiv.org/abs/0904.0588). H. Ooguri and C.-S. Park, [*[Supersymmetric non-relativistic geometries in M-theory]{}*]{}, [[0905.1954]{}](http://arXiv.org/abs/0905.1954). A. Donos and J. P. Gauntlett, [*[Schrodinger invariant solutions of type IIB with enhanced supersymmetry]{}*]{}, [[ 0907.1761]{}](http://arXiv.org/abs/0907.1761). D. Brecher, J. P. Gregory and P. M. Saffin, [*[String theory and the classical stability of plane waves]{}*]{}, [*Phys. Rev.*]{} [**D67**]{} (2003) 045014 \[[[hep-th/0210308]{}](http://arXiv.org/abs/hep-th/0210308)\]. V. E. Hubeny and M. Rangamani, [*[Causal structures of pp-waves]{}*]{}, [ *JHEP*]{} [**12**]{} (2002) 043 \[[[ hep-th/0211195]{}](http://arXiv.org/abs/hep-th/0211195)\]. S. A. Hartnoll, [*[Lectures on holographic methods for condensed matter physics]{}*]{}, [[0903.3246]{}](http://arXiv.org/abs/0903.3246). D. Anninos, W. Li, M. Padi, W. Song and A. Strominger, [*[Warped AdS3 Black Holes]{}*]{}, [[0807.3040]{}](http://arXiv.org/abs/0807.3040). J. D. Brown and M. Henneaux, [*[Central Charges in the Canonical Realization of Asymptotic Symmetries: An Example from Three-Dimensional Gravity]{}*]{}, [ *Commun. Math. Phys.*]{} [**104**]{} (1986) 207–226. A. Strominger, [*[Black hole entropy from near-horizon microstates]{}*]{}, [ *JHEP*]{} [**02**]{} (1998) 009 \[[[ hep-th/9712251]{}](http://arXiv.org/abs/hep-th/9712251)\]. J. M. Maldacena and A. Strominger, [*[AdS(3) black holes and a stringy exclusion principle]{}*]{}, [*JHEP*]{} [**12**]{} (1998) 005 \[[[hep-th/9804085]{}](http://arXiv.org/abs/hep-th/9804085)\]. M. Henneaux and C. Teitelboim, [*[Asymptotically anti-De Sitter Spaces]{}*]{}, [*Commun. Math. Phys.*]{} [**98**]{} (1985) 391–424. M. Alishahiha, R. Fareghbal, A. E. 
Mosaffa and S. Rouhani, [*[Asymptotic symmetry of geometries with Schrodinger isometry]{}*]{}, [[0902.3916]{}](http://arXiv.org/abs/0902.3916). J. Unterberger, [*[On vertex algebra representations of the Schr[o]{}dinger- Virasoro Lie algebra]{}*]{}, [[ cond-mat/0703214]{}](http://arXiv.org/abs/cond-mat/0703214). V. Balasubramanian and P. Kraus, [*[A stress tensor for anti-de Sitter gravity]{}*]{}, [*Commun. Math. Phys.*]{} [**208**]{} (1999) 413–428 \[[[hep-th/9902121]{}](http://arXiv.org/abs/hep-th/9902121)\]. S. de Haro, S. N. Solodukhin and K. Skenderis, [*[Holographic reconstruction of spacetime and renormalization in the AdS/CFT correspondence]{}*]{}, [ *Commun. Math. Phys.*]{} [**217**]{} (2001) 595–622 \[[[hep-th/0002230]{}](http://arXiv.org/abs/hep-th/0002230)\]. D. Martelli and Y. Tachikawa, [*[Comments on Galilean conformal field theories and their geometric realization]{}*]{}, [[0903.5184]{}](http://arXiv.org/abs/0903.5184). T. Regge and C. Teitelboim, [*[Role of Surface Integrals in the Hamiltonian Formulation of General Relativity]{}*]{}, [*Ann. Phys.*]{} [**88**]{} (1974) 286. J. D. Brown and M. Henneaux, [*[ON THE POISSON BRACKETS OF DIFFERENTIABLE GENERATORS IN CLASSICAL FIELD THEORY]{}*]{}, [*J. Math. Phys.*]{} [**27**]{} (1986) 489–491. G. Barnich and F. Brandt, [*[Covariant theory of asymptotic symmetries, conservation laws and central charges]{}*]{}, [*Nucl. Phys.*]{} [**B633**]{} (2002) 3–82 \[[[hep-th/0111246]{}](http://arXiv.org/abs/hep-th/0111246)\]. G. Barnich, [*[Boundary charges in gauge theories: Using Stokes theorem in the bulk]{}*]{}, [*Class. Quant. Grav.*]{} [**20**]{} (2003) 3685–3698 \[[[hep-th/0301039]{}](http://arXiv.org/abs/hep-th/0301039)\]. G. Barnich and G. Compere, [*[Surface charge algebra in gauge theories and thermodynamic integrability]{}*]{}, [*J. Math. Phys.*]{} [**49**]{} (2008) 042901 \[[[0708.2378]{}](http://arXiv.org/abs/0708.2378)\]. S. Hollands, A. Ishibashi and D. 
Marolf, [*[Comparison between various notions of conserved charges in asymptotically AdS-spacetimes]{}*]{}, [*Class. Quant. Grav.*]{} [**22**]{} (2005) 2881–2920 \[[[hep-th/0503045]{}](http://arXiv.org/abs/hep-th/0503045)\]. I. Papadimitriou and K. Skenderis, [*[Thermodynamics of asymptotically locally AdS spacetimes]{}*]{}, [*JHEP*]{} [**08**]{} (2005) 004 \[[[hep-th/0505190]{}](http://arXiv.org/abs/hep-th/0505190)\]. S. F. Ross and O. Saremi, [*[Holographic stress tensor for non-relativistic theories]{}*]{}, [[0907.1846]{}](http://arXiv.org/abs/0907.1846). G. Compere and D. Marolf, [*[Setting the boundary free in AdS/CFT]{}*]{}, [ *Class. Quant. Grav.*]{} [**25**]{} (2008) 195014 \[[[0805.1902]{}](http://arXiv.org/abs/0805.1902)\]. T. Azeyanagi, G. Compere, N. Ogawa, Y. Tachikawa and S. Terashima, [ *[Higher-Derivative Corrections to the Asymptotic Virasoro Symmetry of 4d Extremal Black Holes]{}*]{}, [[ 0903.4176]{}](http://arXiv.org/abs/0903.4176). A. J. Amsel, D. Marolf and M. M. Roberts, [*[On the Stress Tensor of Kerr/CFT]{}*]{}, [[0907.5023]{}](http://arXiv.org/abs/0907.5023). S. Kachru, X. Liu and M. Mulligan, [*[Gravity Duals of Lifshitz-like Fixed Points]{}*]{}, [*Phys. Rev.*]{} [**D78**]{} (2008) 106005 \[[[0808.1725]{}](http://arXiv.org/abs/0808.1725)\]. G. Compere, [*[Symmetries and conservation laws in Lagrangian gauge theories with applications to the mechanics of black holes and to gravity in three dimensions]{}*]{}, [[0708.3153]{}](http://arXiv.org/abs/0708.3153). G. Barnich and G. Compere, [*[Classical central extension for asymptotic symmetries at null infinity in three spacetime dimensions]{}*]{}, [*Class. Quant. Grav.*]{} [**24**]{} (2007) F15 \[[[gr-qc/0610130]{}](http://arXiv.org/abs/gr-qc/0610130)\]. V. E. Hubeny, M. Rangamani and S. F. Ross, [*[Causal structures and holography]{}*]{}, [*JHEP*]{} [**07**]{} (2005) 037 \[[[hep-th/0504034]{}](http://arXiv.org/abs/hep-th/0504034)\]. G. Barnich and G. 
Compere, [*[Conserved charges and thermodynamics of the spinning Goedel black hole]{}*]{}, [*Phys. Rev. Lett.*]{} [**95**]{} (2005) 031302 \[[[hep-th/0501102]{}](http://arXiv.org/abs/hep-th/0501102)\]. E. G. Gimon, A. Hashimoto, V. E. Hubeny, O. Lunin and M. Rangamani, [*[Black strings in asymptotically plane wave geometries]{}*]{}, [*JHEP*]{} [**08**]{} (2003) 035 \[[[ hep-th/0306131]{}](http://arXiv.org/abs/hep-th/0306131)\]. M. Blau, J. Hartong and B. Rollier, [*[Geometry of Schroedinger Space-Times, Global Coordinates, and Harmonic Trapping]{}*]{}, [[0904.3304]{}](http://arXiv.org/abs/0904.3304). D. Yamada, [*[Thermodynamics of Black Holes in Schroedinger Space]{}*]{}, [ *Class. Quant. Grav.*]{} [**26**]{} (2009) 075006 \[[[0809.4928]{}](http://arXiv.org/abs/0809.4928)\]. M. Henkel, [*[Local Scale Invariance and Strongly Anisotropic Equilibrium Critical Systems]{}*]{}, [*Phys. Rev. Lett.*]{} [**78**]{} (1997) 1940–1943. J. Lukierski, P. C. Stichel and W. J. Zakrzewski, [*[Exotic Galilean conformal symmetry and its dynamical realisations]{}*]{}, [*Phys. Lett.*]{} [ **A357**]{} (2006) 1–5 \[[[ hep-th/0511259]{}](http://arXiv.org/abs/hep-th/0511259)\]. C. Duval and P. A. Horvathy, [*[Non-relativistic conformal symmetries and Newton-Cartan structures]{}*]{}, [[ 0904.0531]{}](http://arXiv.org/abs/0904.0531). A. Bagchi and R. Gopakumar, [*[Galilean Conformal Algebras and AdS/CFT]{}*]{}, [*JHEP*]{} [**07**]{} (2009) 037 \[[[ 0902.1385]{}](http://arXiv.org/abs/0902.1385)\]. M. Henneaux, [*[ASYMPTOTICALLY ANTI-DE SITTER UNIVERSES IN D = 3, 4 AND HIGHER DIMENSIONS]{}*]{}, . In \*Rome 1985, Proceedings, General Relativity, Pt. B\*, 959- 966. G. Barnich, F. Brandt and M. Henneaux, [*[Local BRST cohomology in the antifield formalism. 1. General theorems]{}*]{}, [*Commun. Math. Phys.*]{} [ **174**]{} (1995) 57–92 \[[[ hep-th/9405109]{}](http://arXiv.org/abs/hep-th/9405109)\]. G. Barnich and G. 
Compere, [*[Generalized Smarr relation for Kerr AdS black holes from improved surface integrals]{}*]{}, [*Phys. Rev.*]{} [**D71**]{} (2005) 044016 \[[[gr-qc/0412029]{}](http://arXiv.org/abs/gr-qc/0412029)\]. L. F. Abbott and S. Deser, [*Stability of gravity with a cosmological constant*]{}, [*Nucl. Phys.*]{} [**B195**]{} (1982) 76. V. Iyer and R. M. Wald, [*[Some properties of Noether charge and a proposal for dynamical black hole entropy]{}*]{}, [*Phys. Rev.*]{} [**D50**]{} (1994) 846–864 \[[[gr-qc/9403028]{}](http://arXiv.org/abs/gr-qc/9403028)\]. G. Compere, [*[Note on the First Law with p-form potentials]{}*]{}, [*Phys. Rev.*]{} [**D75**]{} (2007) 124020 \[[[hep-th/0703004]{}](http://arXiv.org/abs/hep-th/0703004)\]. U. H. Danielsson and L. Thorlacius, [*[Black holes in asymptotically Lifshitz spacetime]{}*]{}, [*JHEP*]{} [**03**]{} (2009) 070 \[[[0812.5088]{}](http://arXiv.org/abs/0812.5088)\]. G. Bertoldi, B. A. Burrington and A. Peet, [*[Black Holes in asymptotically Lifshitz spacetimes with arbitrary critical exponent]{}*]{}, [[0905.3183]{}](http://arXiv.org/abs/0905.3183). G. Bertoldi, B. A. Burrington and A. W. Peet, [*[Thermodynamics of black branes in asymptotically Lifshitz spacetimes]{}*]{}, [[0907.4755]{}](http://arXiv.org/abs/0907.4755). E. V. Ivashkevich, [*[Symmetries and instantons in stochastic Burgers equation]{}*]{}, [*J. Phys.*]{} [**A30**]{} (1997) L525–L533 \[[[hep-th/9610221]{}](http://arXiv.org/abs/hep-th/9610221)\]. T. Hartman and A. Strominger, [*[Central Charge for $AdS_2$ Quantum Gravity]{}*]{}, [*JHEP*]{} [**04**]{} (2009) 026 \[[[0803.3621]{}](http://arXiv.org/abs/0803.3621)\]. [^1]: We thank P. Hořava and Ch. Melby-Thompson for sharing that observation with us. [^2]: See [@Compere:2007az], page 142 and the appendix of [@Barnich:2006av] for a detailed explanation on how to solve the Killing equations at first order in the radial expansion. The resolution at a given subleading order is then straightforward. 
[^3]: This phase space is only a preliminary one. We should still consider all finite diffeomorphisms associated with the asymptotic Killing vectors to build the entire phase space, in order for it to be invariant under the action of asymptotic symmetries. [^4]: The Mathematica code can be downloaded from the homepage of G.C. [^5]: We thank M. Rangamani for sharing his unpublished notes on these solutions analogous to the ones of [@Hubeny:2005qu]. [^6]: In the treatment of [@Barnich:2005kq], the integrating factor that was considered in order to define the energy was not compensated by an overall inverse integrating factor in front of the integrated charge. The current prescription would also be natural in that context, for it would reproduce the expectation that the energies of Gödel black holes and of black strings in pp-wave spacetimes are equal, since those solutions are related by dualities [@Gimon:2003xk]. [^7]: We thank A. Adams for a discussion on that issue. [^8]: We thank A. Maloney for sharing his thoughts on these questions and for his suggestions.
--- abstract: 'In these lectures, I describe a variety of efforts to identify or constrain the identity of dark matter by detecting the annihilation or decay products of these particles, or their effects. After reviewing the motivation for indirect searches, I discuss what we have learned about dark matter from observations of gamma rays, cosmic rays and neutrinos, as well as the cosmic microwave background. Measurements such as these have been used to significantly constrain a wide range of thermal relic dark matter candidates, in particular those with masses below a few hundred GeV. I also discuss a number of anomalies and excesses that have been interpreted as possible signals of dark matter, including the Galactic Center gamma-ray excess, the cosmic-ray antiproton excess, the cosmic-ray positron excess, and the 3.5 keV line. These lectures were originally presented as part of the 2018 Theoretical Advanced Study Institute (TASI) summer school on “Theory in an Era of Data”. Although intended for advanced graduate students, these lectures may be useful for a wide range of physicists, astrophysicists and astronomers who wish to get an overview of the current state of indirect searches for dark matter.' author: - Dan Hooper bibliography: - 'tasi.bib' title: TASI Lectures on Indirect Searches For Dark Matter --- The Origin of Dark Matter and Motivation for Indirect Searches {#sectionone} ============================================================== Over the past several decades, weakly interacting massive particles (WIMPs) have generally been considered the leading class of candidates for the dark matter of our universe [@Bertone:2016nfn]. 
With the goal of identifying the particle nature of this substance, experiments have been designed and carried out to detect the interactions of dark matter particles with atoms (direct detection), to produce particles of dark matter in collider environments, and to detect the products of dark matter annihilations or decays (indirect detection). Indirect searches for dark matter include efforts to detect the gamma rays, antiprotons, positrons, neutrinos, and other particles that are produced in the annihilations or decays of this substance. Across a wide range of models, the abundance of dark matter that emerged from the early universe is set by the dark matter’s self-annihilation cross section. As I will demonstrate below, a stable particle species with a thermally averaged annihilation cross section of $\langle \sigma v \rangle \simeq 2.2 \times 10^{-26}$ cm$^3/$s is predicted to freeze out of thermal equilibrium with an abundance equal to the measured cosmological density of dark matter [@Steigman:2012nb; @Kolb:1990vq; @Griest:1990kh]. In many simple models, the dark matter is predicted to annihilate with a similar cross section in the modern universe, providing us with an important benchmark and motivation for indirect searches. Within this context, the current era is an exciting one for indirect detection. In particular, gamma ray and cosmic ray searches for dark matter annihilation products have recently become sensitive to dark matter with this benchmark cross section for masses up to around the weak scale, $\mathcal{O}(10^2$ GeV). The Abundance of a Thermal Relic -------------------------------- Consider a stable particle, $X$, that can annihilate in pairs. Although I intend to identify this state with the dark matter of our universe, this calculation is quite general and applies to a wide range of stable particle species. 
The evolution of the number density of this species, $n_X$, is described by the following equation: $$\frac{dn_X}{dt} + 3 H n_{X} = -\langle \sigma v \rangle [n_X^2-(n^{\rm Eq}_X)^2],$$ where $H$ is the rate of Hubble expansion, $\langle \sigma v \rangle$ is the thermally averaged value of the annihilation cross section multiplied by the relative velocity of the two particles, and $n^{\rm Eq}_X$ is the equilibrium number density ([[*i.e.*]{}]{} the number density that would be predicted if the $X$ population were in chemical equilibrium with the thermal bath). The Hubble rate is given by $$\begin{aligned} H &=& \bigg(\frac{8 \pi \rho}{3 m^2_{\rm Pl}}\bigg)^{1/2} \nonumber \\ &=& \bigg(\frac{8 \pi}{3 m^2_{\rm Pl}} \frac{\pi^2 g_{\star} T^4}{30}\bigg)^{1/2}, \label{hubble}\end{aligned}$$ where $\rho$ is the total energy density and $m_{\rm Pl} \approx 1.22 \times 10^{19}$ GeV is the Planck mass. In the second line of this equation, we have related the energy density to the temperature of the bath, $\rho=\pi^2 g_{\star} T^4/30$, where $g_{\star}$ counts the number of effectively massless degrees-of-freedom: $$g_{\star} \equiv \sum_{i={\rm bosons}} g_i \bigg(\frac{T_i}{T}\bigg)^4 + \frac{7}{8} \sum_{i={\rm fermions}} g_i \bigg(\frac{T_i}{T}\bigg)^4.$$ Here $g_i$ is the number of internal degrees-of-freedom of state $i$. Among the particle content of the Standard Model, $g_{\star}$ varies from 106.75 at temperatures well above 100 GeV, to 10.75 at temperatures between 1 and 100 MeV, and to 3.36 at temperatures below the electron mass. ![The freeze-out of a thermal relic. The solid line denotes the equilibrium number density, as a function of the mass of the particle divided by the temperature of the bath. The dashed lines show the number density after the relic has fallen out of equilibrium. 
The greater the annihilation cross section of the relic, $\langle \sigma v \rangle$, the smaller will be the relic abundance that survives the Big Bang.[]{data-label="fig:FreezeOut"}](freezeout.png){width="0.55\linewidth"}

At high temperatures ($T \gg m_X$), the number density of a particle species $X$ that is in equilibrium is given by $$\begin{aligned} n^{\rm Eq}_X = \begin{cases} (\zeta(3)/\pi^2)g_XT^3 \,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\, ({\rm Bose}) \\ (3/4) (\zeta(3)/\pi^2)g_X T^3 \, \,\,\,\, ({\rm Fermi}), \end{cases} \label{highT}\end{aligned}$$ where $\zeta(3) \approx 1.20206$ and $g_X$ is the number of internal degrees-of-freedom of $X$. At low temperatures ($T \ll m_X$), the equilibrium number density is instead given by $$n^{\rm Eq}_X = g_X \bigg(\frac{m_XT}{2\pi}\bigg)^{3/2} e^{-m_X/T}. \label{lowT}$$ In each of these expressions for $n_X^{\rm Eq}$, we have assumed that there is no appreciable chemical potential, such as might arise, for example, from a primordial asymmetry between dark matter and anti-dark matter. Unless the interactions of $X$ with the Standard Model are extremely feeble (a case I will consider later), the $X$ population will be in chemical and kinetic equilibrium with the thermal bath in the early universe. As the temperature drops below $m_X$, however, the $X$ abundance becomes exponentially suppressed (see Eq. \[lowT\]) until the rate of Hubble expansion exceeds that of annihilation. At that point in time, the $X$ population freezes out of equilibrium, by which I mean that its co-moving number density ($n_X a^3$, where $a$ is the scale factor) stops appreciably changing.
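To make these expressions concrete, the following short Python sketch (natural units, with temperatures and masses in GeV and densities in GeV$^3$) evaluates Eqs. \[highT\] and \[lowT\] and illustrates how rapidly the Boltzmann suppression develops once $T$ drops below $m_X$; the sample mass of 100 GeV is chosen purely for illustration:

```python
import math

ZETA3 = 1.20206  # Riemann zeta(3), as quoted in the text

def n_eq_relativistic(g_X, T, fermion=False):
    """Equilibrium number density for T >> m_X (Eq. highT), in GeV^3."""
    n = (ZETA3 / math.pi**2) * g_X * T**3
    return 0.75 * n if fermion else n

def n_eq_nonrelativistic(g_X, m_X, T):
    """Boltzmann-suppressed equilibrium number density for T << m_X (Eq. lowT)."""
    return g_X * (m_X * T / (2.0 * math.pi))**1.5 * math.exp(-m_X / T)

# Suppression of a 100 GeV species relative to a relativistic fermion,
# evaluated at m_X/T = 20 and 25:
m_X = 100.0
for x in (20.0, 25.0):
    T = m_X / x
    ratio = n_eq_nonrelativistic(2, m_X, T) / n_eq_relativistic(2, T, fermion=True)
    print(x, ratio)
```

Going from $m_X/T = 20$ to $25$ costs roughly two orders of magnitude in abundance, which is why freeze-out occurs within a narrow window of temperatures.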
The Hubble rate, $H$, exceeds the annihilation rate, $n_{X} \langle \sigma v \rangle$, when the temperature drops to $T_{\rm F}$, given as follows: $$\begin{aligned} \frac{m_X}{T_{\rm F}} \approx 23 + \ln\bigg[\bigg(\frac{\sigma v}{2.2\times 10^{-26} \, {\rm cm}^3/{\rm s}}\bigg) \bigg(\frac{80}{g_{\star}}\bigg)^{1/2} \bigg(\frac{g_X}{2}\bigg)\bigg(\frac{m_X/T_{\rm F}}{23}\bigg)^{3/2} \bigg(\frac{T_{\rm F}}{10 \, {\rm GeV}}\bigg)\bigg]. \end{aligned}$$ In other words, the $X$ population freezes out when the temperature of the universe drops to a value $\sim$20 times smaller than $m_X$. Although $T_{\rm F}$ is a function of the particle’s annihilation cross section and number of internal degrees-of-freedom, the dependence on these quantities is only logarithmic, and $m_X/T_{\rm F} \sim 10-30$ across a wide range of values. After freeze-out, the total number of $X$ particles is approximately conserved, and the value of $n_X$ simply scales as $a^{-3}$ due to the expansion of the universe. The density of the $X$ population today is thus given by: $$\begin{aligned} \rho^{\rm today}_X &=& m_X n^{\rm today}_X \nonumber \\ &\approx& m_X n_X^{\rm Eq} (T_F) \, a^3_{\rm F},\end{aligned}$$ where $a_{\rm F}$ is the scale factor at freeze-out (normalized such that $a=1$ today). Numerically, this results in the following abundance: $$\begin{aligned} \Omega_X h^2 \approx 0.12 \, \bigg(\frac{2.2\times 10^{-26} \, {\rm cm}^3/{\rm s}}{\langle \sigma v\rangle}\bigg)\bigg(\frac{80}{g_{\star}}\bigg)^{1/2}\bigg(\frac{m_X/T_{\rm F}}{23}\bigg),\end{aligned}$$ where $\Omega_X \equiv \rho_X/\rho_{\rm crit}$ is the density in terms of the critical density and $h$ is the current Hubble constant in units of 100 km/s/Mpc. For reference, cosmological measurements (including those of the cosmic microwave background) indicate that the average density of cold dark matter is near the benchmark value used in this expression, $\Omega_{\rm DM} h^2 \approx 0.11933 \pm 0.00091$ [@Aghanim:2018eyx].
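The expression for $m_X/T_{\rm F}$ is transcendental, but converges quickly under fixed-point iteration. The sketch below implements it together with the abundance formula above; it follows the approximate relations quoted in the text rather than a full numerical solution of the Boltzmann equation:

```python
import math

def freeze_out_x(m_X, sigma_v, g_star=80.0, g_X=2.0, iterations=20):
    """Solve m_X/T_F by fixed-point iteration of the relation above.
    m_X in GeV, sigma_v in cm^3/s."""
    x = 23.0
    for _ in range(iterations):
        T_F = m_X / x
        x = 23.0 + math.log((sigma_v / 2.2e-26)
                            * math.sqrt(80.0 / g_star)
                            * (g_X / 2.0)
                            * (x / 23.0)**1.5
                            * (T_F / 10.0))
    return x

def omega_h2(m_X, sigma_v, g_star=80.0, g_X=2.0):
    """Relic abundance Omega_X h^2 from the closed-form estimate above."""
    x_F = freeze_out_x(m_X, sigma_v, g_star, g_X)
    return 0.12 * (2.2e-26 / sigma_v) * math.sqrt(80.0 / g_star) * (x_F / 23.0)

# A 100 GeV relic with the benchmark cross section freezes out near
# m_X/T_F ~ 22 and yields Omega h^2 close to the measured 0.12:
print(freeze_out_x(100.0, 2.2e-26))
print(omega_h2(100.0, 2.2e-26))
```

Doubling the cross section roughly halves the final abundance, which is the inverse scaling $\Omega_X h^2 \propto 1/\langle\sigma v\rangle$ made explicit above.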
Note that we have assumed in this calculation that $X$ freezes out of equilibrium at a temperature well below its mass, making $X$ a cold thermal relic. If the relic is very light or feebly coupled, this may not be the case. For a particle species that freezes out while relativistic, one would repeat this calculation using Eq. \[highT\] to determine the abundance at freeze-out, arriving at a very different result. Standard Model neutrinos are a well known example of a hot relic, for which this calculation yields $\Omega_{\nu+\bar{\nu}} h^2 \approx 0.0011 \, (m_{\nu}/0.1 \, {\rm eV})$. Given that the observed large scale structure of our universe rules out the possibility that any sizable fraction of the dark matter is hot, however, I will focus here on the case of dark matter in the form of a cold thermal relic.

General Considerations Regarding the Origin of Dark Matter
----------------------------------------------------------

In the calculation presented above, we made a number of assumptions regarding the nature of the dark matter and its origin. In particular, we assumed that:

1. [$X$ is stable, or at least cosmologically long-lived.]{}

2. [$X$ interacts with the Standard Model strongly enough to reach equilibrium at some point in the early universe.]{}

3. [There are no other mechanisms that contribute to the production of $X$ particles after freeze-out.]{}

4. [The early universe was radiation dominated, and space expanded at the rate predicted by general relativity.]{}

Any of these conditions could plausibly be violated, of course. If the first of these conditions does not hold, however, then $X$ cannot be the dark matter, since the density of dark matter in the universe today has been measured to be similar to its abundance during the formation of the cosmic microwave background (CMB) [@PalomaresRuiz:2007ry; @Poulin:2016nat].
To satisfy the second condition listed above, the rate for interactions between the dark matter and the Standard Model must exceed that of Hubble expansion, $n^{\rm Eq}_X \sigma v \gsim H$. For interactions in the form of $X$ annihilations, for example, this condition can be written as follows (for $T\gg m_X)$: $$\begin{aligned} n^{\rm Eq}_X \sigma v &\gsim& H \\ \frac{a \zeta(3) g_X T^3}{\pi^2} \, \sigma v &\gsim& \bigg(\frac{8 \pi}{3 m^2_{\rm Pl}} \frac{\pi^2 g_{\star} T^4} {30}\bigg)^{1/2}, \nonumber\end{aligned}$$ where $a=1$ (3/4) for the case in which $X$ is a boson (fermion). This reduces to the following condition to reach equilibrium: $$\sigma v \gsim 10^{-39} \, {\rm cm}^3/{\rm s} \, \times \bigg(\frac{{\rm TeV}}{T}\bigg) \bigg(\frac{g_{\star}}{100}\bigg)^{1/2}.$$ This is a [*very*]{} small cross section, many orders of magnitude smaller than that required to generate an acceptable thermal relic abundance. This ensures that any particle species with anything but the feeblest of interactions with the Standard Model will easily be maintained in equilibrium in the early universe (until freeze-out occurs). A stable particle species which does not interact enough to reach equilibrium with the thermal bath of Standard Model particles could be produced through a variety of mechanisms. Such possibilities include the process of thermal freeze-in (in which the $X$ particles are produced through the interactions of Standard Model particles, without the $X$ abundance ever reaching equilibrium), production through out-of-equilibrium decays [@Gelmini:2006pq; @Gelmini:2006pw; @Merle:2015oja; @Merle:2013wta; @Kane:2015jia], misalignment production (such as in the case of the QCD axion), or through the oscillations of Standard Model neutrinos into a cosmologically long-lived sterile neutrino [@Dodelson:1993je; @Shi:1998km; @Merle:2013wta].
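As a check on these numbers, one can evaluate $H/n^{\rm Eq}_X$ directly. The sketch below does so for a bosonic species with $g_X = 2$, converting from natural units (GeV$^{-2}$) to cm$^3/$s; the exact prefactor depends on the assumptions adopted, but the result lands tens of orders of magnitude below the $2.2 \times 10^{-26}$ cm$^3/$s benchmark, which is the point being made above:

```python
import math

M_PL = 1.22e19              # Planck mass in GeV
GEV_M2_TO_CM3_S = 1.17e-17  # (hbar*c)^2 * c: converts GeV^-2 to cm^3/s
ZETA3 = 1.20206

def sigma_v_equilibrium(T, g_star=100.0, g_X=2.0):
    """Smallest cross section (cm^3/s) for which n_Eq * sigma_v >= H
    at temperature T (GeV), for a relativistic bosonic species."""
    H = math.sqrt(8.0 * math.pi**3 * g_star / 90.0) * T**2 / M_PL
    n_eq = (ZETA3 / math.pi**2) * g_X * T**3
    return (H / n_eq) * GEV_M2_TO_CM3_S

threshold = sigma_v_equilibrium(1000.0)  # evaluated at T = 1 TeV
print(threshold)
print(2.2e-26 / threshold)  # the thermal benchmark exceeds this enormously
```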
We also note that if the expansion history of the early universe were substantively different from that predicted in the standard radiation-dominated picture, the abundance of dark matter that emerges from the Big Bang could be altered in important ways. Examples include scenarios with an early matter dominated era [@Berlin:2016gtr; @Berlin:2016vnh; @Gelmini:2006pq; @Gelmini:2006pw] or a period of late-time inflation [@Davoudiasl:2015vba]. Taken together, these considerations force us to the conclusion that the particles that make up the dark matter must either, 1) interact at a level such that they freeze out of equilibrium to yield the measured abundance (or less, if the dark matter consists of multiple components), or 2) interact so little that they never became populated to the equilibrium abundance. Any stable particle species that interacts at a level in between these two cases will emerge from the early universe with an abundance that exceeds the measured cosmological dark matter density. This provides us with considerable motivation to consider dark matter in the form of a particle that annihilates (or is otherwise depleted) at a rate equivalent to $\langle \sigma v \rangle \simeq 2.2 \times 10^{-26}$ cm$^3/$s at the time and temperature of thermal freeze-out. This cross section thus represents an important benchmark for indirect searches. Similar arguments can also allow us to place upper and lower limits on the mass of such a thermal relic. In particular, the cross section that is required to generate the measured dark matter abundance violates partial wave unitarity unless $m_X \lsim 120$ TeV [@Griest:1989wd], while the successful predictions of Big Bang Nucleosynthesis require $m_{X} \gsim (1-10)$ MeV [@Boehm:2013jpa]. These two constraints provide us with a natural range of masses for the class of dark matter candidates known as weakly interacting massive particles (WIMPs).
Although dark matter with an annihilation cross section of around $\langle \sigma v \rangle \simeq 2.2 \times 10^{-26}$ cm$^3/$s is indeed well-motivated by the above arguments, there are many viable models in which the dark matter annihilates at a higher or lower rate. In the following subsection, I will summarize some of the ways in which dark matter might be predicted to annihilate with a larger or smaller cross section in the universe today than would be expected from the simple thermal relic abundance argument described above.

Departures From $\langle \sigma v \rangle \approx 2 \times 10^{-26}$ cm$^3/$s
-----------------------------------------------------------------------------

[*1. Velocity Dependent Processes*]{}

Depending on the spin of a dark matter candidate and the nature of the interactions that lead to its annihilations, the resulting cross section may or may not depend on the relative velocity between the two annihilating particles. Far from any resonances or thresholds, it is often useful to write the annihilation cross section as a Taylor series expansion in powers of $v^2$: $$\begin{aligned} \sigma v = a + b v^2 + \mathcal{O}(v^4),\end{aligned}$$ where $a$ and $b$ are the leading coefficients of this expansion. $s$-wave annihilation amplitudes contribute to all orders of this expansion, whereas $p$-wave amplitudes contribute only to the $v^2$ and higher order terms. For this reason, dark matter models which annihilate with a cross section that scales as $\sigma v \propto v^2$ are often referred to as being “$p$-wave suppressed”. Since the velocities of dark matter particles found in halos today are generally around $v \sim 10^{-3}c$ (compared to $v \sim 0.3\, c$ at the temperature of thermal freeze-out), we expect the current annihilation rate of a $p$-wave suppressed dark matter candidate to be suppressed by a factor of roughly $\sim [10^{-3}/0.3]^2 \sim 10^{-5}$.
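A one-line estimate of this suppression, normalizing a purely $p$-wave cross section ($a = 0$) to the benchmark value at freeze-out:

```python
def sigma_v_expansion(a, b, v):
    """Leading terms of sigma*v = a + b*v^2 (velocities in units of c)."""
    return a + b * v**2

V_FREEZE_OUT = 0.3  # typical relative velocity at freeze-out
V_HALO = 1e-3       # typical velocity in galactic halos today

# Pure p-wave annihilation, tuned to 2.2e-26 cm^3/s at freeze-out:
b = 2.2e-26 / V_FREEZE_OUT**2
today = sigma_v_expansion(0.0, b, V_HALO)
print(today)                # cross section in halos today, in cm^3/s
print(today / 2.2e-26)      # suppression ~ 1e-5, as estimated above
```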
As a result, whereas thermal relics with a velocity-independent ([[*i.e.*]{}]{} $s$-wave) cross section are generally excluded by current experiments and telescopes for masses up to $\mathcal{O}(10^2$ GeV), indirect detection experiments are not generally sensitive to $p$-wave suppressed dark matter candidates. For concreteness, consider dark matter that annihilates to a pair of fermions, $f\bar{f}$, through an $s$-channel Feynman diagram. In Table \[pwave\], we summarize the velocity dependence of this annihilation cross section for a variety of couplings of the mediator to the dark matter and to the final state fermions (see Ref. [@Berlin:2014tja]). Of the 16 linearly independent combinations of couplings, 7 lead to a cross section that is $p$-wave suppressed ($\sigma v \propto v^2$). One should keep in mind that in many realistic dark matter models, more than one of these interactions exist, leading to a combination of velocity-independent and velocity-suppressed contributions.

  ------------------------------------------------------------------- --------------------- --------------------- ----------------------- -------------------------------
  Fermionic DM                                                        $\bar f f$            $\bar f \gamma^5 f$   $\bar f \gamma^\mu f$   $\bar f \gamma^\mu\gamma^5 f$
  $\bar X X$                                                          $\sigma v \sim v^2$   $\sigma v \sim v^2$   $-$                     $-$
  $\bar X \gamma^5 X$                                                 $\sigma v \sim 1$     $\sigma v \sim 1$     $-$                     $-$
  $\bar X \gamma^\mu X$                                               $-$                   $-$                   $\sigma v \sim 1$       $\sigma v \sim 1$
  $\bar X \gamma^\mu\gamma^5 X$                                       $-$                   $-$                   $\sigma v \sim v^2$     $\sigma v \sim 1$
  Scalar DM                                                                                                       
  $\phi^{\dagger} \phi$                                               $\sigma v \sim 1$     $\sigma v \sim 1$     $-$                     $-$
  $\phi^{\dagger} \overset{\leftrightarrow}{\partial_{\mu}} \phi$     $-$                   $-$                   $\sigma v \sim v^2$     $\sigma v \sim v^2$
  Vector DM                                                                                                       
  $X^\mu X_\mu^{\dagger}$                                             $\sigma v \sim 1$     $\sigma v \sim 1$     $-$                     $-$
  $X^\nu \partial_\nu X_\mu^{\dagger}$                                $-$                   $-$                   $\sigma v \sim v^2$     $\sigma v \sim v^2$
  ------------------------------------------------------------------- --------------------- --------------------- ----------------------- -------------------------------

[*2. Resonant Annihilations*]{}

If the dark matter annihilates through or near a resonance, its cross section could be much higher or lower during freeze-out than at the very low velocities found in halos today [@Griest:1990kh; @Hooper:2013qjx]. Consider, for example, an annihilation cross section of the following form: $$\sigma v = \frac{\alpha^2 s}{(M^2_{\rm med}-s)^2+M^2_{\rm med}\Gamma^2_{\rm med}},$$ where $M_{\rm med}$ and $\Gamma_{\rm med}$ are the mass and the width of the particle mediating the annihilation process and $\alpha^2$ normalizes the cross section. The Mandelstam variable, $s=4m^2_{X}/(1-v^2)$, is equal to $s_{v \rightarrow 0} = 4 m^2_X$ in the low-velocity limit, and to a value roughly 10% larger at the temperature of thermal freeze-out, $s_{\rm FO} \simeq 4m^2_X/(1-0.1) \simeq 1.1 (4 m^2_X)$. As a first case, consider a scenario in which $M_{\rm med} \simeq 2 m_X$, enabling the dark matter to annihilate resonantly at low velocities. In the narrow width approximation ($\Gamma_{\rm med} \ll M_{\rm med}$), this leads to an [*enhancement*]{} of the low-velocity cross section by a factor of $\sim 8 \times [(M_{\rm med}/\Gamma_{\rm med})/30]^2$ relative to the velocity-independent case. Alternatively, we could instead consider a case in which $M_{\rm med} \simeq 2.1 m_X$, for which the dark matter annihilates on resonance during freeze-out. In this case, the low-velocity cross section is [*suppressed*]{} by a similar factor.

[*3. Coannihilations*]{}

Instead of being depleted through self-annihilations, the dark matter abundance could be established through coannihilations with another particle species, $X'$ [@Griest:1990kh; @Edsjo:1997bg; @Ellis:1998kh].
The relative abundance of such a state at the temperature of freeze-out can roughly be estimated as follows: $$\begin{aligned} \frac{n_{X'}}{n_X} &\sim& \frac{e^{-m_{X'}/T_{\rm F}}}{e^{-m_{X}/T_{\rm F}}} \\ &=& e^{-\Delta m_X/T_{\rm F}} \nonumber \\ &\sim& e^{-20 \Delta},\nonumber\end{aligned}$$ where $\Delta \equiv (m_{X'}-m_X)/m_X$ is the fractional mass splitting between the two states. For large mass splittings ($\Delta \gg 0.1$), $n_{X'} \ll n_{X}$, and the $X'$ population will play little role in the process of thermal freeze-out, or in determining the final $X$ abundance. For smaller splittings ($\Delta \lsim 0.1$), however, a significant number of $X'$ particles will be present during freeze-out, potentially assisting in the depletion of the $X$ population. To calculate the impact of coannihilations on the thermal relic abundance, we introduce the following effective cross section: $$\sigma_{\rm eff}(T) \equiv \sum_{i,j} \sigma_{i,j} \frac{g_i g_j}{g^2_{\rm eff} (T)} (1+\Delta_i)^{3/2} (1+\Delta_j)^{3/2} e^{-m_X(\Delta_i+\Delta_j)/T},$$ where $T$ is the temperature and $g_{i,j}$ and $\Delta_{i,j}$ are the numbers of internal degrees-of-freedom and the fractional mass splittings (relative to $m_X$) of states $i$ and $j$, and $$g_{\rm eff}(T) \equiv \sum_i g_i (1+\Delta_i)^{3/2} e^{-m_X\Delta_i/T}.$$ As an example, consider two states ($X$ and $X'$) that are nearly degenerate ($\Delta_{X'} \ll 1$) and that have an equal number of internal degrees-of-freedom ($g_X=g_{X'}$). In this case, summing over the ordered pairs of states, the effective annihilation cross section reduces to $\sigma_{\rm eff} \simeq 0.25\, \sigma_{XX} + 0.25\, \sigma_{X'X'} + 0.5\, \sigma_{XX'}$. If $\sigma_{XX'} \gsim \sigma_{XX}$, coannihilations will play a major role in the depletion of the $X$ abundance. In the opposite case ($\sigma_{XX'} \ll \sigma_{XX}, \sigma_{X'X'}$), the $X$ and $X'$ populations each freeze out and contribute to the final dark matter abundance independently.

[*4.
Asymmetric Dark Matter*]{}

If you were to carry out the calculation of the thermal relic abundance as described above for the case of protons and electrons, you would find that almost no such particles should survive the conditions of the early universe. The baryon-antibaryon annihilation cross section is much larger than that needed to yield a cosmologically interesting abundance. The abundance of baryons found in our universe is instead determined by the presence of a primordial matter-antimatter asymmetry. Namely, for reasons that are not yet understood, the early universe contained slightly more baryons than antibaryons (and more quarks than antiquarks prior to the QCD phase transition). These particles stopped annihilating not when the expansion rate caused their abundance to freeze out, but instead when annihilations had destroyed all of the antibaryons that had once been present in the universe. It is possible that there could have also been a primordial asymmetry between the number of dark matter particles, $X$, and antiparticles, $\bar{X}$, in the early universe [@Zurek:2013wia; @Graesser:2011wi; @Lin:2011gj; @Iminniyaz:2011yp]. If this is the case, then the $X\bar{X}$ annihilation cross section could in principle be much larger than our benchmark value of $2\times 10^{-26}$ cm$^3/$s. But despite this large cross section, the annihilation rate could still be very low in the universe today, as a result of the absence of $\bar{X}$ particles. In such a scenario, the prospects for indirect detection could be highly suppressed. Alternatively, $X-\bar{X}$ oscillations could repopulate the $\bar{X}$ population, potentially leading to very high annihilation rates in the current epoch [@Cirelli:2011ac; @Buckley:2011ye].

[*5. Sommerfeld Enhancements*]{}

In some dark matter models, long-range interactions can enhance the annihilation cross section at low velocities [@Hisano:2004ds; @ArkaniHamed:2008qn].
This effect, known as the “Sommerfeld enhancement”, is most pronounced in cases in which the mediator is much lighter than the dark matter itself, $M_{\rm med} \lsim m_X v$. A well studied example is that of dark matter in the form of a TeV-scale, wino-like neutralino. In this case, the low-velocity annihilation cross section can exceed the thermal relic benchmark value by up to 1 to 2 orders of magnitude. [*6. Out of Equilibrium Decays and Other Non-Thermal Production Mechanisms*]{} In addition to any thermal abundance of a particle species that might arise, an additional non-thermal population could be generated through, for example, the decays of another species that is not in equilibrium with the thermal bath [@Gelmini:2006pq; @Gelmini:2006pw; @Merle:2015oja; @Merle:2013wta; @Kane:2015jia]. Moduli are an example of a theoretically well-motivated state that is predicted to fall out of equilibrium before it decays, potentially leading to the production of a non-thermal dark matter population. In such scenarios, it is possible for the dark matter annihilation cross section to be considerably higher than generally predicted for a thermal relic. [*7. Non-Standard Cosmological Histories*]{} Unless altered by new physics, the energy density of our universe was dominated by radiation ([[*i.e.*]{}]{} relativistic particles) during the first $\sim$$10^5$ years of its (post-inflationary) history. If there exists a long-lived particle species that becomes non-relativistic in the early universe, the energy density of its population will evolve like $\rho \propto a^{-3}$, whereas radiation dilutes as $\rho \propto a^{-4}$. As a result, the non-relativistic species will increasingly come to dominate the energy density of the early universe, potentially leading to an era of matter domination [@Fornengo:2002db; @Gelmini:2006pq; @Kane:2015jia; @Berlin:2016gtr; @Berlin:2016vnh; @Gelmini:2006pw]. This could impact the abundance of dark matter in at least two different ways. 
First of all, when the long-lived particles ultimately decay, they could produce dark matter particles, as described in the paragraph directly above this one. Furthermore, such decays could dilute the dark matter’s thermal abundance, lowering the annihilation cross section that is required to generate the measured cosmological density. Alternatively, the expansion history of the early universe could be altered by the presence of an era of rapid expansion, known as late-time inflation [@Lyth:1995ka; @Cohen:2008nb; @Boeckel:2011yj; @Boeckel:2009ej; @Davoudiasl:2015vba]. If this occurs after freeze-out, such an event would dilute the abundance of dark matter and lower the expected annihilation cross section. Gamma-Ray Searches for Dark Matter Annihilation Products {#gammasec} ======================================================== If the dark matter annihilates with a cross section near the thermal relic benchmark value, $\langle \sigma v \rangle \simeq 2.2\times 10^{-26}$ cm$^3/$s, this could potentially lead to an observable flux of energetic particles, including gamma rays and cosmic rays. Searches for dark matter using gamma-ray telescopes benefit from the fact that these particles are not deflected by magnetic fields and are negligibly attenuated over Galactic distance scales, making it possible to acquire both spectral and spatial information, unmolested by astrophysical effects. The possibility that gamma-ray telescopes could be used to detect the annihilation products of dark matter particles was first suggested in a pair of papers published in 1978 by Jim Gunn, Ben Lee, Ian Lerche, David Schramm and Gary Steigman [@1978ApJ...223.1015G], and by Floyd Stecker [@1978ApJ...223.1032S]. Today, four decades later, gamma-ray searches for dark matter provide us with some of the most stringent and robust constraints on the dark matter’s annihilation cross section. 
The dark matter annihilation rate per volume is given by $\langle \sigma v \rangle \, \rho_X^2/2m^2_X$, where $\rho_X$ is the dark matter density and the factor of 1/2 is included to avoid double counting the annihilations of particle A with particle B, and particle B with particle A. Here we are assuming that the annihilating particles are their own antiparticle ($XX$). If we were instead to consider annihilations between dark matter and anti-dark matter ($X\bar{X}$), the annihilation rate would be half as large for a given value of the cross section. However, the annihilation cross section must also be twice as large in this case in order to obtain the desired relic abundance, and thus the overall annihilation rate of a thermal relic today remains the same, regardless of whether the dark matter candidate is or is not its own antiparticle. To calculate the spectrum and angular distribution of gamma rays from dark matter annihilations per unit time from within a solid angle, $\Delta \Omega$, we integrate the annihilation rate over the solid angle observed, and over the line-of-sight: $$\begin{aligned} \Phi_{\gamma} (E_\gamma, \Delta \Omega) = \frac{1}{2} \frac{dN_{\gamma}}{dE_{\gamma}} \frac{ \langle \sigma v \rangle}{4\pi m^2_{X}} \int_{\Delta \Omega} \int_{los} \rho_X^2(l,\Omega) dl d\Omega, \label{gamma}\end{aligned}$$ where $dN_{\gamma}/dE_{\gamma}$ is the spectrum of gamma rays produced per annihilation, which depends on the mass of the dark matter particle and on the types of particles that are produced in this process. In practice, such spectra are often calculated using software such as PYTHIA [@Sjostrand:2006za]. In addition to prompt gamma rays, dark matter annihilations can produce electrons and positrons which generate gamma rays through inverse Compton and bremsstrahlung processes [@Belikov:2009cx; @Profumo:2009uf; @Cirelli:2013mqa]. 
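The pieces of Eq. \[gamma\] can be coded up directly. In the sketch below, $\rho_X$ is in GeV/cm$^3$, $m_X$ in GeV, $\langle\sigma v\rangle$ in cm$^3/$s, and the $J$-factor in GeV$^2/$cm$^5$; the local density of 0.4 GeV/cm$^3$ used in the example is a commonly quoted value, adopted here only for illustration:

```python
import math

def annihilation_rate_density(sigma_v, rho_X, m_X, self_conjugate=True):
    """Annihilations per cm^3 per second: <sigma v> rho_X^2 / (2 m_X^2).
    For Dirac-like X/Xbar dark matter, the rate at fixed cross section is
    half as large, as discussed in the text."""
    rate = sigma_v * rho_X**2 / (2.0 * m_X**2)
    return rate if self_conjugate else rate / 2.0

def integrated_gamma_flux(sigma_v, m_X, J, N_gamma):
    """Photon flux (cm^-2 s^-1) integrated over energy, from Eq. (gamma):
    Phi = (1/2) * N_gamma * <sigma v> * J / (4 pi m_X^2)."""
    return 0.5 * sigma_v * N_gamma * J / (4.0 * math.pi * m_X**2)

# Local annihilation rate for a 100 GeV thermal relic:
print(annihilation_rate_density(2.2e-26, 0.4, 100.0))
```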
The basic characteristics of $dN_{\gamma}/dE_{\gamma}$ depend primarily on the dominant annihilation channels of the dark matter particle. For annihilations to quark-antiquark pairs, the resulting jets produce photons through the decays of neutral pions, resulting in a spectrum that typically peaks at an energy around $\sim$$m_X/20$ (in $E_{\gamma}^2 dN_{\gamma}/dE_{\gamma}$ units). For dark matter that is heavy enough to produce $W$ or $Z$ pairs in their annihilations, the resulting gamma-ray spectrum is similar. In contrast, if the dark matter annihilates to charged lepton pairs, the resulting spectrum is predicted to be quite different. Annihilations to $\tau^+ \tau^-$ produce a gamma-ray spectrum that is fairly sharply peaked around $\sim$$m_X/3$ (due to the harder spectrum of neutral pions). In the case of annihilations to $e^+ e^-$ or $\mu^+ \mu^-$ the gamma-ray spectrum is dominated by final state radiation (rather than pion decay) and inverse Compton scattering, generally resulting in a smaller flux of higher-energy photons. The quantity described by the integrals in Eq. \[gamma\] is often referred to as the $J$-factor, which encodes all of the relevant astrophysical information. To build some intuition for the annihilation $J$-factor, consider the simple example of dark matter particles annihilating in a spherical dwarf galaxy of radius $r$, uniform density $\rho$, and located at a distance $d$. For $d \gg r$, this $J$-factor is given by: $$\begin{aligned} J \equiv \int_{\Delta \Omega} \int_{los} \rho_X^2(l,\Omega) dl d\Omega \simeq \frac{4 \pi r^3 \rho^2_X}{3d^2}.\end{aligned}$$ From this simple example, we can see that the most promising targets of gamma-ray searches for dark matter are those that: 1. [Have a high density of dark matter ($J \propto \rho_X^2$)]{} 2. [Are nearby ($J \propto d^{-2}$)]{} 3. [Are extended across a large volume ($J \propto V$)]{} 4. 
[Are accompanied by low and/or well-understood astrophysical backgrounds.]{}

The first three of these conditions are best satisfied by the inner volume of the Milky Way, sometimes referred to as the Galactic Center. The Galactic Center is almost certain to be the brightest single source of dark matter annihilation products on the sky. This direction, however, is also plagued by large and imperfectly understood astrophysical backgrounds. At the other extreme are the Milky Way’s dwarf galaxies, which have much smaller $J$-factors than the Galactic Center, but are accompanied by much smaller gamma-ray backgrounds. Intermediate strategies include observations of other promising targets, including galaxy clusters [@Lisanti:2017qlb], the halo of the Milky Way, and the isotropic gamma-ray background [@Ando:2015qda; @Cholis:2013ena; @Ackermann:2015tah; @DiMauro:2015tfa; @Ajello:2015mfa]. Modern gamma-ray astronomy is conducted using a combination of space-based and ground-based telescopes, each of which offers its own advantages and disadvantages. At energies between 0.1 and 100 GeV, this field is dominated by the Fermi Gamma-Ray Space Telescope, which has been in orbit around Earth since 2008. Fermi observes the entire sky with an angular resolution on the order of a degree and an energy resolution of around 10%. Among other science goals, Fermi was designed to offer unprecedented sensitivity to dark matter annihilation products [@Gehrels:1999ri], in particular from the direction of the center of the Milky Way [@Bergstrom:1997fj; @Berezinsky:1994wva; @Gondolo:1999ef; @Ullio:2001fb; @Cesarini:2003nr; @Peirani:2004wy; @Dodelson:2007gd]. At higher energies, ground-based air Cherenkov telescopes offer the greatest sensitivity, including HESS [@Abdallah:2016ygi; @Abdalla:2018mve; @Abdallah:2018qtu], VERITAS [@Archambault:2017wyh; @Zitzer:2015eqa] and MAGIC [@Ahnen:2017pqx; @Ahnen:2016qkx] (and in the future, CTA [@Consortium:2010bc]).
While these instruments have far greater angular resolution than Fermi, they must be pointed at specific targets and are only sensitive to gamma rays with energies above around $\sim$$10^2$ GeV. The HAWC telescope also provides sensitivity in the case of very heavy dark matter particles [@Abeysekara:2017jxs; @Blanco:2017sbc].

Dwarf Galaxies
--------------

The Milky Way’s dark matter halo contains a large number of smaller subhalos, the largest of which contain stars and constitute satellites of our galaxy. The satellite population of the Milky Way includes the classical dwarfs (Draco, Ursa Minor, Sculptor, Fornax, etc.), as well as several dozen ultra-faint galaxies that were discovered using data from modern surveys, including the Sloan Digital Sky Survey (SDSS) and the Dark Energy Survey (DES) [@Bechtol:2015cbp; @Koposov:2015cua]. Although dwarf galaxies are typically discovered using photometric data, spectroscopic follow-up observations can measure the line-of-sight velocities of the brightest stars in these systems. This information can then be used to constrain the underlying dark matter distribution, and to estimate the annihilation $J$-factor of each dwarf. In making such estimates, most groups assume that a given dwarf galaxy is, 1) in steady state, 2) spherically symmetric, and 3) negligibly rotationally supported. Under such assumptions, one can derive the second-order Jeans equation, which can be solved and projected along the line-of-sight to produce the predicted velocity dispersion as a function of angular radius. This is then compared to the measured distribution of velocities to generate constraints on the $J$-factor of the dwarf [@Martinez:2013els; @Bonnivard:2014kza; @Bonnivard:2015vua]. There are a number of challenges involved in deriving constraints on dwarf galaxy $J$-factors. First of all, it is not obvious that the three assumptions mentioned in the previous paragraph are valid.
In particular, dwarf galaxies are not expected to be perfectly spherical, a factor which can non-negligibly skew the value of the inferred $J$-factor. Furthermore, for many ultra-faint dwarfs, spectroscopic measurements exist for only a small number of stars. Making this more perilous is the fact that it is not always clear which stars are in fact gravitationally bound to a given dwarf. In the case of Segue 1, for example, quite different $J$-factor determinations can result depending on how the question of stellar membership is precisely treated [@Bonnivard:2015vua].

![image](J_All_Comp.pdf){width="95.00000%"}

In Fig. \[Jfactors\] (from Ref. [@Bonnivard:2015xpq]), we show the $J$-factor determinations for 21 Milky Way dwarf galaxies, as presented by several groups [@Bonnivard:2015xpq; @Charbonnier:2011ft; @Geringer-Sameth:2014yza; @Ackermann:2013yva]. Note that the largest $J$-factors are generally found for those dwarfs that are most nearby, and that the error bars associated with ultra-faint dwarfs are typically much larger than those of the classical dwarfs. As an example, consider the classical dwarf galaxy Draco, which has a measured $J$-factor of $J \simeq 10^{18.8}$ GeV$^2/$cm$^5$. Combining this number with Eq. \[gamma\] leads to the following estimate for the gamma-ray flux from this satellite: $$\begin{aligned} \label{draco} \Phi_{\gamma} &\simeq& \frac{1}{2} \frac{ \langle \sigma v \rangle}{4\pi m^2_{X}} \, J \, \int \frac{dN_{\gamma}}{dE_{\gamma}} dE_{\gamma} \\ &\approx& 5 \times 10^{-12} \, {\rm cm}^{-2} {\rm s}^{-1} \bigg(\frac{\langle \sigma v \rangle}{2\times 10^{-26} \, {\rm cm}^3/{\rm s}}\bigg) \bigg(\frac{\int \frac{dN_{\gamma}}{dE_{\gamma}} dE_{\gamma}}{10}\bigg)\bigg(\frac{100 \, {\rm GeV}}{m_X}\bigg)^2 \bigg(\frac{J}{10^{18.8}\, {\rm GeV}^2/{\rm cm}^5}\bigg).
\nonumber\end{aligned}$$ Multiplying the above flux by Fermi’s effective area of $\simeq$8500 cm$^2$, and by the fact that this telescope observes a given portion of the sky $\sim$20% of the time, we arrive at an estimate that this instrument would detect approximately 0.3 photons per year from dark matter annihilations in Draco (for the parameters shown in parentheses). Given that this is much smaller than the flux associated with the extragalactic gamma-ray background (in addition to the contribution from diffuse emission mechanisms in the Milky Way), we conclude that Fermi is not sensitive to dark matter annihilation in Draco, at least for this choice of parameters. More optimistically, we could instead consider dark matter in the form of a thermal relic with a mass of 30 GeV, increasing the predicted gamma-ray flux by more than an order of magnitude. Over ten years of observation, one would expect such a scenario to lead to a few dozen signal events, which could constitute a modest excess ($\sim$1-2$\sigma$) over known backgrounds. ![image](ts_vs_mass_bb.pdf){width="0.47\linewidth"} ![image](ts_vs_mass_tautau.pdf){width="0.47\linewidth"} ![image]({upper_limits_composite_desy2_standard_jsigma0.6_bb}.pdf){width="0.49\linewidth"} ![image]({upper_limits_composite_desy2_standard_jsigma0.6_tautau}.pdf){width="0.49\linewidth"} In practice, constraints are placed by stacking many dwarf galaxies as part of a combined analysis. Some of the main results from the Fermi Collaboration’s most recent dwarf galaxy analysis are shown in Fig. \[fig:TSvsMass\] (see also Refs. [@Geringer-Sameth:2014qqa; @Ackermann:2015zua; @Abramowski:2014tra; @Ahnen:2017pqx; @Ahnen:2016qkx; @Archambault:2017wyh]). This analysis is based on a stack of 15 dwarfs, and it excludes dark matter candidates with $\langle \sigma v \rangle = 2\times 10^{-26}$ cm$^3/$s up to masses of $\sim$60 GeV for the case of annihilations to $b\bar{b}$.
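The rate estimate just described is simple arithmetic; the sketch below reproduces it using the approximate effective area, observing fraction, and benchmark parameters quoted in the text:

```python
import math

def draco_flux(sigma_v=2e-26, m_x=100.0, J=10**18.8, n_gamma=10.0):
    """Annihilation gamma-ray flux (cm^-2 s^-1) from the Draco estimate
    in the text: Phi = (1/2) <sigma v> / (4 pi m_x^2) * J * int dN/dE."""
    return 0.5 * sigma_v / (4.0 * math.pi * m_x**2) * J * n_gamma

phi = draco_flux()            # ~5e-12 cm^-2 s^-1 for the benchmark values
a_eff = 8500.0                # cm^2, approximate Fermi-LAT effective area
duty = 0.2                    # fraction of time a given direction is observed
seconds_per_year = 3.15e7
photons_per_year = phi * a_eff * duty * seconds_per_year   # ~0.3 photons/yr
```

Lowering the mass to 30 GeV raises the flux by $(100/30)^2 \simeq 11$ (for a fixed photon yield), which is the origin of the "more than an order of magnitude" statement above.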
It is also interesting to note that statistically modest gamma-ray excesses have been detected from the directions of a few dwarf galaxies, including Reticulum II and Tucana III [@Geringer-Sameth:2015lua; @Drlica-Wagner:2015xua; @Hooper:2015ula]. If these are authentic signals of dark matter, they would suggest a mass in the range of $\sim 50-100$ GeV (for annihilations to $b\bar{b}$) and a cross section near $\sim$$10^{-26}$ cm$^3/$s. Looking forward, the constraints on annihilating dark matter based on gamma-ray observations of dwarf galaxies are expected to improve due to, 1) the growing data set from Fermi (and future gamma-ray telescopes, such as AMIGO or e-ASTROGAM), and 2) the discovery of new ultra-faint dwarf galaxies that are expected from LSST and other surveys. It is anticipated that Fermi’s sensitivity to dark matter annihilation in dwarf galaxies will improve substantially in the LSST era.

The Galactic Center
-------------------

The Galactic Center is expected to be the single brightest source of dark matter annihilation products on the sky, but is also plagued by bright and imperfectly understood astrophysical backgrounds. Furthermore, the prospects for detecting dark matter from this region depend critically on the distribution of dark matter in the central volume of the Milky Way. In fact, the flux of dark matter annihilation products that is predicted from the innermost degree or so around the Galactic Center (corresponding to approximately the angular resolution of Fermi’s Large Area Telescope) can vary by orders of magnitude, depending on the halo profile that is adopted [@Hooper:2012sr; @Gomez-Vargas:2013bea]. The sensitivity of ground-based gamma-ray telescopes, with much greater angular resolution, can depend even more strongly on the halo profile’s inner slope [@Abramowski:2011hc; @Aharonian:2006wh; @Silverwood:2014yza; @Pierre:2014tra].
Numerical simulations of cold, collisionless dark matter particles yield profiles with high central densities [@Navarro:2008kc; @Diemand:2008in]. A common parameterization for this distribution is the generalized Navarro-Frenk-White (NFW) halo profile [@Navarro:1995iw; @Navarro:1996gj]: $$\rho( r)\propto \frac{(r/R_s)^{-\gamma}}{(1 + r/R_s)^{3-\gamma}}, \label{gennfw}$$ where $R_s \sim 20$ kpc is the scale radius of the Milky Way. While the canonical NFW profile is defined such that $\gamma=1$, other values for the inner slope are also commonly adopted (as well as other parameterizations, such as the Einasto profile [@Springel:2008cc]). In particular, modern simulations which include the effects of baryonic processes have been found to yield a wide range of inner profiles, $\gamma \sim 0.5-1.4$ [@Gnedin:2011uj; @Gnedin:2004cx; @Governato:2012fa; @Kuhlen:2012qw; @Weinberg:2001gm; @Weinberg:2006ps; @Sellwood:2002vb; @Valenzuela:2002np; @Colin:2005rr; @Scannapieco:2011yd; @Calore:2015oya; @Schaller:2014uwa; @DiCintio:2014xia; @DiCintio:2013qxa; @Schaller:2015mua; @Bernal:2016guq]. Empirically speaking, we have only a modest degree of information about the shape of the Milky Way’s dark matter halo profile. More specifically, although many groups have presented dynamical evidence in support of dark matter’s presence in the Milky Way [@Weber:2009pt; @Catena:2009mf; @Iocco:2011jz; @Bovy:2012tw; @Garbari:2012ff; @Bovy:2013raa; @Read:2014qva; @Iocco:2015xga; @Pato:2015dua], these measurements provide relatively little information about dark matter in the innermost kiloparsecs of the Galaxy. We also note that although dark matter halos are expected to exhibit some degree of triaxiality (see, for example, Ref. [@Kuhlen:2007ku]), the Milky Way’s dark matter halo is generally predicted to produce an annihilation signal that is approximately radially symmetric with respect to the Galactic Center [@Bernal:2016guq]. 
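A minimal implementation of the generalized NFW profile of Eq. \[gennfw\], normalized to a local density of 0.4 GeV/cm$^3$ (the solar radius of 8.25 kpc is an assumed value), illustrates how strongly the predicted annihilation signal depends on the inner slope:

```python
def gnfw_shape(r, gamma, r_s):
    """Unnormalized generalized NFW shape, radii in kpc."""
    x = r / r_s
    return x ** (-gamma) / (1.0 + x) ** (3.0 - gamma)

def rho_gnfw(r, gamma=1.0, r_s=20.0, rho_local=0.4, r_sun=8.25):
    """Generalized NFW density (GeV/cm^3), normalized so that
    rho(r_sun) = rho_local; r_sun = 8.25 kpc is an assumed value."""
    return rho_local * gnfw_shape(r, gamma, r_s) / gnfw_shape(r_sun, gamma, r_s)

# The annihilation rate scales as rho^2, so a modestly steeper inner
# slope substantially brightens the inner Galaxy: compare gamma = 1.25
# to gamma = 1.0 at r = 0.1 kpc from the Galactic Center.
boost = (rho_gnfw(0.1, gamma=1.25) / rho_gnfw(0.1, gamma=1.0)) ** 2
```

Even with both profiles pinned to the same local density, the $\gamma=1.25$ profile predicts an annihilation rate several times higher than $\gamma=1$ at 100 pc, and the contrast grows rapidly at smaller radii; this is why the Galactic Center limits discussed below depend so strongly on $\gamma$.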
As a simple example, consider dark matter that is distributed according to a standard NFW profile with $R_s=20$ kpc and a local density of 0.4 GeV/cm$^3$. Using Eq. \[gamma\], this yields the following flux of gamma-ray annihilation products originating from the innermost 2 kpc around the Galactic Center: $$\begin{aligned} \label{gcflux} \Phi_{\gamma} \sim 10^{-8} \, {\rm cm}^{-2} {\rm s}^{-1} \bigg(\frac{\langle \sigma v \rangle}{2\times 10^{-26} \, {\rm cm}^3/{\rm s}}\bigg) \bigg(\frac{\int \frac{dN_{\gamma}}{dE_{\gamma}} dE_{\gamma}}{10}\bigg)\bigg(\frac{100 \, {\rm GeV}}{m_X}\bigg)^2. \end{aligned}$$ The first thing to notice about this flux is that it is more than three orders of magnitude larger than that predicted from the brightest dwarf galaxies. The problem, of course, is that of astrophysical backgrounds. The dominant gamma-ray backgrounds from this region of the sky consist of diffuse emission resulting from, 1) pion production via cosmic-ray proton scattering with gas, 2) cosmic-ray electron scattering with radiation via inverse Compton scattering, and 3) cosmic-ray electron scattering with gas via Bremsstrahlung. Models for these backgrounds are built using inputs such as gas maps and models of cosmic-ray transport. While such models are often capable of describing the broad features of the observed Galactic diffuse emission, they cannot (and should not be expected to) account for the detailed spectral or morphological characteristics of this background. In addition, significant backgrounds also arise from gamma-ray point sources, such as supernova remnants, pulsars, blazars and the Milky Way’s central supermassive black hole (Sgr A$^*$). Fermi’s observations of the Galactic Center have been used to place some of the most stringent constraints on the dark matter annihilation cross section, and in Fig. \[limits\] these results are shown [@TheFermi-LAT:2017vmf] (see also Ref. [@Hooper:2012sr]).
Results are presented for annihilations to $b\bar{b}$ and $\tau^+ \tau^-$ final states and for the case of an NFW profile ($\gamma=1$) or a generalized NFW profile with $\gamma=1.25$ (in each case with $R_s=20$ kpc and a local density of 0.4 GeV/cm$^3$). These constraints are compared to those derived from stacked observations of Milky Way dwarf galaxies. In evaluating such results, it is important to keep in mind that the constraints based on the Galactic Center can vary considerably depending on the assumptions made regarding the Milky Way’s halo profile ([*i.e.*]{} the values of $\gamma$, $R_s$, $\rho_{\rm local}$). ![image](f29a.pdf){width="49.00000%"} ![image](f29b.pdf){width="49.00000%"}

[*1. The Galactic Center Gamma-Ray Excess*]{}

![image](heatmaps_negative.pdf){width="75.00000%"} In 2009, Lisa Goodenough and I began to analyze the publicly available Fermi data in an effort to place constraints on any contribution from annihilating dark matter. In October of that year, we posted to the arXiv the first paper to identify what would become known as the Galactic Center gamma-ray excess [@Goodenough:2009gk]. Over the following years, a number of studies [@Hooper:2010mq; @Hooper:2011ti; @Abazajian:2012pn; @Gordon:2013vta; @Hooper:2013rwa; @Huang:2013pda] improved upon this early work. By 2014 or so [@Daylan:2014rsa], a consensus had begun to form that the excess is in fact present and that it exhibits the following characteristics:

- [The spectrum of the excess peaks at an energy of $\sim$1-5 GeV and falls off at both higher and lower energies (in $E^2 dN/dE$ units). The spectrum also appears to be uniform, without detectable variations throughout the Inner Galaxy [@Calore:2014xka]. If interpreted as dark matter annihilation products, the spectral shape implies a dark matter candidate with a mass in the range of $\sim$40-70 GeV (for the case of annihilations to $b\bar{b}$). See Figs.
\[spectrum\] and \[calore2\].]{}

- [The angular distribution of the excess is approximately azimuthally symmetric with respect to the Galactic Center, with a flux that scales as $F_{\gamma} \propto r^{-\Gamma}$ with $\Gamma=2.0-2.7$, where $r$ is the distance to the Galactic Center [@Daylan:2014rsa; @Abazajian:2014fta; @Calore:2014xka; @TheFermi-LAT:2015kwa; @Linden:2016rcf; @TheFermi-LAT:2017vmf]; see Fig. \[calore2\]. The emission continues with roughly this profile out to at least $10^{\circ}-20^{\circ}$ away from the Galactic Center (where it becomes too faint to reliably characterize). If interpreted in terms of dark matter annihilation, the observed morphology implies a halo profile with an inner slope of $\gamma = \Gamma/2 \sim 1.0-1.35$ (see Eq. \[gennfw\]).]{}

- [The overall intensity of the excess is consistent with that expected from a dark matter candidate with an annihilation cross section of roughly $\langle \sigma v \rangle \sim 10^{-26}$ cm$^3/$s. See Fig. \[calore2\].]{}

From early on, it was appreciated that these characteristics are each broadly consistent with the expectations of dark matter in the form of a simple annihilating relic. It was also realized, however, that the astrophysical backgrounds from this region were not particularly well understood. Of particular concern were those potential backgrounds associated with gamma-ray pulsars [@Hooper:2010mq; @Abazajian:2010zy; @Abazajian:2012pn; @Hooper:2013nhl; @Gordon:2013vta; @Yuan:2014rca; @Abazajian:2014fta; @Cholis:2014lta] and recent cosmic-ray outburst events [@Carlson:2014cwa; @Petrovic:2014uda; @Cholis:2015dea].
![image](f4c.pdf){width="32.00000%"} ![image](f4d.pdf){width="32.00000%"} ![image](f4e.pdf){width="32.00000%"} ![image](f4f.pdf){width="32.00000%"} ![image](f9b.pdf){width="32.00000%"} ![image](f14b.pdf){width="32.00000%"} ![image](ROIspectra.pdf){width="60.00000%"} ![image](ROIs.pdf){width="30.00000%"}\ ![image](cs_vs_gamma.pdf){width="35.00000%"} ![image](fit_DM.pdf){width="35.00000%"} The gamma-ray emission observed from pulsars exhibits a spectral shape that is, in most cases, similar to that of the observed excess. Motivated by the possibility that the Galactic Center gamma-ray excess might originate from a population of unresolved gamma-ray pulsars, statistical tests were performed on the Fermi data to search for evidence of sub-threshold sources. In particular, Bartels, Krishnamurthy and Weniger utilized a wavelet-based technique designed to test for the presence of a large number of sub-threshold point sources [@Bartels:2015aea], while Lee, Lisanti, Safdi, Slatyer and Xue employed a non-Poissonian template fitting technique to a similar end [@Lee:2015fea]. Each of these groups reported the detection of small-scale power in the photon distribution in the Inner Galaxy, and interpreted these results as evidence for a significant population of unresolved gamma-ray point sources. Today there is a consensus that the Fermi data from this region of the sky does, in fact, exhibit significant small scale power, possibly indicative of such a population. In my opinion, however, it is not at all clear that pulsars are responsible for the observed excess. While the small scale power reported in Refs. [@Bartels:2015aea; @Lee:2015fea] might reflect a large population of unresolved point sources, it is also entirely plausible that such a feature could arise as the result of imperfect modeling of diffuse backgrounds. 
Furthermore, if pulsars are responsible for the excess, it is somewhat surprising that we have not yet detected any individual pulsars from this region of the Galaxy [@Hooper:2016rap; @Cholis:2014lta; @Bartels:2017xba], or observed more low-mass X-ray binaries [@Haggard:2017lyq]. In any case, it is clear that more data will be required to clarify this situation. Particularly promising are further gamma-ray observations of dwarf galaxies, as well as future radio searches for millisecond pulsars in the Inner Galaxy [@Calore:2015bsx].

The Isotropic Gamma-Ray Background
----------------------------------

Dark matter annihilations could produce significant contributions to the isotropic gamma-ray background (IGRB). In particular, the IGRB is expected to receive contributions from dark matter annihilating in the halo of the Milky Way, as well as from the integrated annihilation rate over the large scale structure of the universe. Furthermore, over cosmological distances, a significant fraction of high-energy gamma rays scatter with the cosmic radiation backgrounds, producing $e^+ e^-$ pairs which then go on to generate additional gamma rays as part of an electromagnetic cascade. The Fermi Collaboration has measured the IGRB at energies between 100 MeV and 820 GeV [@Ackermann:2014usa]. Although previously detected by other instruments [@1978ApJ...222..833F; @Sreekumar:1997un], Fermi’s measurement of the IGRB has provided a more detailed description of its characteristics and led to a more complete understanding of its origin.
It has long been speculated that the majority of the IGRB is produced by a large number of unresolved sources, such as active galactic nuclei (AGN) [@Stecker:1993ni; @1993MNRAS.260L..21P; @Salamon:1994ku; @Stecker:1996ma; @Mukherjee:1999it; @Narumoto:2006qg; @Giommi:2005bp; @Dermer:2006pd; @Pavlidou:2007dv; @Inoue:2008pk] and star-forming galaxies [@Pavlidou:2002va; @Thompson:2006qd; @Fields:2010bw; @Makiya:2010zt], perhaps along with contributions from the annihilations or decays of dark matter particles [@Stecker:1978du; @Gunn:1978gr; @Gao:1991rz; @Ullio:2002pj]. In recent years, Fermi’s detection of gamma-ray emission from both non-blazar AGN [@Inoue:2011bm] and star-forming galaxies [@Ackermann:2012vca], combined with the observed correlations of the emission at gamma-ray and radio/infrared wavelengths, has revealed that these source classes each contribute significantly to the IGRB. Even more recent studies have shown that the combination of these source classes dominates the observed IGRB [@hooper2016radio; @Linden:2016fdd], leaving relatively little room for the presence of dark matter annihilation products. The high-latitude gamma-ray sky receives contributions from several different processes associated with the annihilation of dark matter particles. First, dark matter particles annihilating in the halo of the Milky Way generate a flux of gamma rays that can be calculated using Eq. \[gamma\]. While this emission is not strictly isotropic, the line-of-sight integral in Eq. \[gamma\] departs by less than 10% from the average value within the range of angles that contribute to Fermi’s measurement of the IGRB ($|b| > 20^{\circ}$). A component of gamma-ray emission with such a small degree of variation across the high-latitude sky would be indistinguishable from the overall IGRB. Dark matter annihilations beyond the boundaries of the Milky Way also contribute to the IGRB.
Over cosmological distances, however, gamma rays are much more likely to be attenuated via pair production, thereby initiating electromagnetic cascades. Neglecting attenuation for the moment, the spectrum of gamma rays per area per time per solid angle from annihilating dark matter is given by: $$\frac{dN_{\gamma}}{dE_{\gamma}}(E_{\gamma}) = \frac{c}{8\pi} \int \frac{\langle \sigma v \rangle \rho^2_{\rm X}(z) \, dz}{H(z) (1+z)^3 \, m^2_{\rm X}} \,\bigg(\frac{dN_{\gamma}}{dE'}\bigg)_{E' = E_{\gamma}(1+z)},$$ where $H(z) = H_0 [\Omega_M (1+z)^3 + \Omega_{\Lambda}]^{0.5}$ is the expansion rate of the universe in terms of the cosmological parameters $\Omega_M=0.31$, $\Omega_{\Lambda} =0.69$ and $H_0=67.7$ km/s/Mpc [@Aghanim:2018eyx]. Although the average dark matter density evolves as $\rho_{\rm X}(z) \propto (1+z)^3$, the clumping of dark matter into halos plays a very important role in this integral, effectively boosting the annihilation rate to potentially observable levels. Lastly, the quantity $dN_{\gamma}/dE'$ is the gamma-ray spectrum produced per annihilation, after accounting for the effects of cosmological redshift. High-energy gamma rays are significantly attenuated through their scattering with infrared, optical and microwave radiation fields [@Murase:2011yw; @Murase:2012xs; @Murase:2011cy; @Murase:2012df; @Berezinsky:2016feh]. The inverse mean free path of these interactions is given by: $$\begin{aligned} l^{-1} (E_{\gamma},z)&=& \int \sigma_{\gamma\gamma}(E_{\gamma},\epsilon) \, \frac{dn}{d\epsilon}(\epsilon,z) \, d\epsilon, \end{aligned}$$ where $\sigma_{\gamma\gamma}$ is the total pair-production cross section [@aharonian1983] and $dn(\epsilon,z)/d\epsilon$ is the differential number density of target photons at redshift $z$ [@Dominguez:2010bv].
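Neglecting attenuation, the redshift integral for the cosmological annihilation spectrum can be evaluated numerically. In the sketch below only the cosmology is taken from the text; the clumping boost factor and the injected spectrum $dN/dE'$ are hypothetical placeholders, and `rho0` is the approximate mean cosmological dark matter density today:

```python
import math

H0 = 67.7 * 1.0e5 / 3.086e24      # 67.7 km/s/Mpc expressed in s^-1
OMEGA_M, OMEGA_L = 0.31, 0.69

def hubble(z):
    """Expansion rate H(z) in s^-1 for the flat LCDM parameters above."""
    return H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def igrb_spectrum(E, sigma_v=2e-26, m_x=100.0, rho0=1.3e-6,
                  zmax=10.0, n=2000):
    """Toy evaluation of the unattenuated cosmological annihilation
    spectrum (photons cm^-2 s^-1 sr^-1 GeV^-1). rho0 ~ 1.3e-6 GeV/cm^3
    is the approximate mean dark matter density today; the boost factor
    and injected spectrum below are purely illustrative placeholders."""
    c = 3.0e10  # cm/s
    dz = zmax / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        boost = 1.0e5 / (1.0 + z) ** 1.5        # hypothetical clumping boost
        rho2 = (rho0 * (1.0 + z) ** 3) ** 2 * boost
        Ep = E * (1.0 + z)                      # emission energy before redshift
        if Ep >= m_x:
            continue                            # no photons above the DM mass
        dnde = math.exp(-3.0 * Ep / m_x) / Ep   # toy injected spectrum dN/dE'
        total += sigma_v * rho2 * dnde * dz / (hubble(z) * (1.0 + z) ** 3 * m_x**2)
    return c / (8.0 * math.pi) * total
```

Because $\bar{\rho}^2(z)/(1+z)^3 \propto (1+z)^3$, the integrand does not die off at high redshift on its own; the redshift dependence of the boost factor (halo clumping builds up at late times) is what concentrates the signal at low $z$ in realistic calculations.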
In practice, such interactions make the universe opaque to photons with energies greater than a few hundred GeV, causing the photons and electrons above this threshold to have their energy transferred into an electromagnetic cascade with a universal spectrum that peaks mildly at $\sim$100 GeV and extends across the entire range of energies measured by Fermi. Fermi’s measurement of the IGRB has been used to produce constraints on the dark matter’s annihilation cross section that are competitive with, although slightly weaker than, those derived from observations of dwarf galaxies and the Galactic Center [@Ackermann:2015tah; @DiMauro:2015tfa; @Ajello:2015mfa; @Cholis:2013ena; @Ando:2015qda]. Given that these constraints rely on a different set of astrophysical uncertainties, they remain relevant through their complementarity to these other techniques.

Cosmic-Ray Searches for Dark Matter Annihilation Products
=========================================================

The cosmic-ray spectrum is dominated by protons and nuclei. At energies below the “knee” ($E_{\rm knee} \sim 10^{6}$ GeV), most of these particles are thought to originate from Galactic supernova remnants. At higher energies, this spectrum is instead dominated by particles that originate from sources beyond the boundaries of the Milky Way, perhaps including active galactic nuclei. At all energies, the cosmic-ray spectrum is dominated by matter over antimatter. It has long been appreciated that if dark matter particles are annihilating or decaying in the halo of the Milky Way, such processes would (in most models) produce equal amounts of matter and antimatter, leading to an excess of antimatter relative to that predicted by standard astrophysical mechanisms.
In this sense, indirect searches for dark matter using cosmic rays are often (but not always) searches for cosmic-ray antimatter, such as antiprotons, positrons, or anti-deuterons [@Silk:1984zy; @Ellis:1988qp; @Stecker:1985jc; @Turner:1989kg; @Kamionkowski:1990ty; @Bergstrom:1999jc; @Donato:2003xg; @Bringmann:2006im; @Donato:2008jk; @Fornengo:2013xda; @Hooper:2014ysa; @Pettorino:2014sua; @Boudaud:2014qra; @Cembranos:2014wza; @Cirelli:2014lwa; @Bringmann:2014lpa; @Giesen:2015ufa; @Evoli:2015vaa]. Once cosmic rays are injected into the halo, they move via diffusion, random walking through the Milky Way’s tangled magnetic field, while also undergoing interactions that can lead to energy losses, decay, etc. These processes are collectively described by the cosmic-ray transport equation [@Strong:2007nh]: $$\begin{aligned} \frac{\partial{}}{\partial{t}}\frac{dn}{dE}(E,\vec{x},t) = \vec{\bigtriangledown} \cdot \bigg[D(E,\vec{x}) \vec{\bigtriangledown} \frac{dn}{dE}(E,\vec{x},t) \bigg] + \frac{\partial}{\partial E} \bigg[\frac{dE}{dt}(E) \, \frac{dn}{dE}(E,\vec{x},t) \bigg] + Q(E,\vec{x},t), \nonumber \\ \label{diffusionlosseq}\end{aligned}$$ where $dn/dE$ is the differential number density of cosmic rays, $D$ is the diffusion coefficient, and the source term, $Q$, describes the spectrum, distribution, and time profile of cosmic rays injected into the halo (or removed from the halo in the case of decay or spallation). This equation is generally solved in the steady-state limit (setting the left side equal to zero), and for a set of boundary conditions. More specifically, a cylindrical geometry is generally adopted, enclosing a volume with a half-thickness of $L_{\rm z} \sim (1-6) \, {\rm kpc}$ and a radius of $\sim$20 kpc, beyond which the particles are not confined by the Galactic magnetic field and freely escape. The source term in Eq.
\[diffusionlosseq\] includes contributions from individual sources of cosmic-ray electrons (supernova remnants, pulsars, etc.), as well as secondary particles, which are produced through the interactions of other cosmic rays. Secondary electrons and positrons, for example, are generated in the decays of pions and kaons that are produced in the collisions of hadronic cosmic rays with gas. The flux of cosmic-ray secondaries can be calculated from Eq. \[diffusionlosseq\] by setting $Q=\int J_p n_{\rm gas}(d\sigma/dE) dE_p$, where $J_p$ is the flux of hadronic cosmic rays, $n_{\rm gas}$ is the gas density, and $d\sigma/dE$ is the differential cross section for the production of secondaries [@Moskalenko:1997gh; @Moskalenko:2001ya]. Detailed models of cosmic-ray transport have many free parameters, including the spatial distribution and spectrum of sources, the energy and spatial dependence of the diffusion constant, the boundary conditions of the diffusion zone, as well as those which account for effects such as convection, diffusive reacceleration and solar modulation. Fortunately, there are also many independent observations that can be used to constrain these parameters. In particular, by measuring the energy-dependent ratios of secondary-to-primary cosmic rays, we can infer a great deal about cosmic-ray transport. For example, stable secondary-to-primary ratios (such as boron-to-carbon) inform us about the average amount of matter traversed by cosmic rays as a function of energy. Unstable secondary-to-primary ratios, in contrast, serve as a measurement of the amount of time that cosmic rays have been propagating. Beryllium-10 is particularly useful in this regard, being the longest lived and best measured unstable secondary. The measurement of $^{10}$Be/$^{9}$Be thus serves as a clock, since the ratio of the radioactive isotope to the stable one is directly related to the amount of time elapsed since the creation of the particles.
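A back-of-the-envelope version of this radioactive clock, using the laboratory half-life of $^{10}$Be and ignoring the full transport modeling that real analyses perform, can be sketched as:

```python
import math

T_HALF_BE10_YR = 1.39e6   # laboratory half-life of 10Be, in years

def surviving_fraction(t_yr, lorentz_gamma=1.0):
    """Fraction of 10Be surviving after a propagation time t_yr,
    including relativistic time dilation for the given Lorentz factor."""
    tau = lorentz_gamma * T_HALF_BE10_YR / math.log(2.0)  # dilated mean lifetime
    return math.exp(-t_yr / tau)

def residence_time(f, lorentz_gamma=1.0):
    """Invert the clock: propagation time (yr) implied by a measured
    surviving fraction f relative to the production ratio."""
    tau = lorentz_gamma * T_HALF_BE10_YR / math.log(2.0)
    return -tau * math.log(f)

# e.g. a surviving fraction of ~0.3 for a non-relativistic nucleus
# corresponds to a residence time of roughly 2.4 Myr
```

In practice the inferred residence time (combined with the matter traversed from boron-to-carbon) is what pins down the combination of the diffusion coefficient and the halo height $L_z$ in global fits.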
When global fits are performed to the current set of cosmic-ray data, one finds $D(E) \simeq (3.9 \times 10^{28} \, {\rm cm}^2/{\rm s})\,(E/{\rm GeV})^{0.3}$ and $L_z \simeq 4$ kpc. To build some intuition for cosmic-ray transport, consider the simple example of a burst-like source of cosmic-ray protons that occurs at a particular place and time within the diffusion zone of the Galaxy. In this case, the source term is given by $Q = Q_0 \delta(\vec{x})\delta(t)$, and for protons we can safely neglect any energy losses. The transport equation then reduces to the following: $$\frac{\partial{}}{\partial{t}}n(\vec{x},t) = \vec{\bigtriangledown} \cdot \bigg[D(\vec{x}) \vec{\bigtriangledown} n(\vec{x},t) \bigg] + Q_0 \delta(\vec{x})\delta(t),$$ whose solution is $n \propto (Dt)^{-3/2} \exp(-r^2/4Dt)$, where $r$ is the radial distance to the source. The main feature of this solution is that these particles diffuse outward from the source a distance of order $L_{\rm dif} \sim \sqrt{D t}$, which scales with the square root of time as one would expect for a random walk. For a diffusion constant of $D = 3.9 \times 10^{28} \, {\rm cm}^2/{\rm s} \times (E/\rm GeV)^{0.3}$, the diffusion length is given by $L_{\rm dif} \sim 800 \, {\rm pc} \times (E/100 \, {\rm GeV})^{1/6} (t/{\rm Myr})^{1/2}$. Note that this result is easily generalizable to the case of an arbitrary injection history by summing the results for a series of burst-like events.
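The quoted diffusion length follows directly from $L_{\rm dif} \sim \sqrt{Dt}$; a quick numerical check with the best-fit transport parameters above:

```python
def diffusion_length_pc(E_GeV, t_myr, D0=3.9e28, delta=0.3):
    """Characteristic diffusion distance L ~ sqrt(D t) in parsecs,
    for D = D0 * (E/GeV)^delta with D0 in cm^2/s (best-fit values
    quoted in the text)."""
    PC_CM = 3.086e18          # cm per parsec
    MYR_S = 3.15e13           # seconds per Myr
    D = D0 * E_GeV ** delta   # diffusion coefficient in cm^2/s
    return (D * t_myr * MYR_S) ** 0.5 / PC_CM

# 100 GeV protons after 1 Myr have random-walked ~700-800 pc
L100 = diffusion_length_pc(100.0, 1.0)
```

This reproduces the $\sim$800 pc figure quoted above to within rounding, and the $t^{1/2}$ scaling (quadrupling the time doubles the distance) is the random-walk behavior described in the text.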
In the case of cosmic-ray electrons and positrons, it is important to include the effects of energy losses from inverse Compton scattering and synchrotron emission [@Blumenthal:1970gc]: $$\begin{aligned} \label{losses} -\frac{dE_e}{dt}(r) &=& \sum_i \frac{4}{3}\sigma_T \rho_i(r) S_i(E_e) \bigg(\frac{E_e}{m_e}\bigg)^2 + \frac{4}{3}\sigma_T \rho_{\rm mag}(r) \bigg(\frac{E_e}{m_e}\bigg)^2 \\ & \approx & 1.02 \times 10^{-16} \, {\rm GeV}/{\rm s} \, \times \bigg[ \sum_i \bigg(\frac{\rho_{i}(r)}{{\rm eV}/{\rm cm}^3}\bigg) \, S_{i}(E_e) + 0.224 \,\bigg(\frac{B}{3\, \mu \rm{G}}\bigg)^2 \bigg] \, \bigg(\frac{E_e}{{\rm GeV}}\bigg)^2, \nonumber\end{aligned}$$ where $\sigma_T$ is the Thomson cross section and the sum is carried out over the various components of the radiation backgrounds, such as the cosmic microwave background (CMB) and starlight, as well as infrared and ultraviolet emission. The quantity $S$ is the Klein-Nishina factor, which suppresses inverse Compton scattering at very high energies ($E_e \gtrsim m^2_e/2T$) [@Longair]: $$S_i (E_e) \approx \frac{45 \, m^2_e/64 \pi^2 T^2_i}{(45 \, m^2_e/64 \pi^2 T^2_i)+(E^2_e/m^2_e)}.$$ If we consider a burst-like source of cosmic-ray electrons/positrons, we find that energy losses limit the distance that such particles can propagate, especially at high energies. It follows from Eq. \[losses\] that a 100 GeV (1 TeV) electron will lose most of its energy over a timescale of $t_{\rm loss} \sim 3$ Myr (300 kyr), over which $L_{\rm dif} \sim 1.4 \, {\rm kpc}$ (400 pc). From this exercise, we conclude that only a relatively small volume of the local Galaxy contributes to the observed high-energy cosmic-ray electron/positron spectrum.

Cosmic-Ray Positrons
--------------------

In 2008, the collaboration of scientists operating the satellite-based experiment PAMELA reported that the cosmic-ray positron fraction (the ratio of positrons to positrons-plus-electrons) rises between approximately 10 GeV and 100 GeV [@Adriani:2008zr] (see also Ref.
[@Adriani:2013uda]). While consistent with previous indications from HEAT [@Barwick:1997ig] and AMS-01 [@Aguilar:2007yf], this rise is in stark contrast to the behavior expected for a positron spectrum dominated by secondary particles produced during cosmic-ray propagation [@Moskalenko:1997gh]. Within this context, the possibility that annihilating dark matter might be responsible for this signal generated a great deal of interest [@Cholis:2008hb; @Bergstrom:2008gr; @Zurek:2008qg; @Harnik:2008uu; @Cirelli:2008jk], although it was also pointed out that nearby pulsars [@Hooper:2008kg] or the acceleration of secondary positrons in supernova remnants [@Blasi:2009hv; @Blasi:2009bd] could potentially account for the excess positrons. In any case, the rising positron fraction requires a source (or sources) of cosmic-ray positrons beyond that associated with standard secondary production. When the data from PAMELA was combined with that from AMS-02, as well as the electron+positron spectrum from Fermi, it became clear that in order for dark matter to generate this signal, the particle would have to be quite heavy ($\sim$1-3 TeV) and annihilate into light intermediate states [@ArkaniHamed:2008qn; @Cholis:2008qq]; see Fig. \[fig:cholis\] [@Cholis:2013psa]. Light mediators could also induce Sommerfeld enhancements, thereby allowing a heavy thermal relic to generate a large observed flux of positrons. ![image](positronRatio_L8kpc_Ebreak_XDMmodels.pdf){width="3.30in"} ![image](Fermi_leptonsFlux_L8kpc_Ebreak_XDMmodels.pdf){width="3.30in"}\ If astrophysical sources were responsible for the rising positron fraction, those sources must reside within several hundred parsecs of the Solar System, due to the rapid energy losses of high-energy electrons/positrons. Within this context, the nearby pulsars Geminga and Monogem (also known as B0656+14) are particularly interesting. In fact, it was shown in Ref. 
[@Hooper:2008kg] that if these sources deposited on the order of 10% of their energy budget into high-energy electron-positron pairs, they could account for the observed positron excess. In 2017, the HAWC Collaboration released their first measurements of the very high-energy (multi-TeV) gamma-ray emission from the Geminga and Monogem pulsars [@Abeysekara:2017hyn], finding that the emission from these sources follows a diffusive profile extending out to at least $\sim$5$^\circ$ in radius (corresponding to a physical extent of $\sim$25 pc) [@Abeysekara:2017old]. The spatially extended nature of this emission indicates that it is generated through the inverse Compton scattering of very high-energy electrons and positrons with the cosmic microwave background and other radiation fields. Furthermore, the fluxes of very high-energy gamma-rays observed from Geminga and Monogem indicate that these sources inject a flux of positrons into the local interstellar medium that is approximately equal to the value required to account for the observed positron excess. This new information strongly favors the conclusion that the positron excess is generated by nearby pulsars, diminishing the motivation for annihilating dark matter or other exotic mechanisms [@Hooper:2017gtd]. ![Left: The AMS positron fraction as measured by AMS-02 and background+signal fit for dark matter annihilating directly to $e^+ e^-$, for dark matter masses of 10 or 100 GeV. The normalization of the dark matter signal in each case was chosen such that it is excluded at the $95\%$ confidence level. For visibility, the contribution from dark matter (lower lines) has been rescaled as indicated. Right: Upper limits ($95\%$ confidence level) on the dark matter annihilation cross section, as derived from the AMS-02 positron fraction, for various leptonic final states. The dotted portions of the curves are potentially affected by solar modulation, and are thus subject to sizable systematic uncertainties. From Ref. 
[@Bergstrom:2013jra].[]{data-label="fig:fraction"}](plot_fraction_combined.pdf "fig:"){width="0.49\linewidth"} ![Left: The AMS positron fraction as measured by AMS-02 and background+signal fit for dark matter annihilating directly to $e^+ e^-$, for dark matter masses of 10 or 100 GeV. The normalization of the dark matter signal in each case was chosen such that it is excluded at the $95\%$ confidence level. For visibility, the contribution from dark matter (lower lines) has been rescaled as indicated. Right: Upper limits ($95\%$ confidence level) on the dark matter annihilation cross section, as derived from the AMS-02 positron fraction, for various leptonic final states. The dotted portions of the curves are potentially affected by solar modulation, and are thus subject to sizable systematic uncertainties. From Ref. [@Bergstrom:2013jra].[]{data-label="fig:fraction"}](AMSlimits.pdf "fig:"){width="0.49\linewidth"} Even if dark matter is not responsible for the excess positrons observed by PAMELA and AMS-02, it is possible to use these measurements to place constraints on annihilating dark matter, in particular in the case of annihilations to charged leptons. In Fig. \[fig:fraction\] we show the constraints that result from the lack of a distinctive feature in the cosmic-ray positron spectrum [@Bergstrom:2013jra; @Ibarra:2013zia]. For dark matter that annihilates to $e^+ e^-$ ($\mu^+ \mu^-$), this constraint rules out the thermal relic benchmark cross section for masses up to $\sim$170 GeV ($\sim$100 GeV). Cosmic-Ray Antiprotons ---------------------- In addition to positrons, the AMS-02 experiment has also produced a high-precision measurement of the cosmic-ray antiproton spectrum [@Aguilar:2016kjl]. Analysis of the antiproton-to-proton ratio, in conjunction with other secondary-to-primary ratios, has found overall agreement with the expectations for standard secondary production over much of the measured energy range. 
At energies between 10 and 20 GeV and above $100$ GeV, however, there appears to be an excess of antiprotons (see Fig. \[fig:antiprotonlimits\]) [@Cuoco:2016eej; @Cui:2016ppb] (see also Refs. [@Cui:2018klo; @Cuoco:2017rxb; @Cuoco:2017iax]). At the highest energies, this excess could quite plausibly be the result of the reacceleration of antiproton secondaries produced in supernova remnants [@Cholis:2017qlb]. The excess at 10-20 GeV has no simple explanation, however, and has been interpreted as a possible signal of annihilating dark matter [@Cuoco:2016eej; @Cui:2016ppb; @Cui:2018klo; @Cuoco:2017rxb; @Cuoco:2017iax]. That being said, systematic uncertainties related to the antiproton production cross section, solar modulation and cosmic-ray transport make the significance of this feature difficult to assess at this time [@Reinert:2017aga; @Winkler:2017xor]. Even so, it is intriguing to note that the range of dark matter models favored by the Galactic Center gamma-ray excess is also well suited to produce an antiproton excess similar to that measured by AMS-02 ($m_X\sim 60-80$ GeV, $\langle \sigma v \rangle \sim 2 \times 10^{-26}$ cm$^3/$s). In Fig. \[fig:antiprotonlimits\] we show the antiproton-to-proton ratio as measured by AMS-02 and the resulting constraints on annihilating dark matter (as well as the region of parameter space favored to produce the observed excess). Anti-Deuterium and Anti-Helium ------------------------------ Although AMS-02 has not yet published the results of their searches for anti-deuterium or anti-helium events (see, however, Ref. [@AMSLaPalma]), these channels could potentially provide a powerful probe of annihilating dark matter [@Donato:1999gy; @Carlson:2014ssa; @Cirelli:2014qia; @Coogan:2017pwt; @Korsmeier:2017xzj]. Given the very low fluxes of anti-deuterium and anti-helium that are predicted from astrophysical sources or mechanisms, even a handful of such events could constitute a strong signal of annihilating dark matter (or other new physics).
It has even been argued that the observation of a single cosmic-ray anti-deuteron with a rigidity below 1 GV would constitute a compelling signal of annihilating dark matter [@Donato:1999gy; @Fuke:2005it; @Donato:2008yx; @Ibarra:2013qt; @Hryczuk:2014hpa; @Carlson:2014ssa; @Aramaki:2015laa; @Reinert:2017aga]. There exist, however, very substantial uncertainties related to the anti-nuclei fluxes predicted from standard secondary production, as well as from annihilating dark matter. Although these uncertainties make the prospects for such searches somewhat difficult to assess, measurements of cosmic-ray anti-nuclei by AMS-02 (as well as GAPS [@Aramaki:2015laa; @Ong:2017szd]) are generally expected to be among the most exciting channels for indirect dark matter searches in the years ahead. ![Left: The cosmic-ray antiproton-to-proton ratio as a function of rigidity as measured by AMS-02 compared to that predicted from standard secondary production in the interstellar medium. The lower panel shows the corresponding residual, with the grey bands representing the 1 and 2$\sigma$ uncertainties. Although an excess appears at energies between 10 and 20 GeV, systematic uncertainties associated with the antiproton production cross section, solar modulation and cosmic-ray transport make the significance of such features difficult to assess. Right: Constraints on the dark matter annihilation cross section (for annihilations to $b\bar{b}$) from the $\bar{p}/p$ ratio. In this frame the grey bands represent the range of constraints that are derived for various assumptions, and can be treated as an estimate of the systematic uncertainties. From Ref. [@Cuoco:2016eej].[]{data-label="fig:antiprotonlimits"}](fit_pbaroverp_noDM.pdf "fig:"){width="0.49\linewidth"} ![Left: The cosmic-ray antiproton-to-proton ratio as a function of rigidity as measured by AMS-02 compared to that predicted from standard secondary production in the interstellar medium.
The lower panel shows the corresponding residual, with the grey bands representing the 1 and 2$\sigma$ uncertainties. Although an excess appears at energies between 10 and 20 GeV, systematic uncertainties associated with the antiproton production cross section, solar modulation and cosmic-ray transport make the significance of such features difficult to assess. Right: Constraints on the dark matter annihilation cross section (for annihilations to $b\bar{b}$) from the $\bar{p}/p$ ratio. In this frame the grey bands represent the range of constraints that are derived for various assumptions, and can be treated as an estimate of the systematic uncertainties. From Ref. [@Cuoco:2016eej].[]{data-label="fig:antiprotonlimits"}](summary_gamma_limits.pdf "fig:"){width="0.49\linewidth"} Neutrino Searches for Dark Matter Annihilation Products ======================================================= In addition to gamma rays and cosmic rays, dark matter annihilations can generate high-energy neutrinos, potentially detectable by telescopes such as IceCube [@Aartsen:2016zhm; @Aartsen:2016fep] or, at lower energies, Super-Kamiokande [@Desai:2004pq]. Strategies similar to those described for gamma-ray telescopes in Sec. \[gammasec\] have been employed to use neutrino telescopes to search for dark matter annihilation products from the Galactic Center [@Aartsen:2017ulx], or the Galactic Halo  [@Aartsen:2016pfc]. Due to the small interaction cross sections of neutrinos, however, such constraints are in most cases much weaker than those derived from gamma-ray or cosmic-ray based searches, generally leading to upper limits on the annihilation cross section that lie between $\langle \sigma v \rangle \sim 10^{-21} - 10^{-23}$ cm$^3/$s, depending on the mass of the dark matter candidate and the annihilation channel. Neutrinos do, however, have a potential advantage over gamma rays and cosmic rays in that they can penetrate large quantities of matter. 
As a result, it may be possible to detect neutrinos that are produced through dark matter annihilations in the core of the Sun or Earth [@Silk:1985ax; @Hagelin:1986gv; @Freese:1985qw; @Krauss:1985aaa; @Gaisser:1986ha]. Unlike most other indirect searches, which depend primarily on the dark matter’s annihilation cross section, the prospects for detecting such annihilations in the core of the Sun or Earth also depend in large part on the dark matter’s capture rate, and thus on its elastic scattering cross section with nuclei. Although the full calculation of the capture rate is involved [@Gould:1987ir], we can make a simple back-of-the-envelope estimate for the solar capture rate as follows: $$\begin{aligned} C^{\odot} &\sim& \phi_X (M_{\odot}/m_p) \sigma_{Xp}, \\ &\sim& 10^{20} \, {\rm s}^{-1} \times \bigg(\frac{100 \, {\rm GeV}}{m_X}\bigg) \bigg(\frac{\sigma_{Xp}}{10^{-42} \, {\rm cm}^2}\bigg), \nonumber\end{aligned}$$ where $\phi_X$ is the flux of dark matter particles in the Solar System, $M_{\odot}$ is the mass of the Sun, and $\sigma_{Xp}$ is the dark matter-proton elastic scattering cross section. In the lower line of this expression, we have adopted reasonable values for the local density ($\rho_X = 0.3$ GeV/cm$^3$) and velocity distribution ($\bar{v}=270$ km/s) of dark matter particles. A more careful calculation, including the effects of gravitational focusing and the probability that a scattered dark matter particle will ultimately be gravitationally bound, leads to the following solar capture rate [@Gould:1987ir]: $$\begin{aligned} C^{\odot} &\approx& 1.3\times 10^{21} \, {\rm s}^{-1} \times \bigg(\frac{100 \, {\rm GeV}}{m_X}\bigg) \sum_i \bigg(\frac{A_i \, \sigma_{Xp} \,S(m_X/m_i)}{10^{-42} \, {\rm cm}^2}\bigg),\end{aligned}$$ where $A_i$ denotes the relative abundance of each nuclear species, $A_{\rm H}=1.0$, $A_{\rm He}=0.07$, $A_{\rm O} = 0.0005$, etc. 
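The back-of-the-envelope estimate above can be checked numerically. The following sketch is illustrative only: it uses the local density and velocity quoted in the text, together with rounded values for the solar mass and proton mass, and reproduces the quoted $C^{\odot} \sim 10^{20}$ s$^{-1}$:

```python
# Back-of-the-envelope solar capture rate, C ~ phi_X * (M_sun/m_p) * sigma_Xp.
# Illustrative only: the careful calculation (Gould) also includes
# gravitational focusing and the kinematic factor S(m_X/m_i).
rho_X = 0.3            # local dark matter density [GeV/cm^3]
v_bar = 270e5          # mean dark matter speed [cm/s] (270 km/s)
m_X = 100.0            # dark matter mass [GeV]
sigma_Xp = 1e-42       # dark matter-proton elastic cross section [cm^2]

M_sun_over_mp = 1.99e33 / 1.67e-24   # solar mass in units of the proton mass

phi_X = (rho_X / m_X) * v_bar        # dark matter number flux [cm^-2 s^-1]
C_sun = phi_X * M_sun_over_mp * sigma_Xp   # captures per second

print(f"C_sun ~ {C_sun:.1e} s^-1")
```

Rescaling `m_X` or `sigma_Xp` reproduces the $1/m_X$ and linear $\sigma_{Xp}$ dependence displayed in the second line of the estimate.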
The quantity $S(m_X/m_i)$ is a kinematic factor, defined as follows: $$S(x) = \bigg[\frac{A(x)^{3/2}}{1+A(x)^{3/2}}\bigg]^{2/3},$$ where $$A(x) =\frac{3}{2} \frac{x}{(x-1)^2} \bigg(\frac{v_{\rm esc}}{\bar{v}}\bigg)^2,$$ and $v_{\rm esc} \simeq 1156$ km/s is the escape velocity of the Sun. Notice that for dark matter particles much heavier than their nuclear targets, $S \propto 1/m_X$, kinematically suppressing the overall capture rate. The number of dark matter particles present in the Sun as a function of time is given as follows: $$\dot{N}(t) = C^{\odot} -A^{\odot} N(t)^2 -E^{\odot} N(t), \label{diffeq}$$ where $C^{\odot}$ is the capture rate described above, $A^{\odot}$ is the dark matter’s annihilation cross section, $\langle \sigma v \rangle$, divided by the effective volume that is occupied by the captured dark matter, and $E^{\odot}$ is the inverse of the time required for a dark matter particle to escape the Sun by evaporation. The effective volume is determined by matching the temperature of the Sun’s core to the gravitational potential energy of a single dark matter particle: $$V_{\rm eff} \simeq 5.7 \times 10^{27} \, {\rm cm}^3 \, \bigg(\frac{100 \, {\rm GeV}}{m_X}\bigg)^{3/2}.$$ For dark matter particles heavier than a few GeV, evaporation is negligible. In this case, the solution to Eq. \[diffeq\] can be written as follows: $$\Gamma(t) = \frac{1}{2}A^{\odot}N(t)^2 = \frac{1}{2}C^{\odot} \tanh^2(t \sqrt{C^{\odot} A^{\odot}}),$$ where $\Gamma(t)$ is the annihilation rate of dark matter particles as a function of the age of the Sun, $t$. Notice that for $t \gg (C^{\odot} A^{\odot})^{-1/2}$ the annihilation rate becomes a constant, $\Gamma = C^{\odot}/2$, having reached an equilibrium between the rates of capture and annihilation. ![Constraints on the dark matter’s spin-dependent elastic scattering cross section with protons as derived from observations of the Sun by IceCube, Antares and Super-Kamiokande. From Ref.
[@Aartsen:2016zhm].[]{data-label="fig:IceCube"}](SDcrosssection-eps-converted-to.pdf){width="0.7\linewidth"} In Fig. \[fig:IceCube\], we show constraints from the IceCube, Antares and Super-Kamiokande experiments on dark matter particles annihilating in the core of the Sun [@Aartsen:2016zhm]. Notice that these constraints are not on the dark matter’s annihilation cross section, but on its elastic scattering cross section with nuclei. In fact, these constraints are derived under the assumption of capture-annihilation equilibrium, $\Gamma = C^{\odot}/2$, which one should expect to be realized for the range of elastic scattering cross sections shown so long as $\langle \sigma v \rangle \gsim 10^{-27}$ cm$^3/$s. Furthermore, the constraints shown are for the case of spin-dependent scattering with nuclei, as the constraints from direct detection experiments on spin-independent scattering are very stringent, and rule out most models that neutrino telescopes would be sensitive to. From this figure, we see that the constraints from neutrino telescopes can exceed those from direct detection experiments in cases in which the dark matter annihilates to final states that produce large numbers of high-energy neutrinos, such as $W^+ W^-$ or $\tau^+ \tau^-$. Constraints on Annihilating Dark Matter from the Cosmic Microwave Background ============================================================================ Thus far in these lectures, I have focused on ways in which we could potentially observe the annihilation products of dark matter directly. But it is also possible to place constraints on dark matter by studying the impact of its annihilation products on the universe during various eras of cosmic history. In particular, dark matter annihilation products could alter the light element abundances that are produced during Big Bang Nucleosynthesis (BBN), or change the ionization history of our universe during and after the formation of the cosmic microwave background (CMB).
Consider a thermal relic dark matter candidate. By the definition of what it means for a particle to undergo freeze-out, an order one fraction of the total dark matter population underwent annihilations within a Hubble time during this process. As the universe continued to expand, the annihilation rate dropped rapidly. We can write the number of dark matter annihilations per comoving volume per Hubble time as follows: $$N_{\rm ann} = \frac{1}{2}\frac{\rho^2_X \langle \sigma v \rangle V_c}{m^2_X H},$$ where $V_c$ is the comoving volume and $H$ is the expansion rate (making $1/H$ the Hubble time). Since $\rho_X \propto a^{-3}$, $V_c \propto a^3$, and $H \propto g^{1/2}_{\star} a^{-2}$ (during radiation domination), we conclude that the fraction of annihilations per Hubble time evolved as $N_{\rm ann} \propto g^{-1/2}_{\star} a^{-1}$ up until matter-radiation equality, and as $N_{\rm ann} \propto g^{-1/2}_{\star} a^{-3/2}$ during matter domination. For concreteness, consider a dark matter candidate with a mass of 100 GeV that froze out when the temperature was $T_{\rm FO} \simeq 100\, {\rm GeV} /20 \simeq 5$ GeV. From the scaling relationship described above, we estimate that by the time the universe had cooled to a temperature of 1 eV, on the order of 0.1 eV per baryon was being injected into the universe through dark matter annihilations per Hubble time. This is enough energy to ionize up to $\sim (0.1 \,{\rm eV}) /(13.6 \,{\rm eV}) \sim 10^{-2}$ of the hydrogen atoms, substantially impacting the process of recombination, as well as the observed characteristics of the CMB. In fact, measurements of the CMB allow us to place stringent and robust constraints on thermal relic dark matter candidates, excluding those with velocity-independent annihilation cross sections with masses up to $\sim$10-30 GeV [@Slatyer:2015jla; @Galli:2013dna; @Finkbeiner:2011dx; @Galli:2011rz; @Galli:2009zc; @Slatyer:2009yq].
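The scaling argument above can be sketched numerically. This is a rough order-of-magnitude estimate; the values adopted for $g_{\star}$ at freeze-out and at $T \sim 1$ eV, and for $\Omega_{\rm DM}/\Omega_b \simeq 5.4$, are representative assumptions rather than inputs from the text:

```python
# Rough scaling estimate of the energy injected per baryon per Hubble time by
# residual annihilations of a 100 GeV thermal relic, using
# N_ann ∝ g_star^{-1/2} a^{-1} (radiation domination) and a ∝ 1/T.
# The g_star values and Omega_DM/Omega_b ~ 5.4 are assumed, representative inputs.
m_X = 100e9                  # dark matter mass [eV]
T_FO = m_X / 20              # freeze-out temperature, ~5 GeV [eV]
T_end = 1.0                  # temperature of interest, ~1 eV
gstar_FO, gstar_end = 86.25, 3.36   # effective relativistic degrees of freedom

# Dark matter particles per baryon: (Omega_DM/Omega_b) * (m_p/m_X)
n_X_per_baryon = 5.4 * 0.94e9 / m_X

# At freeze-out an O(1) fraction of the dark matter annihilates per Hubble
# time, releasing roughly m_X per annihilating particle -> ~5 GeV per baryon:
E_per_baryon_FO = n_X_per_baryon * m_X

# Redshift this injection rate down to T ~ 1 eV:
E_per_baryon_end = E_per_baryon_FO * (T_end / T_FO) * (gstar_FO / gstar_end) ** -0.5

frac_ionized = E_per_baryon_end / 13.6   # fraction of hydrogen this could ionize
print(f"~{E_per_baryon_end:.1g} eV per baryon per Hubble time")
```

The result lands at a few tenths of an eV per baryon per Hubble time, consistent with the order-of-magnitude quoted in the text.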
![Constraints on the dark matter annihilation cross section (for a variety of annihilation channels) from the Planck Collaboration’s measurements of the cosmic microwave background. From Ref. [@Aghanim:2018eyx].[]{data-label="fig:planck"}](ann_2col.pdf){width="1.0\linewidth"} In Fig. \[fig:planck\], we show the most recent constraints on annihilating dark matter from the Planck Collaboration [@Aghanim:2018eyx]. Although these constraints do not extend to masses as high as some of the others discussed in these lectures, they are very robust and suffer from negligible astrophysical or systematic uncertainties. Furthermore, whereas gamma-ray and cosmic-ray searches for dark matter are generally less sensitive at masses below $\sim$10 GeV or so, CMB constraints rely only on the total electromagnetic power injected and thus extend to masses well below the range shown in Fig. \[fig:planck\]. The CMB-based constraints are strongest for dark matter candidates which annihilate to electrons or photons, as these channels deposit the largest fraction of their energy directly into heating and ionizing the intergalactic medium. Lastly, this figure also identifies regions of parameter space in which dark matter could account for the Galactic Center gamma-ray excess, the cosmic-ray antiproton excess, or the cosmic-ray positron excess, as discussed earlier in these lectures. Decaying Dark Matter ==================== So far, these lectures have focused on searches for dark matter annihilation products. This choice was motivated in part by the arguments presented in Sec. \[sectionone\], which relate the abundance of dark matter to the annihilation cross section of a thermal relic. But despite these arguments, there are many examples of viable dark matter candidates which do not appreciably annihilate. Alternatively, the particles that make up the dark matter could be unstable, and produce potentially observable fluxes of decay products.
Observations of the cosmic microwave background (CMB) and large scale structure indicate that the abundance of dark matter has not appreciably changed over the course of the matter-dominated era of our universe’s history. In fact, even if the decay products of dark matter are invisible, such measurements can be used to constrain $\tau_{X} \gsim 2 \times 10^{19}$ s [@Poulin:2016nat]. Much stronger constraints can be placed on dark matter candidates that decay into detectable particles. Unlike in the case of dark matter annihilation, there is no clear benchmark target for the lifetime of a long-lived but unstable dark matter particle. That being said, arguments have been made which favor some ranges of lifetimes. For example, the lifetime of a particle that decays through a dimension-5 operator suppressed by the GUT scale ($M_{\rm GUT} \sim 10^{16}$ GeV) can be estimated as follows: $$\tau \sim \frac{M^2_{\rm GUT}}{m^3_{X}} \sim 10^{17} \, {\rm s} \times \bigg(\frac{{\rm MeV}}{m_{X}}\bigg)^3.$$ From this we learn that dimension-5 operators, even if suppressed by a very high scale, tend to cause dark matter particles to decay on timescales that are already ruled out by cosmological considerations, unless the dark matter is very light. On the other hand, if we consider a decay that results from a dimension-6 operator, we arrive at the following estimate: $$\tau \sim \frac{M^4_{\rm GUT}}{m^5_{X}} \sim 10^{25} \, {\rm s} \times \bigg(\frac{{\rm TeV}}{m_{X}}\bigg)^5.$$ This lifetime is not excluded on cosmological grounds, but could potentially be tested through searches for the dark matter’s decay products (for a review, see Ref. [@Ibarra:2013cra]).
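These order-of-magnitude lifetimes follow directly from dimensional analysis; a minimal numerical sketch, converting from natural units with $\hbar \simeq 6.58 \times 10^{-25}$ GeV s:

```python
# Order-of-magnitude lifetimes for dark matter decay through GUT-suppressed
# operators: tau ~ M_GUT^2 / m_X^3 (dimension 5) and tau ~ M_GUT^4 / m_X^5
# (dimension 6), converted to seconds with hbar = 6.58e-25 GeV s.
HBAR = 6.58e-25        # GeV * s
M_GUT = 1e16           # GUT scale [GeV]

def tau_dim5(m_X):
    """Lifetime [s] for a decay through a dimension-5, GUT-suppressed operator."""
    return HBAR * M_GUT**2 / m_X**3

def tau_dim6(m_X):
    """Lifetime [s] for a decay through a dimension-6, GUT-suppressed operator."""
    return HBAR * M_GUT**4 / m_X**5

print(f"dim-5, m_X = 1 MeV: tau ~ {tau_dim5(1e-3):.1e} s")   # of order 10^17 s
print(f"dim-6, m_X = 1 TeV: tau ~ {tau_dim6(1e3):.1e} s")    # of order 10^25 s
```

The steep mass dependence ($m_X^{-3}$ and $m_X^{-5}$) is why the dimension-5 case is cosmologically excluded except for very light dark matter, while the dimension-6 case naturally lands near currently testable lifetimes.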
Searches for dark matter decay products in the form of gamma-rays [@Blanco:2018esa; @Cohen:2016uyg; @Ando:2015qda; @Hutsi:2010ai; @Murase:2015gea; @Murase:2012xs; @Ackermann:2015lka; @Ackermann:2012rg; @Kalashev:2016cre; @Cirelli:2012ut; @Esmaili:2015xpa; @Liu:2016ngs], X-rays [@Boyarsky:2007ge; @Yuksel:2007xh; @Perez:2016tcq], neutrinos [@Murase:2012xs; @PalomaresRuiz:2007ry] and cosmic rays [@Ibarra:2013zia] have each been carried out. To calculate the flux of gamma-rays from decaying dark matter, we modify Eq. \[gamma\], replacing the annihilation rate per volume ($\langle \sigma v \rangle \rho_X^2/2m_X^2$) with the decay rate per volume ($\rho_X/m_X \tau_X$), and substituting the gamma-ray spectrum produced per annihilation with that produced per decay: $$\frac{dN_{\gamma}}{dE_{\gamma}} (E_{\gamma}, \Delta \Omega) = \bigg(\frac{dN_{\gamma}}{dE_{\gamma}}\bigg) \frac{1}{4\pi \tau_{X} m_{X}} \int_{\Delta \Omega} \int_{los} \rho_{X}(l,\Omega) dl d\Omega. \label{los}$$ Because this flux is proportional to only one power of the dark matter density (as opposed to two in the case of dark matter annihilation), the best strategy is generally to study large regions of the sky, searching for decay products from throughout the halo of the Milky Way, and throughout the integrated volume of the observable universe. Due to the universe’s opacity to gamma rays above $\sim$1 TeV, the constraints from Fermi on the dark matter’s lifetime are approximately flat from the GeV scale to EeV masses and above, excluding decays to (non-neutrino) Standard Model particles for $\tau \lsim 10^{28}$ s. X-Ray Lines from Decaying Sterile Neutrinos ------------------------------------------- [Fig. \[fig:neutrino\]: one-loop diagrams through which a sterile neutrino, $\nu_S$, decays into an active neutrino, $\nu$, and a photon, $\gamma$.] The origin of neutrino masses remains one of the most important outstanding puzzles in particle physics.
Although the Standard Model does not accommodate masses for these particles, natural extensions can easily generate small masses for these species through variations of the see-saw mechanism [@Schechter:1980gr; @GellMann:1980vs; @Mohapatra:1979ia; @Yanagida:1979as; @Minkowski:1977sc]. Such scenarios predict the existence of sterile neutrinos, which do not interact through the weak force. If the degree of mixing between the sterile and active neutrinos is very small, the sterile neutrinos will not reach thermal equilibrium with the Standard Model bath in the early universe. As pointed out by Dodelson and Widrow [@Dodelson:1993je], however, even a very small degree of mixing can generate a significant population of sterile neutrinos through the collisions of active neutrinos with other Standard Model particles (see also Refs. [@Barbieri:1989ti; @Kainulainen:1990ds]). Sterile neutrinos with masses in the range of $\sim$1-100 keV have long been considered as potentially viable candidates for dark matter (for a review, see Ref. [@Adhikari:2016bei]). In recent years, this framework has become increasingly constrained. In particular, sterile neutrinos can decay to a final state that includes a distinctive mono-energetic photon through diagrams of the kind shown in Fig. \[fig:neutrino\]. With this signal in mind, searches for X-ray [@Boyarsky:2007ge; @Yuksel:2007xh; @Perez:2016tcq] and gamma-ray [@Ackermann:2015lka] lines have resulted in strong upper limits on the lifetime of sterile neutrinos, which in turn constrains the mixing angle between the sterile and active species. When these results are combined with observations associated with structure formation [@Horiuchi:2013noa; @Schneider:2016uqi], one finds that sterile neutrinos within the context of the standard Dodelson-Widrow scenario are unable to account for the entirety of the cosmological dark matter abundance. 
In light of these constraints, a number of less minimal scenarios have been proposed in which the production rate of sterile neutrinos is enhanced in the early universe, allowing for smaller mixing angles and thus relaxing the constraints from astrophysical observations. Model-building efforts in this direction have generally relied on either resonant enhancements or additional out-of-equilibrium processes. The former can be realized with the inclusion of a non-negligible lepton asymmetry in the early universe, which effectively modifies the matter potential of the Standard Model neutrinos [@Shi:1998km]. In this case, the successful predictions of Big Bang nucleosynthesis limit the degree to which the mixing can be suppressed, and only a small window of parameter space remains phenomenologically viable, corresponding to sterile neutrinos in the mass range of approximately 7–25 keV [@Perez:2016tcq]. Alternatively, the second class of models explicitly incorporates new particle species, such as additional scalars that decay directly into dark matter [@Kusenko:2006rh; @Merle:2013wta; @Frigerio:2014ifa; @Merle:2015oja; @Shaposhnikov:2006xi; @Petraki:2007gq; @Adulpravitchai:2014xna; @Kadota:2007mv; @Abada:2014zra; @Shuve:2014doa]. In these scenarios, the connection between the production and late-time decays of sterile neutrinos is blurred, essentially at the cost of introducing additional degrees-of-freedom that are not directly tied to sterile-active oscillations. Interest in sterile neutrino dark matter has been bolstered in recent years by the reported detection of a 3.5 keV X-ray line from a stacked collection of galaxy clusters using data from XMM-Newton [@Bulbul:2014sua; @Boyarsky:2014jta]. More recently, the presence of a similar line has been detected from the center of the Milky Way [@Boyarsky:2014ska] and in deep-field observations [@Cappelluti:2017ywp].
The analysis of X-ray data from the direction of the Draco dwarf galaxy as described in Ref. [@Jeltema:2015mee] appears to rule out the presence of such a signal, while the authors of Ref. [@Ruchayskiy:2015onc] claim to have detected a faint 3.5 keV line signal in the same dataset. The lack of such a line feature in the emission from the Andromeda Galaxy [@Horiuchi:2013noa], a stacked sample of galaxies [@Anderson:2014tza], and dwarf galaxies [@Malyshev:2014xqa] has been used to establish strong limits on dark matter related interpretations of this signal. While some groups have argued that spectral lines from hot potassium or chlorine gas in the intracluster medium might be responsible for this signal, this interpretation remains actively debated [@Jeltema:2014qfa; @Boyarsky:2014paa; @Bulbul:2014ala; @Jeltema:2014mla]. For a review, see Ref. [@Abazajian:2017tcc]. Although a 7 keV decaying sterile neutrino is among the most well-motivated explanations for the observed 3.5 keV line, the constraints mentioned in the above paragraph have cast some doubt on this interpretation. With this in mind, a number of alternatives have been proposed. For example, scenarios have been considered in which pairs of dark matter particles can scatter to excite one (or both) into a slightly heavier state, which then produces a 3.5 keV photon in its subsequent decay into the ground state [@Finkbeiner:2014sja; @Cline:2014vsa]. But whereas a decaying sterile neutrino would produce a 3.5 keV signal in proportion to its density, such an “exciting dark matter” scenario leads to a signal that scales with the square of the density, along with some dependence on the dark matter’s velocity distribution. This could potentially provide an explanation for why no 3.5 keV signal has been observed from dwarf galaxies.
Summary ======= In these lectures, I have presented an overview of indirect searches for dark matter, describing searches for gamma rays, cosmic rays and neutrinos from dark matter annihilations or decays, and the impact of such particles on the cosmic microwave background. It should be noted that these lectures are far from exhaustive, and there are many efforts to detect dark matter indirectly that I have not discussed here. A few takeaways from these lectures are the following: - [The measured abundance of dark matter provides us with motivation to consider dark matter candidates that annihilate with a cross section near the benchmark value of $\langle \sigma v \rangle \simeq 2 \times 10^{-26}$ cm$^3/$s. Furthermore, we can restrict the mass of thermal relics to be heavier than a few MeV (in order to satisfy constraints from Big Bang Nucleosynthesis) and lighter than 120 TeV (in order to not violate partial wave unitarity).]{} - [Measurements of the cosmic microwave background have been used to place constraints on annihilating dark matter, excluding most candidates with the thermal relic benchmark cross section for masses up to $m_X \simeq 10-30$ GeV.]{} - [Gamma-ray observations of dwarf galaxies and the Galactic Center extend these limits up to $m_X \sim 60-140$ GeV. If taken at face value, the cosmic-ray antiproton spectrum appears to exclude such candidates for masses between $m_X \sim 160-500$ GeV, although significant systematic uncertainties apply to this channel. For the case of dark matter annihilations to $e^+ e^-$, the cosmic-ray positron spectrum also provides strong constraints.]{} - [A number of excesses and anomalies have been reported which could be the result of dark matter annihilations or decays.
In particular, the Galactic Center gamma-ray excess and the cosmic-ray antiproton excess each point toward dark matter annihilating with a cross section near the benchmark value of $\langle \sigma v \rangle \simeq 2 \times 10^{-26}$ cm$^3/$s and with a mass in the approximate range of $\sim$50-80 GeV (for the representative example of annihilations to $b\bar{b}$).]{} - [There are many viable models in which the dark matter annihilates with a cross section that is significantly smaller than the thermal relic benchmark (as a consequence of $p$-wave annihilations, coannihilations, non-standard cosmological histories, etc.). At present, indirect searches for dark matter are not generally sensitive to such scenarios.]{} Looking forward, we expect indirect searches for dark matter to be bolstered by a range of new experiments and observations. The CTA is an array of ground-based gamma-ray telescopes scheduled for construction between 2020 and 2025, offering unprecedented sensitivity to the very high-energy gamma-ray sky [@Carr:2015hta; @Doro:2012xx]. At lower gamma-ray energies are the proposed satellite-based AMIGO and e-ASTROGAM telescopes, which are designed to be significantly more sensitive than Fermi at energies below 1 GeV [@Chou:2017wrw; @DeAngelis:2017gra; @Bartels:2017dpb]. Searches for gamma rays from dark matter annihilation in dwarf galaxies will be further enhanced by LSST, which is expected to discover many new dwarfs. There is much yet to be learned about dark matter from the measurements of AMS-02, in particular in regards to their search for anti-deuterium and anti-helium in the cosmic-ray spectrum. I expect this to be an exciting topic in the years ahead. Lastly, I will also mention that plans are underway to launch a satellite-based X-ray telescope with the spectral resolution required to strongly constrain the origin of the 3.5 keV line [@Speckhard:2015eva] (a replacement for the Hitomi satellite, which was lost in 2016).
Acknowledgements {#acknowledgements .unnumbered} ================ These lectures were originally presented at TASI 2018: “Theory in an Era of Data”, which was supported by the U.S. National Science Foundation. I would like to thank the organizers of TASI 2018 – Tracy Slatyer, Tilman Plehn and Tom DeGrand – as well as the students that participated. These lectures have been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.